A new artificial intelligence created by researchers at the Massachusetts Institute of Technology pulls off a staggering feat: by analyzing only a short audio clip of a person's voice, it reconstructs what they might look like in real life.

The AI's results aren't perfect, but they're pretty good - a remarkable and somewhat terrifying example of how a sophisticated AI can make incredible inferences from tiny snippets of data.

In a paper published this week to the preprint server arXiv, the team describes how it trained a generative adversarial network to analyze short voice clips and "match several biometric characteristics of the speaker," resulting in "matching accuracies that are much better than chance."
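The paper describes the pipeline only at a high level. As a rough, hypothetical sketch of the idea — mapping an audio clip to a fixed-length face-feature embedding that can then be compared against candidates — something like the following toy code illustrates the shape of the problem (the encoder, dimensions, and random weights here are invented for illustration and are not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

def voice_encoder(spectrogram, weights):
    """Toy stand-in for a trained voice encoder: maps a spectrogram
    (time x frequency) to a unit-length face-feature embedding."""
    pooled = spectrogram.mean(axis=0)      # average over time frames
    embedding = np.tanh(pooled @ weights)  # project into feature space
    return embedding / np.linalg.norm(embedding)

def cosine_similarity(a, b):
    # Both inputs are unit-normalized, so the dot product is the cosine.
    return float(a @ b)

# Hypothetical shapes: 100 time frames, 64 frequency bins, 128-dim embedding.
W = rng.normal(size=(64, 128))
clip = rng.normal(size=(100, 64))
face_embedding = voice_encoder(clip, W)
print(face_embedding.shape)  # (128,)
```

In the real system the embedding would be compared against face features extracted from images, and "better than chance" means the true speaker's face ranks higher than random candidates under such a similarity score.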


That's the carefully couched language of the researchers. In practice, the Speech2Face algorithm seems to have an uncanny knack for spitting out rough likenesses of people based on nothing but their speaking voices.

The MIT researchers urge caution on the project's GitHub page, acknowledging that the tech raises worrisome questions about privacy and discrimination.

"Although this is a purely academic investigation, we feel that it is important to explicitly discuss in the paper a set of ethical considerations due to the potential sensitivity of facial information," they wrote, suggesting that "any further investigation or practical use of this technology will be carefully tested to ensure that the training data is representative of the intended user population."

 

Translation: 進擊的Meredith