2023年经济学人 人工智能语音的诈骗风险

 

    Culture

    文艺版块

    Johnson

    约翰逊专栏

    Speak easy

    悄声说话

    Technology is making it possible to clone voices and cause trouble.

    科技能够克隆语音并制造麻烦。

    Jennifer DeStefano answered a call from a number she did not recognise.

    詹妮弗·德斯特凡诺接了一个陌生号码打来的电话。

    “Mom, I messed up,” her daughter’s voice told her, sobbing. “These bad men have me.”

    “妈妈,我搞砸了,”她女儿的声音一边抽泣一边对她说,“这些坏人把我抓了。”

    A man proceeded to demand money, or he would drug her daughter and leave her in Mexico.

    接着一名男子向她索要钱财,否则他会给她女儿下药,然后把她留在墨西哥。

    But while she kept him on the phone, friends managed to reach her daughter, only to discover that she was, in fact, free and well on a skiing trip in Arizona.

    但是她一边和男子保持通话,一边让朋友们设法联系到了她女儿,结果却发现,她女儿正好好地在亚利桑那州滑雪旅行。

    The voice used on the phone was a fake.

    电话中的声音是伪造的。

    Ms DeStefano, still shaken, told this story to a US Senate subcommittee hearing on artificial intelligence in June.

    今年6月,仍感到后怕的德斯特凡诺在美国参议院人工智能小组委员会听证会上讲述了这个经历。

    The dangers that voice-cloning technology pose are only now starting to be uttered aloud.

    语音克隆技术带来的危险直到现在才被大声表达出来。

    In recent months, most of the attention paid to artificial intelligence (AI) has gone to so-called “large-language models” like ChatGPT, which churn out text.

    近几个月来,人们对人工智能的大部分注意力都集中在ChatGPT这种可大量生成文本的“大型语言模型”上。

    But voice cloning’s implications will also be profound.

    但语音克隆的影响也将是深远的。

    A brief sample of a voice can be used to train an AI model, which can then speak any given text sounding like that person.

    一个简短的声音样本可以被用来训练人工智能模型,然后这个模型就可以用这个声音说出任何给定的文本。
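The mechanism hinted at above can be sketched in miniature. This is a toy illustration, not a real model: modern cloning systems typically condense a short audio sample into a fixed-size "speaker embedding" that then conditions a text-to-speech model, which is why a brief sample suffices. Here the "embedding" is just the mean of made-up per-frame feature vectors, compared by cosine similarity.

```python
import math

def speaker_embedding(frames):
    """Average per-frame feature vectors into one fixed-size vector."""
    dim = len(frames[0])
    return [sum(f[i] for f in frames) / len(frames) for i in range(dim)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A few seconds of "audio", represented as invented feature frames.
sample_a = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0], [1.1, 0.1, 0.2]]  # speaker X
sample_b = [[1.0, 0.25, 0.1], [0.95, 0.2, 0.15]]                # speaker X, later
other    = [[0.1, 1.0, 0.9], [0.0, 1.1, 1.0]]                   # a different speaker

emb_a = speaker_embedding(sample_a)
emb_b = speaker_embedding(sample_b)
emb_o = speaker_embedding(other)

# The brief sample's embedding matches the same speaker far better
# than a stranger, which is what lets a model imitate that voice.
print(cosine_similarity(emb_a, emb_b) > cosine_similarity(emb_a, emb_o))  # True
```

Real systems learn these embeddings with neural networks over spectrogram features; the averaging above only stands in for that step.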

    Apple is expected to include the feature for iPhones in its new operating system, iOS 17, due to be released in September.

    苹果预计将在iPhone的新操作系统iOS 17中加入这一功能,新系统将在9月发布。

    It is advertised as helping people who may be in danger of losing their voice, for example to a degenerative disease such as ALS.

    这种技术被宣传为可以帮助那些可能面临失声危险的人,例如患有渐冻症等退化性疾病的人。

    For those eager to try voice cloning now, ElevenLabs, an AI startup, offers users the chance to create their own clones in minutes.

    如果很想现在尝试一下语音克隆,人工智能初创公司ElevenLabs可以让用户在几分钟内创建自己的语音克隆。

    The results are disturbingly accurate.

    克隆结果准确得令人不安。

    When generating a playback, the system offers a slider that allows users to choose between variability and stability.

    在生成回放录音时,系统提供了一个滑块,允许用户在可变性和稳定性之间进行选择。

    Select more variability, and the audio will have a lifelike intonation, including pauses and stumbles like “er…”

    如果选择更多的可变性,音频就会有真人般的语调,包括停顿和“呃”等结结巴巴的声音。

    Choose “stability”, and it will come across more like a calm and dispassionate newsreader.

    如果选择“稳定性”,音频给人的感觉更像是一个冷静、不动感情的新闻播报员。
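One plausible mechanism behind such a slider (the analogy is ours, not ElevenLabs's documented design) is sampling temperature: generative speech models pick each next unit from a probability distribution, and a temperature knob controls how adventurous that pick is. Lower temperature concentrates probability on the single most likely rendering (the calm newsreader); higher temperature spreads it out, letting pauses and "er…" slip in.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities; lower temperature sharpens
    the distribution, higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate renderings of the next syllable.
logits = [2.0, 1.0, 0.5]

stable = softmax_with_temperature(logits, 0.3)  # "stability": near-deterministic
varied = softmax_with_temperature(logits, 2.0)  # "variability": more surprises

# The stable setting puts almost all probability on one choice.
print(max(stable) > max(varied))  # True
```

Whatever ElevenLabs actually does internally, the user-facing trade-off it exposes behaves like this temperature dial.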

    Taylor Jones, a linguist and consultant, took a careful look at the quality of ElevenLabs’s clone of his voice in a YouTube video.

    语言学家兼顾问泰勒·琼斯在一个YouTube视频中,仔细审视了ElevenLabs对他的语音克隆的质量。

    Using statistical tests he showed that there were a few things off in “his” pronunciation of certain vowels.

    通过统计测试,他发现“他的”某些元音发音有一些不对劲的地方。

    But a lower-tech test, a “conversation” with his own mother, fooled the woman who raised him.

    但在科技含量没那么高的测试中,即与自己的母亲进行“对话”,克隆语音骗过了这个从小养育他的女性。

    (“Don’t you ever do that again,” she warned.)

    (“别再干这种事了。”她警告说。)

    Johnson repeated the experiment with his own mother, who did not miss a beat in replying to clone-Johnson.

    笔者和自己的母亲重复了这个实验,母亲在回答克隆语音时没有一丝怀疑。

    For several years, customers have been able to identify themselves over the phone to their bank and other companies using their voice.

    数年来,客户可以通过自己的声音,在电话中向银行和其他公司验证身份。

    This was a security upgrade, not a danger.

    这是一种安全升级,而不是危险。

    Not even a gifted mimic could fool the detection system.

    即使是有天赋的模仿者也无法骗过探测系统。

    But the advent of cloning will force adaptation, for example by including voice as only one of several identification factors (and thus undercutting the convenience), in order to prevent fraud.

    但克隆的出现将迫使人们适应新情况,例如,将语音仅作为多个身份识别因素之一(这样会削弱便利性),以防止欺诈。
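The adaptation described above can be sketched as a minimal policy check. The factor names and threshold are illustrative, not any bank's actual system: the point is simply that once voice becomes one factor among several, a cloned voice on its own no longer unlocks the account.

```python
def authenticate(voice_score, device_trusted, pin_correct):
    """Require a plausible voice match AND at least one independent factor."""
    voice_ok = voice_score >= 0.8            # similarity score from a voice-ID system
    second_factor = device_trusted or pin_correct
    return voice_ok and second_factor

# A cloned voice may score well, but without a second factor it fails...
print(authenticate(voice_score=0.95, device_trusted=False, pin_correct=False))  # False
# ...while the legitimate customer, calling from a registered device, passes.
print(authenticate(voice_score=0.95, device_trusted=True, pin_correct=False))   # True
```

The cost, as the column notes, is convenience: the customer must now carry the second factor.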

    Creative industries could face disruption too.

    创意产业也可能面临颠覆。

    Voice actors’ skills, trained over a lifetime, can be ripped off in a matter of seconds.

    配音演员靠毕生训练而习得的技巧,在几秒钟内就可以被夺走。

    The Telegraph, a British broadsheet, recently reported on actors who had mistakenly signed away rights to their voices, making it possible to clone them for nothing.

    英国大报《每日电讯报》最近报道,有演员不小心签字让出了自己的声音权,使得他人可以分文不花地克隆他们的声音。

    New contracts will be needed in future.

    未来将需要新的合同。

    But some actors may, in fact, find cloning congenial.

    但事实上,一些演员可能会觉得克隆声音很让他们称心。

    Val Kilmer, who has lost much of his voice to throat cancer, was delighted to have his voice restored for “Top Gun: Maverick”.

    瓦尔·基尔默因喉癌失去了大部分声音,他很高兴在《壮志凌云2:独行侠》中恢复了嗓音。

    Others may be spared heading to the studio for retakes.

    其他演员则可能免去了为补录而前往录音室的麻烦。

    It is the middling professional, not the superstar, who is most threatened.

    受到最大威胁的不是超级巨星,而是水平中等的专业人士。

    Another industry that will have to come to grips with the rise of clones is journalism.

    另一个必须应对克隆崛起的行业是新闻业。

    On-the-sly recordings—such as Donald Trump’s boast of grabbing women by a certain private body part—have long been the stuff of blockbuster scoops.

    偷录的录音——比如唐纳德·特朗普吹嘘摸了女性的某个私密身体部位——长期以来一直是重磅独家新闻的素材。

    Now who will trust a story based on an audio clip?

    现在,谁会因为音频片段而相信一个故事呢?

    Slightly easier to manage might be the false positives: recordings purporting to be someone but which are fakes.

    可能稍微容易处理一些的是假阳性:声称出自某人、实际却是伪造的录音。

    Sophisticated forensic techniques could be of use here, proving a clip to be AI, say, in a courtroom.

    先进的取证技术可能在此有用武之地,比如在法庭上证明一段音频是人工智能合成的。

    The opposite problem—the false negatives—will arise when public figures deny authentic recordings.

    当公众人物矢口否认货真价实的录音时,就会出现相反的问题,即假阴性。

    Proving that a clip is genuine is hard, perhaps even impossible.

    要证明一段音频是真实的非常困难,甚至是无法做到的。

    Journalists will need to show how they obtained and stored audio files—unless, as so often, they have promised a source anonymity.

    记者需要表明他们是如何获得和存储音频文件的,除非像通常情况那样,记者向消息来源承诺匿名。
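One way a newsroom might document how it obtained and stored a file is a provenance log: record a cryptographic hash of the audio at acquisition time, so it can later show the published clip is bit-for-bit the file it received. This is a hedged sketch, not any outlet's actual workflow, and note its limit: it proves the file is unaltered since logging, not that the recording itself is genuine.

```python
import hashlib

def fingerprint(audio_bytes):
    """SHA-256 digest of the raw file contents."""
    return hashlib.sha256(audio_bytes).hexdigest()

# Stand-in for the raw audio as received from the source.
original = b"...raw audio bytes as received..."
logged_at_acquisition = fingerprint(original)   # stored in the provenance log

# Later: the clip being published matches the logged fingerprint...
print(fingerprint(original) == logged_at_acquisition)            # True
# ...while even a one-byte edit breaks the match.
print(fingerprint(original + b"\x00") == logged_at_acquisition)  # False
```

As the column notes, even this breaks down when a source has been promised anonymity and the chain of custody cannot be shown.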

    During his first presidential run, Mr Trump did more than anyone to popularise the term “fake news”—and that was well before voice cloning, deepfake videos, artificially generated images and the like were widespread.

    在特朗普第一次竞选总统期间,他在普及“假新闻”这一术语方面比任何人做得都多,而那还是在语音克隆、深度伪造视频、人工智能生成图像等大行其道的很久之前。

    Now, ever more people caught up in wrongdoing will be tempted by the defence, “It wasn’t me.”

    现在,越来越多被卷入不法行为的人会说“不是我干的”。

    Many people will have even more reason to believe them.

    许多人将有更多的理由去相信他们。

  原文地址:http://www.tingroom.com/lesson/jjxrhj/2023jjxr/565767.html