Culture | Johnson
The ghost in the AI machine
Talking about artificial intelligence in human terms is natural—but wrong.

My love’s like a red, red rose. It is the east, and Juliet is the sun. Life is a highway, I wanna ride it all night long. Metaphor is a powerful and wonderful tool. Explaining one thing in terms of another can be both illuminating and pleasurable, if the metaphor is apt. But that “if” is important. Metaphors can be particularly helpful in explaining unfamiliar concepts: imagining the Einsteinian model of gravity (heavy objects distort space-time) as something like a bowling ball on a trampoline, for example. But metaphors can also be misleading: picturing the atom as a solar system helps young students of chemistry, but the more advanced learn that electrons move in clouds of probability, not in neat orbits as planets do.

What may be an even more misleading metaphor—for artificial intelligence (AI)—seems to be taking hold. AI systems can now perform staggeringly impressive tasks, and their ability to reproduce what seems like the most human function of all, namely language, has ever more observers writing about them. When they do, they are tempted by an obvious (but obviously wrong) metaphor, which portrays AI programmes as conscious and even intentional agents. After all, the only other creatures which can use language are other conscious agents—that is, humans.

Take the well-known problem of factual mistakes in potted biographies, the likes of which ChatGPT and other large language models (LLMs) churn out in seconds. Incorrect birthplaces, non-existent career moves, books never written: one journalist at The Economist was alarmed to learn that he had recently died. In the jargon of AI engineers, these are “hallucinations”. In the parlance of critics, they are “lies”.

“Hallucinations” might be thought of as a forgiving euphemism. Your friendly local AI is just having a bit of a bad trip; leave him to sleep it off and he’ll be back to himself in no time. For the “lies” crowd, though, the humanising metaphor is even more profound: the AI is not only thinking, but has desires and intentions. A lie, remember, is not any old false statement. It is one made with the goal of deceiving others. ChatGPT has no such goals at all.

Humans’ tendency to anthropomorphise things they don’t understand is ancient, and may confer an evolutionary advantage. If, on spying a rustling in the bushes, you infer an agent (whether predator or spirit), no harm is done if you are wrong. If you assume there is nothing in the undergrowth and a leopard jumps out, you are in trouble. The all-too-human desire to smack or yell at a malfunctioning device comes from this ingrained instinct to see intentionality everywhere.

It is an instinct, however, that should be overridden when writing about AI. These systems, including those that seem to converse, merely take input and produce output. At their most basic level, they do nothing more than turn strings like 0010010101001010 into 1011100100100001 based on a set of instructions. Other parts of the software turn those 0s and 1s into words, giving a frightening—but false—sense that there is a ghost in the machine.
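The point is easy to demonstrate in miniature. The sketch below is a deliberately crude toy, not how ChatGPT or any real LLM is built (real systems use learned neural networks; every name and number here is illustrative). It turns input text into output text purely by counting and table lookup: instruction-following with no trace of desire or intent.

```python
# Toy "language model": maps input text to output text via numbers alone.
# Nothing here wants, believes, or intends anything; it only follows
# instructions. (Illustrative sketch; real LLMs are neural networks.)
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate"

# "Tokenise": turn words into integer IDs (the piece's 0s and 1s).
vocab = {w: i for i, w in enumerate(dict.fromkeys(training_text.split()))}
ids = [vocab[w] for w in training_text.split()]

# "Train": count which token tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follows[a][b] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """'Generate': repeatedly emit the most frequent successor token."""
    inv = {i: w for w, i in vocab.items()}  # map IDs back into words
    token = vocab[prompt_word]
    out = [prompt_word]
    for _ in range(length):
        if not follows[token]:  # no recorded successor: stop
            break
        token = follows[token].most_common(1)[0][0]  # pure table lookup
        out.append(inv[token])
    return " ".join(out)

print(generate("the"))  # a plausible-looking but mindless continuation
```

When such a pipeline outputs a false sentence, nothing analogous to deception has occurred; the lookup has simply produced a statistically likely string, which is the sense in which both “hallucination” and “lie” overshoot.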
Whether they can be said to “think” is a matter of philosophy and cognitive science, since plenty of serious people see the brain as a kind of computer. But it is safer to call what LLMs do “pseudo-cognition”. Even if it is hard on the face of it to distinguish the output from human activity, they are fundamentally different under the surface. Most importantly, cognition is not intention. Computers do not have desires.

It can be tough to write about machines without metaphors. People say a watch “tells” the time, or that a credit-card reader which is working slowly is “thinking” while they wait awkwardly at the checkout. Even when machines are said to “generate” output, that cold-seeming word comes from an ancient root meaning to give birth.

But AI is too important for loose language. If entirely avoiding human-like metaphors is all but impossible, writers should offset them, early, with some suitably bloodless phrasing. “An LLM is designed to produce text that reflects patterns found in its vast training data,” or some such explanation, will help readers take any later imagery with due scepticism.

Humans have evolved to spot ghosts in machines. Writers should avoid ushering them into that trap. Better to lead them out of it.