The Economist, 2023: Generative AI poses difficult questions for managers

Business

Bartleby

Models and management

Generative AI asks particularly difficult questions of bosses.

The remarkable capabilities of generative artificial intelligence (AI) are clear the moment you try it.

But remarkableness is also a problem for managers.

Working out what to do with a new technology is harder when it can affect so many activities; when its adoption depends not just on the abilities of machines but also on pesky humans; and when it has some surprising flaws.

Study after study rams home the potential of large language models (LLMs), which power AIs like ChatGPT, to improve all manner of things.

LLMs can save time, by generating meeting summaries, analysing data or drafting press releases.

They can sharpen up customer service.

They cannot put up IKEA bookshelves—but nor can humans.

AI can even boost innovation.

Karan Girotra of Cornell University and his co-authors compared the idea-generating abilities of the latest version of ChatGPT with those of students at an elite university.

A lone human can come up with about five ideas in 15 minutes; arm the human with the AI and the number goes up to 200.

Crucially, the quality of these ideas is better, at least judged by purchase-intent surveys for new product concepts.

Such possibilities can paralyse bosses; when you can do everything, it’s easy to do nothing.

LLMs’ ease of use also has pluses and minuses.

On the plus side, more applications for generative AI can be found if more people are trying it.

Familiarity with LLMs will make people better at using them.

Reid Hoffman, a serial AI investor (and a guest on this week’s final episode of “Boss Class”, our management podcast), has a simple bit of advice: start playing with it.

If you asked ChatGPT to write a haiku a year ago and have not touched it since, you have more to do.

Familiarity may also counter the human instinct to be wary of automation.

A paper by Siliang Tong of Nanyang Technological University and his co-authors that was published in 2021, before generative AI was all the rage, captured this suspicion neatly.

It showed that AI-generated feedback improved employee performance more than feedback from human managers.

However, disclosing that the feedback came from a machine had the opposite effect: it undermined trust, stoked fears of job insecurity and hurt performance.

Exposure to LLMs could soothe concerns. Or not.

Complicating things are flaws in the technology.

The Cambridge Dictionary has named “hallucinate” as its word of the year, in tribute to the tendency of LLMs to spew out false information.

The models are evolving rapidly and ought to get better on this score, at least.

But some problems are baked in, according to a new paper by R. Thomas McCoy of Princeton University and his co-authors.

Because off-the-shelf models are trained on internet data to predict the next word in an answer on a probabilistic basis, they can be tripped up by surprising things.

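This frequency-driven mechanism can be caricatured with a toy bigram model. This is a deliberate simplification for illustration only (real LLMs are neural networks trained on vast corpora, not frequency tables), but it shows why the continuation seen most often in training wins:

```python
from collections import Counter, defaultdict

# Toy training text: the common pattern appears three times,
# the rare but structurally similar one only once.
training_text = (
    "to convert celsius multiply by nine fifths and add thirty two . "
    "to convert celsius multiply by nine fifths and add thirty two . "
    "to convert celsius multiply by nine fifths and add thirty two . "
    "multiply by seven fifths and add thirty one ."
).split()

# Count how often each word follows each other word.
next_words = defaultdict(Counter)
for a, b in zip(training_text, training_text[1:]):
    next_words[a][b] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return next_words[word].most_common(1)[0][0]

print(predict("by"))  # "nine" — seen three times, versus once for "seven"
```

The point of the sketch: the model's answer is driven by what was common in its data, not by which continuation is "correct" in context.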
Get GPT-4, the LLM behind ChatGPT, to multiply a number by 9/5 and add 32, and it does well; ask it to multiply the same number by 7/5 and add 31, and it does considerably less well.

The difference is explained by the fact that the first calculation is how you convert Celsius to Fahrenheit, and therefore common on the internet; the second is rare and so does not feature much in the training data.

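To a conventional program the two calculations are structurally identical, and both are computed flawlessly every time; that is what makes the LLM's asymmetry striking. A minimal sketch of the contrast:

```python
def celsius_to_fahrenheit(c):
    # The common calculation: multiply by 9/5 and add 32.
    return c * 9 / 5 + 32

def rare_transform(x):
    # The structurally identical but rare calculation: multiply by 7/5 and add 31.
    return x * 7 / 5 + 31

print(celsius_to_fahrenheit(100))  # 212.0
print(rare_transform(100))         # 171.0
```

A deterministic calculator treats both formulas alike; a next-word predictor does not, because only the first is abundant in its training data.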
Such pitfalls will exist in proprietary models, too.

On top of all this is a practical problem: it is hard for firms to keep track of employees’ use of AI.

Confidential data might be uploaded and potentially leak out in a subsequent conversation.

Earlier this year Samsung, an electronics giant, clamped down on usage of ChatGPT by employees after engineers reportedly shared source code with the chatbot.

This combination of superpowers, simplicity and stumbles is a messy one for bosses to navigate.

But it points to a few rules of thumb.

Be targeted.

Some consultants like to talk about the “lighthouse approach”—picking a contained project that has signalling value to the rest of the organisation.

Rather than banning the use of LLMs, have guidelines on what information can be put into them.

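One way such guidelines can be operationalised is a pre-flight check that screens prompts before they leave the firm. The patterns and function below are purely illustrative assumptions, not a production data-loss-prevention tool:

```python
import re

# Hypothetical examples of patterns a firm might block; a real policy
# would be far more thorough (secrets scanners, classifiers, allow-lists).
BLOCKED_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]"),            # credential assignments
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # key material
    re.compile(r"(?i)\bconfidential\b"),              # labelled documents
]

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(prompt_allowed("Summarise this meeting transcript"))  # True
print(prompt_allowed("Debug this: api_key = sk-123"))       # False
```

The design choice matters: a filter with clear rules lets employees keep using the tools, whereas a blanket ban simply pushes usage out of sight.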
Be on top of how the tech works: this is not like driving a car and not caring what is under the hood.

Above all, use it yourself.

Generative AI may feel magical.

But it is hard work to get right.
