Artificial intelligence can be found in many places. How safe is the technology?
NPR's A Martínez speaks with Jack Clark, co-founder of artificial intelligence company Anthropic, about AI safety concerns.
MICHEL MARTIN, HOST:
So let's talk about artificial intelligence now, or AI. It can be found in all sorts of places today, including phones, weapons and household smart devices. But there are already concerns about whether the capabilities of the technology have outstripped any guardrails to prevent misuse. We reached out to one company, Anthropic, that says it's working to make AI safer. Our colleague, A Martínez, spoke earlier with the company's co-founder, Jack Clark, and A asked him to describe his own concerns with AI.
JACK CLARK: It's an amazing time in AI right now where systems are getting better far more quickly than our ability to evaluate them. So is our AI system or any AI system a nice AI system or a bad AI system? It's actually hard to tell. There's room for greater government involvement, greater civil society involvement, greater academic involvement in the development of AI because people are nervous because AI is being developed by a very small set of private sector actors.
A MARTÍNEZ, HOST:
What do you mean, though, government involvement? Isn't one of the biggest dangers that our lawmakers barely have any understanding of what AI is?
CLARK: People are waking up, including lawmakers, to how AI has a role in national security. It has a role in geopolitics. We've seen AI in various forms being used in the war in the Ukraine. So I think that what you're seeing among policymakers is a pretty rapid desire to get up to speed on where it is, and they're much more engaged now than they've ever been.
MARTÍNEZ: So considering how fast things do move, especially with AI, right now in May of 2023, what would be your No. 1 concern?
CLARK: My No. 1 concern about AI right now is AI systems can do more things than their creators know that they can do. It's kind of like if we were in the business of making cars, after you release the car, someone discovers it can fly or go underwater, and you had no idea as the car manufacturer. That's where AI is today. Systems get released. Then some 17-year-old with a laptop discovers that the system can do a completely wild thing that its creators did not anticipate.
MARTÍNEZ: So if that's the case, what would be an easy way to try and tamp that down, or at least just figure out a way where it doesn't move as quickly?
CLARK: So there is one exciting thing happening. In August this year in Las Vegas, there's a hacking conference called DEF CON. And at that conference, Google, Microsoft, OpenAI, Anthropic - my company - and many others are going to have their systems be red-teamed by thousands and thousands of hackers. We think a future thing that policymakers might want you to do is before you release the system, have it get attacked by people trying to misuse it and trying to break it, and then you can learn from that. And you have this kind of build-it, break-it, fix-it dynamic.
MARTÍNEZ: Now, you used to work at OpenAI, which created ChatGPT, and then you left to found Anthropic. And you left to create what your company calls a safer ChatGPT. So what exactly does that look like?
CLARK: So one thing we've done is we've tried to find ways to make safety more at the core of our technology. So something that we've released this week is the so-called constitution behind our language model, Claude, for ways that the AI system should behave. And we've done that because, otherwise, AI systems learn values by interacting with people, and it's really hard to figure out what the values are that they've learned.
MARTÍNEZ: Jack, the debate around artificial intelligence feels very much like the now. But I think sometimes we are so in the now that we don't see the next. Are we in the right place right now in these discussions that we're having?
CLARK: Something which most technologists say privately when you talk about AI policy, in two or three years the systems are going to be far more powerful, and the problems are going to be far weirder, and we can't really anticipate them today. So I think when people are regulating this technology, they're treating it like a normal technology which evolves relatively slowly and relatively predictably. This technology evolves very quickly and relatively unpredictably. So if anything, my main takeaway is the future is going to be a lot weirder than the present, and we should have our minds kind of pointed towards that, as well as dealing with these challenges we have today.
MARTÍNEZ: That's Anthropic co-founder Jack Clark. Jack, thanks a lot.
CLARK: Thanks very much.