Driverless Vehicles of the Future

 This is Weekend Edition from NPR News. I am Scott Simon.

Imagine yourself in the future for a moment, riding in a driverless car. You see 10 pedestrians stroll into the street just a few yards ahead of you. The car's going too fast to brake in time, so would you steer to try to miss them, possibly injuring yourself? But if it's a driverless car, would you even get to make that choice? We're going to talk now to somebody who studies some of the ethical questions raised by autonomous vehicles. Patrick Lin, associate professor of philosophy at Cal Poly in San Luis Obispo, Calif., thanks very much for being with us.
PATRICK LIN: You're welcome. Glad to be here, Scott.
SIMON: There was a survey put up by MIT that asked questions along these lines, right?
LIN: Right, right.
SIMON: What did you notice in the survey when you looked at it?
LIN: Well, you know, that's not the first survey done on this topic. There have been other surveys, and they had similar results, which is that people are split on how a driverless or autonomous car should behave. The one thing that stood out to me is that there's going to be a lot more work needed in this field. One problem with surveys is that what people say in a survey isn't necessarily how they would actually choose in real life.
SIMON: Yeah.
LIN: They might not always know what it is they want.
SIMON: Yeah, I mean, 'cause it does seem to me, just anecdotally, that probably not a month goes by that we don't read about some traffic accident where somebody said, you know, I just couldn't stop - they pulled into the lane, they walked across the street. And I must say, as a rule, society doesn't blame them for making an unethical choice to save their own life, even if the crash results in killing others.
LIN: Right. If it's a human-driven car, what you have there is just an accident. It's a reflex. Maybe you have bad reflexes. But we understand that it's just a reflex, not premeditated. But when you're talking about how we ought to program a robot car, now you're talking about pre-scripting the accident, right? So this is the difference between an accidental accident and a deliberate accident, and there's a big difference there legally and ethically.
SIMON: Would somebody get into a driverless car if they thought the algorithms of that car would essentially say, I'm not going to let you run into that school bus and kill people, you're going to die instead?
LIN: I think they would. For instance, anytime you get in a car driven by someone else, you're at risk. Studies have shown that if you're a human driver about to be in a crash, you're going to reflexively turn away from it. That usually means you expose your passengers to the accident. But that doesn't paralyze us when we step into a car.
SIMON: At the same time, though, Professor, I mean, I think it's going to be hard for people to think of an algorithm making that decision for us.
LIN: That's right. I mean, it's a weird thing to think about. But that's exactly what we're doing when we're creating robots and artificial intelligence. They're taking over human roles, from being our chauffeur to our stock market trader to our airline pilot to whatever. We've got to do some soul-searching. And then we have to ask, well, should robots and AI mimic humans - do what we do - or should they do something different? So robot ethics and human ethics could be two different things. But when we talk about programming cars or making any kind of robots, it's a good exercise in thinking about how humans behave and how we ought to behave.
SIMON: Patrick Lin is an associate professor of philosophy at Cal Poly in San Luis Obispo, Calif. Thanks so much for being with us.
LIN: You're welcome. Thanks for having me.
Source: http://www.tingroom.com/lesson/yyxxa/380915.html