Global Survey Reveals Who We'd Prefer to Sacrifice on the Bumper of a Self-driving Car

by Steph Willems

In 2014, as publications and automakers began making greater noise about autonomous vehicles, researchers at MIT’s Media Lab put some questions to the public. The institute’s Moral Machine experiment offered up a series of scenarios in which a self-driving car that has lost its brakes must hit one of two targets, then asked respondents which of the two they’d prefer to see the car hit.

Four years later, the results are in. If our future vehicles are to drive themselves, they’ll need to have moral choices programmed into their AI-controlled accident avoidance systems. And now we know exactly who the public would like to see fall under the wheels of these cars.

However, there’s a problem: agreement on whom to sacrifice differs greatly from country to country.


Published in the journal Nature, the results of the online questionnaire say a lot about the mindset in different countries, though there’s still agreement among nations on certain moral basics.

MIT’s Moral Machine experiment riffed on the classic “Trolley Problem,” a moral exercise in which people are asked to put themselves in the shoes of a bystander witnessing a runaway trolley careening towards five persons lying on (or tied to) the tracks. A switch is nearby, which the bystander could pull to send the trolley down a second set of tracks, straight towards a single prone person. You’d be signing the death warrant of one human, but saving five lives in the process.

What do you do, Jack?

In the updated version (nine scenarios, to be exact), respondents in 130 countries were forced to make a moral choice about whom or what to sacrifice. If it came down to a choice between hitting an animal or a human, respondents vastly preferred that the car swerve out of the way of the wayward human, squishing the animal instead. Easy stuff.

The same goes, in general, for sparing the young over the elderly, and for sparing more pedestrians at the expense of fewer pedestrians. Of all objects to be avoided at all costs, a stroller ranked highest, followed by a girl, a boy, and a pregnant woman. Saving pedestrians is slightly more popular than prioritizing the lives of passengers.
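If a carmaker ever tried to bake such a ranking directly into software, the crudest possible version might look something like the toy sketch below. To be clear, this is purely illustrative: the numeric weights are invented for this example and are not the actual coefficients from the Nature paper.

```python
# Toy spare-priority table, loosely ordered after the survey's aggregate
# ranking (stroller highest, cat lowest). All weights are hypothetical.
SPARE_PRIORITY = {
    "stroller": 6,
    "girl": 5,
    "boy": 4,
    "pregnant_woman": 3,
    "adult": 2,
    "dog": 1,
    "cat": 0,
}

def choose_target(option_a: str, option_b: str) -> str:
    """Return the target the toy model would hit: whichever of the two
    has the LOWER spare-priority score."""
    if SPARE_PRIORITY[option_a] < SPARE_PRIORITY[option_b]:
        return option_a
    return option_b

print(choose_target("dog", "stroller"))  # the model hits the dog, sparing the stroller
```

Real systems would face far messier, probabilistic situations than a clean two-way choice, which is part of the researchers’ point.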

The globe apparently couldn’t come to a consensus over whether they’d spare a large woman over a thin one, though we collectively seem to value the lives of large men slightly less than thin, angular, sexy ones. The homeless get a bum rap in these results, as do criminals (unfortunately, your car isn’t likely to know just which pedestrian is a serial killer or rapist). Interestingly, respondents were more likely to spare the life of a dog over the life of a criminal. Cats were ranked least important, overall.

Of course, these results are all tabulated from numerous countries. Break the responses down into individual countries, and religious and cultural norms enter the fray.

In Asian countries like Japan, China, Taiwan, and South Korea, respondents placed far less emphasis on saving the young over the elderly. Taiwan and China were nearly tied as the countries most likely to spare the elderly. Scandinavians were slightly predisposed to this response, too. Western European (France, UK) and North American respondents were far more likely to single out the old as a sacrificial lamb.

Similar divergences were seen when dealing with numbers — i.e., killing fewer pedestrians vs. killing greater numbers of pedestrians. Respondents from countries that are more collectivist in nature, like those in Asia, placed less emphasis on saving more lives vs. fewer. Japan led the way in that regard, followed by Taiwan, China, and South Korea (in descending order). The Netherlands hit the median, so to speak. Among the “save more people” crowd, France placed the most emphasis on prioritizing a higher number of saved lives, followed closely by Israel, the UK, Canada, and the United States.

“The results showed that participants from individualistic cultures … placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual,” wrote MIT Technology Review.

Cultural groupings seem to disappear when it comes to passengers vs. pedestrians. By a far greater margin than any other country, China placed greater emphasis on sparing the lives of passengers over those of pedestrians, though Estonia, France, Taiwan, and the U.S. mildly fall on the passenger side as well. Israel and Canada were essentially neutral, prioritizing neither side. More so than any other country, Japan prioritized the saving of pedestrians over passengers. Western European and Scandinavian countries, as well as Singapore and South Korea, fell on the “pedestrians over passengers” side.

The authors of the paper don’t want their results to decide which people an AI-controlled vehicle should run down in a given country; rather, their aim is to inform lawmakers and companies of how the public might react to choices made by a programmed driverless car. Above all else, the MIT researchers want companies to start thinking about ethics and AI.

“More people have started becoming aware that AI could have different ethical consequences on different groups of people,” said author Edmond Awad. “The fact that we see people engaged with this—I think that that’s something promising.”

[Source: MIT Technology Review]


Comments
  • TimK on Oct 27, 2018

    Now the real question: what software engineer, division manager, or CEO is going to affix their signature approving a system that can autonomously kill people? Who is going to give that authority to a $5 CPU chip?

    • Stuki on Oct 27, 2018

      Technically, it gets even more nebulous: the chip is unlikely to be able to explain why it did what it did. No one explicitly programmed it to do so, because the environment it operates in, unlike that of traditional software, is far too rich to be spanned by closed-ended approaches that can be backtracked and explained after the fact. That's the I part of AI.

      The human brain, in addition to working as an in-the-now decision-making engine, is at least as focused on explaining, and even post-hoc rationalizing, why it did what it did. It evolved to be just as much a social creature as a control circuit for a set of limbs, after all. Furthermore, all humans are wired at least somewhat the same in that regard, even if there are cultural differences, so explanations tend to be fairly universally shared. Hence, a driver can reasonably be "tried" and "judged" by a reasonably coherent group of "peers," after the fact, for mishaps that may have happened.

      An AI of any complexity, by contrast, is just a black box of self-propagated and reinforced weights given to who-knows-what, for who-knows-what reasons related to the environment, and the AI's experiences within it, in which the AI has been trained and rewarded for performance. Very high-level priorities, like the ones in this questionnaire, can be hard-coded, for sure. But those kinds of stark choices are not what the AI will be faced with in a complex real world. Hence, large parts of the "reasoning" behind the decisions that look to have increased the probability of an undesirable outcome will remain opaque to investigating humans. "The bot just looks like it decided to run you over..." It's like trying to figure out exactly which mosquito in Tokyo is to blame for the Florida hurricane its wing flapping "caused."

      This is a very fundamental reason why it's not "good enough" that AIs can be "demonstrated" to be "safer" drivers than humans in large-population historical "studies." Accidents will always be with us; any complex environment will have them. Hence, a mechanism to at least somewhat attempt to explain them when they happen, even if not assigning blame, is an integral part of the broader traffic picture. Having things out there that may decide to do who-knows-what that kills you, for what appears to be no reason whatsoever, at any given time, just isn't going to be acceptable to people. So the AIs need to be much, much safer than human drivers before they are acceptable. Which is a big issue, as humans already cause so few accidents that it complicates training and testing AIs in the real world... Which leads to the only really realistic approach: make their environment more predictable. Which is another way of saying: segregate them from open-ended, unconstrained interaction with unpredictable humans. Do that and, like trains and planes, you can make the machines perform feats of speed and efficiency that are simply not possible in "humanspace." Don't do it, and all you end up with are "accidents" and unresolvable conflict.

  • Detroit-Iron on Oct 27, 2018

    I can't see the actual framing of the questions behind the paywall, but from what I read I don't understand the numbers question. Assuming all other things are equal why would you hit more people rather than less? I mean I would, but I am trying for the high score.
