It’s a crisp fall afternoon, and you’re being driven through the city by your new autonomous car. As you marvel at how far technology has come, a group of children breaks away from their tour group and crosses the road. In a stroke of bad luck, it seems your car’s brake system has malfunctioned! You’d swerve out of the way, but a large dump truck is occupying the other lane, and a collision with it would spell certain death for you and your fiancée in the passenger seat (did I mention she’s pregnant?). You’re now faced with a decision: do you save yourself and your loved one, or the children who are (illegally, I might add) crossing the street?
While the specificity of the above scenario may seem a bit extraneous, it is exactly the type of situation that researchers, programmers, and consumers alike will need to consider in the coming years, as self-driving cars steadily gain popularity across the nation. As their popularity and usage rise, the debate over the morality of autonomy intensifies, and automakers and software designers must now decide whether human life can be quantified for the sake of convenience.
I’ll throw it in reverse for a second. To say that self-driving cars are safe is an understatement: over the more than 130 million miles driven on Tesla’s Autopilot, there has been only one fatality, and if you believe Elon Musk, it was simply the culmination of a series of rare circumstances. The nation’s roadways can and will become safer as autonomous vehicles grow more prevalent, and while machine learning makes it possible for the software in these cars to predict and anticipate the decisions it needs to make, it doesn’t make those decisions any less consequential.
It’s not as if the public isn’t being polled, either. Through a series of Amazon Mechanical Turk surveys, participants were asked to assess situations in which they had to choose between saving pedestrians or the person in the car barreling toward them. In a hopefully unsurprising fashion, more than 75 percent of respondents decided that the needs of the many outweigh the needs of the few and elected to “sacrifice” the person in the car. In other words, people overwhelmingly believe that self-driving cars should embrace a utilitarian mentality. That is, of course, until the question became which autonomous vehicle they would actually buy. In that case, survey participants selected whichever car would protect them at all costs.
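To make that tension concrete, here is a deliberately toy sketch in Python of the two competing rules. The names, numbers, and scenario are all made up for illustration, and this is in no way any automaker’s actual decision logic; it just shows how a “count everyone equally” rule and a “protect the occupants” rule can disagree on the exact same crash.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    pedestrians_harmed: int   # expected pedestrian casualties if this option is chosen
    occupants_harmed: int     # expected occupant casualties if this option is chosen

def utilitarian_choice(options):
    """Pick the option that minimizes total expected casualties,
    counting everyone equally (the rule ~75% of respondents endorsed)."""
    return min(options, key=lambda o: o.pedestrians_harmed + o.occupants_harmed)

def self_protective_choice(options):
    """Pick the option that minimizes harm to the car's occupants
    (the rule people lean toward once they're the ones buying the car)."""
    return min(options, key=lambda o: o.occupants_harmed)

# A hypothetical version of the opening scenario: brakes fail, children in
# the road ahead, a dump truck in the other lane. Casualty counts are invented.
options = [
    Option("stay in lane", pedestrians_harmed=4, occupants_harmed=0),
    Option("swerve into truck", pedestrians_harmed=0, occupants_harmed=2),
]

print(utilitarian_choice(options).name)      # -> swerve into truck
print(self_protective_choice(options).name)  # -> stay in lane
```

Same sensors, same scenario, two defensible-sounding rules, two opposite outcomes; that gap is exactly what the surveys keep surfacing.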
So, how do we progress from here?

The fact of the matter is, the choice to sacrifice your own life to save another predates the automobile by millennia; what’s new is that our faith will now have to be placed in the hands of an algorithm. There is some good news, though: you get to help create it! The nerds over at MIT have created a “game” of sorts, in which visitors to the site are faced with a number of situations that could arise if the brakes of a self-driving car were to fail. Your results are then compared with those of other users, and you are given the chance to design your own scenario if you feel the need to. The authors describe the site as their attempt at “building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas,” and they hope to learn more not only about how people view each situation, but about machine learning in general. This author tended to steer toward utilitarianism, opting to save as many as possible at the risk of my own life, which I believe accurately reflects my “fight or flight” instincts.
Unless, of course, the person crossing the street was wearing a Cowboys jersey.
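For the curious, the site doesn’t publish its aggregation code, but the “crowd-sourced picture” idea boils down to tallying, scenario by scenario, how often respondents spare one group over the other, and then showing you where your own answers fall. Here is a minimal, hypothetical sketch of that kind of tally; the data, field names, and majority threshold are all my own assumptions, not Moral Machine’s actual pipeline.

```python
from collections import Counter

# Hypothetical responses: each entry is (scenario_id, who_gets_spared),
# where the choice is either "pedestrians" or "passengers".
responses = [
    (1, "pedestrians"), (1, "pedestrians"), (1, "passengers"),
    (2, "pedestrians"), (2, "passengers"), (2, "pedestrians"),
]

def crowd_picture(responses):
    """For each scenario, the fraction of respondents who spared the pedestrians."""
    totals, spared = Counter(), Counter()
    for scenario, choice in responses:
        totals[scenario] += 1
        if choice == "pedestrians":
            spared[scenario] += 1
    return {s: spared[s] / totals[s] for s in totals}

def compare_to_crowd(my_choices, responses):
    """How often my choice matched the majority choice for that scenario."""
    picture = crowd_picture(responses)
    matches = sum(
        choice == ("pedestrians" if picture[scenario] >= 0.5 else "passengers")
        for scenario, choice in my_choices
    )
    return matches / len(my_choices)

print(crowd_picture(responses))                           # -> {1: 0.666..., 2: 0.666...}
print(compare_to_crowd([(1, "pedestrians")], responses))  # -> 1.0
```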
Sources: Moral Machine, The social dilemma of self-driving cars, PBS, Science Magazine, Popular Mechanics