The driving force of morality

Guest Column

I recently attended a talk given by Dr. Sinnott-Armstrong about how we program morality into artificial intelligence. The main example was self-driving cars and the ethical dilemma of deciding whose lives take precedence in an emergency: the driver and other passengers in the car, or pedestrians and people in other cars. The conversation was especially relevant given that Mercedes, one of the companies currently developing self-driving cars, recently announced that its cars would be programmed to value the lives of the passengers inside the car over the lives of anyone else.

Now, on the one hand, that’s great. We can drive safely and free of fear that our cars will sacrifice our lives to avoid hitting a pedestrian.

On the other hand, it’s significantly less great if you’re a pedestrian.

Furthermore, not everyone is lucky enough to own a Mercedes. So is it right that the wealthiest subsection of the population can buy a car that will save their own lives at the cost of others? Is it ethical to have a car that favors those with the resources to buy a Mercedes over those without them? And if not, what can we do about it?

But then again, there’s the question of how much it really matters in the long run. The major benefit of a self-driving car (besides no longer having to drive the car yourself) is the predicted decrease in the number of accidents. Assuming that machines are more logical, consider more possibilities and act faster than humans do, they’re far less likely to get into accidents. In fact, all 17 crashes Google’s self-driving fleet experienced before January 2016 were caused by other cars. The one crash Google’s car did cause, a month later, was a minor one that the human safety driver also failed to step in and prevent, suggesting that keeping a person in charge of the wheel the entire time would not have been any safer.

According to Gary Shapiro, CEO of the Consumer Technology Association, self-driving cars have the potential to eliminate 90 percent of the more than 30,000 deaths and injuries caused by car accidents each year in America. So is it really worth worrying about the very low probability that (a) a Mercedes self-driving car will be involved in a life-threatening situation, and (b) the only way out is to harm pedestrians or the occupants of other vehicles?

Even the remaining ten percent of deaths and injuries, 3,000 or more in a year, is still far too many. No one should be hurt or killed by the mishandling of one of the most useful machines in daily life. But while the ethics of accident management matter, how much should they weigh against the undeniable benefits of self-driving cars?

Self-driving cars don’t get distracted, or drunk, or fall asleep. They don’t get road rage, or forget to look for pedestrians in the crosswalks. They can calculate stopping distances far better than humans can, and they can communicate with one another to avoid collisions. Furthermore, they can improve access to independent transportation for people with disabilities. Are all of these benefits worth setting aside because of the contention over accident ethics?

I don’t think they are. But I also don’t think Mercedes’s decision is something people should simply ignore. It’s worth asking ourselves: if we were in that position and had to choose between our own lives and those of pedestrians or people in other cars, which would we choose to save?

Should we expect our cars to do the same?

Kat Hefter is a Trinity freshman.
