Walter Sinnott-Armstrong discusses artificial intelligence and morality

How do we create artificial intelligence that serves mankind’s purposes? Walter Sinnott-Armstrong, Chauncey Stillman Professor of Practical Ethics, led a discussion Monday on the subject.

Through an open discussion funded by the Future of Life Institute, Sinnott-Armstrong raised issues at the intersection of computer science and ethical philosophy. Among the tricky questions he tackled were how to program artificial intelligence so that it does not eliminate the human race, as well as the legal and moral issues raised by self-driving cars.

Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, even though one is regarded as highly structured and the other as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.

“You can’t tell a person to factor certain considerations out,” he said. “There are a lot of advantages to these various uses and they’re clearly going to grow.”

He also discussed an application that he and a team of professors, graduate students and undergraduates are developing, which aims to build human morality into artificial intelligence. By presenting users with various scenarios involving moral judgment, the application would observe how people determine which features of cases are morally relevant and then test how those features interact in complex cases.

These inputs would then serve as the foundation for an artificial intelligence with humans’ moral considerations programmed in, he explained. 

“Our goal is to create artificial intelligence that mimics human morality to avoid doomsdays and to improve our understanding of human moral thinking,” Sinnott-Armstrong said. 

He also discussed the legal and moral obstacles that could arise from the spread of self-driving cars, which promise greater convenience, more efficient use of resources and better mobility for individuals with physical disabilities.

However, they would also require the car’s programmers to choose between the lives of its passengers and those of pedestrians in the event of an accident. Sinnott-Armstrong argued that the United States should outlaw self-driving cars that favor their passengers over pedestrians, noting that purchasing a car that protects pedestrians could clear one’s conscience.

“I thought the project they’re working on is interesting, and it made me want to get more involved in that kind of stuff,” first-year Miles Turpin said. “It’s cool to think that this kind of stuff is happening on campus.”

First-year Kameron Sedigh said the talk piqued his interest in issues surrounding artificial intelligence.

“No matter whether you’re really into it or if you’re like me and just here to see if you’re interested in it, it brought up a lot of questions for me,” he said. “I expected myself to sit here in silence, but I ended up having to think and ask questions.”
