Alexa, algorithms, robot doctors, oh my! Students discuss tech ethics at HackDuke

Joanne Kim presenting at “Ethical Artificial Intelligence.”

Amid the 24-hour coding frenzy of HackDuke, some students took a break to listen to presentations about technology ethics. 

HackDuke, a hackathon that emphasizes social change, featured a workshop called “Ethical Artificial Intelligence.” Members of Ethical Tech, an undergraduate student group, delivered presentations on the intersection of tech with medicine, surveillance and the criminal justice system.

“I am concerned about the lack of discussion of ethics,” senior Justin Sherman said, “not only at Duke but in general.” 

Sherman, the youngest policy fellow at think tank New America and a fellow at the Duke Center on Law and Technology at the Law School, said that he was motivated to found Ethical Tech with Cassi Carley, Trinity ‘11 and Graduate School ‘18, because of what they viewed as a lack of meaningful space to debate tech ethics.

Sherman asserted that democratically elected leaders use some of the same digital tools as autocracies like Turkey and China, citing facial recognition in public places, IP blacklisting and hidden cameras and microphones as examples. Technology companies are happy to help for the sake of their bottom lines, he said, as in the case of IBM aiding Philippine President Rodrigo Duterte.

Sophomore Kamran Kara-Pabani addressed the dangers of AI, arguing that risk-assessment algorithms—in use across the United States to judge the likelihood of recidivism—can perpetuate racial bias.

AI is coded by humans, he said, which means that the technology reflects human biases in its output. Variables that can be incorporated into algorithms, like total arrests in one's home neighborhood or personal attitude toward police, can serve as proxies for race and disadvantage some groups more than others.

There is “high danger in blindly trusting algorithms,” Kara-Pabani said. 

He argued that courts have an obligation to be forthcoming about their algorithms and that defendants must be able to inspect the factors those algorithms weigh for evidence of bias. Until citizens have fully vetted these algorithms, Kara-Pabani explained, they cannot be taken at face value when individuals' liberty is at stake.

Although Sherman did not condemn AI in the public sphere, including voice-controlled assistants like Amazon's Alexa or Apple's Siri, he said that long-term exposure could make people too comfortable with constant data collection.

“Smartphones aren’t harvesting your data, but the ubiquity makes people more accustomed to it,” he said. 

Sophomore Joanne Kim discussed privacy in medical settings, arguing that limited regulations leave patients' medical records unprotected in the event of a data breach. Additionally, when morality is pitted against profit, doctors may be more likely to adopt cheaper but untested technology, endangering patients' lives in the process.

Kim added that AI could also take over tasks traditionally performed by humans, such as surgeries and physical examinations.

She envisioned a future in which AI handles mundane work while leaving room for humans to intervene in emergencies, though she noted that this outcome is not guaranteed.

“AI could alert doctors and nurses when something goes awry, but who’s to say that someday doctors won’t need to be there? We already trust the technology in our phones,” Kim said.
