Faculty members across Duke’s computer science and electrical and computer engineering departments have been working to promote equity in their fields.
Equity issues in computing can arise at the ground level with the development of software. Just as an author’s beliefs and implicit biases can reveal themselves in literature, the way that a programmer crafts a predictive model can be shaped by their own beliefs and biases. The lack of diversity in the field means that algorithms can unintentionally perpetuate stereotypes.
This phenomenon — referred to as algorithmic bias — can have grave consequences given society’s reliance on predictive models in a number of critical areas. For instance, a 2019 study found that an algorithm used to estimate future medical costs for over 200 million hospital patients across the U.S. favored white patients over Black patients.
It is critical that high-stakes models do not place marginalized subgroups at an even further disadvantage, according to Jian Pei, professor of electrical and computer engineering.
Deconstructing these biases has been Pei’s principal mission. In tackling algorithmic bias, Pei adopts a three-pronged approach: raising awareness, effectively evaluating bias and implementing equitable technology.
“We should use case studies and examples to show in detail the potential for a lack of fairness, diversity and equity to happen in algorithm design. When people are aware of these topics, the next step is how we can measure them,” Pei said. “People don’t know how to use those concepts, or how to approach those concepts. It is important to build quantitative measures, such as how unfair, or how close to being fair [a model is], and what the possible trade off is.”
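The quantitative measures Pei describes can be illustrated with one common fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups. This is a minimal sketch for illustration only; the article does not attribute any specific metric to Pei, and the function names and example data below are invented.

```python
# Illustrative sketch of one quantitative fairness measure:
# demographic parity difference -- the gap in positive-prediction
# rates between two groups of people. All names/data are hypothetical.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.
    0.0 means the model treats the groups identically on this measure;
    larger values quantify 'how far from fair' the model is."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Example: a model approves 75% of group A but only 25% of group B.
gap = demographic_parity_difference([1, 1, 1, 0], [1, 0, 0, 0])
print(gap)  # 0.5
```

A measure like this gives the vocabulary Pei calls for: rather than saying a model "seems unfair," one can report a concrete gap and weigh the trade-off of reducing it.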
Pei explained that fairness should be a consideration at multiple phases of the data science life cycle, including how data is collected, sampled for analysis and evaluated post-processing. Adjusting results that appear to benefit certain subgroups over others is insufficient; instead, models should be designed to perform unbiased calculations.
“We need to be proactive and build the mechanisms to prevent future problems. This is not just fixing an existing problem, we need to improve the whole process going forward,” Pei said.
Pei is not alone in his efforts. Cynthia Rudin, Earl D. McLean, Jr. Professor of computer science, electrical and computer engineering, statistical science and biostatistics & bioinformatics, has emphasized interpretability and transparency through her work. Rudin has received acclaim for her research examining biases in black box models — algorithms whose internal workings are concealed — used in healthcare and criminal justice settings.
“In criminal justice, we are trying to show that black box models can be replaced with interpretable models without loss of accuracy, which makes the models less susceptible to invisible biases and data entry mistakes that black boxes have,” Rudin wrote in an email.
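The contrast Rudin draws can be sketched with a toy example of an interpretable model: a simple points-based score whose logic is fully visible and auditable, unlike a black box. This is not Rudin's actual model; the features, weights, and threshold below are invented for illustration.

```python
# Hypothetical sketch of an interpretable, points-based risk score.
# Every factor's contribution is explicit, so biases and data-entry
# mistakes are visible -- the property Rudin contrasts with black boxes.

def risk_score(prior_offenses, age_under_25, employed):
    """Transparent additive score: each term can be read and audited."""
    score = 0
    score += 2 * min(prior_offenses, 3)   # 2 points per prior, capped at 3
    score += 1 if age_under_25 else 0     # 1 point for age under 25
    score -= 1 if employed else 0         # 1 point credit for employment
    return score

def predict_high_risk(score, threshold=4):
    """Anyone can see exactly why a case crosses the threshold."""
    return score >= threshold

s = risk_score(prior_offenses=2, age_under_25=True, employed=False)
print(s, predict_high_risk(s))  # 5 True
```

Because each input's effect is stated outright, a reviewer can spot a mistyped prior-offense count or question a weight directly, whereas a black box offers no such foothold.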
As a professor, Rudin has approached equity in the classroom from several angles. For example, she has made her courses more accessible by posting material online, both for students unable to attend class and for members of the general public. Rudin also updated her curriculum last year to incorporate interpretability as a key concept.
“As soon as we start working on something high-stakes where mistakes matter, like healthcare or loan decisions, it really matters whether we understand what the predictive models are doing,” Rudin wrote. “I decided that if you are going to be an expert in machine learning, you really must know something about interpretability.”
Rudin said that her students are “the first students to actually be taught this material in a classroom anywhere in the world.”
Gautam Sirdeshmukh is a Trinity senior and a staff reporter for the news department. He was previously the health & science news editor of The Chronicle's 117th volume.