Duke professors illustrate human error in research at town hall

In the wake of a $112.5 million settlement, research integrity has weighed on the minds of Duke students and staff members.

The Sept. 11 research town hall, an exploration of the limitations, potential misuses and reproducibility of research data, made this abundantly clear. Each speaker brought a new angle to the discussion of the data-driven, and inevitably error-prone, human enterprise of science and research.

“I think these two things come together: the technical and the human,” Vice President for Research Lawrence Carin said. “Scientific integrity is like a diet. It is not something you do every now and then. It is something that you have to think about all the time. And you have to work at all the time. And like a diet, it’s hard. And so these town halls are our diet.”

The event, moderated by Carin, James L. Meriam professor of electrical and computer engineering, and hosted by the Duke Office of Scientific Integrity, featured Dan Ariely, James B. Duke professor of psychology and behavioral economics; Steven Grambow, assistant professor of biostatistics and bioinformatics; and David Carlson, assistant professor of civil and environmental engineering.

Ariely spoke to the behavioral component of ethical research, inviting the audience to consider the thinking that drives dishonesty, both in research and in their personal lives.

“We have many human values. We have our own benefit, the benefit of the people we love, not wanting to offend someone, financial success, all kinds of things,” he said. “And sadly, not all human values point in the same direction all the time. What do we do when some of those values don’t fit?” 

He illustrated the motivations behind dishonesty with a die-rolling experiment. A participant was asked to choose the top or bottom of a die before rolling it, with the understanding that the number landing on the chosen side would be the amount the researcher paid them.

The participant was not required to disclose which side they had chosen; they simply reported the final number. After rolling the die 20 times, participants ended up with payouts far greater than chance alone would predict.

“We find that people are surprisingly lucky,” Ariely joked. 
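The gap between reported and expected payouts is easy to quantify. As a rough sketch, not part of the town hall presentation: opposite faces of a die sum to seven, so a participant who honestly commits to a side before the roll averages 3.5 per roll, while one who "remembers" having picked whichever side landed higher averages 5. The short simulation below, written purely for illustration, makes the comparison concrete.

```python
import random

def honest_payoff(rolls=20):
    """Participant commits to 'top' or 'bottom' before each roll and reports that side."""
    total = 0
    for _ in range(rolls):
        top = random.randint(1, 6)
        bottom = 7 - top                        # opposite faces of a die sum to 7
        total += random.choice([top, bottom])   # the side chosen before the roll
    return total

def opportunistic_payoff(rolls=20):
    """Participant 'remembers' having chosen whichever side pays more."""
    total = 0
    for _ in range(rolls):
        top = random.randint(1, 6)
        total += max(top, 7 - top)
    return total

trials = 10_000
print(sum(honest_payoff() for _ in range(trials)) / trials)         # ~70 (3.5 per roll)
print(sum(opportunistic_payoff() for _ in range(trials)) / trials)  # ~100 (5 per roll)
```

Over 20 rolls, the difference amounts to an expected payout of roughly 100 rather than 70, which is what makes the "surprising luck" detectable in aggregate.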

When the same study was run with participants seated next to their significant others, he found that people have an easier time justifying dishonesty when it benefits a team or other people.

He also recounted a study from Harvard University in which a drunk outlier participant pulled group performance down and destroyed the results for the whole experiment. The individual’s removal would have confirmed the hypothesis, yet such an act would call into question research ethics. 

Although the group eventually substantiated the hypothesis without removing the individual from the study, the question still remained of what to do in such a case.

Ariely asserted that the solution lies in setting clear rules for an experiment, such as excluding drunk participants, before it begins, because of “our conflicts of interest, our motivation to see reality in a certain way.” Above all, he emphasized the need to respect all experimental results.

“We need to celebrate all results and not just the results that agree with our initial intuition,” Ariely said.

Complications can arise when models grow too complex as well. Taking a technical look at machine learning, Carlson described the problem of overfitting, in which an increasingly complex model fits existing data better and better but fails to predict new or future data, and the trouble this causes when such models are used in the real world.

It is for this reason that Carlson stressed the importance of setting aside a one-time-use test set. The test set aims to mimic a new real-world experiment, a relatively straightforward step that is often not taken, he argued.
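To make the idea concrete, here is a minimal sketch, not drawn from Carlson's talk, of how overfitting shows up when part of the data is held out and touched only once at the end. The data, polynomial degrees and error metric are all hypothetical, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying trend (hypothetical data).
x = rng.uniform(-1, 1, 40)
y = 2 * x + rng.normal(scale=0.3, size=40)

# Hold out half the data as a one-time-use test set.
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

def errors(degree):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit on training data only
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (1, 5, 12):
    train_err, test_err = errors(degree)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Training error keeps falling as the polynomial degree grows, while error on the held-out points rises, which is the pattern Carlson warned about; reusing the test set for model tuning would gradually turn it into a second training set and hide the problem.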

Grambow added a statistical perspective to the discussion, countering the view some researchers hold of “data as a thing unto itself.”

There is a “research protocol” that occurs before the data is gathered, he said, and each step is critical for obtaining a “clear inference on the study question.” This process involves “how the data is to be collected, how it is to be managed, how it is to be analyzed,” he explained.

“Statistics as a discipline has to some degree failed, because we often teach statistics as ‘here’s some data, let’s analyze it,’ and that’s not how science actually works,” Grambow said.

To illustrate his point about correcting faulty data, he pointed to a case in pediatric psychology in which consultation data was erroneous and led to incorrect conclusions. The false conclusions stemmed from a failure to reverse-code some of the questions on a survey, he said.

Problems like those in the pediatric psychology case may occur because the programmer, data collector and statistician “never had a conversation,” Grambow added.
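Reverse coding is a routine preprocessing step for surveys in which some items are worded negatively, so a high raw score on those items actually indicates a low level of the trait being measured. The snippet below is a hypothetical illustration of the step that was missed, with made-up item names and a 1-to-5 scale; it is not the study's actual data.

```python
# Hypothetical 1-to-5 Likert responses; items marked True are negatively
# worded and must be reverse-coded before scores are summed.
responses = {"q1": 4, "q2": 2, "q3": 5}
reverse_keyed = {"q1": False, "q2": True, "q3": False}

SCALE_MAX = 5

def recode(item, value):
    # Reverse coding maps 1->5, 2->4, ..., 5->1 on a 1-to-5 scale.
    return (SCALE_MAX + 1 - value) if reverse_keyed[item] else value

total = sum(recode(item, value) for item, value in responses.items())
print(total)  # 4 + 4 + 5 = 13; skipping the recode step would give 11 instead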

“We need to change things so that people are intrinsically motivated to do good science,” he said. “We need better team science. We need to create better incentives. We need to increase resources for doing transparent and reproducible science, and we need to educate people so they know how to do reproducible research.”

