Patients may have questions about artificial intelligence-driven healthcare—which is at odds with protecting "trade secrets."
A group of Duke researchers recently received a $196,000 grant to address the problem and help patients understand the rationale behind artificial intelligence-enabled health care delivery.
Although machine learning is becoming increasingly sophisticated, problems still arise when it is introduced into new industries. In health care delivery, patients have a right to understand the reasoning behind their course of treatment. But when software is responsible for making clinical decisions, disclosing the rationale may expose important trade secrets, leaving software companies vulnerable to having their software stolen and reproduced.
To address this issue, faculty from the Duke Law Center for Innovation Policy (CIP) and the Duke-Margolis Center for Health Policy will collect data on private and public investments in AI-based software. They will also interview stakeholders including developers, purchasers, regulators, users and patients.
Arti Rai, Elvin R. Latty Professor of Law and co-director of CIP, said that this project will be the first to address this specific issue.
“While software development has always involved trade secrecy, the importance of trade secrecy as an innovation incentive may have increased as a consequence of challenges associated with securing and enforcing software patents,” said Rai, who is a principal investigator on the grant, in a news release. “For this reason, the principal regulator of AI-based software, the FDA, as well as professional organizations, providers and insurers are actively interested in the question of how to balance explainability and trade secrecy.”
The researchers will make recommendations on the appropriate levels of explainability for both health care professionals and patients, and on the legal precautions needed to allow software companies to maintain their trade secrets. Their research is funded by a one-year grant of more than $196,000 from The Greenwall Foundation, an organization that supports bioethics research.
Gregory Daniel, clinical professor in the Fuqua School of Business, noted that the collaborative nature of the project will enable the researchers to make progress quickly on their goals.
“By working in collaboration with Duke Law, we can move much more quickly to identify real-world policy approaches to support emerging technologies that incorporate AI in helping physicians and patients make better healthcare decisions," Daniel said in the release.