Emerging tech and smart policy

simple complexity

While we may be far from artificial intelligence with the capability and malice of the world-ending “Skynet” of the Terminator universe, A.I. and other innovative technologies are advancing quickly.

Certain applied usages of machine learning, neural networks and other conceptions of artificial intelligence are revolutionizing transportation, changing how we use big data and altering how we interact with information more generally. As A.I. gets “smarter,” we will need to consider how to address this from a policy standpoint.

When discussing artificial intelligence, it is important to distinguish between general intelligence and specific intelligence. General intelligence is exactly what it sounds like: the broad common sense and reasoning ability needed to handle everyday tasks. Specific intelligence, on the other hand, is rooted in a particular ability or skill.

For example, a chess app might have a very powerful algorithm that could beat even the very best chess player in the world. However, a toddler would be smarter than this specific A.I. at pretty much anything in the world besides chess. The algorithm has specific intelligence for chess, but has no notion of general intelligence and no intuition about the world or how to interpret it.

Smart policy will be necessary for regulating both specific and general intelligence. The immediate questions are rooted in specific applications of A.I. (think self-driving cars, kidney distribution algorithms, privacy issues, etc.), while the longer-term questions are rooted in the development of general intelligence (think neural networks, brain emulations and other pathways to seed general artificial intelligence).

For now, one compelling policy area is transportation technology, specifically self-driving cars. There are plenty of mind-bending questions to consider. Who is responsible when an accident happens: the car company, the passenger or someone else? In an emergency, should a car be designed to save the most lives overall or to protect its own occupants? If a driverless car malfunctions, is the car company always liable?

In the realm of self-driving cars, we as a society also should reconsider our organ donation system. It is a morbid but important fact that a significant share of donated organs come from victims of automobile accidents, and there is already a shortage of kidney donors as it is. When self-driving cars become the norm and accidents become far rarer, that shortage will only grow, putting even more weight on an already struggling organ donation network.

If we truly want to question the ethics of self-driving cars, we also must reconsider the ethics of organ donation and perhaps legal organ markets, as has been suggested by economists and ethicists alike. To be clear, I am not endorsing this suggestion, but rather am listing it as a proposed policy response.

These questions are becoming more relevant to the real world every day. Uber recently started testing self-driving cars in Pittsburgh, and Google, Tesla and others have impressive projects underway. Scholars like Duke’s own Professors Walter Sinnott-Armstrong and Vincent Conitzer are already doing work in this area, but wider work from more researchers and more fields is necessary to better understand where we stand with specific applications of A.I. The sooner we figure these questions out, the better.

Artificial general intelligence may be further off, though not as far off as one might think, and policymakers and industry leaders must take much more seriously the methods by which we move toward it.

Since the potential value of general A.I. is so large, private researchers and corporate R&D departments will be in a race of sorts. If cutting ethical and safety corners lets a team reach general A.I. more quickly, a “race to the bottom” could develop, driven by a prisoner’s-dilemma dynamic. While it would be socially optimal for all of the research teams to take precautions and follow ethical guidelines, that is less likely to happen when everybody is racing. This dangerous Nash equilibrium makes the race to seed general intelligence something to take seriously from a safety and policy perspective.
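To make the dilemma concrete, consider a purely illustrative payoff table for two competing labs (the numbers are hypothetical, not drawn from any real estimate; higher is better for that lab):

                            Lab B takes precautions    Lab B cuts corners
Lab A takes precautions            (3, 3)                    (0, 5)
Lab A cuts corners                 (5, 0)                    (1, 1)

Whatever the other lab does, each lab does better for itself by cutting corners, so both end up at (1, 1), even though mutual caution at (3, 3) would leave both labs, and society, better off. That is the structure of the race to the bottom.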

This race, believe it or not, has already begun. One organization, OpenAI, is attempting to mitigate the risk of this “race to the bottom” by encouraging researchers to adopt ethical standards in their work.

While private efforts are to be encouraged, they might not be strong enough. This is where policymakers could step in and set smart legal guidelines for what people can and cannot do in A.I. research.

Policymakers must learn from experts in the field to better understand the timeline toward more robust artificial intelligence, and to set ethical and safety guidelines to which researchers must adhere. These policies should be designed so that research teams do not feel disproportionately disadvantaged in the “race” to general intelligence for following safety and ethical standards, thereby mitigating the risks posed by reckless research that codes first and asks questions later.

While it might sound very abstract or “out there,” there is a strong need to pose these sorts of questions. We need to collectively decide what role we want artificial intelligence to play in our lives, and in the lives of our children. Policymakers, researchers, nonprofits and corporations should all have a seat at the table to determine how to most effectively move forward.

Author’s note: This piece is a modified version of my Duke Political Review column published on Apr. 3.

David Wohlever Sánchez is a Trinity sophomore. His column, “simple complexity,” runs on alternate Wednesdays.
