Duke researchers use machine learning to defend personal information

Two Duke researchers have found a way to confuse machine learning systems, potentially opening a new avenue for protecting online privacy.

Neil Gong, assistant professor of electrical and computer engineering, and Jinyuan Jia, a Ph.D. candidate in electrical and computer engineering, have demonstrated the potential for so-called “adversarial examples,” or deliberately altered data, to confuse machine learning systems. The technique could be used to fool attackers who rely on these systems to analyze user data.

“We found that, since attackers are using machine learning to perform automated large-scale inference attacks, and the machine learning is vulnerable to those adversarial examples, we can leverage those adversarial examples to protect our privacy,” Gong said. 

Machine learning systems are tools for statistical analysis. For example, in 2016, Donald Trump’s presidential campaign hired Cambridge Analytica to collect and analyze data about Facebook users. 

The firm’s algorithm was able to use information about the pages that users had liked to predict their personalities and political orientations—an example of how a system uses public data to infer private information, or an inference attack, Gong wrote in an email.

These systems have a vulnerability: adversarial examples, or pieces of data that have been designed to confuse them. Gong said that a self-driving car can recognize a stop sign—but it might think that a stop sign with a sticker on it is a speed limit sign. 

“This basically means we can somehow change my data such that machine learning makes incorrect predictions,” he said. 

In the case of Cambridge Analytica, adding just a few fake page likes to a Facebook profile could throw off algorithms that try to analyze data. 

To test this idea, Gong and Jia turned to the Google Play store. They collected users’ app ratings and location data and built a machine learning system that used those ratings to guess a user’s home city, guessing correctly about 40% of the time. Adding three to four fake ratings made the predictions no better than a random guess.
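To illustrate the general idea, here is a minimal sketch, using entirely synthetic data, of how a handful of fake entries can derail this kind of inference. It is not the authors’ AttriGuard algorithm: it simply trains a toy classifier that predicts a city from which apps a user has rated, then greedily adds a few fake ratings that most reduce the classifier’s confidence in the true city.

```python
# A toy illustration (not the authors' AttriGuard method) of defeating an
# inference attack with a few fake ratings. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_apps, n_cities = 2000, 300, 10

# Synthetic "users": each city has a slightly different taste profile,
# so a user's app ratings leak their location.
city = rng.integers(0, n_cities, size=n_users)
taste = rng.random((n_cities, n_apps)) * 0.15
ratings = (rng.random((n_users, n_apps)) < taste[city]).astype(float)

# The attacker's model: predict the city from the rated/not-rated vector.
clf = LogisticRegression(max_iter=1000).fit(ratings, city)

def add_fake_ratings(x, true_city, k=4):
    """Greedily add k fake ratings that most reduce confidence in the true city."""
    x = x.copy()
    for _ in range(k):
        candidates = np.where(x == 0)[0]  # apps the user never rated
        if candidates.size == 0:
            break
        # Rating app j shifts the true-city logit by roughly coef_[true_city, j];
        # pick the app that pushes that logit down the most.
        scores = clf.coef_[true_city, candidates]
        x[candidates[np.argmin(scores)]] = 1.0
    return x

user = ratings[0]
print("before:", clf.predict([user])[0], "true city:", city[0])
print("after: ", clf.predict([add_fake_ratings(user, city[0])])[0])
```

In this toy setup, a few well-chosen fake ratings are often enough to change the prediction, echoing the researchers’ broader point: the same machine learning that powers the attack also exposes the levers for defeating it.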

The result—a system that Gong and Jia call “AttriGuard” in their paper—could be a tool for companies like Facebook to defend themselves from third-party attackers.

“I think it’s not hard for Facebook to do it,” Jia said, noting that the company could use its own machine learning systems to add fake likes to users’ profiles.

Gong said that individual users could also use adversarial examples to prevent Facebook from analyzing their data. Jia explained that this might take the form of something like a browser extension that adds fake likes to a user’s profile.

The research does not provide a perfect solution. Gong said that malware could record the sound of users’ keystrokes and that machine learning algorithms could analyze those recordings to steal passwords. In that scenario, there is no practical way to alter the sound data, short of using a physical speaker to play fake typing sounds.

Other attacks, however, more closely resemble what took place during the Cambridge Analytica scandal. Machine learning systems could use statistical analysis to identify which websites people are visiting, even when their traffic is encrypted, Gong said.

“There are many other machine learning-based inference attacks you can also use adversarial examples to defend against,” Jia said. 

Gong and Jia have made headlines for their research, which was featured in a recent Wired article about adversarial examples. 

The pair has already begun further research on machine learning attacks, Gong explained, examining ways to trick systems that try to determine whether a particular data point belongs to a given data set.

“I think that is very important … defending against this kind of attack,” Jia said, “because machine learning [is becoming] very popular to perform this kind of attack.”


Matthew Griffin

Matthew Griffin was editor-in-chief of The Chronicle's 116th volume.
