
AI’s Malicious Potential Front and Center in New Report

As beneficial as artificial intelligence can be, it has a dark side, too. That dark side is the focus of a 100-page report jointly released Tuesday by a group of technology, academic and public interest organizations.

Threat actors will use AI to expand the scale and efficiency of their attacks, the report predicts. They will employ it to compromise physical systems such as drones and driverless cars, and to broaden their privacy-invasion and social-manipulation capabilities.

The researchers also expect novel attacks that exploit an improved capacity to analyze human behaviors, moods and beliefs from available data.

“We need to understand that algorithms will be really good at manipulating people,” said Peter Eckersley, chief computer scientist at the Electronic Frontier Foundation.

“We need to develop individual and society-wide immune systems against them,” he told the E-Commerce Times.

The EFF is one of the sponsors of the report, along with the University of Oxford’s Future of Humanity Institute, the University of Cambridge’s Centre for the Study of Existential Risk, the Center for a New American Security, and OpenAI.

More Fake News

Manipulating human behavior is an especially significant concern in authoritarian states, but it also may undermine the ability of democracies to sustain truthful public debates, notes the report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

“We’re going to see the generation of more convincing synthetic or fake imagery and video, and a corruption of the information space,” said Jack Clark, strategy and communications director at OpenAI, a nonprofit research company cofounded by Elon Musk, CEO of Tesla and SpaceX.

“We’re going to see more propaganda and fake news,” Clark told the E-Commerce Times.

There is a critical connection between computer security and the exploitation of AI for malicious purposes, the EFF’s Eckersley pointed out.

“We need to remember that if the computers we deploy machine learning systems on are insecure, things can’t go well in the long run, so we need massive new investments in computer security,” he said.

“AI could make cybersecurity either better or worse,” Eckersley continued, “and we really need it to be used defensively, to make our devices more stable, secure and trustworthy.”

Hampering Innovation

In response to the changing threat landscape in cybersecurity, researchers and engineers working in artificial intelligence development should take the dual-use nature of their work seriously, the report recommends. That means misuse-related considerations need to influence research priorities and norms.

The report calls for a reimagining of norms and institutions around the openness of research, including prepublication risk assessment in technical areas of special concern, central access licensing models, and sharing regimes that favor safety and security.

However, those recommendations are troubling to Daniel Castro, director of the Center for Data Innovation.

“They could slow down AI development. They would be moving away from the innovation model that has been successful for technology,” he told the E-Commerce Times.

“AI can be used for a lot of different purposes,” Castro added. “AI can be used for bad purposes, but the number of people trying to do that is fairly limited.”

Breakthroughs and Ethics

By releasing this report, the researchers hope to get ahead of the curve on AI policy.

“In many technology policy conversations, it’s fine to wait until a system is widely deployed before worrying in detail about how it might go wrong or be misused,” explained the EFF’s Eckersley, “but when you’ve got a drastically transformative system, and you know the safety precautions you want will take many years to put in place, you have to start very early.”

The problem with public policymaking, however, is that it rarely reacts to problems early.

“This report is a ‘canary in the coal mine’ piece,” said Ross Rustici, senior director of intelligence services at Cybereason.

“If we could get the policy community moving on this, if we could get the researchers to focus on the ethics of the implementation of their technology rather than the novelty and engineering of it, we’d probably be in a better place,” he told the E-Commerce Times. “But if history shows us anything, those two things almost never happen. It’s very rare that we see scientific breakthroughs deal with their ethical ramifications before the breakthrough happens.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
