February 21, 2018

‘Dual-Use’ AI Poses New Security Threats

While the genie may already be out of the bottle, the rapid growth and broad availability of AI and machine learning technology, along with an expanding list of development tools, are prompting critics to highlight future security concerns and the need to consider up front the potential for malicious use of the technology.

A report released this week by researchers in the U.S. and U.K., including members of OpenAI, a group that promotes “safe artificial general intelligence,” urges greater consideration of unforeseen security threats posed by ubiquitous AI.

Among the first steps is acknowledging the “dual-use” nature of AI, which can be turned to “public good or harm,” notes the report released Tuesday (Feb. 20) by a cross-section of AI researchers affiliated with the Future of Humanity Institute at the University of Oxford.

The study focuses on security domains as a way of underscoring the upside and downside of AI and machine learning, to wit: the same algorithm used to spot the junk mail that ends up in your spam folder also has potential malware applications. The researchers argue that these malicious uses must be considered and mitigated before code is released to the open-source community or new algorithms are written and deployed.
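To make the dual-use point concrete (this example is not from the report): the handful of lines below sketch a bare-bones text classifier of the kind behind spam filters, using scikit-learn and invented toy data. Retrained on different labels, the identical pipeline could just as easily rank which phishing drafts are most likely to slip past a filter.

```python
# Minimal sketch with hypothetical data: the same pipeline that flags spam
# can be retrained on any labeled text, which is what makes it dual-use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data; a real filter would use millions of labeled messages.
messages = [
    "Claim your free prize now", "Your invoice is attached",
    "Cheap pills, limited offer", "Meeting moved to 3pm",
]
labels = ["spam", "ham", "spam", "ham"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(messages, labels)

# Defensive use: score incoming mail.
print(clf.predict(["Free prize waiting for you"]))  # likely 'spam'

# The dual-use concern: swap the labels (e.g. 'evaded filter' vs. 'caught')
# and the identical code ranks which phishing drafts slip past defenses.
```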

“It is clear that AI will figure prominently in the security landscape of the future, that opportunities for malicious use abound and that more can and should be done,” warns the report.

Focusing on digital, physical and political security (e.g., “fake news” and information warfare), the report examines a range of unintended consequences posed by dual-use AI. Among them are the expansion of existing security threats, including the potential proliferation of attacks as well as potential targets.

AI systems could also be misused to introduce new threats beyond the capabilities of human hackers. One recent example is the wave of social media bots unleashed by “troll farms” to stoke domestic political tensions.

The authors also warn of evolving security threats. “We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute and likely to exploit vulnerabilities in AI systems.”

Among those vulnerabilities, observers note, is a lack of understanding about how current AI systems learn and infer from large datasets, an uncertainty that makes those systems prone to manipulation.
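One well-documented form of such manipulation is the adversarial example, in which a small, deliberate perturbation of the input flips a model’s output. The toy sketch below is not drawn from the report; the “detector” is a made-up logistic regression with invented weights. It shows the fast-gradient-sign idea in plain NumPy: because the gradient of a linear model with respect to its input is just the weight vector, nudging each feature against that gradient pushes a flagged input toward a “benign” score.

```python
# Toy illustration of input manipulation (FGSM-style), assuming a simple
# logistic-regression "detector" with made-up weights -- not the report's code.
import numpy as np

w = np.array([0.9, -1.2, 0.4, 0.7])   # hypothetical learned weights
b = -0.1

def predict(x):
    """Probability that x is 'malicious' under the toy detector."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, -0.5, 1.0, 0.3])   # an input the detector flags
print("before:", predict(x))           # ~0.78, well above 0.5

# For this linear model the gradient of the score w.r.t. the input is w;
# nudging each feature against the sign of the gradient lowers the score.
eps = 0.6
x_adv = x - eps * np.sign(w)
print("after: ", predict(x_adv))       # ~0.35, now scored 'benign'
```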

The authors therefore offer a set of “look-before-you-leap” recommendations for blunting the potential malicious use of AI technologies. Along with bringing policymakers into the mitigation and prevention process, the report urges AI developers to “take the dual-use nature of their work seriously.”

There are indications that those recommendations are already being implemented in the form of “beneficial AI” conferences and other technical efforts designed to rein in malicious AI applications.

The ongoing problem, the report notes, is the accelerating pace of AI and machine learning development along with open-source availability of code and development tools.

OpenAI and other groups have attempted to tackle the “AI safety” problem by generating possible scenarios for malicious use of the technology. Among the scenarios they considered were “persuasive ads” generated by AI systems to target security system administrators. Another involves using neural networks and “fuzzing” techniques to create computer viruses capable of automating exploits.
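Fuzzing on its own is a routine testing technique; the report’s concern is that learned models could steer mutation toward exploitable crashes far more efficiently than blind randomness. For context, a purely random (non-AI) mutation fuzzer is only a few lines, as in the hypothetical sketch below, where both the target “parser” and its bug are invented.

```python
# A bare-bones random mutation fuzzer (illustrative baseline only; the
# report's concern is models *guiding* such mutation toward exploits).
import random

def parse(data: bytes) -> None:
    """Stand-in for the program under test; raises on certain inputs."""
    if data[:2] == b"MZ" and len(data) > 8 and data[8] == 0xFF:
        raise ValueError("parser crash")  # the kind of bug a fuzzer hunts

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    out = bytearray(seed)
    for _ in range(n_flips):
        i = random.randrange(len(out))
        out[i] = random.randrange(256)    # overwrite a random byte
    return bytes(out)

seed = b"MZ" + bytes(16)                  # a valid-looking starting input
for trial in range(100_000):
    candidate = mutate(seed)
    try:
        parse(candidate)
    except ValueError:
        print(f"crash found after {trial} trials: {candidate[:10].hex()}")
        break
```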

“AI challenges global security because it lowers the cost of conducting many existing attacks, creates new threats and vulnerabilities and further complicates the attribution of specific attacks,” OpenAI noted in a blog post.

Recent items:

How to Make Deep Learning Easy

AWS, Microsoft Team on ‘Open AI’

 
