The rapid expansion and application of artificial intelligence, from defense and intelligence to media and information, have left the technology vulnerable to exploitation by malicious entities and individuals, a recent report warns.
The Malicious Use of Artificial Intelligence report – composed by 26 experts from 14 institutions in academia, industry and civil society – identifies three particularly vulnerable areas: digital, physical and political.
The 100-page review names threats from terrorists and rogue states, ranging from weaponized drones to media manipulation to enhanced hacking, capable of inflicting significant damage on physical and financial security.
"We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real," Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and co-author of the report, told the BBC.
Naming a few of the threats, Oxford research fellow Miles Brundage said, "AI will alter the landscape of risk for citizens, organizations and states – whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast."
To meet these risks, the researchers counsel scientists and engineers to consider how they might mitigate misuse of their creations, and urge governments to put laws in place to protect the emerging technology from exploitation.
Both entities must work together to guard against threats, the report advises, and strive to involve more stakeholders in mitigation of the risks.
To design security mechanisms, experts suggest examining how longer-established disciplines, such as computer security, have handled dual use risks.
Though the "hype" surrounding AI's capacity for good has wowed the world over the past decade, Ó hÉigeartaigh warns it's time to take its capacity for harm seriously.
"For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don't work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this," hÉigeartaigh said.