The next 5 years could be very bleak according to a new report on malicious AI

While many analysts have been at pains to stress that artificial intelligence (AI) will ultimately benefit humanity, a new report cautions that the technology may also be used maliciously.

The report, written by 26 authors from 14 institutions spanning academia, civil society, and industry (including participants from Oxford, Cambridge, and Yale universities), details how new AI technology could be used to threaten digital, physical, and political security over the next five years.

“Artificial intelligence (AI) and machine learning (ML) have progressed rapidly in recent years, and their development has enabled a wide range of beneficial applications,” it said.

“We are excited about many of these developments, though we also urge attention to the ways in which AI can be used maliciously.

“We analyze such risks in detail so that they can be prevented or mitigated, not just for the value of preventing the associated harms, but also to prevent delays in the realization of the beneficial applications of AI,” its authors said.

A summary of the biggest risks identified in the report is detailed below.


Digital 

  • Automation of social engineering attacks – Victims’ online information is used to automatically generate custom malicious websites/emails/links they would be likely to click on, sent from addresses that impersonate their real contacts, using a writing style that mimics those contacts. As AI develops further, convincing chatbots may elicit human trust by engaging people in longer dialogues, and perhaps eventually masquerade visually as another person in a video chat.
  • Automation of vulnerability discovery – Historical patterns of code vulnerabilities are used to speed up the discovery of new vulnerabilities, and the creation of code for exploiting them.
  • More sophisticated automation of hacking – AI is used (autonomously or in concert with humans) to improve target selection and prioritization, evade detection, and creatively respond to changes in the target’s behavior. Autonomous software has been able to exploit vulnerabilities in systems for a long time, but more sophisticated AI hacking tools may exhibit much better performance, both compared to what has historically been possible and, ultimately (though perhaps not for some time), compared to humans.
  • Human-like denial-of-service – Imitating human-like behavior (e.g. through human-speed click patterns and website navigation), a massive crowd of autonomous agents overwhelms an online service, preventing access from legitimate users and potentially driving the target system into a less secure state.
  • Automation of service tasks in criminal cyber-offense – Cybercriminals use AI techniques to automate various tasks that make up their attack pipeline, such as payment processing or dialogue with ransomware victims.
  • Prioritising targets for cyber attacks using machine learning – Large datasets are used to identify victims more efficiently, e.g. by estimating personal wealth and willingness to pay based on online behavior.
  • Exploiting AI used in applications, especially in information security – Data poisoning attacks are used to surreptitiously maim or create backdoors in consumer machine learning models (a toy sketch of this follows the list).
  • Black-box model extraction of proprietary AI system capabilities – The parameters of a remote AI system are inferred by systematically sending it inputs and observing its outputs (see the second sketch after this list).
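
To make the data-poisoning item concrete, below is a minimal sketch of the idea: a toy classifier is trained on deliberately corrupted labels, and the damage is measured against clean test data. The library (scikit-learn), the synthetic dataset, and the crude label-flipping strategy are illustrative assumptions of ours, not techniques prescribed by the report.

```python
# Toy illustration of data poisoning: an attacker who can tamper with
# training data flips a fraction of labels, degrading the model that is
# later trained on that data. All tooling here is an illustrative choice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Train on a copy of the data with `flip_fraction` of its labels flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of labels poisoned -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Even this crude attack illustrates why provenance and integrity checks on training data matter: the model’s owner sees only a quietly degraded model, not an obvious intrusion.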
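
The model-extraction item can be sketched just as briefly: an attacker who can only query a model’s predictions records input/output pairs and fits a local surrogate that imitates the remote model (a behavioral approximation rather than exact parameter recovery). The “remote” victim below is a model we train locally purely for demonstration, and the libraries and random query strategy are again our own illustrative assumptions.

```python
# Minimal sketch of black-box model extraction: probe a prediction
# oracle with many inputs, then train a surrogate on the answers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in for a proprietary remote model (trained locally for the demo).
X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X[:2000], y[:2000])

def oracle(queries):
    """The attacker's entire view: predictions only, no weights or data."""
    return victim.predict(queries)

# Systematically send inputs and record the observed outputs.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
labels = oracle(queries)

# Fit a surrogate that imitates the oracle's behavior.
surrogate = LogisticRegression(max_iter=1000).fit(queries, labels)

# Agreement on held-out data approximates how much behavior was extracted.
holdout = X[2000:]
agreement = (surrogate.predict(holdout) == victim.predict(holdout)).mean()
print(f"surrogate matches victim on {agreement:.1%} of held-out inputs")
```

Defenses discussed in the security literature, such as rate-limiting queries and watching for systematic probing patterns, aim to make exactly this kind of extraction slower and noisier.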

Physical 

  • Terrorist repurposing of commercial AI systems – Commercial systems are used in harmful and unintended ways, such as using drones or autonomous vehicles to deliver explosives and cause crashes.
  • Endowing low-skill individuals with previously high-skill attack capabilities – AI-enabled automation of high-skill capabilities – such as self-aiming, long-range sniper rifles – reduces the expertise required to execute certain kinds of attack.
  • Increased scale of attacks – Human-machine teaming using autonomous systems increases the amount of damage that individuals or small groups can do: e.g. one person launching an attack with many weaponized autonomous drones.
  • Swarming attacks – Distributed networks of autonomous robotic systems, cooperating at machine speed, provide ubiquitous surveillance to monitor large areas and groups and execute rapid, coordinated attacks.
  • Attacks further removed in time and space – Physical attacks are further removed from the actor initiating the attack as a result of autonomous operation, including in environments where remote communication with the system is not possible.

Political security

  • State use of automated surveillance platforms to suppress dissent – States’ surveillance powers are extended by automating image and audio processing, permitting the collection, processing, and exploitation of intelligence information at massive scale for myriad purposes, including the suppression of debate.
  • Fake news reports with realistic fabricated video and audio – Highly realistic videos are made of state leaders seeming to make inflammatory comments they never actually made.
  • Automated, hyper-personalised disinformation campaigns – Individuals are targeted in swing districts with personalised messages in order to affect their voting behavior.
  • Automating influence campaigns – AI-enabled analysis of social networks is leveraged to identify key influencers, who can then be approached with (malicious) offers or targeted with disinformation.
  • Denial-of-information attacks – Bot-driven, large-scale information generation attacks are leveraged to swamp information channels with noise (false or merely distracting information), making it more difficult to acquire real information.
  • Manipulation of information availability – Media platforms’ content curation algorithms are used to drive users towards or away from certain content in ways to manipulate user behavior.

