AI is making its mark in the world of cybersecurity.
For defenders, AI can help security teams detect and mitigate threats faster. For attackers, weaponized AI can aid in a number of attacks, such as deepfakes, data poisoning, and reverse engineering.
Lately, however, attention has turned to AI-powered malware, whose very existence has been questioned.
AI-enabled attacks vs. AI-powered malware
AI-enabled attacks occur when a malicious actor uses AI to assist in an attack. Deepfake technology, a type of AI used to create fake but convincing images, audio and video, can be used, for example, in social engineering attacks. In these situations, the AI is a tool for carrying out an attack, not for creating it.
AI-powered malware, on the other hand, is trained via machine learning to be sneakier, faster and more effective than traditional malware. Unlike malware that targets large numbers of people in the hope of successfully attacking a small percentage of them, AI-powered malware is trained to think for itself, update its actions depending on the scenario, and specifically target its victims and their systems.
IBM researchers presented DeepLocker, an AI-powered proof-of-concept malware, at the 2018 Black Hat conference to demonstrate this new breed of threat. DeepLocker hid the WannaCry ransomware inside a video conferencing app, where it lay dormant until AI-powered facial recognition software identified a specific target's face.
Does AI-powered malware exist in the wild?
The quick answer is no. AI-powered malware has yet to be seen in the wild, but don’t rule out the possibility.
“No one has been hit by, or managed to uncover, a truly AI-powered attack,” said Justin Fier, vice president of tactical risk and response at Darktrace. “That doesn’t mean it’s not out there; we just haven’t seen it yet.”
Malwarebytes malware analyst Pieter Arntz agreed that AI malware has yet to be seen. “To my knowledge, so far, AI is only used on a large scale in malware circles to improve the effectiveness of existing malware campaigns,” he said in an email to SearchSecurity. He predicted that cybercriminals will continue to use AI to improve operations, such as targeted spam, deepfakes and social engineering scams, rather than relying on AI-powered malware.
Potential use cases for AI-powered malware
Just because AI-powered malware hasn’t been seen in the wild doesn’t mean it won’t be in the future, especially as corporate defenses grow stronger.
“Most criminals won’t spend time inventing a new system or even improving an existing one when it already works so well for them,” Arntz said. “Unless they can get their hands on something that works better for them, maybe with a little tweaking, they’ll stick with what works.”
But, as defenses get stronger, cybercriminals may need to take that step and create new attacks. Take ransomware, for example. Ransomware has dominated the threat landscape for years and has been so successful that attackers have not needed to create or use AI-powered malware. However, defenders are slowly catching up and increasing their security, as recent gradual declines in ransomware attacks show.
Although Fier said many factors contribute to the decline, “you have to assume we’re getting better at our job.” But that also means attackers may be driven to invest in AI-powered malware, if they haven’t already, he added.
Beyond the potential reinforcement of ransomware attacks, Arntz outlined the following three use cases for AI-powered malware:
- worms capable of adapting to the next system they attempt to infect;
- polymorphic malware that modifies its code to avoid detection; and
- malware that tailors social engineering attacks based on the data it collects, such as data mined from social media sites.
He noted, however, that certain steps must be taken before it is possible for attackers to practically implement AI-powered malware. For now, Arntz said, “it looks like the cybersecurity industry is using AI better than its malicious adversaries.”
How to prepare for attacks involving AI
Ninety-six percent of respondents to a 2021 MIT Technology Review Insights survey, in association with Darktrace, said they had started preparing for AI attacks.
Jon France, CISO at (ISC)2, said the best way to prepare for AI-enabled attacks — as well as the potential future threat of AI-powered malware — is to practice basic cybersecurity hygiene. Defenders should also use AI to their own advantage, he said.
“It’s insane to think attackers wouldn’t use AI to their advantage as much as defenders,” France said. Security teams can use AI to help with threat hunting, malware discovery, and phishing detection practices, for example. AI is also useful in containing threats through automated responses. Responses programmed through AI can offer the added benefit of helping organizations manage burnout and cybersecurity skills gaps.
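To make the phishing-detection idea concrete, here is a minimal sketch of how a classifier can separate phishing messages from legitimate ones based on word frequencies. It is an illustrative toy in standard-library Python, not any vendor's actual detector; the training messages, labels and class names are all invented:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

class NaiveBayesFilter:
    """Tiny multinomial naive Bayes classifier: phishing vs. legitimate mail."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.msg_counts = Counter()

    def train(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def _log_score(self, text, label):
        # log prior + sum of log likelihoods, with Laplace smoothing
        prior = math.log(self.msg_counts[label] / sum(self.msg_counts.values()))
        counts = self.word_counts[label]
        total = sum(counts.values())
        vocab = len(set(self.word_counts["phish"]) | set(self.word_counts["ham"]))
        return prior + sum(
            math.log((counts[word] + 1) / (total + vocab))
            for word in tokenize(text)
        )

    def classify(self, text):
        return max(("phish", "ham"), key=lambda label: self._log_score(text, label))

# Invented training data for the sketch
filter_ = NaiveBayesFilter()
for msg in ("urgent verify your account password now",
            "your account is suspended click to verify",
            "reset your password immediately to claim your prize"):
    filter_.train(msg, "phish")
for msg in ("meeting agenda attached for monday",
            "team lunch on friday",
            "quarterly report draft attached for review"):
    filter_.train(msg, "ham")

print(filter_.classify("please verify your account password"))  # prints: phish
```

Production systems use far richer features (URLs, headers, sender reputation) and larger models, but the principle — learning statistical patterns from labeled examples instead of hand-writing rules for every variant — is the same.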
Arntz suggested organizations use tactics, techniques and procedures (TTPs) to detect traditional malware, since they also help detect AI-based and, eventually, AI-powered malware. TTPs, the strategies threat actors use to develop and carry out attacks, have long been followed by security teams to detect malware based on behaviors and patterns, rather than having to track every new variant. As the prevalence of cyberattacks increases and the potential threat of AI-powered malware looms, this two-in-one strategy provides protection both now and in the future.
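The behavior-and-pattern matching described above can be sketched as ordered-event matching: an event trace is flagged if it contains a known TTP sequence, regardless of which variant produced the events. The TTP names and event labels below are hypothetical illustrations, not a real detection ruleset:

```python
# Behavior-based detection sketch: flag an event trace that contains a known
# TTP sequence as an ordered subsequence. The pattern names and event labels
# are invented for illustration.

TTP_PATTERNS = {
    "credential-theft": ["open_lsass", "read_process_memory", "network_send"],
    "ransomware": ["enumerate_files", "encrypt_file", "delete_shadow_copies"],
}

def contains_in_order(events, pattern):
    """True if every step of `pattern` appears in `events`, in order."""
    remaining = iter(events)  # `in` consumes the iterator, enforcing order
    return all(step in remaining for step in pattern)

def match_ttps(events):
    """Return the names of all TTP patterns matched by an event trace."""
    return [name for name, pattern in TTP_PATTERNS.items()
            if contains_in_order(events, pattern)]

trace = ["process_start", "enumerate_files", "read_file",
         "encrypt_file", "encrypt_file", "delete_shadow_copies"]
print(match_ttps(trace))  # prints: ['ransomware']
```

Because the match is on behavior rather than file hashes or code signatures, a polymorphic or AI-generated variant that still performs the same steps would trigger the same rule — which is the "two-in-one" benefit Arntz describes.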
“Defending and attacking has always been a game of cat and mouse,” France said.