AI and machine learning are improving cybersecurity, helping human analysts triage threats and close vulnerabilities quicker. But they are also helping threat actors launch bigger, more complex attacks.
Machine learning and artificial intelligence (AI) are becoming core technologies for threat detection and response tools. The ability to learn on the fly and automatically adapt to changing cyberthreats gives cybersecurity teams an advantage.
According to a survey conducted by Sapio Research on behalf of Vanta, 62% of organizations plan to invest more in AI security over the next 12 months. However, some threat actors are also using machine learning and AI to scale up their cyberattacks, evade security controls, and find new vulnerabilities, all at an unprecedented pace and to devastating effect.
According to Bugcrowd’s annual hacker survey, released in October, 77% of hackers use AI to hack and 86% say it has fundamentally changed their approach to hacking. Today, 71% say that AI technologies show value for hacking, up from just 21% in 2023.
There are generative AI tools created specifically for crime, including FraudGPT and WormGPT, according to cybersecurity firm SecureOps.
“Gen AI, as well as AI more broadly, is lowering the bar in hackers’ ability to deploy and develop a series of attacks,” says Vanessa Lyon, global leader for cyber and digital risk at Boston Consulting Group. And the non-deterministic aspects of generative AI make it harder for traditional, rules-based defenses to remain relevant, she adds.
In fact, according to an October report by Keeper Security, 51% of IT and security leaders say that AI-powered attacks are the most serious threat facing their organizations.
Here are the ten most common ways attackers leverage AI and machine learning technologies.
1. Spam, spam, spam, spam
Defenders have been using machine learning to detect spam for decades, says Fernando Montenegro, analyst at Omdia. “Spam prevention is the best initial use case for machine learning.”
If the spam filter in use provides reasons why an email message didn't get through, or generates a score of some kind, attackers can use that feedback to modify their behavior, turning the legitimate tool into a way to make their own attacks more successful. “If you submit stuff often enough, you could reconstruct what the model was, and then you can fine-tune your attack to bypass this model,” Montenegro says.
It’s not just spam filters that are vulnerable. Any security vendor that provides a score or some other output could potentially be abused, Montenegro says. “Not all of them have this problem, but if you’re not careful, they’ll have a useful output that someone can use for malicious purposes.”
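To see why a leaked score is dangerous, here is a deliberately tiny sketch in Python: a toy bag-of-words classifier stands in for a vendor's spam model, and an attacker simply keeps whichever rewording of a lure the exposed score rates as least spammy. The training data, message variants, and model choice are all invented for illustration.

```python
# Toy illustration (not a real spam filter): why exposing a score invites probing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# A tiny, made-up training set standing in for a vendor's spam model.
messages = [
    "win a free prize now", "limited offer claim your reward",
    "free money click here",
    "meeting notes attached", "lunch tomorrow?", "quarterly report draft",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = LogisticRegression().fit(X, labels)

def spam_score(text: str) -> float:
    """Stand-in for a filter that leaks a numeric score back to the sender."""
    return model.predict_proba(vectorizer.transform([text]))[0][1]

# An attacker submits variants of the same lure and keeps whichever one
# the leaked score rates as least spammy -- the feedback loop Montenegro describes.
variants = [
    "win a free prize now",
    "you have been selected for a reward",
    "quick question about your account",
]
for v in variants:
    print(f"{spam_score(v):.2f}  {v}")
print("lowest-scoring variant:", min(variants, key=spam_score))
```

Real filters are far more sophisticated, but the feedback loop is the same: every score handed back to a sender is, in effect, a free query against the model.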
2. Better phishing emails
Attackers aren’t just using machine-learning security tools to test if their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, a former EY partner. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.” These services are specifically being advertised as using machine learning, and it’s probably not just marketing. “The proof is in the pudding,” Malone says. “They’re definitely better.”
Machine learning allows attackers to customize phishing emails in creative ways so that they don’t show up as bulk emails and are optimized to trigger engagement and clicks. They don’t stop at just the text of the email. AI can be used to generate realistic-looking photos, social media profiles, and other materials to make the communication seem as legitimate as possible.
Generative AI takes this up another level. According to a survey released by OpenText, 45% of companies have seen an increase in phishing due to AI, and 55% of senior decision-makers say their companies are at greater risk of ransomware due to the proliferation of AI use among threat actors. Another study, by Keeper Security, found that 84% of IT and security leaders say AI tools have made phishing attacks more difficult to detect.
3. Better password guessing
Criminals are also using machine learning to get better at guessing passwords. “We’ve seen evidence of that based on the frequency and success rates of password guessing engines,” Malone says. Criminals are building better dictionaries to hack stolen hashes. They’re also using machine learning to identify security controls, “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.”
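A rough sketch of the underlying idea, using nothing more than a toy character-bigram model and invented data: guesses that look statistically similar to previously leaked passwords get tried first. Real tools are far more capable; this only illustrates why learned patterns beat blind brute force.

```python
# Minimal sketch of the idea behind ML-assisted password guessing: learn
# character patterns from previously leaked passwords and rank new guesses
# by how "typical" they look. Toy data and toy model, purely illustrative.
from collections import defaultdict
import math

leaked = ["summer2024", "password1", "dragon99", "welcome1", "sunshine7"]

# Count character-bigram frequencies across the (toy) leaked corpus.
counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip(pw, pw[1:]):
        counts[a][b] += 1

def likelihood(candidate: str) -> float:
    """Higher score = more similar to patterns seen in the training corpus."""
    score = 0.0
    for a, b in zip(candidate, candidate[1:]):
        total = sum(counts[a].values()) or 1
        score += math.log((counts[a][b] + 1) / (total + 1))
    return score

# A smarter dictionary attack tries the most "human-looking" guesses first.
guesses = ["winter2024", "xq7#kf!z9", "welcome2"]
for g in sorted(guesses, key=likelihood, reverse=True):
    print(f"{likelihood(g):7.2f}  {g}")
```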
4. Deep fakes
The most frightening use of artificial intelligence is deepfake tools that can generate video or audio that is hard to distinguish from a real human. “Being able to simulate someone’s voice or face is very useful against humans,” says Montenegro. “If someone is pretending to sound like me, you might fall for it.”
In fact, a couple of high-profile cases have been made public over the last couple of years in which faked audio cost companies hundreds of thousands — or millions — of dollars. “People have been getting phone calls from their boss — that were fake,” says Murat Kantarcioglu, previously professor of computer science at the University of Texas.
More commonly, scammers are using AI to generate realistic-looking photos, user profiles, emails — even audio and video — to make their messages seem more believable. It’s a big business. According to the FBI, business email compromise scams led to more than $55 billion in losses over the last ten years. Even back in 2021, there were media reports of a bank in Hong Kong being duped into transferring $35 million to a criminal gang because a bank official received a call from a company director with whom he’d spoken before. He recognized the voice, so he authorized the transfer. Today, hackers can make a Zoom video that’s hard to distinguish from a real person.
According to a survey by insurance company Nationwide released in late September, 52% of small business owners admit to having been fooled by a deepfake image or video, and 9 in 10 say generative AI scams are becoming more sophisticated.
And large companies aren’t immune either. According to a survey by Teleport, AI impersonation is the hardest cyberattack vector to defend against.
5. Neutralizing off-the-shelf security tools
Many popular security tools used today have some form of artificial intelligence or machine learning built in. Antivirus tools, for example, are increasingly looking beyond the basic signatures for suspicious behaviors. “Anything available online, especially open source, could be leveraged by the bad guys,” says Kantarcioglu.
Attackers can use these tools, not to defend against attacks, but to tweak their malware until it can evade detection. “AI models have many blind spots,” Kantarcioglu says. “You might be able to change them by changing features of your attack, like how many packets you send, or which resources you’re attacking.”
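A toy example of such a blind spot, with invented traffic data: a small decision tree learns a hard threshold on packets per second, and throttling the send rate just below that threshold flips the verdict without changing what the traffic actually does.

```python
# Toy illustration of the "blind spots" Kantarcioglu describes: a simple model
# trained on a handful of traffic features draws hard boundaries, and nudging
# one feature across a learned threshold flips the verdict.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [packets_per_second, bytes_per_packet]
X = [[900, 500], [1200, 450], [1500, 520],   # labeled malicious (flood-like)
     [40, 480], [60, 510], [30, 460]]        # labeled benign
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

attack_traffic = [850, 500]
print("original attack flagged:", bool(model.predict([attack_traffic])[0]))

# Throttle the send rate just below the boundary the tree happened to learn.
tweaked_traffic = [470, 500]
print("tweaked attack flagged:", bool(model.predict([tweaked_traffic])[0]))
```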
It’s not just the AI-powered security tools that attackers are using. AI is part of a lot of different technologies. Consider, for example, that users often learn to spot phishing emails by looking for grammar mistakes. AI-powered grammar checkers like Grammarly can help attackers improve their writing, while generative AI tools like ChatGPT can write convincing emails from scratch.
6. Reconnaissance
AI and machine learning can be used for research and reconnaissance, so that attackers can look at publicly available information and their target’s traffic patterns, defenses, and potential vulnerabilities. This is where hackers always start, says Thomas Scanlon, principal researcher and technical manager in the CERT division of the Software Engineering Institute at Carnegie Mellon University. “And all of that activity is able to be done smarter and faster when it’s AI-enabled.”
Many organizations don’t realize the amount of data that’s out there. And it’s not just lists of hacked passwords distributed on the dark web and social media posts by employees. For example, when companies put up a job posting or a call for proposals, they might reveal the types of technologies they use, Scanlon says. “It used to be labor intensive to gather all that data and do some analysis on it but a lot of that can be automated now.”
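As a minimal illustration of how that automation might look, the sketch below scans an invented job posting for technology names. The keyword list and posting text are made up for the example.

```python
# Toy sketch of automated reconnaissance from public text, as Scanlon describes:
# scan a job posting for technology names to infer a target's likely stack.
import re

TECH_KEYWORDS = {
    "active directory", "vmware", "fortinet", "palo alto", "aws", "azure",
    "kubernetes", "jenkins", "oracle", "sap", "citrix", "okta",
}

job_posting = """
We are hiring a systems administrator with experience managing Active Directory,
VMware vSphere, and Fortinet firewalls. Familiarity with AWS and Jenkins a plus.
"""

found = {kw for kw in TECH_KEYWORDS
         if re.search(re.escape(kw), job_posting, re.IGNORECASE)}
print("technologies mentioned:", sorted(found))
```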
According to the Bugcrowd hacker survey, 62% use AI to analyze data, 61% use it to automate tasks, and 38% use it to identify vulnerabilities.
7. Autonomous agents
If an enterprise notices that it’s under attack and shuts off internet access to affected systems, then malware might not be able to connect back to its command-and-control servers for instructions. “Attackers might want to come up with an intelligent model that will stay even if they can’t directly control it, for longer persistence,” says Kantarcioglu.
Now, these kinds of autonomous agents are available to anyone, thanks to commercial offerings from Microsoft, and several open-source platforms that don’t have any guardrails to keep them from being used maliciously. “In the past, the adversary would have needed human touchpoints to carry out an attack, since most attacks involve multiple steps,” says CMU’s Scanlon. “If they can deploy agents to carry out those steps, that’s definitely a looming threat — well, more than looming. It’s one of the things that AI is making real.”
8. AI poisoning
An attacker can trick a machine learning model by feeding it new information. “The adversary manipulates the training data set. They intentionally bias it, and the machine learns the wrong way,” says Alexey Rubtsov, senior research associate at Global Risk Institute.
For example, a hijacked user account can log into a system every day at 2 a.m. to do innocuous work, making the system think there’s nothing suspicious about working at 2 a.m. and reducing the security hoops the user has to jump through.
This is similar to how Microsoft’s Tay chatbot was taught to be racist in 2016. The same approach can be used to teach a system that a particular type of malware is safe or that particular bot behaviors are completely normal.
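The 2 a.m. example can be shown with a few lines of toy code: a naive baseline that learns “normal” login hours from history stops flagging 2 a.m. once enough quiet, benign-looking 2 a.m. activity has been seeded into the training data. The threshold and data here are invented.

```python
# Toy illustration of training-data poisoning: a frequency baseline that learns
# "normal" login hours will stop flagging 2 a.m. once an attacker has seeded
# enough innocuous-looking 2 a.m. activity into the history it trains on.
from collections import Counter

def is_anomalous(hour: int, history: list[int], threshold: float = 0.02) -> bool:
    """Flag an hour as suspicious if it rarely appears in the training history."""
    freq = Counter(history)
    return freq[hour] / len(history) < threshold

# Clean history: logins cluster in business hours.
clean_history = [9, 10, 11, 14, 15, 16] * 50
print("2 a.m. flagged (clean model):", is_anomalous(2, clean_history))        # True

# Poisoned history: the hijacked account has logged in at 2 a.m. every night
# doing harmless work, so the model learns that 2 a.m. is normal.
poisoned_history = clean_history + [2] * 60
print("2 a.m. flagged (poisoned model):", is_anomalous(2, poisoned_history))  # False
```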
9. AI fuzzing
Legitimate software developers and penetration testers use fuzzing software to generate random sample inputs in an attempt to crash an application or find a vulnerability. The souped-up versions of this software use machine learning to generate the inputs in a more focused, organized way, prioritizing text strings most likely to cause problems. That makes the fuzzing tools more useful to enterprises, but also more deadly in the hands of attackers.
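A bare-bones sketch of the feedback loop those tools build on, with an invented buggy parser as the target: mutated inputs that exercise new behavior are kept and mutated further, which is how guided fuzzers home in on crashes far faster than purely random input generation. Everything here is a toy stand-in.

```python
# Minimal sketch of feedback-guided fuzzing, the idea ML-assisted fuzzers extend:
# favor mutated inputs that exercise new behavior rather than purely random ones.
import random

def target(data: str) -> set[str]:
    """A buggy toy parser; returns which branches it hit, raises on one path."""
    hit = set()
    if data.startswith("HDR"):
        hit.add("header")
        if len(data) > 8:
            hit.add("long")
            if data[3] == "!":
                raise ValueError("crash: unexpected delimiter")  # the planted bug
    return hit

def mutate(s: str) -> str:
    chars = list(s) or ["A"]
    i = random.randrange(len(chars))
    chars[i] = random.choice("HDR!AB0")
    return "".join(chars)

corpus = ["HDRAAAAAAAA"]
seen_coverage: set[str] = set()

random.seed(0)
for _ in range(2000):
    candidate = mutate(random.choice(corpus))
    try:
        coverage = target(candidate)
    except ValueError as e:
        print("crashing input found:", repr(candidate), "->", e)
        break
    if not coverage <= seen_coverage:   # new behavior observed
        seen_coverage |= coverage
        corpus.append(candidate)        # keep inputs that explore more of the target
```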
All these techniques are a reason why basic cybersecurity hygiene such as patching, anti-phishing education, and micro-segmentation continues to be vital. “And it’s one of the reasons why defense in depth is so important,” says Allie Mellen, analyst at Forrester Research. “You need to put up multiple roadblocks, not just the one thing that attackers end up using against you to their advantage.”
10. AI malware
In September, HP Wolf Security reported that it had identified a malware campaign that was “highly likely” to have been written with the help of generative AI. “Gen AI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints,” the authors wrote.
And HP isn’t alone. According to the Vanta report, 32% of organizations surveyed are seeing a rise in AI-based malware.
Researchers have even demonstrated that generative AI can be used to discover zero-day vulnerabilities.
And, just as legitimate developers can use AI to look for problems in their code, so can attackers, says CMU’s Scanlon. That could be open-source code available in public repositories, or code that has been obtained through other means. “Hackers can take the code and run it through ChatGPT or some other foundational model and ask it to find weaknesses in code that can be exploited,” he says, adding that he’s aware of this use for both research and nefarious purposes.
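A minimal sketch of that workflow, assuming the OpenAI Python client is installed and an API key is configured (the model name is illustrative): the same prompt serves a defender doing code review and an attacker hunting for a flaw to exploit.

```python
# Minimal sketch of the dual-use pattern Scanlon describes: feed source code to a
# general-purpose model and ask it to flag weaknesses. Defenders use the same
# workflow for code review. Assumes OPENAI_API_KEY is set; model name is illustrative.
from openai import OpenAI

snippet = '''
def get_user(conn, username):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()
'''

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Review this Python function for security weaknesses "
                   "and explain how each could be exploited:\n" + snippet,
    }],
)
print(response.choices[0].message.content)  # e.g., flags the SQL injection
```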
Generative AI makes up for lack of expertise
In the past, only the most advanced threat actors, such as nation states, had the ability to leverage machine learning and AI for their attacks.
Today, anybody can do it.
What makes it particularly difficult to defend against is that AI is now evolving faster than any previous technology. “This is a moving target,” says Boston Consulting Group’s Lyon. “Businesses should prioritize keeping up with their level of understanding of the threat landscape and adapt skillsets.”
Original Post url: https://www.csoonline.com/article/564321/6-ways-hackers-will-use-machine-learning-to-launch-attacks.html
Category & Tags: Cybercrime, Hacking, Machine Learning, Security