It Takes AI Security to Fight AI Cyberattacks

Generative artificial intelligence technologies such as ChatGPT have brought sweeping changes to the security landscape almost overnight. Generative AI chatbots can produce clear, well-punctuated prose, images, and other media in response to short prompts from users. ChatGPT has quickly become the symbol of this new wave of AI, and the powerful force unleashed by this technology has not been lost on cybercriminals.

A new kind of arms race is underway to develop technologies that leverage generative AI to create thousands of malicious text and voice messages, web links, attachments, and video files. Hackers are seeking to exploit vulnerable targets by expanding their range of social engineering tricks. Tools such as ChatGPT, Google’s Bard, and Microsoft’s AI-powered Bing all rely on large language models trained on vast amounts of text, which lets them generate new forms of content grounded in that contextualized knowledge.

In this way, generative AI enables threat actors to accelerate the speed and multiply the variations of their attacks, whether by modifying malware code or by creating thousands of versions of the same social engineering pitch to increase their probability of success. As machine learning technologies advance, so will the number of ways this technology can be used for criminal purposes.

Threat researchers warn that the generative AI genie is out of the bottle, and it is already being used to automate thousands of uniquely tailored phishing messages, plus variations of those messages, to increase threat actors’ success rates. The cloned emails convey the same emotion and urgency as the originals, but with slightly altered wording that makes it hard to detect that they were sent by automated bots.

Fighting Back With a “Humanlike” Approach to AI

Today, humans are the top targets for business email compromise (BEC) attacks, which use multichannel payloads to play on human emotions such as fear (“Click here to avoid an IRS tax audit…”) or greed (“Send your credentials to claim a credit card rebate…”). Bad actors have already retooled their strategies to attack individuals directly while also exploiting business software weaknesses and configuration vulnerabilities.

The rapid rise in cybercrime based on generative AI makes it increasingly unrealistic to hire enough security researchers to defend against the problem. AI technology and automation can detect and respond to cyber threats far more quickly and accurately than people can, which in turn frees security teams to focus on tasks that AI cannot currently address. Defenders can also use generative AI to anticipate the vast numbers of potential AI-generated threats: data augmentation and cloning techniques assess each core threat and spawn thousands of variations of it, so the system can train itself on countless possible variations.
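As a minimal sketch of that augmentation idea, the snippet below spawns reworded copies of a single known lure. The seed text, substitution lists, and function names are illustrative assumptions, not details of any particular product; a real defense would rely on a generative language model to produce far richer paraphrases.

```python
# Minimal sketch of data augmentation for phishing defense, assuming a seed lure
# and hand-written substitution lists; a real system would use a generative model
# to produce far richer paraphrases. All names and strings here are illustrative.
import random

SEED_LURE = ("Your AWS invoice is overdue. "
             "Please send a wire transfer today to avoid suspension.")

# Paraphrase options standing in for model-generated rewrites.
SUBSTITUTIONS = {
    "overdue": ["past due", "unpaid", "outstanding"],
    "wire transfer": ["payment", "bank transfer", "remittance"],
    "today": ["immediately", "within 24 hours", "right away"],
    "suspension": ["account closure", "service interruption", "late fees"],
}

def spawn_variants(seed: str, n: int = 20, max_tries: int = 10_000) -> list[str]:
    """Spawn up to n reworded copies of a known lure to enlarge the training set."""
    variants = set()
    tries = 0
    while len(variants) < n and tries < max_tries:
        text = seed
        for original, options in SUBSTITUTIONS.items():
            text = text.replace(original, random.choice(options + [original]))
        variants.add(text)
        tries += 1
    return sorted(variants)

if __name__ == "__main__":
    for variant in spawn_variants(SEED_LURE, n=10):
        print(variant)
```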

All of these signals must be contextualized in real time to keep users from clicking malicious links or opening bad attachments. The language processor builds a contextual framework that can spawn a thousand similar versions of the same message, each with slightly different wording and phrasing. This approach stops current threats while anticipating what future threats may look like and blocking them as well.

Protecting Against Social Engineering in the Real World

Let’s examine how a social engineering attack might play out in the real world. Take the simple example of an employee who receives a notice about an overdue invoice from AWS, with an urgent request for an immediate payment by wire transfer.

The employee cannot tell whether the message came from a real person or a chatbot. Until now, legacy technologies have applied signatures to recognize known email attacks, but attackers can now use generative AI to slightly alter the language and spawn new, undetected attacks. The remedy requires natural language processing combined with relationship graph technology that can analyze the data and recognize that two separately worded messages express the same meaning.
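One way to illustrate that correlation step, without assuming anything about any particular vendor’s implementation, is to compare the two messages in embedding space rather than by exact signature. The sketch below uses the open-source sentence-transformers library and an arbitrary similarity threshold.

```python
# Illustrative sketch: detecting that two differently worded messages carry the
# same meaning, using an open-source sentence-embedding model (not any vendor's
# proprietary system). Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_lure = "Your AWS invoice is overdue. Wire payment immediately to avoid suspension."
incoming = ("Reminder: the outstanding Amazon Web Services bill must be settled "
            "by bank transfer today.")

# Embed both messages and compare them in vector space instead of matching signatures.
embeddings = model.encode([known_lure, incoming], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

if similarity > 0.7:  # threshold chosen for illustration only
    print(f"Likely rewording of a known lure (cosine similarity {similarity:.2f})")
else:
    print(f"No strong match (cosine similarity {similarity:.2f})")
```

Because the comparison is made on meaning rather than exact wording, a reworded copy of a known lure still scores close to the original, which is precisely the property that signature matching lacks.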

In addition to natural language processing, relationship graph technology conducts a baseline review of all emails sent to the employee to identify any prior messages or invoices from AWS. If it finds no such emails, the system is alerted to protect the employee from an incoming BEC attack. Distracted employees may be fooled into replying quickly, before they think through the consequences of giving up their personal credentials or making a financial payment to a potential scammer.
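The baseline review can be pictured as a simple relationship graph lookup, as in the hypothetical sketch below; the mailbox history, domain names, and helper function are invented for illustration.

```python
# Hypothetical sketch of the baseline idea: flag a payment request from a vendor
# the employee has no prior relationship with. A production system would build
# this graph from the full message archive rather than a hard-coded list.
from collections import defaultdict

# Historical messages: (recipient, sender_domain) pairs observed in the mailbox.
history = [
    ("employee@example.com", "github.com"),
    ("employee@example.com", "salesforce.com"),
]

# Relationship "graph" as an adjacency map from recipient to known sender domains.
graph = defaultdict(set)
for recipient, sender_domain in history:
    graph[recipient].add(sender_domain)

def is_suspicious(recipient: str, sender_domain: str, mentions_payment: bool) -> bool:
    """Alert when a payment request arrives from a domain never seen before."""
    return mentions_payment and sender_domain not in graph[recipient]

# The fake "AWS" invoice comes from a look-alike domain with no prior history.
print(is_suspicious("employee@example.com", "aws-billing-payments.com", True))  # True
```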

Clearly, this new wave of generative AI has tilted the advantage in favor of the attackers. The best defense in this emerging battle is to turn the same AI weapons against them, anticipating their next moves and using AI to protect susceptible employees from future attacks.

About the Author


Patrick Harr

Patrick Harr is the CEO of SlashNext, an integrated cloud messaging security company using patented HumanAI™ to stop BEC, smishing, account takeovers, scams, malware, and exploits in email, mobile, and Web messaging before they become a breach.
