When ChatGPT Goes Phishing


Author: Jim Broome

ChatGPT has become a powerful tool for security professionals seeking to enrich their work. However, its widespread use has raised concerns about the potential for bad actors to misuse the technology. Experts are worried that ChatGPT’s ability to source recent data about an organization could make social engineering and phishing attacks more effective than ever before.

Potential Dangers of ChatGPT in the Hands of Cybercriminals

1. Developing Highly Targeted Campaigns

In today’s world, cybercriminals are constantly developing new tactics to deceive unsuspecting individuals and organizations. Social engineering and phishing attacks are becoming increasingly sophisticated, and the availability of data on the internet makes it easier for attackers to craft convincing and targeted campaigns.

With the help of tools like ChatGPT, attackers can quickly gather recent data about an organization and use it to create a highly targeted, efficient campaign. For example, an attacker could ask ChatGPT to summarize all public disclosures or press releases from a corporation in the last 30 days and then use that information to target specific divisions of the organization. Through this approach, an attacker could create a campaign that uses current organization-specific buzzwords, project names or acronyms to appear more legitimate.

In addition to ChatGPT, cybercriminals can also mine publicly available information on the web and social media to gather personal details about their targets. That information can then be fed into the chatbot to help bad actors develop a campaign that more effectively lures their targets.

The availability of ChatGPT in more than 90 languages is a major cause for concern. This localization opens the door for bad actors to launch the same campaign against multiple targets, including global corporations.

2. Breaking the Mold on Security Awareness Training

Most canned user security awareness training solutions constantly hammer home the standard indicators of a potential phishing email: spelling mistakes, grammatical errors and out-of-place buzzwords. Grammar and spelling errors have been a common cybersecurity tip-off for some time. However, cybercriminals are now using more sophisticated methods to craft their phishing emails, including language tools like ChatGPT, to eliminate the spelling and grammatical errors that might otherwise give away the fraudulent nature of a message.

The ability for ChatGPT to include company-specific buzzwords and improved grammar and spelling in phishing emails makes it even more challenging for users to distinguish between genuine and fake messages. The quality of these AI-assisted campaigns can create a sense of trust, leading email recipients to believe that the message is from a legitimate source. Cybercriminals can now more easily lure users into clicking malicious links, downloading malware or revealing sensitive information.

Changing the Approach to Security Monitoring and Protection

To stay ahead of constantly evolving threats, security practitioners should employ social engineering training for employees. It's crucial that this training is updated at least annually, and ideally every six months.

Organizations should also leverage AI-assisted anti-spam and phishing detection solutions that monitor URLs. The good news is that most dedicated anti-spam solutions incorporate AI, which enables them to counteract AI-enabled attacks. These tools monitor and analyze behavior and relationship interactions to identify suspicious activity, such as financial requests or human resources impersonations, ensuring the content is flagged and blocked regardless of the source.
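As a rough illustration only: real AI-assisted filters model sender behavior and relationship graphs over time, but a stripped-down version of the idea above can be sketched as a heuristic that flags risky asks (financial or HR requests) arriving from outside the organization. The domain, keyword list and function name here are all hypothetical, not taken from any product.

```python
import re

# Toy keyword heuristic for "financial request" / HR-impersonation content.
# A production AI filter would score behavior and relationships, not keywords.
FINANCIAL_PATTERNS = re.compile(
    r"\b(wire transfer|gift card|payroll|direct deposit|W-2)\b", re.IGNORECASE
)

def flag_message(sender: str, body: str, internal_domain: str = "example.com") -> bool:
    """Flag a message when a risky financial/HR ask comes from an external sender."""
    external = not sender.lower().endswith("@" + internal_domain)
    risky_ask = bool(FINANCIAL_PATTERNS.search(body))
    return external and risky_ask
```

The point of the sketch is the combination: neither an external sender nor a financial keyword alone is suspicious, but together they warrant flagging, which mirrors how the behavioral tools described above weigh multiple signals.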

Other proactive measures include always requiring multi-factor authentication (MFA) and monitoring accounts to catch any changes to MFA settings. Implement solutions that monitor at the individual user level so any significant change to a user's MFA settings is detected. For example, if your MFA still uses text messages to verify identity and a user changes their phone number, that activity should raise a flag. Also, some MFA solutions allow hundreds of push attempts per hour by default. Double-check these settings: there should be a limit on how many MFA requests a user can receive in a given window, since unlimited prompts let attackers wear users down until one is approved.
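Both checks above (rate-limiting MFA prompts and catching MFA setting changes) can be sketched in a few lines. This is a hypothetical monitoring sketch, not any vendor's API; the threshold and class name are assumptions to make the idea concrete.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class MfaMonitor:
    """Toy monitor: flags excessive MFA pushes and changes to MFA settings."""

    def __init__(self, max_pushes_per_hour: int = 5):
        self.max_pushes = max_pushes_per_hour
        # Per-user timestamps of recent push attempts.
        self.pushes: dict[str, deque] = defaultdict(deque)

    def record_push(self, user: str, when: datetime) -> bool:
        """Record one MFA push; return True if the hourly limit is exceeded."""
        window = self.pushes[user]
        window.append(when)
        # Discard attempts older than one hour.
        while window and when - window[0] > timedelta(hours=1):
            window.popleft()
        return len(window) > self.max_pushes

    def check_setting_change(self, user: str, old_phone: str, new_phone: str) -> bool:
        """Flag any change to the phone number used for SMS verification."""
        return old_phone != new_phone
```

For instance, with a limit of three pushes per hour, the fourth and fifth prompts within a few minutes would come back flagged, which is the MFA-fatigue pattern the paragraph above warns about.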

Finally, as many successful phishing attacks occur after hours, it’s important to have 24/7 monitoring. While most of these recommendations are preventative, there is always a chance that a threat actor can get through, so make sure to conduct due diligence and monitor around the clock.

Convenience Comes at the Cost of Security

I always say that convenience comes at the cost of security, and it's extremely rare that the two meet.

While AI tools like ChatGPT offer breakthrough functionality with many real benefits, they also create greater security vulnerabilities within organizations. As ever, improving security awareness training for employees, implementing anti-spam solutions and enacting round-the-clock monitoring remain excellent ways to defend against the risks ChatGPT presents.

While it may be convenient to embrace ChatGPT and assume no harm can come from a highly intelligent tool, that assumption simply doesn't hold. In the face of AI, the best defense is taking the necessary steps toward security resilience.


Category & Tags: Analytics & Intelligence, Cybersecurity, Data Security, Deep Fake and Other Social Engineering Tactics, Identity & Access, Incident Response, Network Security, Security Awareness, Security Boulevard (Original), Threat Intelligence, Threats & Breaches, AI, ChatGPT, generative AI, Malware, security, social engineering
