web analytics

What Generative AI Means for Security – Source: www.databreachtoday.com

Rate this post

Source: www.databreachtoday.com – Author: 1

Hacker One Co-Founder Michiel Prins on the Opportunities and Risks of GAI

Michiel Prins, Co-founder of HackerOne


July 24, 2023    

What Generative AI Means for Security

Generative artificial intelligence, or GAI, is popping up in all manner of software every day. Many businesses – including Snapchat, Instacart, CrowdStrike, Salesforce and others – have announced AI-powered features and user experiences. Software customers will soon expect GAI features. For example, users will expect to talk directly to their reports and dashboards instead of figuring out yet another query language.

See Also: Live Webinar | Unmasking Pegasus: Understand the Threat & Strengthen Your Digital Defense

What does generative AI mean for security? I have two main predictions.

As with any new technology, it is hard for most people, especially optimists, to appreciate the risks that may surface – and this is where hackers come in. 

Offensive AI Will Outpace Defensive AI

In the short term, and possibly indefinitely, we will see offensive or malicious AI applications outpace defensive ones that use AI for stronger security. While GAI offers tremendous opportunities to advance defensive use cases, cybercrime rings and malicious attackers level up their weaponry asymmetrically to defensive efforts, resulting in an unequal match.

It’s highly possible that the commoditization of GAI will mean the end of cross-site scripting and other current common vulnerabilities. Some of the top 10 most common vulnerabilities – such as XSS or SQL injection – are still far too common, despite industry advancements in static application security testing, web browser protections, and secure development frameworks. GAI has the opportunity to finally deliver the change we all want to see in this area.

While advances in generative AI may eradicate some vulnerability types, others will explode in effectiveness. As GAI lowers the barrier to entry, attacks such as social engineering via deepfakes and phishing will become more convincing and fruitful than ever before.

The strategy of security through obscurity will also be impossible with the advance of GAI. HackerOne research shows that 64% of security professionals claim their organization maintains a culture of security through obscurity. The seemingly magical ability of GAI to sift through enormous data sets and distill what truly matters, combined with advances in open-source intelligence and hacker reconnaissance, will render security through obscurity obsolete.

Attack Surfaces Will Grow Exponentially

We will see an outsized explosion in new attack surfaces. Defenders have long followed the principle of attack surface reduction, a term coined by Microsoft, but the rapid commoditization of generative AI is going to reverse some of our progress.

Code increases exponentially every year, and now it is often written with the help of generative AI. This dramatically lowers the bar for who can be a software engineer, resulting in more and more code being shipped by people who do not fully comprehend the technical implications of the software they develop, let alone oversee the security implications.

Also, GAI requires vast amounts of data. It is no surprise that the models that continue to impress us with human levels of intelligence happen to be the largest models out there. In a GAI-ubiquitous future, organizations and commercial businesses will hoard more and more data, beyond what we now think is possible. Therefore, the sheer scale and impact of data breaches will grow out of control. Attackers will be more motivated than ever to get their hands on data.

Attack surface growth doesn’t stop there. Many businesses have rapidly implemented features and capabilities powered by generative AI in the past months. As a result, novel attacks against GAI-powered applications will emerge as a new threat. A promising project in this area is the OWASP Top 10 for Large Language Models LLMs.

What Does Defense Look Like in a Future Dominated by Generative AI?

Even with the potential for increased risk, there is hope. Ethical hackers are ready to secure applications and workloads powered by generative AI. As with any new technology, it is hard for most people, especially optimists, to appreciate the risks that may surface – and this is where hackers come in. Hackers will quickly investigate the technology and look to trigger unthinkable scenarios – all so you can develop stronger defenses.

There are three tangible ways in which HackerOne can help you prepare your defenses for a not-too-distant future where generative AI is truly ubiquitous:

  1. HackerOne Bounty: Continuous adversarial testing with the world’s largest hacker community will identify vulnerabilities of any kind in your attack surface, including potential flaws stemming from poor GAI implementation.
  2. HackerOne Challenge: This involves conduct-scoped and time-bound adversarial testing with a curated group of expert hackers.
  3. HackerOne Security Advisory Services: Work with our security advisory team to understand how your threat model will evolve by bringing generative AI into your attack surface, and ensure your HackerOne programs are firing on all cylinders to catch these flaws.

Original Post url: https://www.databreachtoday.com/blogs/what-generative-ai-means-for-security-p-3473

Category & Tags: –

LinkedIn
Twitter
Facebook
WhatsApp
Email

advisor pick´S post

More Latest Published Posts