
How AI is becoming a powerful tool for offensive cybersecurity practitioners – Source: www.csoonline.com



From vulnerability assessments to penetration testing, AI and large language models are profoundly changing the fundamentals of offsec.

Artificial intelligence, especially large language models (LLMs) and the agents they power, has been transformative across the cybersecurity spectrum, and nowhere more so than in offensive cybersecurity.

The introduction of AI “has triggered a profound transformation in the landscape of offensive security, including vulnerability assessment, penetration testing, and red teaming,” according to a recent report by the Cloud Security Alliance (CSA). “This shift redefines AI from a narrow-use case to a versatile and powerful general-purpose technology.”

AI has become a necessary tool in the race to keep pace with evolving threats, including those posed by attackers who are increasingly leveraging the technology themselves, says Amit Zimerman, co-founder and chief product officer at Oasis Security, a provider of non-human identity management solutions.

“As adversaries become more sophisticated, organizations must adopt AI-driven offensive cybersecurity to stay ahead, making AI not just a convenience, but a critical asset for maintaining a competitive edge in security,” Zimerman says.

Offensive cybersecurity — offsec — involves simulating an adversary’s behavior to identify system vulnerabilities. It includes such things as penetration testing, red teaming, and ethical hacking.

Offsec strives to solve problems before they arise

“Offensive cybersecurity means solving the problem before it becomes a problem, finding your own vulnerabilities through active testing, and fixing them before an adversary does,” says Stefan Leichenauer, vice president of engineering at SandboxAQ, a developer of B2B and quantum software.

What makes offensive security all the more important is that it addresses a potential blind spot for developers. “As builders of software, we tend to think about using whatever we’ve developed in the ways that it’s intended to be used,” says Caroline Wong, chief strategy officer at Cobalt Labs, a penetration testing company.

In other words, Wong says, there can be a bias towards overemphasizing the good ways in which software can be used, while overlooking misuse and abuse cases or disregarding potentially harmful uses.

“One of the best ways to identify where and how an organization or a piece of software might be susceptible to attack is by taking on the perspective of a malicious person: the attacker’s mindset,” Wong says.

AI can reduce the effect of manpower shortages

The intersection of AI and offsec is creating solutions for many of the challenges faced by offensive security practitioners, the CSA states in its report.

“Artificial intelligence, particularly LLMs, offers promising avenues for addressing many of these challenges,” the report notes. “AI could alleviate the strain on human resources, significantly augment the capabilities of offensive security testers, and enhance the effectiveness of offensive security practices in general.”

CSA Technical Research Director Sean Heide says AI can help teams analyze large data sets and correlate them to surface common themes that in the past might have been missed. “This ties into tasks such as 24/7 systems monitoring and providing suggested courses of action when certain problems arise.”

“I think the latter part is key to AI’s usage over time in the security field,” Heide says. “We are continuously told and shown statistics that there are many hundreds of thousands — if not millions — of open job positions in security that can’t be filled. Utilizing AI-generated suggestions for problem-solving can be one way to help interns or those in junior roles learn at a faster rate by expediting the rate at which they can tie things together and see how they work.”

Those shortages were borne out in the latest cybersecurity workforce study by ISC2, which pegged the global cybersecurity workforce gap at 4.8 million and estimated the total workforce needed to satisfy global demand at 10.2 million.

That gap for offsec practitioners could be narrowed with AI. “AI can help reduce the need for humans in offensive cybersecurity by automating repetitive and time-consuming tasks,” says David Lindner, CISO of Contrast Security, a maker of self-protecting software solutions.

“It can rapidly conduct vulnerability scans across multiple systems, identifying weaknesses far faster than manual efforts,” Lindner says. “AI also accelerates reconnaissance, mapping network topologies, identifying open ports, and profiling systems. By processing information from sources like architecture diagrams, scan results, and vulnerability reports, AI can use predictive analytics to anticipate potential vulnerabilities and prioritize testing. This automation not only decreases the need for human resources but also improves the efficiency and effectiveness of cybersecurity operations.”
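The triage step Lindner describes — processing scan results and prioritizing what gets tested first — can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the finding fields and scan data are hypothetical, and a real system would feed far richer context (topology, exploit availability, asset criticality) into the ranking.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    port: int
    cve: str
    cvss: float            # base severity score, 0.0-10.0
    internet_facing: bool  # exposed to the public internet?

def prioritize(findings):
    """Rank findings so the riskiest surface first:
    internet-facing assets outrank internal ones,
    then higher CVSS scores outrank lower ones."""
    return sorted(findings,
                  key=lambda f: (f.internet_facing, f.cvss),
                  reverse=True)

# Hypothetical scan results, for illustration only.
scan = [
    Finding("10.0.0.5", 22, "CVE-2023-48795", 5.9, False),
    Finding("203.0.113.7", 443, "CVE-2024-3400", 10.0, True),
    Finding("10.0.0.9", 3306, "CVE-2021-2307", 4.9, False),
]

for f in prioritize(scan):
    print(f"{f.host}:{f.port} {f.cve} (CVSS {f.cvss})")
```

The point of the automation is the ordering, not the scan itself: a human tester starts at the top of the ranked list rather than wading through raw scanner output.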

The scale of offsec programs can be boosted with AI

In addition to addressing manpower issues, AI can assist practitioners in scaling up their operations. “AI’s ability to process vast datasets and simulate large-scale attacks without human intervention allows for testing more frequently and on a broader scale,” says Augusto Barros, a cyber evangelist at Securonix, a security analytics and operations management platform provider.

“In large or complex environments, human operators would struggle to perform consistent and exhaustive tests across all systems,” Barros says. “AI can automate these tasks and simulate multiple attack vectors concurrently, ensuring a more comprehensive evaluation.”

“AI can also help perform these tests continuously, enabling organizations to scale offensive security operations in line with their growth, without a proportional increase in human resources,” he added.

The CSA report also pointed out that AI can help move offsec “left” in the development lifecycle. “With increased automation and shorter feedback cycles in offensive security, these activities can be integrated earlier in the DevSecOps process,” the report noted.

“This shift-left approach means that security considerations are embedded from the beginning of the software development lifecycle, resulting in a more proactive and fundamental impact on a business’s overall security posture. By identifying and mitigating vulnerabilities earlier, organizations can reduce the risk of security breaches and ensure more robust protection.”

AI can foster a higher frequency of testing

The CSA’s Heide says AI will enable more frequent security testing in development, automation of security checks in CI/CD pipelines, and almost immediate feedback loops on vulnerabilities for development teams. “We will also see it used for threat modeling creation during design and continuous security assessments throughout the software development lifecycle,” he says.
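The automated security checks in CI/CD pipelines that Heide mentions typically take the form of a gate: a script that reads a scanner's report and fails the build if it contains blocking findings. The sketch below is a generic illustration — the report schema, severity names, and threshold are assumptions, not any particular tool's format.

```python
import json
import sys

# Ordering used to compare severities against the gate threshold.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json: str, fail_at: str = "high") -> int:
    """Return a nonzero exit code if any finding is at or above
    the fail_at severity, so the CI job is marked as failed."""
    findings = json.loads(report_json)
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f["severity"], 0) >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    # In a pipeline this would read the scanner's report file.
    sys.exit(gate(sys.stdin.read()))
```

Because the exit code drives the pipeline, the feedback loop Heide describes is immediate: the developer sees the blocking finding in the failed job, on the commit that introduced it.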

Code is already being produced at machine speed with the help of generative AI, so it is imperative to match that velocity to remediate vulnerabilities, argues Sohail Iqbal, CISO of Veracode, a provider of cloud-based application intelligence and security verification services.

“AI platforms with remediation capabilities can play a big role in offensive cybersecurity by preventing vulnerabilities before applications are released into production,” Iqbal says. “This will help avoid long-term security challenges and more efficiently shift responsibility for software security left, to the start of the development process.”

Filling gaps in automation

GenAI can also address some deficiencies found in other kinds of automation, says Bytewhisper Security CTO Kyle Hankins.

“One thing LLMs are surprisingly good at is detecting business-logic findings that SAST has traditionally struggled with,” Hankins says. “Decent AI can make an educated guess that an API is sensitive and should require authentication or will notice that a password is being passed to a database in a way that suggests it is stored unencrypted.”

“By integrating these tools early in the development cycle, we can guide and accelerate developers early in the process to help them catch errors early,” he says. “Importantly, there’s something to be said for multiple gates. SAST tooling and manual security code review remain critical backstops to catch errors from both developers and the LLM itself.”
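The class of finding Hankins describes — a sensitive-looking route with no authentication check — can be approximated even without an LLM. The heuristic sketch below, run over hypothetical Flask-style route definitions, shows the pattern an AI reviewer would recognize with far more nuance than a regex; the route names, decorator, and source snippet are all illustrative assumptions.

```python
import re

# Path fragments that suggest a route handles sensitive operations.
SENSITIVE = re.compile(r"/(admin|billing|users|export)", re.I)

def flag_unauthenticated_routes(source: str):
    """Flag routes whose path looks sensitive but whose handler
    lacks an auth decorator -- the business-logic gap an LLM can
    spot with more flexibility than this fixed heuristic."""
    findings = []
    # Split the source into one chunk per route handler.
    for chunk in re.split(r"(?=@app\.route)", source):
        route = re.search(r"@app\.route\(\"(\S+?)\"", chunk)
        if route and SENSITIVE.search(route.group(1)) \
                and "@login_required" not in chunk:
            findings.append(route.group(1))
    return findings

# Hypothetical application source, for illustration.
app_src = '''
@app.route("/admin/reset")
def reset(): ...

@app.route("/health")
def health(): ...

@app.route("/billing/invoices")
@login_required
def invoices(): ...
'''

print(flag_unauthenticated_routes(app_src))  # ['/admin/reset']
```

Where the heuristic matches only a fixed word list, an LLM can infer sensitivity from context — which is exactly the "educated guess" about API intent that Hankins credits it with, and why SAST and manual review still backstop it.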

The rapid advancement of artificial intelligence has been a double-edged sword, bringing enhanced agency and automation that offer both new opportunities and challenges for offensive security teams globally, the CSA warns in its report.

“Malicious actors operating outside the bounds of legal and ethical frameworks are already exploiting these advancements, highlighting the critical need for defenders to innovate proactively,” the report says.

As AI systems integrate into workflows, they continue to introduce new technical and organizational challenges that must be managed carefully — vigilance is required to prevent AI-driven tools from being misused or allowed to behave unpredictably.

Risks and challenges notwithstanding, if offensive security evolves with AI capabilities, the net benefits of adding AI to the practitioner’s arsenal are undeniable, according to the CSA. “By adopting AI, training teams on its potential and risks, and fostering a culture of continuous improvement, organizations can significantly enhance their defensive capabilities and secure a competitive edge in cybersecurity.”


Original Post url: https://www.csoonline.com/article/3564657/how-ai-is-becoming-a-powerful-tool-for-offensive-cybersecurity-practitioners.html

Category & Tags: Penetration Testing, Security, Security Practices, Threat and Vulnerability Management

