Source: www.csoonline.com – Author:
Way more AI is on the menu for CISOs going forward. Here are some key tensions to keep in mind when shaping cyber defense strategies bolstered with AI.
As you walk around trying to avoid the 41,000 participants at RSA Conference in San Francisco, you become aware of the Waymo autonomous cars in the streets that always elicit an extra glance. Yes, there is no driver in that seat!
Waymo cars aim to revolutionize transportation through fully autonomous driving technology that offers the promise of a safer, more accessible, and sustainable way to get around.
At RSA Conference it is hard to not see the same proposition at the macro level of nearly every cybersecurity provider talking up the AI capabilities of their products and services. The sales pitch is equivalent to Waymo’s: These AI-enabled cyber tools will be safer, more accessible, and sustainable.

David Gee
Indeed, “way more” AI is what we are seeing in the current offerings and product roadmaps — not just at RSA, but throughout the industry today.
This promise is very enticing for CISOs, especially on the back of the very real shortage of cyber resources they are enduring. Many companies are already looking to AI to help bridge their cyber skills gaps. But the future of security with AI is yet to be shaped. How this future will unfold and impact their organizations should be top of mind for every CISO today.
RSA offered some interesting early insights CISOs should keep in mind as they further develop their strategies for implementing AI defenses and securing AI use in their enterprises.
Shaping the future of AI security
One early morning panel session I attended featured a lively discussion facilitated by Jamil Jaffer, a venture partner and strategic advisor at Paladin Capital. Joining Jaffer on stage were Jason Clinton, CISO of Anthropic; Matt Knight, CISO of OpenAI; and Sandra Joyce, VP at Google Threat Intelligence.
The panel explored how collaboration between industry and government is vital to ensuring secure AI systems. But the discussion around the use of AI tools to stave off cyberattacks and bolster cyber defenses offered fodder for thought that goes beyond the frame of the conversation.
Here are the key points of insight around AI’s evolution in cybersecurity that I gleaned from this discussion along with my own quick-take commentary on how they impact CISOs and security teams going forward.
1. AI presents a complex duality for cybersecurity, potentially offering an unfair advantage to attackers while also providing significant benefits for defenders.
At the moment, it appears the momentum is with the defense team. But this battle has just begun.
2. Defenders are seeing the development of AI tools that can accelerate malware analysis and vulnerability scanning, but the same tools can be leveraged by malicious actors.
This is double-edge sword that can also hurt us. AI is already becoming a powerful tool for offensive security pros, for use in vulnerability assessments and penetration testing, but by democratizing vulnerability hunting, generative AI is also lowering the barrier of entry for attackers as well.
3. As AI models become more intelligent and the cost of sophisticated engineering decreases, defenders could gain advantages in threat landscape visibility and automated security measures like penetration testing and vulnerability prevention.
These capabilities continue to be an ongoing treadmill that create stress for the CISO and the team to manage effectively. But even as generative AI helps make penetration testing quicker or easier, it is also making vulnerability remediation worse.
4. While concerns exist about AI’s use in advanced social engineering and information attacks, current observations suggest that AI-driven attacks haven’t yet surpassed human capabilities in sophistication and destructiveness.
A quick take could be that phishing attacks, thanks to gen AI, can now have better spelling and grammar and thus are harder to spot. But even as phishing drops as an initial access vector for cyberattackers, generative AI is already beginning to supercharge social engineering attacks by also helping to mimic writing styles, avoid traditional phishing red flags, and offer greater personalization based on public data.
5. Specific threats on the horizon include automated vulnerability discovery, which could be exploited by nation-states, and the potential for polymorphic or adaptive malware that moves autonomously within networks.
These new technologies could be exploited by rogue nation-statesand pose a significant risk because they can autonomously identify weaknesses and adapt to evade detection, making our traditional cybersecurity defenses less effective. Right out of the gate in 2023, ChatGPT delivered proof-of-concept attacks that generated polymorphic code to evade endpoint detection and response systems.
6. Early evaluations of AI models against capture-the-flag exercises show rapid learning of basic cybersecurity skills, but more complex, real-world attack simulations still pose a significant challenge.
This gap underscores the need for continued development before AI can reliably defend against or simulate advanced cyberthreats. It’s still too early to be more than a junior assistant in the security operations center.
7. Collaborative efforts in threat intelligence sharing and the development of offensive AI models for testing in controlled environments are being explored to better understand and prepare for future threats.
These proactive approaches can help build resilience across sectors and ensure that defensive strategies evolve alongside advanced AI-driven threats. But here, collaboration is key. The US Trump administration recently shifted threat preparedness to state and local governments as it made cuts to its cyberthreat information-sharing policies.
8. AI is being used to enhance the work of security analysts by processing vast amounts of intelligence data quickly, but human oversight remains crucial for decision-making.
While AI can significantly accelerate the data analysis and help detect threats faster, it still lacks the human judgment, context awareness, and ethical reasoningour analogue colleagues possess.
9. Autonomous agents are emerging that can perform typical cyber defense tasks and even offensive actions in staging environments.
This is the start of a significant move toward greater automation in security workflows that have been based historically on manual playbooks. Agentic AI, which promises autonomous decision-making capabilities, can be a boon for security teams, but their autonomous nature could also do more harm than good — and their presence elsewhere in the enterprise will be another headache for CISOs to address.
10. Despite the potential risks of unconstrained AI models in the hands of adversaries, the security community emphasizes responsible development, threat intelligence sharing, and the importance of fundamental security practices to maintain a defensive edge.
This is important because it highlights the security community’s proactive approach to mitigating the dangers of AI misuse. We must work together to shape the AI future that will benefit cyber defenses for all.
Going forward
We all in enterprises and government have been trying to address these challenges with new risk frameworks and policies. There are deep concerns that AI tools when combined with insider threats can pose a significant risk.
The ability of our teams to adopt these tools is still constrained by our lack of our own understanding of how to use AI and this trend will be overcome in time. But as noted earlier, the duality of AI — for defensive and attacking purposes — implies that waiting will come at an increased risk.
If we wait, then we accept the risk that our attackers gain a time and space advantage. And of course, attackers don’t have to worry about “Responsible AI” usage policies.
So CISOs need to figure out a way to advance their teams’ knowledge on what AI can do to bolster security and how it can be used to fill gaps, not just in security tools currently in use but in the skills on hand to make the most of them.
SUBSCRIBE TO OUR NEWSLETTER
From our editors straight to your inbox
Get started by entering your email address below.
David Gee is a contributing writer for the Foundry group of publications. He has more than 20 years experience as CIO, CISO and Technology, Cyber & Data Risk Executive across Financial Services and Pharmaceutical industries. He served as Global Head Technology, Cyber and Data Risk at Macquarie Group and as CISO for HSBC Asia Pacific. David has made the transition to Board Advisor, Non-Executive Director and Strategic IT Advisor. He has written extensively for Foundry Australia across CIO, Computerworld and CSO over several years, and has just written a new book, The Aspiring CIO and CISO.
More from this author
Show me more
Original Post url: https://www.csoonline.com/article/3974052/10-insights-on-the-state-of-ai-security-from-rsa-conference.html
Category & Tags: Artificial Intelligence, RSA Conference, Security Practices – Artificial Intelligence, RSA Conference, Security Practices
Views: 3