
How AI is changing the GRC strategy – Source: www.csoonline.com

Source: www.csoonline.com

CISOs find themselves at a pinch point: they must manage AI risks while supporting organizational innovation. The way forward is adapting GRC frameworks.

As businesses incorporate cybersecurity into governance, risk and compliance (GRC), it is important to revisit existing GRC programs to ensure that the growing use and risks of generative and agentic AI are addressed so businesses continue to meet regulatory requirements.

“[AI] It’s a hugely disruptive technology in that it’s not something you can put into a box and say ‘well that’s AI’,” says Jamie Norton, a member of the ISACA board of directors and CISO of the Australian Securities and Investments Commission (ASIC).

It’s hard to quantify AI risk, but data on how AI adoption expands and transforms an organization’s risk surface provides a clue. According to Check Point’s 2025 AI security report, 1 in every 80 prompts (1.25%) sent to generative AI services from enterprise devices carried a high risk of sensitive data leakage.
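To make that exposure concrete, here is a rough illustration (not drawn from Check Point’s report) of how a security team might flag obviously sensitive content in prompts before they leave an enterprise device. The patterns are placeholders; real data loss prevention tooling is far more sophisticated.

```python
import re

# Hypothetical patterns for obvious sensitive data; real DLP tooling uses far
# richer detectors (ML classifiers, exact-match dictionaries, and so on).
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this statement for jane.doe@example.com, card 4111 1111 1111 1111"
    findings = screen_prompt(prompt)
    if findings:
        print(f"High-risk prompt, redact or block before it leaves the device: {findings}")
```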

CISOs face the challenge of keeping pace with business demands for innovation while securing AI deployments with these risks in view. “With their pure security hat on, they’re trying to stop shadow AI from becoming a cultural thing where we can just adopt and use it [without guardrails],” Norton tells CSO.

AI is not a typical risk, so how do GRC frameworks help?

Governance, risk and compliance is a concept that originated with the Open Compliance and Ethics Group (OCEG) in the early 2000s as a way to define a set of critical capabilities for addressing uncertainty, acting with integrity, and ensuring compliance in support of organizational objectives. Since then, GRC has developed from compliance-focused rules and checklists into a broader approach to managing risk. Data protection requirements, the growing regulatory landscape, digital transformation efforts, and board-level focus have driven this shift in GRC.

At the same time, cybersecurity has become a core enterprise risk and CISOs have helped ensure compliance with regulatory requirements and establish effective governance frameworks. Now as AI expands, there’s a need to incorporate this new category of risk into GRC frameworks.

However, industry surveys suggest there’s still a long way to go for the guardrails to catch up with AI. Only 24% of organizations have fully enforced enterprise AI GRC policies, according to the 2025 Lenovo CIO playbook. At the same time, AI governance and compliance is the number one priority, the report found.

The industry research suggests that CISOs will need to help strengthen AI risk management as a matter of urgency, driven by leadership’s hunger to realize some pay-off without moving the risk dial.

CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. “They’re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don’t let it be something that kills the business if we do it wrong,” says Marcus.

To support risk-aware adoption of AI, Marcus’ advice is for CISOs to avoid going it alone and instead foster broad trust and buy-in for risk management across the organization. “The really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we’re all in it together and you’re not here to slow them down.”

This approach should help encourage transparency about how and where AI is being used across the organization. Cybersecurity leaders must try to gain visibility by establishing an operational security process that captures where AI is currently being used and where requests for new AI are emerging, says Norton.

“Every single product you’ve got these days has some AI and there’s not one governance forum that’s picking it all up across the spectrum of different forms [of AI],” he says.

Norton suggests CISOs develop strategic and tactical approaches to define the different types of AI tools, capture the relative risks, and balance potential pay-off in productivity and innovation. Tactical measures such as secure by design processes, IT change processes, shadow AI discovery programs or risk-based AI inventory and classification are practical ways to deal with the smaller AI tools. “Where you have more day-to-day AI — that bit of AI sitting in some product or some SaaS platform, which is growing everywhere — this might be managed through a tactical approach that identifies what [elements] need oversight,” Norton says.
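As an illustration of the tactical side, a risk-based AI inventory can start as a simple structured register that records each AI capability discovered in the estate and assigns it a coarse risk tier. The sketch below is hypothetical; the field names, tiers and sensitivity categories are assumptions, not a prescribed ISACA or ASIC approach.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One discovered AI capability: a standalone tool or a feature embedded in a SaaS product."""
    name: str
    vendor: str
    data_categories: list[str]       # e.g. ["customer PII", "source code"]
    embedded_in_saas: bool = False   # the day-to-day AI sitting inside an existing product
    approved: bool = False

SENSITIVE = {"customer PII", "source code", "financial records"}

def risk_tier(entry: AIInventoryEntry) -> str:
    """Coarse, illustrative tiering: unapproved tools touching sensitive data rank highest."""
    sensitive = any(category in SENSITIVE for category in entry.data_categories)
    if sensitive and not entry.approved:
        return "high"
    if sensitive or not entry.approved:
        return "medium"
    return "low"

inventory = [
    AIInventoryEntry("Meeting summarizer", "ExampleVendor", ["meeting notes"], embedded_in_saas=True, approved=True),
    AIInventoryEntry("Code assistant", "ExampleVendor", ["source code"]),
]

for entry in inventory:
    print(f"{entry.name}: {risk_tier(entry)} risk")
```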

The strategic approach applies to the big AI changes that are coming with major tools such as Microsoft Copilot and ChatGPT. Securing these ‘big ticket’ AI tools using internal AI oversight forums is somewhat easier than securing the plethora of other tools that are adding AI.

CISOs can then focus their resources on the highest-impact risks in a way that doesn’t create processes that are unwieldy or unworkable. “The idea is not to bog this down so that it’s almost impossible to get anything, because organizations typically want to move quickly. So, it’s more of a relatively lightweight process that applies this consideration [of risk] to either allow AI or be used to prevent it if it’s risky,” Norton says.

Ultimately, the task is for security leaders to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. “A lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,” Norton says. “These days, it’s no longer about CISOs saying ‘yes’ or ‘no’. It’s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.”

Adapting existing frameworks with AI risk controls

AI risks include data safety, misuse of AI tools, privacy considerations, shadow AI, bias and ethical considerations, hallucinations and validating results, legal and reputational issues, and model governance to name a few.

AI-related risks should be established as a distinct category within the organization’s risk portfolio by integrating them into the GRC pillars, says Dan Karpati, VP of AI technologies at Check Point. Karpati suggests four pillars:

  • Enterprise risk management defines AI risk appetite and establishes an AI governance committee.
  • Model risk management monitors model drift, bias and adversarial testing.
  • Operational risk management includes contingency plans for AI failures and human oversight training.
  • IT risk management includes regular audits, compliance checks for AI systems, governance frameworks and aligning with business objectives.
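Below is a minimal sketch of how such a distinct AI risk category might be filed against those four pillars in a risk register. The pillar names follow Karpati’s outline; the example risks and owners are purely illustrative.

```python
# Hypothetical register grouping AI risks under the four pillars described above.
ai_risk_register = {
    "enterprise_risk_management": [
        {"risk": "AI use exceeds stated risk appetite", "owner": "AI governance committee"},
    ],
    "model_risk_management": [
        {"risk": "Model drift degrades decision quality", "owner": "Data science lead"},
        {"risk": "Bias in model outputs", "owner": "Data science lead"},
    ],
    "operational_risk_management": [
        {"risk": "No contingency plan for AI service failure", "owner": "Operations"},
    ],
    "it_risk_management": [
        {"risk": "AI systems outside audit and compliance scope", "owner": "IT risk"},
    ],
}

for pillar, risks in ai_risk_register.items():
    for item in risks:
        print(f"[{pillar}] {item['risk']} -> owner: {item['owner']}")
```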

To help map these risks, CISOs can look at the NIST AI Risk Management Framework and other frameworks, such as COSO and COBIT, and apply their core principles — governance, control, and risk alignment — to cover AI characteristics such as probabilistic output, data dependency, opacity in decision-making, autonomy, and rapid evolution. ISO/IEC 42001, an emerging benchmark, provides a structured framework for AI oversight and assurance, intended to embed governance and risk practices across the AI lifecycle.
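As one illustration, those AI characteristics could each be given a primary home under the NIST AI RMF’s four core functions (Govern, Map, Measure, Manage). The mapping below is a judgment call for the organization, not something the framework itself prescribes.

```python
# Illustrative only: which NIST AI RMF function "owns" each characteristic is an
# organizational decision, not a mandate of the framework.
nist_ai_rmf_mapping = {
    "probabilistic output": "Measure",        # quantify accuracy, drift, error rates
    "data dependency": "Map",                 # trace data sources and dependencies
    "opacity in decision making": "Govern",   # set explainability and review standards
    "autonomy": "Manage",                     # human oversight and fallback plans
    "rapid evolution": "Govern",              # periodic policy and control review
}

for characteristic, function in nist_ai_rmf_mapping.items():
    print(f"{characteristic}: primary NIST AI RMF function -> {function}")
```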

Adapting these frameworks offers a way to elevate AI risk discussion, align AI risk appetite with the organization’s overarching risk tolerance, and embed robust AI governance across all business units. “Instead of reinventing the wheel, security leaders can map AI risks to tangible business impacts,” says Karpati.

AI risks can also be mapped to the potential for financial losses from fraud or flawed decision-making, reputational damage from data breaches, biased outcomes or customer dissatisfaction, operational disruption from system failures and poor integration with legacy systems, and legal and regulatory penalties. CISOs can utilize frameworks like FAIR (factor analysis of information risk) to assess the likelihood of an AI-related event, estimate losses in monetary terms, and derive risk exposure metrics. “By analyzing risks from both qualitative and quantitative perspectives, business leaders can better understand and weigh security risks against financial benchmarks,” says Karpati.
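Here is a minimal sketch of FAIR-style quantification for a single AI-related loss scenario, assuming simple triangular estimates for event frequency and loss magnitude. Real FAIR analyses decompose these factors much further, and the numbers below are placeholders.

```python
import random

def simulate_annual_loss(frequency_range, magnitude_range, iterations=10_000, seed=42):
    """Monte Carlo estimate of annualized loss exposure for one AI-related scenario.

    frequency_range: (min, most_likely, max) loss events per year
    magnitude_range: (min, most_likely, max) loss per event in dollars
    """
    rng = random.Random(seed)
    f_low, f_mode, f_high = frequency_range
    m_low, m_mode, m_high = magnitude_range
    losses = []
    for _ in range(iterations):
        # random.triangular takes (low, high, mode)
        frequency = rng.triangular(f_low, f_high, f_mode)
        magnitude = rng.triangular(m_low, m_high, m_mode)
        losses.append(frequency * magnitude)
    losses.sort()
    return {"mean": sum(losses) / len(losses), "p90": losses[int(0.9 * len(losses))]}

# Hypothetical scenario: sensitive data leaked through a generative AI prompt.
exposure = simulate_annual_loss(frequency_range=(0.1, 0.5, 2.0),
                                magnitude_range=(50_000, 200_000, 1_000_000))
print(f"Mean annual exposure: ${exposure['mean']:,.0f}; 90th percentile: ${exposure['p90']:,.0f}")
```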

In addition, with emerging regulatory requirements, CISOs will need to monitor draft regulations, track requests for comment periods, have early warnings about new standards, and then prepare for implementation before ratification, says Marcus.

Tapping into industry networks and peers can help CISOs stay on top of threats and risks as they emerge, while reporting functions in GRC platforms monitor regulatory changes. “It’s helpful to know what risks are manifesting in the field, what would have protected other organizations, and collectively building key controls and procedures that will make us as an industry more resilient to these types of threats over time,” Marcus says.

Governance is a critical part of the broader GRC framework, and CISOs have an important role in setting the organizational rules and principles for how AI is used responsibly.

Developing governance policies

In addition to defining risks and managing compliance, CISOs are having to develop new governance policies. “Effective governance needs to include acceptable use policies for AI,” says Marcus. “One of the early outputs of an assessment process should define the rules of the road for your organization.”

Marcus suggests a stoplight system — red, yellow, green — that classifies AI tools as approved for use, or not, within the business. It provides clear guidance to employees and gives technically curious staff a safe space to explore, while enabling security teams to build detection and enforcement programs. Importantly, it also lets security teams offer a collaborative approach to innovation.

‘Green’ tools have been reviewed and approved, ‘yellow’ require additional assessment and specific use cases, and those labelled ‘red’ lack the necessary protections and are prohibited from employee use.
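A minimal sketch of how the stoplight idea might be encoded so that employees, risk analysts and detection tooling all consult the same register is shown below. The tool names and the default-to-yellow rule are assumptions for illustration, not AuditBoard’s actual system.

```python
from enum import Enum

class Stoplight(Enum):
    GREEN = "reviewed and approved"
    YELLOW = "requires additional assessment and an approved use case"
    RED = "lacks necessary protections; prohibited"

# Placeholder entries; a real register would sit alongside the acceptable-use
# policy and feed detection and enforcement tooling.
ai_tool_register = {
    "approved-enterprise-copilot": Stoplight.GREEN,
    "generic-public-chatbot": Stoplight.YELLOW,
    "unvetted-browser-extension": Stoplight.RED,
}

def check_tool(tool_name: str) -> Stoplight:
    """Unknown tools default to YELLOW so they get assessed rather than silently allowed or blocked."""
    return ai_tool_register.get(tool_name, Stoplight.YELLOW)

print(check_tool("generic-public-chatbot").value)
```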

At AuditBoard, Marcus and the team have developed a standard for AI tool selection that includes protecting proprietary data and retaining ownership of all inputs and outputs among other things. “As a business, you can start to develop the standards you care about and use these as a yardstick to measure any new tools or use cases that get presented to you.”

He recommends CISOs and their teams define the guiding principles up front, educate the company about what’s important and help teams self-enforce by filtering out things that don’t meet that standard. “Then by the time [an AI tool] gets to the CISO, people have an understanding of what the expectations are,” Marcus says.

When it comes to specific AI tools and use cases, Marcus and the team have developed ‘model cards’, one-page documents that outline the AI system architecture including inputs, outputs, data flows, intended use case, third parties, and how the underlying model is trained. “It allows our risk analysts to evaluate whether that use case violates any privacy laws or requirements, any security best practices and any of the emerging regulatory frameworks that might apply to the business,” he tells CSO.
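Below is a minimal sketch of what such a one-page model card might capture as structured data so analysts can review each use case consistently. The field names and example entries are illustrative, not AuditBoard’s actual template.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """One-page summary a risk analyst can review against privacy, security and regulatory requirements."""
    system_name: str
    intended_use_case: str
    inputs: list[str]
    outputs: list[str]
    data_flows: str
    third_parties: list[str]
    training_data_summary: str

card = ModelCard(
    system_name="Invoice triage assistant",
    intended_use_case="Route incoming invoices to the correct approver",
    inputs=["invoice PDF", "vendor master record"],
    outputs=["suggested approver", "confidence score"],
    data_flows="Invoices sent to a vendor-hosted model endpoint over TLS",
    third_parties=["ExampleVendor AI API"],
    training_data_summary="Vendor foundation model; no customer data used for fine-tuning",
)

print(json.dumps(asdict(card), indent=2))
```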

The process is intended to identify potential risks and communicate them to stakeholders within the organization, including the board. “If you’ve evaluated dozens of these use cases, you can pick out the common risks and common themes, aggregate those and then come up with strategies to mitigate some of those risks,” he says.

The team can then look at what compensating controls can be applied, how far they can be applied across different AI tools and provide this guidance to the executive. “It shifts the conversation from a more tactical conversation about this one use case or this one risk to more of a strategic plan for dealing with the ‘AI risks’ in your organization,” Marcus says.

Jamie Norton warns that now that AI’s shiny interface is readily accessible to everyone, security teams need to train their focus on what’s happening under the surface of these tools. Applying strategic risk analysis, utilizing risk management frameworks, monitoring compliance, and developing governance policies can help CISOs guide the organization on its AI journey.

“As CISOs, we don’t want to get in the way of innovation, but we have to put guardrails around it so that we’re not charging off into the wilderness and our data is leaking out,” says Norton.


Original Post url: https://www.csoonline.com/article/4016464/how-ai-is-changing-the-grc-strategy.html

Category & Tags: Compliance, IT Governance Frameworks, Risk Management
