Source: www.securityweek.com – Author: Alastair Paterson
Over the next 12-18 months, organizations will face an increasingly complex landscape of AI compliance frameworks and regulations. While AI adoption accelerates across industries, governments worldwide are advancing legislation to address its risks and usage. For security executives, these frameworks introduce significant challenges in governance, risk management, and compliance planning.
In the European Union, the EU AI Act marks a critical development, with its phased rollout beginning in February 2025. Meanwhile, in the United States, regulatory initiatives include proposed SEC rules and a growing patchwork of state-level legislation, such as Colorado’s Artificial Intelligence Act. According to global law firm BCLP, at least 15 US states have enacted AI-related legislation, with more in development. Outside of these regions, China’s regulators have been iterating on AI rules since 2022, introducing yet another layer of complexity for global enterprises.
And this is just the start. In addition to AI-specific regulations, broader frameworks like the Digital Operational Resilience Act (DORA) introduce industry-specific requirements that intersect with AI use, particularly in financial services and other regulated sectors.
For multinational organizations, aligning compliance efforts across these overlapping and evolving regulations will be a significant challenge in the coming years.
February 2025: The Beginning of the EU AI Act Rollout
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 2025, providers of new general-purpose AI (GPAI) models must comply with the Act’s requirements for such models. Also similar to the GDPR is the threat of huge fines for non-compliance: up to EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies. In my conversations with security and compliance leaders, few report having implemented enforceable policies to govern AI use internally, much less being able to demonstrate compliance to outside regulators.
This phase highlights a broader opportunity for security awareness training programs to expand beyond traditional exercises such as phishing simulations. For example, dynamic and automated training assignments—a model already employed by platforms like KnowBe4—could help organizations ensure employees are equipped to understand and mitigate AI-related risks.
High-Risk Applications and the Challenge of AI Asset Inventories
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
The concept of Shadow IT, where employees adopt tools without approval, is not new, but generative AI tools have amplified the problem. Compared to legacy software, AI tools are often more enticing to end users, who may circumvent controls to leverage their perceived productivity benefits. The result is the rise of “Shadow AI,” where unsanctioned or embedded AI capabilities are used without security oversight.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
If the proposed SEC regulations are enacted in the United States, AI asset management will become even more critical. Organizations will need to implement robust processes to inventory, monitor, and manage AI systems across their environments.
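To make that concrete, the sketch below shows one way an AI asset inventory record might be structured, distinguishing standalone tools from embedded AI and flagging unsanctioned “Shadow AI” for review. The field names, risk tiers, and example entries are illustrative assumptions, not drawn from the EU AI Act, the SEC proposals, or any specific product.

```python
# Minimal sketch of an AI asset inventory record. Field names, risk tiers,
# and example entries are illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class AIAssetType(Enum):
    STANDALONE = "standalone"   # e.g. a generative AI chatbot used directly
    EMBEDDED = "embedded"       # AI features built into a SaaS platform


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AIAsset:
    name: str
    vendor: str
    asset_type: AIAssetType
    sanctioned: bool                                  # approved by security/governance?
    data_categories: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MINIMAL
    owner: str = ""                                   # accountable business owner


def unsanctioned_assets(inventory: list[AIAsset]) -> list[AIAsset]:
    """Surface potential 'Shadow AI': assets in use without approval."""
    return [a for a in inventory if not a.sanctioned]


if __name__ == "__main__":
    inventory = [
        AIAsset("ChatGPT", "OpenAI", AIAssetType.STANDALONE, sanctioned=True,
                data_categories=["marketing copy"], owner="Marketing"),
        AIAsset("CRM AI assistant", "ExampleSaaS", AIAssetType.EMBEDDED,
                sanctioned=False, data_categories=["customer PII"],
                risk_tier=RiskTier.HIGH),
    ]
    for asset in unsanctioned_assets(inventory):
        print(f"Review needed: {asset.name} ({asset.vendor})")
```

Even a simple record like this gives governance teams a common vocabulary for the embedded-AI problem: the second entry would never appear in a list of approved standalone tools, yet it may be processing regulated data.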
Understanding AI Use Cases: Beyond Tool Tracking
Where frameworks like the EU AI Act grow more complex is in their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing.
For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content. As AI usage expands, organizations must gain detailed visibility into these use cases to evaluate their risk profiles and ensure regulatory compliance.
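As a rough illustration of that point, the sketch below scores a use case by the sensitivity of the data shared and the impact of the task the AI performs, so the same tool can land in very different tiers depending on how it is used. The categories, weights, and thresholds are assumptions made up for the example, not a regulatory classification.

```python
# Rough illustration of use-case risk scoring: the same tool can sit in very
# different risk tiers depending on the data shared and the task performed.
# Categories, weights, and thresholds below are illustrative assumptions only.

DATA_SENSITIVITY = {
    "public": 0,
    "internal": 1,
    "confidential": 2,
    "regulated": 3,          # e.g. personal or financial data
}

TASK_IMPACT = {
    "drafting": 0,           # marketing copy, boilerplate text
    "summarization": 1,      # condensing internal documents
    "decision_support": 2,   # informing business decisions
    "automated_decision": 3, # acting on people without human review
}


def use_case_risk(data_class: str, task: str) -> str:
    """Combine data sensitivity and task impact into a coarse risk tier."""
    score = DATA_SENSITIVITY[data_class] + TASK_IMPACT[task]
    if score >= 5:
        return "high"
    if score >= 3:
        return "elevated"
    return "low"


if __name__ == "__main__":
    # Same generative AI tool, two use cases, two different risk profiles.
    print(use_case_risk("public", "drafting"))             # low
    print(use_case_risk("confidential", "summarization"))  # elevated
```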
This is no small task. Developing the ability to monitor and manage AI use cases across a global enterprise will demand significant resources, particularly as regulations mature over the next 12-24 months.
The EU AI Act: Part of a Larger Governance Puzzle
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025. Regardless of geography, organizations will face growing pressure to understand, manage, and document their AI deployments.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments. While the challenges are significant, proactive organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
Three Steps to Success
With regulatory momentum accelerating globally, preparation today will be essential to avoid disruption tomorrow. Here’s what organizations can do now:
- Establish an AI Committee – if you haven’t already, assemble a cross-functional team to tackle the challenge of AI. This should include governance representatives, but also security and business stakeholders
- Get visibility – understand what your employees are using and what they are using it for
- Train users to understand AI and its risks
Related: Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?
Related: OpenAI Co-Founder Sutskever Sets up New AI Company Devoted to ‘Safe Superintelligence’
Related: AI’s Future Could be Open-Source or Closed. Tech Giants Are Divided as They Lobby Regulators
Related: Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses
Original Post URL: https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Category & Tags: Artificial Intelligence, Government, AI, Regulations