From understanding what AI means in the context of the organization to being compliant and not forgetting the role third parties play, here are ten key things to keep in mind when creating an AI policy.
The popularity of generative AI has created tricky terrain for organizations to navigate. On the one hand, there is a transformative technology with the potential to reduce costs and increase revenues; on the other, misuse of AI can upend entire industries and lead to public relations disasters, customer and employee dissatisfaction, and security breaches, not to mention money wasted on failed AI projects.
Researchers disagree about how much return enterprises are seeing on their AI investments, but surveys show increased adoption of generative AI across more business use cases and steady growth in projects moving from pilot to production. A Zscaler AI security report released in late March found a 3,464% increase in enterprise AI activity.
But with the growing awareness of the potential of generative AI, there’s also a growing awareness of its risks. For example, according to Zscaler, enterprises currently block 60% of all AI transactions, with ChatGPT being the individual application blocked most often. One reason? There were around 3 million attempts by users to upload sensitive data to ChatGPT alone, Zscaler reports.
A carefully thought-out AI use policy can help a company set criteria for risk and safety, protect customers, employees, and the general public, and help the company zero in on the most promising AI use cases. “Not embracing AI in a responsible manner is actually reducing your advantage of being competitive in the marketplace,” says Bhrugu Pange, managing director who leads the technology services group at AArete, a management consulting firm.
According to a survey of over 300 C-suite executives by employment and labor law practice Littler, 42% of companies had an AI policy in place as of September 2024 — up from just 10% a year earlier. Another 25% of organizations are currently working on AI policies, and 19% are considering one.
If you’re still working on your AI policy — or are updating your existing one — here are ten key areas your policy should cover.
Clear definition of AI
AI means different things to different people. Search engines have AI in them. So do grammar checkers and Photoshop. Nearly every enterprise vendor is busy adding AI functionality to their platforms. Even things that have barely any intelligence at all are being rebranded as AI to get attention and funding.
It helps to have a common definition of AI when discussing risks, benefits, and investments.
Aflac began creating its official AI policy in early 2024, building on its existing policy frameworks, says Tera Ladner, Aflac’s deputy global CISO. And Aflac isn’t the only company to realize that AI can be a very vague term.
Principal Financial Group CIO Kathy Kay says her company also had to come up with a clear definition of AI, because it quickly became apparent that people were talking about AI differently. The firm developed a definition of what AI means in its own context, so that everyone discussing it is aligned.
And, since AI can mean different things to different people, it helps to have a variety of viewpoints involved in the discussion.
Input from all stakeholders
At Aflac, the security team took the initial lead on developing the company’s AI policy. But AI is not just a security concern. “And it’s not just a legal concern,” Ladner says. “It’s not just a privacy concern. It’s not just a compliance concern. You need to bring all the stakeholders together. I also highly recommend that your policy be sanctioned or approved by some sort of governing committee or body, so it has the teeth you need.”
An AI policy must serve the entire company, including individual business units.
At Principal Financial, Kay says that she and the company’s chief compliance officer were the executive sponsors of their AI policy. “But we had business unit representations, legal, compliance, technologists, and we even had HR engaged,” she adds. “Everybody learns together and you can align the outcomes you want to achieve.”
Intuit also put together a multidisciplinary team to create its AI policy. That helped the company create enterprise-wide governance policies and helped it cover legal requirements, industry standards, and best practices, according to Liza Levitt, Intuit’s VP and deputy general counsel. “The team includes people with expertise in data privacy, AI, data science, engineering, product management, legal, compliance, security, ethics, and public policy.”
Start at the organization’s core principles
An AI policy needs to start with the organization’s core values around ethics, innovation, and risk. “Don’t just write a policy to write a policy to meet a compliance checkmark,” says Avani Desai, CEO at Schellman, a cybersecurity firm that works with companies on assessing their AI policies and infrastructure. “Build a governance framework that’s resilient, ethical, trustworthy, and safe for everyone — not just so you have something that nobody looks at.”
Starting with core values will help with the creation of the rest of the AI policy. “You want to establish clear guidelines,” Desai says. “You want everyone from top down to agree that AI has to be used responsibly and has to align with business ethics.”
Having these principles in place will also help companies stay ahead of regulations.
Align with regulatory requirements
According to Gartner, AI governance will become a requirement of all sovereign AI laws and regulations worldwide by 2027.
The biggest AI regulation already in place is the EU’s AI Act. “The EU AI Act is the only framework that I’ve seen that covers everything,” says Schellman’s Desai. And it applies to any company delivering products in the EU or to EU citizens.
The act sets certain minimum standards that all sizable companies need to follow, she says. “It’s very similar to what happened with GDPR. US companies were forced to comply because they couldn’t bifurcate the data of who is in the EU and who’s not. You don’t want to build a new system just for EU data.”
And the GDPR isn’t the only other regulation to consider: plenty of data privacy regulations around the world are relevant to AI deployment as well. And, of course, there are industry-specific data privacy rules, such as those for health care and financial services.
Some regulators and standards-setting bodies have already begun looking at how to update their policies for generative AI. “We depended heavily on the NAIC guidance that was released that was specific to insurance companies,” says Aflac’s Ladner. “We wanted to be sure that we were capturing the guidelines and guardrails that NAIC was prescribing and making sure they were in place.”
Establish clear responsible use guidelines
Can employees use public AI chatbots or only secure, company-approved tools? Can business units create and deploy their own AI agents? Can HR switch on and use the new AI-powered features in their HR software? Can sales and marketing use AI-generated images? Should humans review all AI output, or are reviews only necessary for high-risk use cases?
These are the kinds of questions that go into the responsible use section of a company’s AI policy and depend on an organization’s specific needs.
For example, at Principal Financial, code generated by AI needs review, says Kay. “We’re not just unleashing code into the wild. We will have a human in the middle.” Similarly, if the firm builds an AI tool to serve customer-facing employees, there will be a human checking the output, she says.
Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data risk and privacy principal at PwC. “You don’t want to overly restrict the low-risk stuff,” he says. “If you’re using Copilot to transcribe an interview, that’s relatively low risk. But if you’re using AI to make a loan decision or decide what an insurance rate should be, that has more consequences and you need to provide more human review.”
Don’t forget the impact of third parties
If something goes wrong due to an AI-related issue, the public isn’t going to care that the fault lies with a vendor or contractor rather than with you. Whether the problem is a data breach or a violation of a fair lending law, the buck stops with you.
That means that an AI policy can’t just cover a company’s own internal systems and employees but also vendor selection and oversight.
Some vendors will offer indemnification and contractual protections. Avoiding vendor lock-in will also help reduce third-party risk: if a locked-in provider violates your AI policy, it can be difficult to switch.
When it comes to AI model providers, being model agnostic from the start will help manage that risk. This means that, instead of hard-coding one AI or another into an enterprise workflow, the choice of model is left flexible, so it can be changed later.
It does take more work up front, but there are other business benefits in addition to reducing risk. “The technology is changing,” says PwC’s Sen. “You don’t know if one model will be better than another two years from now.”
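For teams building on large language models, model agnosticism usually comes down to a thin abstraction layer between business logic and the vendor SDK. The following Python sketch shows the general idea; the class names, vendors, and claim-summarization workflow are hypothetical illustrations, not anything the companies quoted here have described.

```python
# Minimal sketch of a model-agnostic abstraction. Vendors and the
# claim-summarization workflow are illustrative assumptions only.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Thin interface the rest of the workflow codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class VendorAChat(ChatModel):
    def complete(self, prompt: str) -> str:
        # In production this would call vendor A's SDK.
        return f"[vendor A draft for: {prompt[:40]}]"


class VendorBChat(ChatModel):
    def complete(self, prompt: str) -> str:
        # In production this would call vendor B's SDK.
        return f"[vendor B draft for: {prompt[:40]}]"


def summarize_claim(model: ChatModel, claim_text: str) -> str:
    """Business logic depends only on the interface, never on a specific vendor."""
    return model.complete(f"Summarize this claim for an adjuster: {claim_text}")


if __name__ == "__main__":
    # Swapping providers later is a configuration change, not a rewrite.
    model: ChatModel = VendorAChat()
    print(summarize_claim(model, "Windstorm damage to roof, filed 2025-03-14."))
```

The upfront cost is the extra interface, but swapping providers later becomes a configuration change rather than a rewrite, which is the risk reduction Sen describes.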
Establish clear governance structure
An AI policy that sets clear expectations is half the battle, but the policy is not going to be particularly effective if it doesn’t also lay out how it will be enforced.
Only 45% of organizations are at the level of AI governance maturity where their AI policy is aligned with their operating model, says Lauren Kornutick, analyst at Gartner, citing a 2024 Gartner survey. “The rest may have a policy in place of what’s acceptable use, or have a responsible AI policy in place, but haven’t effectively operationalized it throughout the organization,” she says.
Who gets to decide if a particular use case meets a company’s guidelines, and who gets to enforce this decision?
“Policy is great but it’s not enough,” she says. “I hear that pretty consistently from our CISOs and our privacy officers.” Getting this straightened out is valuable, she says: companies that are effective at it are 12% more advanced in their technology deployments.
And the first step, says Sanjeev Vohra, chief technology and innovation officer at Genpact, is to figure out what AI the company already has in place. “Many companies don’t have a full inventory of their usage of AI. That’s what we recommend as the first thing, and you’ll be surprised by how much time it takes.”
Use technology to ensure compliance
One way to check if an AI policy is being followed is to use automated systems. “We’re seeing technology emerge to support policy enforcement,” says Gartner’s Kornutick.
For example, an AI-powered workflow can include a manual review step, where a human steps in and checks the work. Or data loss prevention tools can be used to ensure that employees don’t upload sensitive data to public chatbots.
“Every client that I work with has monitoring capabilities to see where there’s exfiltration of data, see what’s downloaded into their environment, and has ways to block access to sites that haven’t been approved or that represent risks to enterprises,” says Dan Priest, chief AI officer at PwC. “The risk is real but there are good ways to manage those risks.”
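As a rough illustration of what such enforcement hooks can look like in code, the Python sketch below combines a crude pattern check on outgoing prompts (a stand-in for a real DLP tool, not a substitute for one) with a mandatory human-review step for high-risk use cases. The patterns, risk labels, and function names are assumptions made for the example.

```python
# Illustrative sketch of two policy-enforcement hooks: a crude sensitive-data
# check on prompts and a human-review gate on high-risk output.
# Patterns and risk labels are made up for the example.
import re
from typing import Callable

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-style numbers
    re.compile(r"\b\d{13,16}\b"),           # long digit runs (card numbers)
    re.compile(r"(?i)\bconfidential\b"),    # internal classification labels
]


def violates_data_policy(prompt: str) -> bool:
    """Return True if the prompt appears to contain sensitive data."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)


def generate_with_review(
    prompt: str,
    risk_level: str,
    model: Callable[[str], str],
    reviewer: Callable[[str], str],
) -> str:
    """Run the model, but require human sign-off for high-risk use cases."""
    if violates_data_policy(prompt):
        raise ValueError("Prompt blocked: possible sensitive data detected")

    draft = model(prompt)
    if risk_level == "high":
        # A human must approve (and may edit) the draft before it is used.
        return reviewer(draft)
    return draft
```

In practice, organizations would lean on their existing DLP, proxy, and approval tooling rather than hand-rolled regexes, but the control points are the same: inspect what goes in, and gate what comes out.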
Plan for all possibilities, including the worst
Things happen. No matter how good and comprehensive an AI policy is, there will be violations, and there will be problems. A company chatbot will say something embarrassing or make a promise the company can’t keep because the right guardrails weren’t activated.
“You hear some interesting and fun examples of where AI has gone wrong,” says Priest. “But it’s a very minor part of the conversation, because there are reasonable ways to manage those risks. And if there’s any volume of those risks manifesting, you activate countermeasures at the architectural layer, at the policy layer, and at the training layer.”
And just as a company needs to have technical measures in place for when AI goes off track, an AI policy also needs to include incident response in case the problem is bigger, and management response for cases in which employees, customers, or business partners deliberately or accidentally violate the policy.
For example, employees in a particular department might routinely forget to review documents before they are sent to customers, or a business unit might set up a shadow AI system that ignores data privacy or security requirements.
“Who do you call?” asks Schellman’s Desai.
There needs to be a process, and training, to ensure that people are in place to deal with violations and have the power they need to set things right. And if there’s a problem with an entire AI process, there needs to be a way for the system to be shut off without doing damage to the company.
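One common way to make that shutdown possible is to put every AI feature behind a kill switch, so it can be disabled centrally while the application falls back to a non-AI path. A minimal Python sketch, with a hypothetical flag name and fallback, might look like this:

```python
# Minimal kill-switch sketch. In practice the flag would live in a central
# feature-flag service or config store rather than a module-level dict; the
# feature name and fallback path here are hypothetical.
from typing import Callable

FEATURE_FLAGS = {"ai_chat_assist": True}


def ai_enabled(feature: str) -> bool:
    return FEATURE_FLAGS.get(feature, False)


def answer_customer(question: str, model: Callable[[str], str],
                    fallback: Callable[[str], str]) -> str:
    """Use the AI path only while its flag is on; otherwise degrade gracefully."""
    if ai_enabled("ai_chat_assist"):
        return model(question)
    # Flag flipped off during an incident: route to the existing non-AI process.
    return fallback(question)


# Incident responders can disable the feature everywhere with one change:
# FEATURE_FLAGS["ai_chat_assist"] = False
```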
Plan for change
AI technology moves quickly. That means that much of what goes into a company’s AI policy needs to be reviewed and updated on a regular basis.
“If you design a policy that doesn’t have an ending date, you’re hurting yourself,” says Rayid Ghani, a professor at Carnegie Mellon University. That might mean that certain provisions are reviewed every year — or every quarter — to make sure they’re still relevant.
“When you design the policy, you have to flag the things that are likely to change and require updates,” he says. The changes could be a result of technological progress, or new business needs, or new regulations.
At the end of the day, an AI policy should spur innovation and development, not hinder it, says Sinclair Schuller, principal at EY. “Whoever is at the top — the CEO or the CSO — should say, ‘we’re going to institute an AI policy to enable you to adopt AI, not to prevent you from adopting AI’,” he says.
Original Post url: https://www.csoonline.com/article/3950176/10-things-you-should-include-in-your-ai-policy.html