Source: www.securityweek.com – Author: Matias Madou
As security teams have become dramatically better at protecting networks, cyber criminals are increasingly targeting vulnerabilities in software. And, with artificial intelligence (AI) tools now deployed almost ubiquitously across the software development lifecycle (SDLC), these criminals are finding exploitable flaws more easily than ever.
In fact, according to the Stack Overflow Developer Survey, three-quarters of developers are either using or planning to use AI coding tools, up from 70 percent a year ago. They’re doing so because of clear benefits: increased productivity (cited by 81 percent of developers), accelerated learning (62 percent) and improved efficiency (58 percent).
Despite the advantages, however, only 42 percent of developers trust the accuracy of AI output in their workflows. Based on our observations, this should not come as a surprise: we’ve seen even the most proficient developers copy and paste insecure code from large language models (LLMs) directly into production environments. These teams are under immense pressure to produce more code faster than ever. And because security teams are also overworked, they cannot provide the same level of scrutiny as before, allowing overlooked and potentially harmful flaws to proliferate.
The situation brings the potential for widespread disruption. BaxBench, a coding benchmark that evaluates LLMs for accuracy and security, has reported that LLMs are not yet capable of generating deployment-ready code: 62 percent of solutions produced by even the best model are either incorrect or contain a vulnerability, and among the correct ones, about half are insecure.
Thus, despite the productivity boost, AI coding assistants represent another major threat vector. In response, security leaders should implement safe-usage policies as part of a governance effort. But such policies alone will do little to raise developers’ awareness of the inherent risks. Developers will trust AI-generated code by default, and because they are proficient with only some AI functions, they will leave a steady stream of vulnerabilities throughout the SDLC.
What’s more, they often lack the expertise to review and validate AI-generated code, or don’t even know where to begin. This disconnect further elevates the organization’s risk profile and exposes governance gaps.
To keep everything from spinning out of control, chief information security officers (CISOs) must work with other organizational leaders to implement a comprehensive, automated governance plan that enforces policies and guardrails, especially within the repository workflow. To ensure the plan delivers “secure by design” coding practices by default, with no governance gaps, CISOs should build it on three core components:
Observability. Governance is incomplete without oversight. Continuous observability brings granular insight into code health, suspicious patterns and compromised dependencies. To achieve it, security and development teams need to work together to gain visibility into where AI-generated code is introduced, how developers are managing the tools, and what their overall security process looks like throughout the SDLC.
Optimal, repository-level observability puts the time-proven principle of proactive early detection into practice: it enables these teams to track code origin, contributor identities and insertion patterns, and to eliminate flaws before they become attack vectors.
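By way of illustration only, the following Python sketch shows one way a team might surface where AI-assisted code enters a repository. It assumes a commit-message trailer such as “AI-Assisted: yes”, added by developers or their tooling; that convention, the report format and the repository path are hypothetical, not a standard.

```python
"""
Illustrative sketch: report which commits and authors introduce AI-assisted
code, assuming a hypothetical "AI-Assisted:" commit-message trailer.
"""
import subprocess
from collections import Counter

TRAILER = "AI-Assisted:"  # hypothetical convention; adjust to your own


def flagged_commits(repo="."):
    """Yield (sha, author) for commits whose message mentions the trailer."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--grep={TRAILER}", "--format=%H%x09%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        sha, author = line.split("\t", 1)
        yield sha, author


def insertion_report(repo="."):
    """Count AI-assisted commits per author, a crude view of insertion patterns."""
    counts = Counter(author for _, author in flagged_commits(repo))
    for author, count in counts.most_common():
        print(f"{author}: {count} AI-assisted commit(s)")


if __name__ == "__main__":
    insertion_report()
```

A team could run a report like this as a scheduled audit or a CI job, feeding the results into the broader observability dashboard.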
Benchmarking. Governance leaders must evaluate developers’ security aptitude so they can identify where skills gaps exist. Assessed skills should include the ability to write secure code unaided and to adequately review code created with AI assistance, as well as code obtained from open-source repositories and third-party providers.
Ultimately, leaders need to establish trust scores based on continuous, personalized, benchmarking-driven evaluations, and use them to set baselines for learning programs.
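For illustration, here is a minimal Python sketch of how a trust score might be derived from benchmarking results. The skill categories, weights and threshold are invented for the example; they are not a published scoring scheme.

```python
"""
Illustrative trust-score sketch based on per-skill assessment results.
Categories, weights and the learning threshold are assumptions for the example.
"""
from dataclasses import dataclass

# Relative weight of each assessed skill (hypothetical values).
WEIGHTS = {
    "secure_authoring": 0.40,     # writing secure code unaided
    "ai_code_review": 0.35,       # reviewing AI-assisted code
    "third_party_review": 0.25,   # vetting open-source / vendor code
}


@dataclass
class Assessment:
    developer: str
    scores: dict  # category -> score in [0, 1] from hands-on assessments


def trust_score(a: Assessment) -> float:
    """Weighted average of assessed skills, in [0, 1]."""
    return sum(WEIGHTS[c] * a.scores.get(c, 0.0) for c in WEIGHTS)


def learning_baseline(a: Assessment, threshold: float = 0.7) -> list:
    """Categories scoring below the threshold become the learning focus."""
    return [c for c in WEIGHTS if a.scores.get(c, 0.0) < threshold]


if __name__ == "__main__":
    dev = Assessment("alex", {"secure_authoring": 0.85,
                              "ai_code_review": 0.55,
                              "third_party_review": 0.70})
    print(round(trust_score(dev), 2))  # 0.71
    print(learning_baseline(dev))      # ['ai_code_review']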
Education. With effective benchmarking in place, leaders know where to focus upskilling investments and efforts. Raising developers’ awareness of the risks gives them a greater appreciation for code review and testing. Education programs should be agile, delivering tools and learning in flexible schedules and formats that fit developers’ working lives.
These programs are most successful when they feature hands-on sessions that address the real-world problems developers encounter on the job. A lab exercise might, for example, simulate a scenario in which an AI coding assistant changes existing code and the developer must review the change and decide whether to accept or reject it.
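As a hypothetical sample of such lab material, the snippet below contrasts an AI-suggested database helper that concatenates user input into SQL (a flaw the reviewer should reject) with the parameterized version the reviewer should accept. The function and schema names are illustrative only.

```python
"""
Hypothetical lab-exercise material: an insecure AI-suggested change and the
corrected version a developer should produce in review.
"""
import sqlite3


# --- AI-suggested change (should be rejected in review) ---------------------
def find_user_insecure(conn: sqlite3.Connection, username: str):
    # String interpolation lets a crafted username alter the query: SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


# --- Reviewer's corrected version (should be accepted) ----------------------
def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameter binding keeps user input as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users (name, email) VALUES ('alice', 'alice@example.com')")
    print(find_user_secure(conn, "alice"))  # [(1, 'alice@example.com')]
```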
Despite constant pressure to produce, development teams still strive to create quality, secure software products. But leaders must help them better understand how much a secure-by-design approach, with observability, benchmarking and education all in place, contributes to the quality of the code. With this, organizations will close their governance gaps and reap the rewards of AI-assisted productivity and efficiency, while avoiding the issues and rework that could compromise security during the SDLC.
Original Post URL: https://www.securityweek.com/how-to-close-the-ai-governance-gap-in-software-development/
Category & Tags: Artificial Intelligence,AI,DevSecOps,Software – Artificial Intelligence,AI,DevSecOps,Software