AI in Software Development: The Good, the Bad, and the Dangerous – Source: www.darkreading.com

Source: www.darkreading.com – Author: Taylor Armerding

Artificial intelligence (AI) is good for a lot more than writing term papers, songs, and poems. In the tech world, its use in software development and application security (AppSec) is rapidly becoming mainstream. A survey commissioned by the Synopsys Cybersecurity Research Center (CyRC) earlier this year reported that 52% of AppSec professionals are actively using AI.

That trend is expected to continue, given that most experts agree AI is still in its infancy and will only improve. Adoption is growing in large measure because, in software development, speed trumps every other priority, and speed is exactly what AI offers: it can generate code in seconds that might take a junior developer hours or days.

With AppSec teams already struggling to do rigorous code security testing without slowing developers, the promise of AI — that it can help development teams hit tight production deadlines while still carrying out vulnerability checks — is practically irresistible.

But — there is almost always a “but” with every shiny new thing in tech — the promise is not yet the reality.

4 Reasons to Be Wary of AI-Generated Code

AI is not yet good enough to be used without intense human supervision. Its vulnerability checks are not nearly comprehensive enough to guarantee that the code it helps push into production is secure. Indeed, it can introduce vulnerabilities of its own. So even those who are using it have qualms.

In the Synopsys CyRC survey, 76% of DevSecOps professionals say they’re concerned about using AI. They are wise to be wary for many reasons, including the following.

No unlearning: Large language models (LLMs) ingest enormous amounts of data, and once that data is ingested it can’t be unlearned. This is particularly relevant to generative AI-assisted coding, where the generated code frequently carries ownership, copyright, and licensing requirements because it came from somewhere else.

Dream a little dream: AI chatbots are known to randomly produce false responses that may seem credible. These so-called “hallucinations” can create significant risks to software supply chain security, as AI may recommend a nonexistent code library or package.

A malicious actor could create a package with the same name, fill it with malicious code, and then distribute it to unsuspecting developers who follow the AI’s recommendations. Researchers have already discovered malicious packages created through AI hallucinations on popular package installers like PyPI and npm.
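
Defenders can catch the most obvious version of this attack early. Below is a minimal sketch, not taken from the original article, that checks whether a package an AI assistant recommends actually exists on PyPI (via PyPI’s public JSON API) and prints basic signals such as release count before anyone runs an install command; the thresholds and the use of the third-party requests library are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the original article): before
# installing a package that an AI assistant suggested, confirm it actually
# exists on PyPI and surface basic signals such as release count and homepage.
import sys

import requests  # assumes the third-party 'requests' library is available


def vet_suggested_package(name: str) -> bool:
    """Return True if the package looks established enough to review further."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"{name}: not on PyPI at all -- likely a hallucinated name")
        return False
    resp.raise_for_status()
    data = resp.json()
    releases = data.get("releases", {})
    info = data.get("info", {})
    print(f"{name}: {len(releases)} release(s), homepage: {info.get('home_page') or 'n/a'}")
    # A brand-new package with a single release and no project links deserves
    # extra scrutiny: it may have been registered to catch hallucinated names.
    return len(releases) > 1


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        vet_suggested_package(pkg)
```

A check like this won’t replace human review, but it flags the clearest red flag: a confidently recommended package that simply does not exist.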

The snippet sting: All the code that an LLM generates comes from somewhere else, and much of it is open source. The 2023 Synopsys “Open Source Security and Risk Analysis” (OSSRA) report found that 96% of the 1,703 audited codebases contained open source, and that open source made up 76% of the code in those codebases.

Even if developers don’t use an entire open source component (they often don’t) but take portions of it, those snippets still carry whatever license restrictions apply to the entire component. That means a generative AI tool that incorporates snippets sourced from protected code will propagate those restrictions into any codebase that includes them.
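
License exposure of this kind is hard to spot by eye. As a rough starting point, here is a minimal sketch, assuming Python 3.8+ and its standard importlib.metadata module, that flags installed dependencies whose declared metadata mentions common copyleft license families; it is a string-matching heuristic for illustration, not the author’s method and not legal advice.

```python
# Rough heuristic sketch: flag installed distributions whose declared license
# metadata mentions a copyleft family. String matching is no substitute for a
# real license audit; the marker list below is illustrative only.
from importlib.metadata import distributions

COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL", "MPL", "EPL")  # illustrative


def flag_possible_copyleft() -> None:
    for dist in distributions():
        name = dist.metadata["Name"]
        declared = " ".join(
            [dist.metadata.get("License") or ""]
            + (dist.metadata.get_all("Classifier") or [])
        )
        hits = [marker for marker in COPYLEFT_MARKERS if marker in declared]
        if hits:
            print(f"{name}: declares possible copyleft terms ({', '.join(hits)})")


if __name__ == "__main__":
    flag_possible_copyleft()
```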

And if the tool doesn’t flag those restrictions or requirements, you could be in major legal trouble. A federal lawsuit filed last November by four anonymous plaintiffs over GitHub’s Copilot coding assistant and its underlying OpenAI Codex machine learning model alleges that Copilot is an example of “a brave new world of software piracy.”

The complaint says the code offered to Copilot customers “did not include, and in fact removed, copyright and notice information required by the various open source licenses.”

Inherited vulnerabilities: This is another consequence of the fact that LLMs don’t unlearn anything. Since speed is the priority for development teams and security testing is seen as an impediment to it, the trend is to fix only high-risk vulnerabilities in code. That means the codebases used to train generative AI tools will contain vulnerabilities that LLM users will import.

How to Reap Benefits, Minimize Risks of Using AI Tools

None of this means organizations should avoid using AI tools to generate software. In fact, it’s just the opposite. AI has already brought huge value by minimizing the time developers spend on tedious and repetitive tasks. But just as with open source code generally, organizations need to be diligent about testing AI components and understanding where and how they are used in their software.

To reap the benefits of AI and minimize its risks, organizations should start by getting answers to these basic questions about any AI tool they are considering using:

  • How will the AI tool handle and protect sensitive data from your organization?
  • If you’re using a third party to implement AI, will you have control over what data is collected and shared with you?
  • Will your AI tool comply with relevant data protection and privacy regulations, such as the EU’s General Data Protection Regulation?
  • How will your tool handle data privacy and security for any third-party dependencies or external integrations?
  • Will your AI tool be regularly tested for vulnerabilities and subjected to security audits?
  • How will you deliver security updates and patches to give the AI tool ongoing protection against emerging threats?
  • If using AI to generate or test code, will humans vet the AI’s responses to detect any hallucinatory recommendations?

To learn more about AI-based software development, read our blog post.

About the Author

Taylor Armerding

Taylor Armerding is a security advocate with the Synopsys Software Integrity Group. His work has appeared in Forbes, CSO Online, the Sophos Naked Security blog, and numerous other publications.

Original Post URL: https://www.darkreading.com/application-security/ai-in-software-development-the-good-the-bad-and-the-dangerous
