
Google Pushes Software Security Via Rust, AI-Based Fuzzing – Source: securityboulevard.com


Source: securityboulevard.com – Author: Jeffrey Burt

Google is making moves to help developers ensure that their code is secure.

The IT giant this week said it is donating $1 million to the Rust Foundation to improve interoperability between the Rust programming language and legacy C++ codebases, in hopes of getting more developers to make the shift to Rust.

The donation supports the foundation’s new Interop Initiative to expand interoperability between the languages and make it easier for programmers to adopt Rust, one of a number of new languages – like Go, Python, and C# – that protect memory to reduce the number of vulnerabilities in software.
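The memory-safety pitch is concrete: entire bug classes, such as out-of-bounds reads, cannot occur in safe Rust code. A minimal illustration (the `lookup` helper below is our own invention, not from any Google code):

```rust
// Hypothetical helper, invented for illustration.
fn lookup(v: &[u32], i: usize) -> Option<u32> {
    // In C or C++, reading v[i] past the end of the buffer is
    // undefined behavior -- the classic memory-safety bug. Safe Rust
    // bounds-checks every access: v.get(i) returns None instead of
    // reading stray memory.
    v.get(i).copied()
}

fn main() {
    let data = vec![10, 20, 30];
    println!("{:?}", lookup(&data, 1)); // Some(20)
    println!("{:?}", lookup(&data, 9)); // None -- caught, not exploited
}
```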


“Google believes in the critical role that memory safe languages like Rust play and the urgent need to address memory safety in a variety of domains,” Lars Bergstrom, Rust Foundation board chair and member director for Google, said in a statement, adding that “greater interoperability with C++ will be key to Rust’s adoption and allow for more organizations and communities to benefit from memory-safe systems.”

The donation to the Rust Foundation comes a week after Google said it was releasing its AI-based fuzzing framework as an open source resource. The tool uses large language models (LLMs) to help developers more quickly find vulnerabilities in their C and C++ projects.

LLMs and Fuzzing

In the fuzzing framework announcement, members of Google’s security teams wrote that they also would show developers and researchers how they are using AI to accelerate the process of patching those vulnerabilities.

“Fuzzing is fantastic for finding bugs, but for security to improve, those bugs also need to be patched,” they wrote. “It’s long been an industry-wide struggle to find the engineering hours needed to patch open bugs at the pace that they are uncovered, and triaging and fixing bugs is a significant manual toll on project maintainers.”

They added that with “continued improvements in using LLMs to find more bugs, we need to keep pace in creating similarly automated solutions to help fix those bugs.”

The growing number and sophistication of cyberattacks are putting greater pressure on organizations to make sure the software they’re developing and running is as secure as possible, with increasing pressure from government agencies and the IT industry to make it easier to incorporate security throughout the development lifecycle – what the Cybersecurity and Infrastructure Security Agency (CISA) calls “secure by design.”

Memory-Safe Languages

CISA in December urged software makers to adopt newer memory-safe languages like Rust and to create roadmaps for moving away from C and C++. In a report, the agency said such a shift would not only eliminate many of the most common vulnerabilities in those languages but also move the responsibility for software security from users to developers, a change CISA is promoting.

In a statement about the donation to the Rust Foundation, David Kleidermacher, Google vice president of engineering, Android security and privacy, said that “based on historical vulnerability density statistics, Rust has proactively prevented hundreds of vulnerabilities from impacting the Android ecosystem. This investment aims to expand the adoption of Rust across various components of the platform.”

Google joined the foundation in 2021, by which time the language was already being used in Android and other Google products, Bergstrom wrote in a blog post stressing the need for memory safety. The search and cloud giant also has invested in a range of tools like cxx, autocxx, bindgen, and crubit, all of which are improving Rust’s interoperability with C++.
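Those tools all reduce the hand-written `unsafe` glue that raw foreign-function calls require. As a rough sketch of that baseline – calling the C standard library’s `strlen` directly from Rust, with an illustrative wrapper name `c_strlen` of our own – cross-language calls look like this:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// The raw FFI baseline that bridges like cxx wrap in safe APIs:
// declare the C function's signature, then call across the boundary.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Illustrative safe wrapper: it owns the string conversion and
// confines the unsafe call to one place.
fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    // The call itself is `unsafe`: the Rust compiler cannot verify C code.
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    println!("{}", c_strlen("memory safety")); // 13
}
```

Tools like cxx and bindgen generate these declarations and wrappers automatically, which is exactly the ergonomics gap the Interop Initiative targets.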

“As these improvements have continued, we’ve seen a reduction in the barriers to adoption and accelerated adoption of Rust,” he wrote. “While that progress across the many tools continues, it is often only expanded incrementally to support the particular needs of a given project or company.”

“In order to accelerate both Rust adoption at Google as well as more broadly across the industry, we are eager to invest in and collaborate on any needed ABI [application binary interface] changes, tooling and build system support, wrapper libraries, or other areas identified.”

AI and Safe Coding

Fuzzing is an automated process to test software for vulnerabilities, and Google has been using its OSS-Fuzz tool since 2016. In August 2023, the company said it was testing the use of its LLMs to boost the performance of OSS-Fuzz, adding that “using LLMs is a promising new way to scale security improvements across the over 1,000 projects currently fuzzed by OSS-Fuzz and to remove barriers to future projects adopting fuzzing.”
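In spirit, a fuzzer is just a loop that feeds pseudo-random inputs to a target and watches for crashes. A toy sketch of that idea (the `target` function and its planted bug are invented for illustration; real OSS-Fuzz targets use coverage-guided harnesses such as libFuzzer):

```rust
use std::panic;

// Deliberately buggy parser: assumes a second byte exists whenever
// the first byte is the marker 0x2A. (Toy stand-in for a real target.)
fn target(data: &[u8]) {
    if !data.is_empty() && data[0] == 0x2A {
        let _ = data[1]; // bug: panics (bounds check) on 1-byte input
    }
}

// Minimal fuzz loop: throw pseudo-random inputs at the target and
// return the first one that makes it crash.
fn fuzz(iterations: u32) -> Option<Vec<u8>> {
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // fixed xorshift seed
    let mut rand = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        state
    };
    for _ in 0..iterations {
        let len = (rand() % 4) as usize;
        let input: Vec<u8> = (0..len).map(|_| rand() as u8).collect();
        if panic::catch_unwind(|| target(&input)).is_err() {
            return Some(input); // crashing input found
        }
    }
    None
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep panic spam off stderr
    println!("crash input: {:?}", fuzz(100_000));
}
```

The loop eventually stumbles on the single-byte input `[42]` that triggers the bug. Where LLMs come in, per Google’s announcement, is in writing better targets and inputs than blind random generation can manage.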

Google used LLMs to write project-specific code to boost coverage and find more vulnerabilities, the security team members wrote. The company has applied LLMs to more than 300 OSS-Fuzz C and C++ projects, growing coverage across project codebases; improvements to prompt generation and build pipelines further increased line coverage by up to 29% in 160 projects.

“How does that translate to tangible security improvements?” they wrote. “So far, the expanded fuzzing coverage offered by LLM-generated improvements allowed OSS-Fuzz to discover two new vulnerabilities in cJSON and libplist, two widely used projects that had already been fuzzed for years.”

Now Google is turning AI to bug fixing, recently announcing an experiment in which it built an automated pipeline that takes in vulnerabilities – including those found by fuzzing – and prompts LLMs to generate and test fixes before choosing the best one for human review.

AI-powered patching fixed 15% of the bugs, which translated into significant time savings for engineers, according to Google, which added that the technology should benefit most steps of the software development process.

The open-sourcing of the fuzzing framework means that any researcher or developer can use their own prompts to test how well fuzz targets generated by LLMs – including Google’s VertexAI – fare. Those interested in the use of LLMs to patch bugs can read Google’s paper on the subject.


Original Post URL: https://securityboulevard.com/2024/02/google-pushes-software-security-via-rust-ai-based-fuzzing/

Category & Tags: Cloud Security, Cybersecurity, Data Security, DevOps, Featured, Industry Spotlight, Mobile Security, Network Security, News, Security Boulevard (Original), Social – Facebook, Social – LinkedIn, Social – X, Spotlight, Vulnerabilities, Android vulnerabilities, google, Rust language, Software Security

