
The Week in Security: Google Cloud Build permissions can be poisoned, WormGPT weaponizes AI – Source: securityboulevard.com


Source: securityboulevard.com – Author: Kate Tenerowicz


Welcome to the latest edition of The Week in Security, which brings you the newest headlines from both the world and our team across the full stack of security: application security, cybersecurity, and beyond. This week: Google Cloud Build permissions can be abused to poison production environments. Also: A new AI model allows cybercriminals to launch sophisticated phishing attacks.  

This Week’s Top Story

Attackers can abuse Google Cloud Build to poison production environments

Security researchers at Orca Security have uncovered a new vulnerability that can compromise production environments in Google Cloud Build — a CI/CD platform that is part of Google Cloud. Cloud Build allows development organizations to integrate source code from different code repositories or cloud storage spaces and run builds. It integrates with Google Cloud services such as Artifact Registry, Google Kubernetes Engine, and App Engine. Orca researchers discovered that Cloud Build's user permissions can be abused to carry out potentially catastrophic environment poisoning.

The Orca researchers traced the flaw to a Cloud Build permission titled cloudbuild.builds.create, which gives users the ability to create new builds. An attacker could leverage this permission to elevate the privileges of a compromised, lower-privileged account, allowing it to masquerade as a Cloud Build service account and access source code and resources such as software artifacts. For example, using this flaw, an attacker could pull container images used by Google Kubernetes Engine (GKE) from the Artifact Registry and inject malicious code into them. That code then executes when GKE launches the compromised image, creating a backdoor that malicious actors can leverage for remote code execution.

Any application built from these manipulated images is vulnerable to backdoor deployment that can result in denial-of-service (DoS) attacks and data theft. If these applications are deployed in customer environments, the risks grow exponentially. Malicious actors can then deliver the final blow: a software supply chain attack with an impact similar to the SolarWinds or 3CX incidents.

This discovery highlights the potential risks lurking within cloud-based infrastructure and the need for constant vigilance as the threat landscape constantly adapts and shifts. It is recommended that any users of Google Cloud Platform restrict permissions granted to the Cloud Build service account based on the principle of least privilege. 
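As a practical starting point for the least-privilege recommendation above, permissions can be audited and tightened with the gcloud CLI. The project ID and account below are placeholders, and which roles to remove depends entirely on your environment — treat this as an illustrative sketch, not a verified remediation for the flaw Orca describes.

```shell
# List which principals hold Cloud Build-related roles in the project
# (PROJECT_ID and dev@example.com are placeholders).
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)" \
  --filter="bindings.role:cloudbuild"

# Remove the broad Cloud Build Editor role (which includes
# cloudbuild.builds.create) from an account that only needs to
# view builds...
gcloud projects remove-iam-policy-binding PROJECT_ID \
  --member="user:dev@example.com" \
  --role="roles/cloudbuild.builds.editor"

# ...and grant the narrower Viewer role instead.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:dev@example.com" \
  --role="roles/cloudbuild.builds.viewer"
```

The same audit-then-narrow pattern applies to the Cloud Build service account itself: review the roles it has been granted and strip any that the builds do not actually use.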

News Roundup

Here are the stories we’re paying attention to this week…    

WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks (The Hacker News)

A new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a way to launch sophisticated phishing and business email compromise (BEC) attacks. It operates without the ethical boundaries that limit most public large language models (LLMs), such as ChatGPT. WormGPT can be used in place of these 'ethical' LLMs to draft highly convincing fake emails that are personalized to the individual recipient, warned Daniel Kelley of the firm SlashNext.

FIN8 Modifies ‘Sardonic’ Backdoor to Deliver BlackCat Ransomware (Dark Reading)

FIN8 has resurfaced online using a revised version of the ‘Sardonic’ backdoor to launch BlackCat ransomware attacks. FIN8 is a well-known, financially motivated cybercrime group with a habit of constantly reinventing its tactics. The revamped ‘Sardonic’ — first made public in 2021 — retains many of the original's characteristics but evades detection measures designed for the 2021 version, and expands the hackers' flexibility and capabilities.

US govt bans European spyware vendors Intellexa and Cytrox (Bleeping Computer) 

The U.S. government has banned European commercial spyware manufacturers Intellexa and Cytrox, citing risks to U.S. national security and foreign policy interests. The decision was motivated by the four sanctioned companies' involvement in trafficking cyber exploits and their role in sustaining a global climate of repression and human rights violations.

Linux Ransomware Poses Significant Threat to Critical Infrastructure (Dark Reading)

Linux runs on about 80% of web servers, often in the government, manufacturing, energy, and banking sectors. It is the backbone of the Internet and the new frontier for cybercriminals operating ransomware, warns Jon Miller, CEO of the firm Halcyon, in an opinion piece. Gangs have been introducing Linux versions of their ransomware at an increasing pace, with attacks now coming from some of the most infamous groups, Miller said. The cybersecurity field needs to get ahead of this major threat by focusing more attention on Linux defenses and security.

If George Washington Had a TikTok, What Would His Password Be? (Dark Reading) 

An experiment run on ChatGPT found that the AI model can — if given the correct parameters and wording — generate a password list for an individual on a specific platform. In this case, the researchers generated a list of candidate passwords for George Washington's TikTok account. Despite the goofy nature of the case study, the implications are serious: any individual could replace George Washington in this scenario, with AI crafting hundreds of potential passwords for that specific user. This kind of information could be handed to a hacker on a silver platter. Other tools have been able to create password lists, but none with this level of ease and simplicity. The experiment further demonstrates the weakness of password-based authentication, and how AI weakens it even more.

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Kate Tenerowicz. Read the original post at: https://www.reversinglabs.com/blog/the-week-in-security-google-cloud-build-permissions-poisoned-wormgpt-ai

Original Post URL: https://securityboulevard.com/2023/07/the-week-in-security-google-cloud-build-permissions-can-be-poisoned-wormgpt-weaponizes-ai/

Category & Tags: Security Bloggers Network, software supply chain security, The Week in Security


