Don’t Expect Cybersecurity ‘Magic’ From GPT-4o, Experts Warn – Source: www.databreachtoday.com

Source: www.databreachtoday.com – Author: Rashmi Ramesh

Open Questions: Degree to Which OpenAI’s Tool Hallucinates, Security of AI Model

Rashmi Ramesh (rashmiramesh_) • May 27, 2024

Image: Shutterstock

While OpenAI’s latest chatbot offers an array of flashy new features, experts recommend tempering expectations or concerns about any profound effects it might have on the cybersecurity landscape.

OpenAI CEO Sam Altman launched GPT-4o earlier this month, gushing that using the tool “feels like magic to me.”

The free, generative artificial intelligence tool “can reason across audio, vision and text in real time,” said Romain Huet, the company’s head of developer experience. Compared to the company’s previous GPT-4 model, which debuted in March 2023 and accepts text and image input but outputs only text, he said the new model “is a step towards much more natural human-computer interaction.”

Despite the fresh capabilities, don’t expect the model to fundamentally change how a gen AI tool helps either attackers or defenders, said cybersecurity expert Jeff Williams.

“We already have imperfect attackers and defenders. What we lack is visibility into our technology and processes to make better judgments,” Williams, the CTO at Contrast Security, told Information Security Media Group. “GPT-4o has the exact same problem. So it will hallucinate non-existent vulnerabilities and attacks as well as blithely ignore real ones.”

The jury is still out on whether such hallucinations might zap users’ trust in GPT-4o (see: Should We Just Accept the Lies We Get From AI Chatbots?).

“Don’t get me wrong, I love GPT-4o for tasks where you don’t need a high degree of confidence in the results,” he said. “But cybersecurity demands better.”

Attackers might still gain some minor productivity boosts thanks to GPT-4o’s fresh capabilities, including its ability to do multiple things at once, said Daniel Kang, a machine learning research scientist who has published several papers on the cybersecurity risks posed by GPT-4. These “multimodal” capabilities could be a boon to attackers who want to craft realistic-looking deep fakes that combine audio and video, he said.

The ability to clone voices is one of GPT-4o’s new features, although other gen AI models already offer this capability. Experts said voice cloning can potentially be used to commit fraud by impersonating someone else’s identity – for example, to defeat banks’ identity checks. Such capabilities can also be used to develop misinformation as well as for attempted extortion, said George Apostolopoulos, founding engineer at supply chain security company Endor Labs (see: Top Cyber Extortion Defenses for Battling Virtual Kidnappers).

The security of the new AI model remains an open question. Compared to previous models, OpenAI said it’s added numerous security and privacy safeguards to GPT-4o, including minimizing the amount of data it collects, more effectively anonymizing that data, using stronger encryption protocols, and being more transparent about how the data it collects gets used and shared.

Users still won’t know what data was used to train GPT-4o, and there’s no way for them to opt out of using a large language model developed using any particular training dataset, Kang said. In addition, he said, users have no way of knowing how exactly the model works or might be subverted. Because the tool is free, expect malicious hackers and nation-state groups alike to be exploring ways to manipulate or defeat it.

For CISOs, GPT-4o doesn’t change the need to safeguard their enterprise using the right policies, procedures and technology. This includes ringfencing how – or if – employees are allowed to access gen AI for work, ensuring their use complies with established security policies, and using strong contracts with suppliers to manage third-party risk, said Pranava Adduri, CEO of Bedrock Security.

“This is basically what the cloud world went through with the shared responsibility model between the cloud infrastructure provider and the user running apps on that cloud,” Adduri told ISMG. “Here, we have the shared AI responsibility model between the LLM provider and the enterprise – and its users – leveraging new applications and uses of that LLM software.”

Experts also recommend never trusting any publicly accessible AI model to keep anything safe or private. To that end, Adduri said, CISOs need to apply age-old data protection principles: safeguarding critical, sensitive or regulated data; knowing how it flows and where it gets stored; and applying data-loss prevention policies and safeguards. This applies both to commercial tools a business might build on someone else’s AI model or LLM and to any employees who use tools such as GPT-4o for productivity purposes, he said.
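
To make that last point concrete, here is a minimal, hypothetical Python sketch of the kind of data-loss prevention check Adduri describes: scanning an outbound prompt for obviously sensitive patterns and redacting them before the text ever reaches a public model such as GPT-4o. The pattern set, function names and placeholder format are illustrative assumptions, not a real product’s API; production DLP tooling is considerably more thorough.

```python
import re

# Hypothetical outbound-prompt filter: redact obviously sensitive tokens
# before a prompt leaves the enterprise for a public LLM.
# The patterns below are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match of a sensitive-data pattern with a labeled tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
    print(redact(raw))
    # Summarize this ticket from [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

Pattern-based redaction like this only catches well-structured identifiers; Adduri’s broader point is that knowing where sensitive data flows and is stored matters as much as any single filter.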

Original Post url: https://www.databreachtoday.com/dont-expect-cybersecurity-magic-from-gpt-4o-experts-warn-a-25332
