OpenAI: We’ll Stop GPT Misuse for Election Misinfo – Source: securityboulevard.com

Source: securityboulevard.com – Author: Richi Jennings

Sam says avoid AI abuse—protect the democratic process.

With elections coming up in the US and other major countries, concerns are rising that hostile nations might use AI to sow dissent. Generative AI tools such as ChatGPT and DALL-E will get extra guardrails, say their creators.

OpenAI CEO Sam Altman (pictured) wants to “make sure our AI systems are built, deployed and used safely.” In today’s SB Blogwatch, we assess the challenge.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: Dialup modem song.

Guardrails Prevent Trouble?

What’s the craic? Asa Fitch reports—“OpenAI Curbs Use Of Its Tools In Politics”:

People aren’t allowed

OpenAI outlined limits on using its tools in politics during the run-up to elections in 2024, amid mounting concern that artificial-intelligence systems could mass-produce misinformation and sway voters in high-profile races. … The growth of such tools has raised worry that [generative AI] could be used to manipulate voters with false news stories and computer-generated images and video.



OpenAI said people aren’t allowed to use its tools for political campaigning and lobbying. People also aren’t allowed to create chatbots that impersonate candidates and other real people, or chatbots that pretend to be local governments. … It also banned applications that discouraged voting—by claiming a vote was meaningless, for example.

It’s an international issue. Gintaras Radauskas reminds us—“OpenAI to introduce anti-disinformation tools”:

Scaled influence operations

Elections are taking place this year in countries that are home to half the world’s population and represent 60% of global GDP. People in the United States, the United Kingdom, the European Union, and India will all vote this year.



And just recently, the World Economic Forum’s “Global Risks Report 2024” warned that generative artificial intelligence (AI) tools could help disrupt politics via the spread of false information. … To increase vigilance ahead of the elections, OpenAI said it has brought together expertise from its safety systems, threat intelligence, legal, engineering, and policy teams. It anticipates quite a few misleading deepfakes, chatbots impersonating candidates, and scaled influence operations.

For example? Okian Warrior has one:

A recent example is Mark Ruffalo (aka “The Hulk”) reposting an image of Trump on Epstein’s plane. Someone made a deepfake image smearing Trump, Ruffalo believed it, and because Ruffalo has a wide following the fake image went far and wide on the internet.



Because, you know, everybody in the ****in’ country can edit and post videos now.

Why now? Why, ask u/MassiveWasabi:

OpenAI … have all the money they need, they have the best researchers, and they have Microsoft providing them with massive amounts of compute. Now all they need to do is make sure the public doesn’t freak out and put pressure on the government to regulate them.

You know what would make the public freak out? Massive, unending streams of disinformation created by generative AI. Images that are literally indistinguishable from reality, or AI agents all over the internet spreading propaganda for presidential candidates.



The way this upcoming election plays out will shape AI regulation for the foreseeable future, and all the big AI companies know it. There’s way too much at stake here from the corporate perspective.

Who do we need to worry about? Foreigners, finks flatline [You’re fired—Ed.]:

I’m not as worried about Joe Schmo using OpenAI services as I am about a foreign state with weaponized AI tools. Plus nobody really cares if you can show that something was AI generated after the fact, the first impression is all that counts.

Remember the backlash against snopes.com? Social media plus AI is going to make this a really spectacular election season and OpenAI can do approximately nothing to curb that.

And will OpenAI’s plan help? Nope, cries OYAHHH:

Kinda worthless when the targets of this technology will simply move to technologies not controlled by big tech. … When you put the squeeze on the free flow of information it tends to leak out from a direction you were not expecting.

Google, Microsoft, Apple honestly are stupid enough to think they are the Gatekeepers. They are minding gates where the sheep pass, not where the wolves roam free.

So, what’s the solution? Since you asked, u/ButSinceYouAsked answers:

Drowning out false information with true information is the way to go (or contextualising out-of-context stuff – see Community Notes on X or YouTube’s little, “Here’s an article on [topic]”). … Anything that gives people greater access to true information wins in my book.

We don’t need no stinkin’ AI, kobe_throwaway’s thinkin’:

I don’t think AI models are capable of producing the amount of misinformation that is coming out from some of the most popular American news outlets nor is it going to have as much impact as the said outlets will have. I do however believe [AI is] going to be the scapegoat.

Meanwhile, VeryFluffyBunny pays attention to that man behind the curtain:

Have we reached peak hype yet? What OpenAI are essentially claiming to potential investors & clients is that they can provide massively influential PR & marketing. This has got nothing to do with preserving democracy & everything to do with making money. A lot of money.

And Finally:

Dialup modem song (ask your parents)

You have been reading SB Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi, @richij or [email protected]. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.

Image sauce: Steve Jennings/Getty Images for TechCrunch (cc:by; leveled and cropped)

Original Post URL: https://securityboulevard.com/2024/01/openai-election-misinfo-richixbw/

Category & Tags: AI and Machine Learning in Security,AI and ML in Security,Analytics & Intelligence,Application Security,AppSec,Cloud Security,Cyberlaw,Cybersecurity,Deep Fake and Other Social Engineering Tactics,DevOps,DevSecOps,Editorial Calendar,Featured,Governance, Risk & Compliance,Humor,Incident Response,Industry Spotlight,Insider Threats,Most Read This Week,Network Security,News,Popular Post,Regulatory Compliance,Securing the Cloud,Security Awareness,Security Boulevard (Original),Security Operations,Social – Facebook,Social – LinkedIn,Social – X,Social Engineering,Spotlight,Threat Intelligence,Threats & Breaches,Zero-Trust,2024 presidential election,AI,Biden,Chat GPT,ChatGPT,chatgpt injection,cybersecurity risks of generative ai,DALL-E,Deep Fake,Deep Fakery,Deep fakes,deepfake,deepfake attacks,Deepfake Detection,Deepfake security threats,Deepfake Technology,deepfake videos,deepfakes,Democracy,Democracy-2024,Donald Trump,election,election cybersecurity,election influence,Election Infosecurity,Election Manipulation,generative AI,Generative AI risks,GPT,GPT-3,GPT-4,Joe Biden,Misinformation,OpenAI,SB Blogwatch,Trump – AI and Machine Learning in Security,AI and ML in Security,Analytics & Intelligence,Application Security,AppSec,Cloud Security,Cyberlaw,Cybersecurity,Deep Fake and Other Social Engineering Tactics,DevOps,DevSecOps,Editorial Calendar,Featured,Governance, Risk & Compliance,Humor,Incident Response,Industry Spotlight,Insider Threats,Most Read This Week,Network Security,News,Popular Post,Regulatory Compliance,Securing the Cloud,Security Awareness,Security Boulevard (Original),Security Operations,Social – Facebook,Social – LinkedIn,Social – X,Social Engineering,Spotlight,Threat Intelligence,Threats & Breaches,Zero-Trust,2024 presidential election,AI,Biden,Chat GPT,ChatGPT,chatgpt injection,cybersecurity risks of generative ai,DALL-E,Deep Fake,Deep Fakery,Deep fakes,deepfake,deepfake attacks,Deepfake Detection,Deepfake security threats,Deepfake Technology,deepfake videos,deepfakes,Democracy,Democracy-2024,Donald Trump,election,election cybersecurity,election influence,Election Infosecurity,Election Manipulation,generative AI,Generative AI risks,GPT,GPT-3,GPT-4,Joe Biden,Misinformation,OpenAI,SB Blogwatch,Trump
