Fake news, misinformation and online scams are growing at an alarming rate as generative AI explodes in usage. So what are the problems, and what are some potential solutions to consider?
July 09, 2023
It seems that in the cybersecurity world of confidentiality, integrity and availability, we now need to add trust as a new component.
The headline read: “The Titanic Never Sunk.”
I thought: “What?”
“More than a century after it went down in the North Atlantic Ocean, wild myths and urban legends about the luxury liner have continued to swirl, including that it was doomed by the curse of a mummified Egyptian priestess.
“Even more striking is a wave of TikTok videos asserting that the Titanic did not sink at all. Many of them have racked up millions of views — never mind that the claim fails to hold water.
“‘The Titanic never truly went under,’ said a video by a TikTok user called ‘The Deep Dive,’ which garnered more than 4 million views. …”
And while this Titanic story didn’t need help from generative artificial intelligence (GenAI) to be created, thousands more misinformation-based stories like it are produced daily using tools like ChatGPT.
BUT WHY THE INCREASED CONCERN?
So let’s dig a little deeper into the GenAI misinformation problem and suggest some possible ways to mitigate these concerns.
One author, describing his research on GenAI deepfakes, comes to a few conclusions:
“One great thing I learned: We’ll never move faster than AI can create to detect what’s fake. Instead, we need to shift our focus to authenticating what’s real. We’re all still buzzing about ChatGPT (and now, GPT-4). Maybe deep down, simmering beneath the excitement, is fear. Fear that the world is now moving at the speed of generative AI … and we just can’t keep up. As it keeps getting better and better at duping us into thinking that the digital content we see is real, it’s also getting better and better at ‘covering up’ the very elements inside digital content that would ordinarily clue us into knowing what’s been altered or tampered with.
“AI really is that good at deception. And it keeps getting craftier. …”
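The “authenticate what’s real” idea has a concrete technical shape: instead of trying to detect fakes after the fact, a publisher can cryptographically sign content at creation so that anyone can later verify it came from that publisher, unaltered. Here is a minimal sketch in Python using the `cryptography` package and Ed25519 signatures. It is illustrative only; real provenance systems (such as the C2PA standard) carry much richer metadata, and the article text and key handling below are hypothetical.

```python
# Minimal provenance sketch: the publisher signs content at creation;
# consumers verify it later against the publisher's public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a signing key once; the public half is
# distributed out of band (hypothetical setup for this sketch).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"The Titanic sank in the North Atlantic on April 15, 1912."
signature = private_key.sign(article)

# Consumer side: verification fails if even one byte was altered.
def is_authentic(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(article, signature))                   # True
print(is_authentic(article + b" or did it?", signature))  # False: tampered
```

The design point is the one the quote makes: verification scales because the signer only has to be honest once, at creation time, while fake detection has to win an arms race forever. Other reports underscore how convincing GenAI output has become: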
“Tweets generated by OpenAI’s GPT-3 model are so convincing, people can’t spot when they promote misinformation.
“A new study has revealed that people find AI-generated tweets more convincing than those written by humans, even when they contain information that isn’t true.
“Researchers studied the responses of nearly 700 people to a range of tweets on hot-button issues like COVID-19, evolution, 5G and vaccines. Some of the 220 total tweets were accurate, while others featured misinformation. The purpose of the analysis was to see the impact AI had on people’s ability to spot so-called ‘fake news.’
“The survey found that not only were tweets generated by OpenAI’s GPT-3 large language model (LLM) easier to identify when they presented the correct information, but they were also better at duping people into believing them when they were false.”
“Stack Overflow, a popular coding website, has temporarily banned ChatGPT. The chatbot uses a complicated AI model to give convincing, but often incorrect, answers to questions asked by humans. The moderators of the site say that they have seen a surge of responses generated by ChatGPT, which is causing harm to the site and its users. The reason, the platform stated, is that the answers provided by ChatGPT are predominantly incorrect, but they look like they could be right and it’s easy to produce them.”
“Soon after ChatGPT debuted last year, researchers tested what the artificial intelligence chatbot would write after it was asked questions peppered with conspiracy theories and false narratives.
“The results — in writings formatted as news articles, essays and television scripts — were so troubling that the researchers minced no words.
“‘This tool is going to be the most powerful tool for spreading misinformation that has ever been on the Internet,’ said Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation and conducted the experiment last month. ‘Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.’”
Also, new research from Georgia Tech determined that existing machine learning (ML) models used to detect online misinformation are less effective when matched against content created by ChatGPT or other large language models (LLMs):
“Current ML models designed for and trained on human-written content have significant performance discrepancies in detecting paired human-generated misinformation and misinformation generated by artificial intelligence (AI) systems, said Jiawei Zhou, a Ph.D. student in Georgia Tech’s School of Interactive Computing.
“Zhou’s paper detailing the findings is set to receive a best paper honorable mention award at the 2023 ACM CHI Conference on Human Factors in Computing Systems.”
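The Georgia Tech finding is, at bottom, about evaluation methodology: a detector trained only on human-written examples can post a respectable aggregate score while quietly underperforming on LLM-generated text, and the gap only shows up if you score the two sources separately. Below is a hedged sketch of that evaluation split using scikit-learn; the training and test texts are hypothetical toy examples, and a real study would use large labeled corpora and stronger models.

```python
# Illustrative only: train a misinformation classifier on human-written
# examples, then score human-written and AI-generated test items separately.
# A single aggregate accuracy number would hide any gap between the sources.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Hypothetical training data: human-written posts, label 1 = misinformation.
train_texts = [
    "the titanic never sank",
    "5g towers spread covid",
    "vaccines go through clinical trials",
    "the titanic sank in april 1912",
]
train_labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(train_texts, train_labels)

# Hypothetical test items making the same false claim in two styles.
human_test, human_y = ["5g towers spread covid everywhere"], [1]
ai_test, ai_y = ["emerging evidence suggests 5g infrastructure harms health"], [1]

print("human-written:", accuracy_score(human_y, detector.predict(human_test)))
print("AI-generated: ", accuracy_score(ai_y, detector.predict(ai_test)))
```

The paper’s point is that for models designed for and trained on human-written content, this discrepancy is significant in practice.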
Finally, Euronews.com writes that the “rapid growth of ‘news’ sites using AI tools like ChatGPT is driving the spread of misinformation.”
SOLUTIONS, PLEASE!
So, what can be done?
This piece from Loyola Marymount University offers six ways you can make a difference now by evaluating and engaging:
- Think before you share. Read the entire piece before you decide whether or not to share.
- Verify an unlikely story. Check out some of the tools listed below.
- Install B.S. Detector, a browser extension that identifies stories from sites that produce clickbait, fake news and other suspect stories.
- Help debunk fake news.
- Join the Digital Polarization Initiative.
- Report fake news on Facebook.
In addition, evaluate your news using IMVAIN, the bedrock method of deconstruction: each source in a news report is judged against the “IMVAIN” rubric (a rough encoding appears in the sketch after this list):
- Independent sources are preferable to self-interested sources.
- Multiple sources are preferable to a report based on a single source.
- Sources who verify or provide verifiable information are preferable to those who merely assert.
- Authoritative and/or informed sources are preferable to sources who are uninformed or lack authoritative background.
- Named sources are better than anonymous ones.
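IMVAIN is essentially a per-source checklist, which makes it easy to encode. The sketch below is a hypothetical illustration: the rubric itself is qualitative, and the class and scoring here are an illustrative framing for, say, a newsroom review tool, not an official implementation.

```python
# Hypothetical encoding of the IMVAIN rubric as a per-source checklist.
from dataclasses import dataclass, fields

@dataclass
class SourceCheck:
    independent: bool    # not self-interested in the story's outcome
    multiple: bool       # corroborated by other, separate sources
    verifies: bool       # offers verifiable evidence, not bare assertion
    authoritative: bool  # holds authority on the topic
    informed: bool       # demonstrably informed about the subject
    named: bool          # identified by name rather than anonymous

    def score(self) -> int:
        """Count how many of the six IMVAIN criteria this source meets."""
        return sum(getattr(self, f.name) for f in fields(self))

# A viral, anonymous TikTok narrator, scored against the rubric:
tiktok_voice = SourceCheck(independent=False, multiple=False, verifies=False,
                           authoritative=False, informed=False, named=False)
print(tiktok_voice.score())  # 0 of 6 -> treat the claim with heavy skepticism
```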
Governments are also weighing in. In India, “Alkesh Kumar Sharma, Secretary of MeitY [India’s Ministry of Electronics and Information Technology], opined that the current pace of innovation in AI tools and platforms had created enormous opportunities and risks that every country is looking at. In his opinion, self-governance is a useful tool to fill the gap between innovation and regulation, and the technology industry should [strive] to lead by example by taking these guidelines to the next step of adoption and building practices and tools that can be used across all sectors.”
Meanwhile, the EU’s draft AI Act would ban certain uses of AI outright, and the penalties are steep: “A breach of these ‘Prohibited AI practices’ is subject to an administrative fine of up to €40,000,000 or, if the offender is a company, up to 7 percent of their global turnover in the prior year. The scale of this fine is indicative of how seriously the EU is taking the development and use of such prohibited practices.
“The AI Act also seeks to regulate more limited risk AI models. This includes placing an obligation on creators of ‘Foundation Models’ to register these with an EU database prior to entering the market; it places certain obligations upon the creators of generative AI systems (such as ChatGPT) to offer a greater level of transparency to end users and ensure that details of copyrighted data used to train their AI systems are publicly available. The transparency obligations include a requirement to disclose when content is generated by AI and to help identify deepfake images.
“The AI Act will provide EU citizens with a facility to file complaints regarding AI through a new EU AI Office; it will also require each member state to appoint a ‘national supervisory authority’ to oversee the implementation and ongoing use of the AI Act locally throughout the EU.
“The AI Act will be the first of its kind and will, if adopted, certainly have a far-reaching impact across the EU. It may also influence the approach taken to AI in other jurisdictions including the United States and the U.K.”
Please note that the regulatory changes envisaged by the AI Act are not without controversy. Executives from more than 150 prominent businesses, technology companies and investment firms across the EU sent an open letter to the European Commission warning that the AI Act could hold back the development of AI in Europe and drag down the global competitiveness of EU-based AI developers.
What is clear is that trust is under attack in our new online world.
How can we trust what we are hearing or seeing online? Who was the real sender? What was the source? How do you know? Has the data been changed? Was the story just made up to “tempt the click”? These questions are getting much harder to answer in a GenAI world.
Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.
*** This is a Security Bloggers Network syndicated blog from Lohrmann on Cybersecurity authored by Lohrmann on Cybersecurity. Read the original post at: https://www.govtech.com/blogs/lohrmann-on-cybersecurity/how-to-combat-misinformation-in-the-age-of-ai