Source: securityboulevard.com – Author: Tim Callan
After a slow build over the past decade, new capabilities of artificial intelligence (AI) and chatbots are starting to make waves across a variety of industries. The Spring 2022 release of OpenAI’s DALL-E 2 image generator wowed users with its ability to create nearly any conceivable image from a natural language description, even as it set off warning bells for graphic designers and creatives around the world. That reaction was nothing, however, compared to the Fall 2022 release of OpenAI’s ChatGPT text generator. With just a few input prompts, end users could instruct ChatGPT to spit out poems, essays, fiction, speeches and even blocks of software code – serving notice to everyone from writers to software programmers that serious change was afoot.
While these developments feel like a sea change in what AI is capable of, not everyone is celebrating. Cybersecurity experts, for instance, have cautioned that the same advances are being used to carry out cyberattacks more efficiently, generating convincing phishing emails and malicious code with just a few keystrokes.
This is only one area of AI giving people pause as they wrap their heads around just how advanced today’s AI tools have become – and wonder whether a Terminator-style war, in which AI brings about the downfall of civilization, is on the horizon.
Luckily, we don’t need to resign ourselves to doom-and-gloom scenarios just yet – but we do need a new approach to this fast-evolving landscape. Identity is a popular attack vector for many of these new AI-enabled attacks, which means that enterprises need to ensure they can firmly establish digital trust in this new world.
A Matter of Trust
To combat AI-based attacks, it’s first crucial to understand how AI is being used to impersonate digital identities. AI has been serving as the technological backbone for so-called “deepfake” tools that can convincingly clone a person’s voice and image.
Bad actors can readily train autoencoders – a kind of advanced neural network – on videos, images and voice recordings of an individual to mimic that person’s physical attributes, whether voice or appearance. As a result, we find ourselves confronted with convincing-looking videos of everyone from Tom Cruise to President Barack Obama saying and doing things they never actually said or did.
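For readers curious about the mechanics, the sketch below (Python with PyTorch; the layer sizes and input dimensions are purely illustrative, not drawn from any real deepfake tool) shows the basic structure: an encoder compresses an input into a small latent code and a decoder reconstructs it.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Minimal dense autoencoder over flattened 64x64 grayscale images."""

    def __init__(self, n_features: int = 64 * 64, latent_dim: int = 128):
        super().__init__()
        # Encoder: squeeze the input down to a compact latent code.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: reconstruct the original input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_features), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
batch = torch.rand(8, 64 * 64)  # a toy batch standing in for face images
reconstruction = model(batch)
loss = nn.functional.mse_loss(reconstruction, batch)  # reconstruction objective
```

Deepfake face-swap tools build on much larger variants of this idea, typically training one shared encoder with a separate decoder per identity: the shared latent code captures pose and expression, while each decoder renders them with its own identity’s features.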
Celebrities are far from the only potential targets of this AI fakery. For enterprises, it means that a late-night voicemail from a boss or colleague requesting that you email the latest draft of an important contract or file might not actually be from that person at all – it might be from a bad actor. Likewise, a video on YouTube of the CEO announcing massive layoffs might be digital fakery.
In addition, thanks to AI tools like ChatGPT, we can no longer trust that content has been created by an actual human. This development is a potential minefield for enterprises. Companies that use ChatGPT to draft a speech or a memo for a chief executive might inadvertently be publishing content loaded with inaccuracies or plagiarized material. If enterprises think this can’t come back to bite them, just ask Vanderbilt University, which was caught using AI to draft an email to the student body about a tragic shooting at another university.
The bottom line? Things that we once considered the most basic and instinctive indicators of identity – someone’s voice or someone’s face, or the presence of their name on an official communication – can no longer be 100% trusted.
PKI to the Rescue?
The good news is that there are cryptographically secure methods of identifying an individual that are, for all practical purposes, impossible to forge. The gold standard for strong digital identity-based authentication – and for establishing digital trust – is public key infrastructure (PKI) in the form of digital certificates. These certificates verify every user and machine attempting to access a network.
PKI can ensure that a message apparently sent from Person A is actually from Person A; it’s what guarantees that when you visit a banking website, you can trust that it’s actually the financial institution it purports to be and not an impostor.
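To make that concrete, here is a minimal sketch of the digital-signature primitive at the heart of PKI, written in Python with the widely used cryptography library (the message is a placeholder, and in a real deployment Person A’s public key would be distributed inside a certificate signed by a trusted certificate authority rather than generated on the spot):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Person A generates a key pair. In a real PKI deployment, a trusted
# certificate authority (CA) would sign a certificate binding the
# public key to Person A's verified identity.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Please send the latest draft of the contract."

# Person A signs the message with the private key only they hold.
signature = private_key.sign(message)

# Anyone holding the certified public key can check the signature.
# Verification fails if the message was altered in transit or signed
# by anyone other than the holder of Person A's private key.
try:
    public_key.verify(signature, message)
    print("Signature valid: the message really came from Person A")
except InvalidSignature:
    print("Signature invalid: do not trust this message")
```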
The use of digital certificates underpins virtually every industry, securing a diverse and almost limitless range of systems and processes: everything from email accounts and key fobs to passwordless authentication and the growing number of Internet of Things (IoT) devices.
Vast numbers of digital certificates are deployed at any one time across the modern enterprise, maintaining secure authentication of human and machine identities throughout the organization, and new use cases for digital certificates are continually emerging.
For enterprises aiming to mitigate the potential impact of these new types of AI-powered attacks, automated certificate lifecycle management (CLM) is crucial to managing all these identities at scale. Right now, it’s our best shot at protecting human identities in this new era in which the machines seem to be coming at us from all sides with possible malintent.
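As a rough illustration of one small task a CLM platform automates, the sketch below (standard-library Python; the hostname and the 30-day renewal window are placeholders) fetches a server’s TLS certificate and flags it for renewal. A real CLM system does this continuously, at scale, for every certificate in the estate, and handles issuance and revocation as well:

```python
import datetime
import socket
import ssl

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Fetch a server's TLS certificate and return days until it expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # getpeercert() returns the validated leaf certificate as a dict.
            cert = tls.getpeercert()
    expires = datetime.datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=datetime.timezone.utc
    )
    return (expires - datetime.datetime.now(datetime.timezone.utc)).days

# Flag certificates inside a 30-day renewal window.
if days_until_expiry("example.com") < 30:
    print("Certificate expiring soon: schedule renewal")
```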
The fact is, AI is here whether we want it to be or not. PKI-based solutions can help us navigate this AI world safely by protecting human identities from malicious actors.
Original Post URL: https://securityboulevard.com/2023/05/the-ai-takeover-tool-or-terminator/
Category & Tags: Cybersecurity,Data Security,Deep Fake and Other Social Engineering Tactics,Governance, Risk & Compliance,Identity & Access,Network Security,Security Awareness,Security Boulevard (Original),Threats & Breaches,Vulnerabilities,AI,Chat GPT,chatbot,Digital Transformation,Phishing Attacks,Terminator War