Why The Rise of AI Agents Demands a New Approach to Fraud Prevention

Source: securityboulevard.com – Author: Benjamin Fabre

Over the last decade, we’ve witnessed the shift from static, technical detection methods to advanced behavioral analysis powered by machine learning at the edge. This evolution has been vital to our mission at DataDome: to free the web from fraudulent traffic.

As digital ecosystems evolve to accommodate everything from mobile apps to AI agents like ChatGPT, the lines between legitimate use and malicious activity continue to blur. This evolution underscores a crucial shift in focus: it’s no longer about simply identifying human vs. bot traffic. The priority now is discerning legitimate vs. illegitimate users across all channels and preventing fraudsters from exploiting online businesses and their end-users. 

This shift demands real-time solutions capable of detecting and blocking fraudulent behavior without disrupting user experience. DataDome’s AI-driven platform is uniquely positioned to tackle these challenges head-on, leveraging the same technology that powers our existing solutions to address the emerging risks of AI-driven, agentic traffic.

The evolution of web technology: From browsers to AI agents

The web has evolved at lightning speed, transforming from its humble beginnings as simple browser-based interactions to a dynamic, multi-channel ecosystem teeming with mobile apps, APIs, and now, AI agents. I like to think of this evolution in three major phases:

Phase 1: Browser-only web
In its early days, the web was accessed primarily through browsers. Detection techniques were relatively straightforward, focusing on differentiating between human users and automated bots. Fraud prevention was often static, relying on server-side signals like IP reputation or basic challenges.
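
As an illustration only (not any vendor's actual implementation), a Phase 1-style check might combine a static IP reputation list with a naive per-IP rate limit. The addresses and thresholds in this Python sketch are made up:

    import time
    from collections import defaultdict

    # Hypothetical static reputation list; a real deployment would use a maintained feed.
    KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
    REQUESTS_PER_MINUTE_LIMIT = 120
    _recent_requests = defaultdict(list)  # ip -> timestamps of recent requests

    def is_suspicious(ip, now=None):
        """Phase-1-style check: static IP reputation plus a naive per-IP rate limit."""
        now = time.time() if now is None else now
        if ip in KNOWN_BAD_IPS:
            return True
        window = [t for t in _recent_requests[ip] if now - t < 60]
        window.append(now)
        _recent_requests[ip] = window
        return len(window) > REQUESTS_PER_MINUTE_LIMIT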

Phase 2: The rise of mobile apps & APIs
The advent of mobile apps and API-driven interactions introduced a new layer of complexity. Businesses had to secure these channels while maintaining seamless user experiences. Fraudsters began exploiting APIs for credential stuffing, data scraping, and unauthorized transactions, prompting the need for advanced detection techniques that incorporated both server-side and client-side signals.
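
To make that shift concrete, the hypothetical sketch below scores a login attempt by combining a client-side signal (whether instrumentation collected a device fingerprint) with server-side history for the requesting IP. The field names and weights are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class LoginAttempt:
        ip: str
        username: str
        has_device_fingerprint: bool     # client-side signal collected by a script or SDK
        failed_logins_from_ip: int       # server-side signal from recent login history
        distinct_usernames_from_ip: int  # server-side signal: accounts tried from this IP

    def credential_stuffing_score(attempt: LoginAttempt) -> float:
        """Toy risk score in [0, 1]; higher means more likely automated abuse."""
        score = 0.0
        if not attempt.has_device_fingerprint:
            score += 0.3   # headless or scripted clients often skip client-side collection
        if attempt.failed_logins_from_ip > 10:
            score += 0.4   # many recent failures suggests a password list being replayed
        if attempt.distinct_usernames_from_ip > 5:
            score += 0.3   # one IP cycling through many accounts is a stuffing pattern
        return min(score, 1.0)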

Phase 3: AI agents & headless browsers
Today, AI agents, such as OpenAI’s Operator, represent the next stage of web evolution. Unlike traditional users, AI agents operate programmatically, often through headless browsers. These agents present unique challenges, as they can be used for both legitimate purposes—like content discovery or automated purchases—and malicious activities, such as scraping, fraud, and vulnerability exploitation.
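
As a rough, hypothetical example of why headless browsers complicate detection, a collector might inspect client-reported signals like the ones below. The keys and heuristics are illustrative, not any product's real schema:

    def looks_like_headless_browser(client_signals: dict) -> bool:
        """Heuristic check over signals reported by client-side instrumentation.

        The keys and rules are illustrative; real detection weighs many more
        signals and behavioral patterns rather than any single flag.
        """
        webdriver_flag = client_signals.get("navigator_webdriver", False)
        plugin_count = client_signals.get("plugin_count", 0)
        user_agent = client_signals.get("user_agent", "")

        if webdriver_flag:
            return True                  # automation frameworks commonly expose this flag
        if "HeadlessChrome" in user_agent:
            return True                  # default token in headless Chrome's user agent
        if plugin_count == 0 and "Chrome" in user_agent:
            return True                  # desktop Chrome normally reports some plugins
        return False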

Traffic from LLMs is growing across all channels

Businesses today must manage a landscape where traditional channels like organic search intersect with emerging sources such as traffic from AI-generated responses. To stay competitive, they need to optimize strategies across these channels, secure traffic sources, and use analytics to drive smarter growth.

Non-browser traffic—including mobile apps, APIs, and AI agents—has become a major part of digital interactions, bringing opportunities like automation and content discovery, but also new openings for fraud. Here’s a closer look at the evolving mix of traffic channels businesses must address:

  1. Traditional search: Organic and paid search remain critical for capturing users actively searching for products or services.
  2. Social media: Platforms like TikTok, Instagram, and LinkedIn drive engagement and conversions through both organic content and paid campaigns.
  3. APIs & apps: With mobile apps and third-party integrations, APIs now handle a substantial share of traffic, requiring seamless and secure interactions.
  4. LLMs & AI agents: Tools like ChatGPT are creating new traffic streams for content discovery, automation, and e-commerce. However, they also bring risks like scraping and fraud that businesses must address.

As businesses adapt to these changes, non-browser traffic continues to grow as a share of digital interactions, and DataDome is already leading the way in protecting it:

  • 35% of our customers’ traffic comes from non-browser APIs, all safeguarded to ensure secure interactions across mobile apps and headless browser sessions.
  • Our SDKs deployed on over 800 million devices worldwide provide unparalleled visibility into user behavior, allowing us to detect and prevent fraud with unmatched precision.
  • Large language models (LLMs), such as ChatGPT and similar AI tools, are driving significant traffic growth. Over the past 30 days, we observed 178.3 million requests from OpenAI-identified crawlers, with a month-over-month increase of 14.5%.
  • Specifically, ChatGPT alone accounted for 10.6 million requests in the last 30 days.
  • We also observed a 48.0% increase in OpenAI crawler traffic during the release of their AI agent, Operator, on January 24th.
  • Overall, LLM-related crawlers represent approximately 2.64% of “legitimate” bot traffic (e.g., verified crawlers such as Googlebot) on our customers’ websites, translating to about 350 million requests over the past 30 days from official LLM crawlers like OpenAI’s ChatGPT and Anthropic’s Claude. (A simple user-agent classification sketch follows this list.)
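
As a hypothetical sketch of how such official crawlers can be recognized, the snippet below matches user-agent substrings commonly associated with OpenAI and Anthropic crawlers. Verify the exact, current tokens against each vendor's documentation, and note that user agents can be spoofed:

    # Illustrative user-agent substrings associated with official LLM crawlers; the
    # exact, current tokens should be checked against each vendor's documentation.
    LLM_CRAWLER_TOKENS = {
        "GPTBot": "OpenAI crawler",
        "ChatGPT-User": "OpenAI fetches made on behalf of ChatGPT users",
        "ClaudeBot": "Anthropic crawler",
    }

    def classify_llm_crawler(user_agent: str):
        """Return a label if the user agent matches a known LLM crawler token, else None.

        User agents can be spoofed, so this label is only a first pass; production
        systems also verify requests against the vendor's published IP ranges.
        """
        for token, label in LLM_CRAWLER_TOKENS.items():
            if token in user_agent:
                return label
        return None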

How OpenAI’s agentic AI Operator works 

The ChatGPT Operator application uses agentic AI, designed to autonomously perceive, decide, and act within defined parameters on the user’s behalf. It combines a real Chrome browser with a Computer-Using Agent (CUA) program, integrating OpenAI’s GPT-4o model (with vision capabilities for tasks like reading images) and a reinforcement learning mechanism optimized for user interface interactions.

When a task is assigned, the Operator captures browser screenshots, which the model analyzes to determine and perform actions within the browser on behalf of the user, like clicking on links and filling out forms. It is programmed to pause when encountering CAPTCHAs or tasks requiring sensitive inputs, such as payment details.
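
That description maps onto a simple perceive-decide-act loop. The sketch below is a conceptual outline under that reading, not OpenAI's actual code; `browser` and `model` are duck-typed placeholders:

    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str       # e.g. "click", "type", "scroll", "captcha", "sensitive_input", "done"
        result: str = ""

    def run_agent_task(browser, model, task, max_steps=50):
        """Conceptual perceive-decide-act loop for a computer-using agent.

        `browser` and `model` stand in for a real browser controller and a
        vision-capable model; this is an illustration, not OpenAI's implementation.
        """
        for _ in range(max_steps):
            screenshot = browser.capture_screenshot()             # perceive the page
            action = model.decide_next_action(task, screenshot)   # decide on an action

            if action.kind == "done":
                return action.result
            if action.kind in ("captcha", "sensitive_input"):
                # Operator is described as pausing here and handing control to the user.
                return "paused: user input required"

            browser.perform(action)                               # act: click, type, scroll
        return "stopped: step limit reached"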

The good, the bad, & the ugly of AI Agent usage

AI agents, like ChatGPT’s Operator, are increasingly being used for a wide range of purposes, both good and bad. 

Some positive use cases for AI agents include:

  • Assisting with content discovery to find relevant information or products efficiently.
  • Automating repetitive tasks to save time and improve productivity.
  • Supporting e-commerce by simplifying product searches and purchases, enhancing user experience.

Some negative use cases for AI agents include: 

  • Scraping content and pricing data to gain unauthorized competitive insights.
  • Conducting credential stuffing attacks to exploit stolen user credentials.
  • Identifying and exploiting vulnerabilities to compromise systems and data.
  • Generating fake accounts to manipulate user metrics or exploit promotions.
  • Automating fraudulent transactions, such as carding or payment fraud.
  • Engaging in click fraud to waste ad budgets and distort campaign performance.
  • Launching DDoS attacks to disrupt service availability and degrade user experience.
  • Simulating human-like interactions to bypass security measures like CAPTCHAs.

To address this, businesses must adopt a balanced approach that supports legitimate AI agent activities while preventing malicious ones. By leveraging real-time detection and behavioral analysis, DataDome ensures businesses can maximize the benefits of AI agent traffic while safeguarding against bad actors who might leverage them for fraud and abuse. 
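
One way to picture that balance, as a toy example with arbitrary thresholds: fold a crawler-verification flag and a behavioral risk score into a per-request allow/challenge/block decision.

    def request_decision(is_verified_llm_crawler: bool, risk_score: float) -> str:
        """Toy per-request policy; thresholds are arbitrary placeholders.

        The idea: verified, well-behaved AI agents keep flowing, while unverified
        or high-risk traffic is challenged or blocked in real time.
        """
        if is_verified_llm_crawler and risk_score < 0.5:
            return "allow"        # legitimate AI agent traffic that can drive value
        if risk_score < 0.3:
            return "allow"        # low-risk human or automated traffic
        if risk_score < 0.7:
            return "challenge"    # ask for an additional, low-friction proof
        return "block"            # high-confidence fraud or abuse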

How to future-proof your business against cyberfraud

With the use of AI agents becoming ever more commonplace, fraud prevention requires moving beyond static Turing tests and traditional tools. Advanced solutions must leverage behavioral analysis, real-time machine learning, and dynamic feedback loops to balance high detection accuracy with usability. This ensures businesses can effectively protect their platforms while supporting legitimate AI-driven traffic that drives growth and a more seamless user experience.
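
As a minimal sketch of a dynamic feedback loop (purely illustrative; real platforms retrain detection models continuously rather than tuning a single number), confirmed outcomes from challenges and reviews can feed back to adjust the blocking threshold:

    class FeedbackLoop:
        """Toy dynamic feedback loop: confirmed outcomes nudge the blocking threshold."""

        def __init__(self, threshold: float = 0.7, step: float = 0.01):
            self.threshold = threshold
            self.step = step

        def record_outcome(self, risk_score: float, was_actually_fraud: bool) -> None:
            if was_actually_fraud and risk_score < self.threshold:
                self.threshold = max(0.10, self.threshold - self.step)  # we were too lenient
            elif not was_actually_fraud and risk_score >= self.threshold:
                self.threshold = min(0.95, self.threshold + self.step)  # we were too strict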

DataDome is the ultimate Cyberfraud Protection Platform, securing the largest enterprise businesses against security risks and fraud. It delivers comprehensive protection across all devices, APIs, and agents—whether used to consume content, browse websites, or make purchases—ensuring businesses thrive in the age of AI while maintaining trust and safety.

Original Post URL: https://securityboulevard.com/2025/01/why-the-rise-of-ai-agents-demands-a-new-approach-to-fraud-prevention/

Category & Tags: Security Bloggers Network, Threat Research
