Source: www.securityweek.com – Author: Kevin Townsend
AI security specialist Pangea has added to its existing suite of corporate gen-AI security products with AI Guard and Prompt Guard. The first prevents sensitive data leakage from gen-AI applications, while the second defends against malicious prompt engineering, including jailbreak attempts.
According to the current OWASP Top 10 for LLM Applications 2025 (PDF), the number one risk for gen-AI applications comes from ‘prompt injection’, while the number two risk is ‘sensitive information disclosure’ (data leakage). With large organizations each developing close to 1,000 proprietary AI apps, Pangea’s new products are designed to prevent these apps from succumbing to their two biggest risks.

Prompt engineering is a skill: the art of phrasing a gen-AI query in a manner that gets the most accurate and complete response. Malicious prompt engineering is a threat: the skill of phrasing a prompt in a way that obtains information, or elicits responses, that either should not be disclosed or could be used in a harmful manner.
Pangea’s new Prompt Guard analyzes human and system prompts to detect and block jailbreak attempts or limit violations. Detection is done through heuristics, classifiers, and other techniques, with what Pangea’s announcement claims is ‘99% efficacy’.
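The layered heuristics-plus-classifiers approach described above can be sketched roughly as follows. This is an illustrative toy, not Pangea's actual detection logic: the patterns, scoring, and threshold are all assumptions.

```python
import re

# A handful of well-known jailbreak phrasings; a real system would use a
# far larger, continuously updated pattern set plus trained classifiers.
JAILBREAK_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (DAN|in developer mode)", re.I),
    re.compile(r"pretend (you have|there are) no (rules|restrictions)", re.I),
]

def heuristic_hit(prompt: str) -> bool:
    """First stage: cheap pattern matching against known jailbreak phrasings."""
    return any(p.search(prompt) for p in JAILBREAK_PATTERNS)

def check_prompt(prompt: str, classifier=None, threshold: float = 0.5) -> bool:
    """Return True if the prompt should be blocked.

    `classifier` is an optional second stage: any callable returning a
    maliciousness score in [0, 1] (e.g. an ML model) for prompts the
    heuristics do not catch.
    """
    if heuristic_hit(prompt):
        return True
    if classifier is not None:
        return classifier(prompt) >= threshold
    return False
```

The two-stage design mirrors the article's description: fast heuristics catch known attack phrasings, while a classifier stage generalizes to novel ones.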
AI Guard is designed to prevent sensitive data leakage. It blocks malicious or undesirable content, such as profanity, hate speech, and violence. It examines prompt inputs, responses, and data ingestion from external sources to detect and block malicious content. It can block attempts to inject harmful content, including malware and malicious URLs, and can prevent the release of PII.
In total, AI Guard employs more than a dozen detection technologies, and can understand over 50 types of confidential and personally identifiable information. It gathers threat intelligence from partners CrowdStrike, DomainTools, and ReversingLabs.
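The PII-detection capability described above amounts to recognizing typed sensitive values and redacting them before they leave the application. A minimal sketch, covering only two of the 50+ data types the article mentions (the patterns and placeholder format are assumptions, not Pangea's implementation):

```python
import re

# Each detector maps a data-type label to a recognizer for that type.
REDACTORS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected sensitive values with a typed placeholder."""
    for label, pattern in REDACTORS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```

Running every prompt and response through such a filter lets an application keep the gen-AI interaction useful while stripping values that should never reach the model or the user.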
“Prompt engineering,” explains Pangea co-founder and CEO Oliver Friedrichs, “is basically social engineering on a large language model to make it do things that it has been told not to do, circumventing the controls of a typical gen-AI application.” Prompt Guard can identify common and specialized prompt injection techniques, and as new techniques emerge, detections for them will be added to the system.
AI Guard goes further. “It provides prompt injection detection and prevention,” says Friedrichs. “It also provides malicious entity detection. So, for example, if somebody is inputting a malicious URL or domain name into a prompt, or the application is generating malicious output, it can redact, block or disarm that offending content. It has a dozen different detectors for common things like profanity, sexually explicit content, self-harm, and violence, as well as code and other language. You cannot really deliver enterprise quality AI capabilities without having these security guardrails,” he adds.
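The "disarm" option Friedrichs mentions typically means defanging a malicious URL so it survives in logs or output without being clickable. A hypothetical sketch, with an in-memory blocklist standing in for the threat-intelligence feeds from partners like CrowdStrike, DomainTools, and ReversingLabs:

```python
import re

URL_RE = re.compile(r"https?://\S+")
BLOCKLIST = {"evil.example.com"}  # stand-in for a live threat-intel feed

def disarm(text: str) -> str:
    """Defang any URL whose host appears on the blocklist.

    Defanging rewrites http -> hxxp and . -> [.], a common convention
    for rendering malicious indicators inert.
    """
    def _fix(match: re.Match) -> str:
        url = match.group(0)
        host = url.split("//", 1)[1].split("/", 1)[0]
        if host in BLOCKLIST:
            return url.replace("http", "hxxp").replace(".", "[.]")
        return url
    return URL_RE.sub(_fix, text)
```

A production guardrail would offer block and redact actions alongside disarm, chosen per policy, as the quote describes.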
Pangea was founded in 2021 by Friedrichs (CEO) and Sourabh Satish (CTO). It has raised a total of $51 million in funding to date.
Related: DeepSeek Compared to ChatGPT, Gemini in AI Jailbreak Test
Related: ChatGPT, DeepSeek Vulnerable to AI Jailbreaks
Related: Microsoft Bets $10,000 on Prompt Injection Protections of LLM Email Client
Original Post URL: https://www.securityweek.com/pangea-launches-ai-guard-and-prompt-guard-to-combat-gen-ai-security-risks/
Category & Tags: Artificial Intelligence, AI, generative AI, Pangea