
Understanding AI Risk Management – Securing Cloud Services with OWASP LLM Top 10 – Source: securityboulevard.com

Source: securityboulevard.com – Author: stackArmor

  1. Utilize the Self-Reminder model (a minimal sketch follows this list).
  2. When using RAG, provide a similarity_score_threshold. When using document-based searches like Amazon Kendra, you can hard-code responses to requests that do not match any documents in your index. Malicious requests are typically not going to be found in enterprise data stores used in RAG — therefore, searches for them will come back empty or with low similarity.
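A minimal sketch of the self-reminder pattern from item 1: the raw user query is wrapped in reminder text that asks the model to respond responsibly. The template wording and function name below are illustrative, not prescribed by the source.

self_reminder_template = (
    "You should be a responsible assistant and must not generate harmful or "
    "misleading content. Please answer the following user query in a responsible way.\n"
    "{query}\n"
    "Remember: you are a responsible assistant and must not generate harmful content."
)

def wrap_with_self_reminder(user_query: str) -> str:
    # Surround the raw user input with the reminder before sending it to the LLM
    return self_reminder_template.format(query=user_query)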

## Example 1

retriever = db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.5}
)

## Example 2

# ...
llm_jurassic = Bedrock(
    client=bedrock_client,
    model_id="ai21.j2-ultra-v1",
    endpoint_url="https://bedrock-runtime." + REGION_NAME + ".amazonaws.com",
    model_kwargs={"temperature": 0.2, "maxTokens": 1200, "numResults": 1}
)

qnachain = ConversationalRetrievalChain.from_llm(
    llm=llm_jurassic,
    condense_question_llm=llm_jurassic,
    retriever=retriever,
    return_source_documents=True,
    condense_question_prompt=question_generator_chain_prompt,
    combine_docs_chain_kwargs={"prompt": combine_docs_chain_prompt}
)

# ...
llm_result = qnachain(input_variables)

# ...
# Hard-code the response when no source documents matched the question
if len(llm_result["source_documents"]) > 0:
    response_text = llm_result["answer"].strip()
else:
    response_text = "I don't know, no source documents matched your question"

  2. Insecure Output Handling

This vulnerability occurs when an LLM output is accepted without scrutiny, exposing backend systems. Misuse may lead to severe consequences like XSS, CSRF, SSRF, privilege escalation, or remote code execution.

Insecure Output Handling is the result of inadequate validation, sanitization, and management of outputs generated by LLMs before they are sent downstream for further consumption or processing. This vulnerability arises because LLM-generated content can be influenced by user input, effectively granting indirect access to additional functionality.

Potential Impact:

Exploiting Insecure Output Handling can lead to security risks such as XSS and CSRF in web browsers, as well as SSRF, privilege escalation, or remote code execution in back-end systems. This vulnerability can be exacerbated by over-privileged LLM access, susceptibility to indirect prompt injection attacks, and insufficient input validation in third-party plugins.

Mitigations:

Utilize a zero-trust approach and treat the LLM as an insider threat. 

  1. Apply proper input validation on responses coming from the model to backend functions. OWASP recommends following its own ASVS guidelines, such as encoding all output text so it cannot be automatically executed as JavaScript or rendered as Markdown (see the sketch after this list).
  2. If you are hosting an HTTP service that ingests output from an LLM directly, implement AWS WAF to detect malicious requests at the application layer. If your HTTP service does not sit behind an AWS WAF-supported service such as an ALB or API Gateway, implement application-layer protection using a virtual appliance such as a Palo Alto NGFW or another mechanism.
  3. If using Agents for Bedrock, consider enabling a Lambda parser for each of the templates to have more control over the logical processing.
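A sketch of the output-encoding recommendation from item 1 in the list above, using only Python's standard library (variable and function names are illustrative):

import html

def encode_llm_output(llm_text: str) -> str:
    # HTML-encode the model output so embedded markup such as <script> tags
    # is rendered as literal text instead of being executed by the browser
    return html.escape(llm_text)

safe_text = encode_llm_output(response_text)  # response_text from the earlier retrieval example

If the output is rendered as Markdown rather than raw HTML, apply the same idea there: disable raw HTML in the Markdown renderer or escape the text before rendering.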
  3. Training Data Poisoning

This occurs when LLM training data is tampered with, introducing vulnerabilities or biases that compromise security, effectiveness, or ethical behavior. Sources include Common Crawl, WebText, OpenWebText, and books.

Data poisoning is essentially an integrity attack because of its disruptive influence on the model's fundamental capacity to generate accurate predictions. Introducing external data increases the risk of training data poisoning because model developers have limited control over it.

Potential Impact:

Poisoned information may result in false, biased, or inappropriate content being presented to users, or create other risks such as performance degradation and downstream software exploitation.

Mitigations:

  1. Verify the supply chain of all training data used in embedding or fine-tuning.
  2. Enable audit logging on all sources of training or RAG data. For example, if using Amazon S3 (with Amazon Bedrock Knowledge Bases or Amazon Kendra), enable CloudTrail data events for write operations (see the sketch after this list).
  3. Enable strict access control on data sources. This includes using dedicated service roles for crawling data in Amazon Kendra or Amazon Bedrock Knowledge Bases, with IAM policies that only allow access to the required sources.
  4. Only use data that has been properly prepped via automation. Your data preparation process should include steps to identify and exclude anomalous data. Amazon SageMaker Data Wrangler can greatly accelerate data preparation. Finalized data can be exported to Amazon S3 for consumption.
  5. Test and document model performance prior to release. Include human review of responses made during testing.
  6. Tools like Autopoison can also be used here for adversarial training to minimize the impact of data poisoning.
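As referenced in mitigation 2 above, a minimal boto3 sketch of enabling CloudTrail data events for S3 write operations. The trail and bucket names are placeholders, and the trail is assumed to already exist.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Log write-only data events for objects in the bucket that feeds the
# knowledge base or index (trail and bucket names are placeholders)
cloudtrail.put_event_selectors(
    TrailName="my-llm-audit-trail",
    EventSelectors=[
        {
            "ReadWriteType": "WriteOnly",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::my-rag-source-bucket/"],
                }
            ],
        }
    ],
)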
  4. Model Denial of Service

Attackers cause resource-heavy operations on LLMs, leading to service degradation or high costs. The vulnerability is magnified by the resource-intensive nature of LLMs and the unpredictability of user inputs.

Model Denial of Service is similar to a network-based DoS attack: repeated or very large requests can overwhelm LLM-based systems.

Potential Impact:

This can result in a service being completely disabled, or in runaway costs when using AWS services that charge for each request, such as Amazon Bedrock, Amazon Kendra, and Amazon Bedrock Knowledge Bases backed by OpenSearch Serverless (since it could scale up the required OCUs).

Mitigations:

  • Utilize maximum token limits, which are supported by LangChain, as shown below.

from langchain.llms.bedrock import Bedrock
import boto3

bedrock_client = boto3.client("bedrock-runtime")

# print("Initializing Anthropic Claude v2.1")
llm_anthropic_claude21 = Bedrock(
    client=bedrock_client,
    model_id="anthropic.claude-v2:1",
    endpoint_url="https://bedrock-runtime." + REGION_NAME + ".amazonaws.com",
    model_kwargs={"temperature": 0.25, "max_tokens_to_sample": 1000}
)

  • Implement a caching layer in your LLM client; this can prevent common queries from being repeatedly processed by your LLM. This is also supported by LangChain, which currently offers SQLite, in-memory, and GPTCache caches. Do proper research when selecting and deploying any caching solution.

from langchain.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path=".langchain.db"))

  • If feasible, proxy your Bedrock model with API Gateway and Lambda, then enable AWS WAF with rate limiting, source IP restriction, or other relevant rules (a minimal Lambda proxy sketch follows this list). Note: all Amazon Bedrock models have model-specific rate limits. Side note: would love to see AWS implement resource policies on Amazon Bedrock models/agents to more easily incorporate customized security.
  • Monitor usage closely using Amazon CloudWatch with anomaly detection and create alarms (see the sketch after this list).
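For the API Gateway and Lambda proxy mitigation above, a minimal sketch of a Lambda handler that forwards a prompt to Amazon Bedrock. The request schema assumes Anthropic Claude v2.1 as used earlier; adjust the model ID and body format for your model.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a string
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-v2:1",
        body=json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": 1000,
            "temperature": 0.25,
        }),
    )
    completion = json.loads(response["body"].read())
    return {
        "statusCode": 200,
        "body": json.dumps({"answer": completion.get("completion", "")}),
    }

A rate-based AWS WAF rule can then be associated with the API Gateway stage that fronts this function.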
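For the monitoring mitigation, a sketch of a CloudWatch anomaly detection alarm on Bedrock invocation volume, assuming the AWS/Bedrock Invocations metric; the alarm name and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when invocation volume rises above the anomaly detection band
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-invocations-anomaly",  # placeholder name
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=3,
    ThresholdMetricId="ad1",
    Metrics=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {"Namespace": "AWS/Bedrock", "MetricName": "Invocations"},
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": True,
        },
        {
            "Id": "ad1",
            "Expression": "ANOMALY_DETECTION_BAND(m1, 2)",
            "Label": "Expected invocation band",
            "ReturnData": True,
        },
    ],
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:llm-alerts"],  # placeholder topic
)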

Original Post URL: https://securityboulevard.com/2024/01/understanding-ai-risk-management-securing-cloud-services-with-owasp-llm-top-10/

Category & Tags: Application Security, CISO Suite, Cloud Security, Governance, Risk & Compliance, Security Bloggers Network, AI, Blog, cloud services, owasp, risk management
