web analytics

LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’ – Source: www.darkreading.com

Rate this post

Source: www.darkreading.com – Author: Jeffrey Schwartz, Contributing Writer

Source: Bakhtiar Zein via Alamy Stock Vector

A rise in prompt injection engineering into large language models (LLMs) could emerge as a significant risk to organizations, an unintended consequence of AI discussed during a CISO roundtable discussion on Monday. The panel was held during Purple Book Community Connect–RSAC, an event at this week’s RSA Conference in San Francisco.

‍One of the three panelists, Karthik Swarnam, CISO at ArmorCode, an application security operations platform provider, believes incidents arising from prompt injections in code are inevitable. “We haven’t seen it yet, but we have to assume that it is coming,” Swarnam tells Dark Reading. 

LLMs trained with malicious prompting can trigger code that pushes continuous text alerts with socially engineered messages that are typically less adversarial techniques. When a user unwittingly responds to the alert, the LLM could trigger nefarious actions such as unauthorized data sharing.

“Prompt engineering will be an area that companies should start to think about more and invest in,” Swarnam says. “They should train people in the very basics of it so that they know how to use it appropriately, which would yield positive results.”

Swarnam, who has served as CISO of several large enterprises including Kroger and AT&T, says despite concerns about the risks of using AI, most large organizations have begun embracing it for operations such as customer service and marketing. Even those that either prohibit AI or claim they’re not using it are probably unaware of down-low usage, also known as “shadow AI.”

“All you have to do is go through your network logs and firewall logs, and you’ll find somebody is going to a third-party LLM or public LLM and doing all kinds of searches,” Swarnam says. “That reveals a lot of information. Companies and security teams are not naive, so they have realized that instead of saying ‘No’ [to AI usage] they’re saying ‘Yes,’ but establishing boundaries.”

One area in which many companies have embraced AI is incident response and threat analytics. “Security information and event management is definitely getting disrupted with the use of this stuff,” Swarnam says. “It actually eliminates triaging at level one, and in a lot of cases at level two as well.”

Adding AI to Application Development 

When using AI in application development tools, CISOs and CIOs should establish what type of coding assistance is practical for their organizations based on their capabilities and risk tolerance, Swarnam warns. “And don’t ignore the testing aspects,” he adds.

It is also important for leaders to consistently track where their organizations are failing and reinforce it with training. “They should focus on things that they need, where they are making mistakes — they are making constant challenges as they do development work or software development,” Swarnam says.

Original Post URL: https://www.darkreading.com/application-security/llms-malicious-code-injections-we-have-to-assume-its-coming-

Category & Tags: –

LinkedIn
Twitter
Facebook
WhatsApp
Email

advisor pick´S post

More Latest Published Posts