
DeepSeek can be gently persuaded to spit out malware code – Source: go.theregister.com


Source: go.theregister.com – Author: Connor Jones

DeepSeek’s flagship R1 model is capable of generating a working keylogger and basic ransomware code, just as long as a techie is on hand to tinker with it a little.

Concerned by generative AI’s potential for abuse, Tenable researchers Nick Miles and Satnam Narang probed DeepSeek for nefarious capabilities and found that its guardrails against malware creation could be bypassed with some careful prompting.

Simply asking DeepSeek R1, which launched in January and whose purported cost savings sent Nvidia’s share price tumbling, to generate a keylogger won’t be a successful venture.

It responds: “Hmm, that’s a bit concerning because keyloggers can be used maliciously. I remember from my guidelines that I shouldn’t assist with anything that could be harmful or illegal.”

However, telling the model that the results will be used for educational purposes only is enough to twist its arm. With some back and forth, the researchers say, it will proceed to generate C++ malware, walking the prompter through the required steps and its deliberations along the way.

The code it generates isn’t flawless and requires some manual intervention to get working, but after a few tweaks the researchers had a functional keylogger running out of the user’s view. It could still be found in Task Manager, and the log file it dropped sat in plain sight in Windows Explorer, though the researchers said that with a fairly inconspicuous name this “wouldn’t be a huge issue for most use cases.”

When asked to improve the code by hiding the log file, DeepSeek returned code that met the aim but contained one critical error. With that error fixed, the keylogger’s log file was indeed hidden; the only way to see it was to change Explorer’s advanced view options.
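Tenable didn’t publish the generated code, but on Windows, hiding a file from Explorer’s default view typically comes down to setting the hidden file attribute, the same flag those advanced view options expose. A minimal, benign sketch of that mechanism using the Win32 API (the log path here is hypothetical):

```cpp
// Sketch only: sets the hidden attribute on a file, which is the usual
// reason a file disappears from Explorer's default view. Explorer shows
// it again once "Show hidden files" is enabled in the advanced view
// options (or via `dir /a:h` on the command line).
#include <windows.h>
#include <iostream>

int main() {
    const wchar_t *path = L"C:\\Users\\Public\\example.log"; // hypothetical path
    DWORD attrs = GetFileAttributesW(path);
    if (attrs == INVALID_FILE_ATTRIBUTES) {
        std::wcerr << L"Could not read attributes for " << path << L"\n";
        return 1;
    }
    // Adding FILE_ATTRIBUTE_HIDDEN hides the file; clearing it with
    // (attrs & ~FILE_ATTRIBUTE_HIDDEN) makes it visible again.
    if (!SetFileAttributesW(path, attrs | FILE_ATTRIBUTE_HIDDEN)) {
        std::wcerr << L"SetFileAttributes failed\n";
        return 1;
    }
    return 0;
}
```

Defenders can spot such files the same way: toggle hidden-file visibility in Explorer or enumerate file attributes programmatically.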

It was a similar story with ransomware, with DeepSeek able to produce some buggy code after a few carefully worded prompts, suggesting that this particular model could be used to inform or assist cybercriminals.

“At its core, DeepSeek can create the basic structure for malware,” the researchers said. “However, it is not capable of doing so without additional prompt engineering as well as manual code editing for more advanced features.

“Nonetheless, DeepSeek provides a useful compilation of techniques and search terms that can help someone with no prior experience in writing malicious code… to quickly familiarize themselves with the relevant concepts.”

AI and malware

Since generative AI models became generally available in 2023, there have been fears that they could be abused to generate all kinds of malware, capable of all sorts of nastiness and able to evade the most diligent detections. Maybe even some scary polymorphic code that changes and adapts to the victim’s environment as it runs.

The reality has been quite the opposite. In the early days, experts were far from convinced of the technology’s malware-writing capabilities, and nearly two years later GenAI still isn’t capable of shipping malicious code that works on the first attempt, though not for lack of trying.

As the Tenable team noted, the bad guys have been working on their own models without guardrails. WormGPT, FraudGPT, Evil-GPT, WolfGPT, EscapeGPT, and GhostGPT are all examples of large language models whipped up by attackers, with varying degrees of efficacy. Some even predate mainstream launches like that of ChatGPT by a few years.

Some of these models claim to produce malware, others cater only to generating convincing phishing email copy to skip past spam filters. None are perfect, despite some costing hundreds of dollars to purchase.

Tenable’s work on DeepSeek isn’t exactly breaking new ground, either. Palo Alto Networks’ Unit 42, for example, showed that DeepSeek’s guardrails could be bypassed (a process called jailbreaking) within days of its January launch, although the model’s malware-generating abilities haven’t been widely investigated.

Aspiring cybercrooks who don’t fancy forking out for a crime-specific model can pay a lesser fee for lists of known prompts that can jailbreak mainstream chatbots, according to Kaspersky, which noted hundreds were up for sale last year.

Although the general public doesn’t have access to on-demand malware generators yet, the same might not be true for the most well-equipped adversarial states.

The UK’s National Cyber Security Centre (NCSC) predicted that by the end of 2025, AI’s influence on offensive cyber tooling could be significant.

It said in January 2024 that, although AI malware threats had largely been debunked, the potential remained for AI to create malicious code capable of bypassing defenses, provided it was trained on quality exploit data that states may already hold.

The NCSC expressed serious concern over the technology. It said last year that AI isn’t expected to become truly advanced until 2026, but the potential applications extend beyond mere malware creation.

It said AI could be used to identify the most vulnerable systems during an attack’s reconnaissance phase and the most high-value data to steal during a ransomware attack, for example.

Attackers are already using it to improve phishing campaigns and the most ambitious criminals may even be able to create their own tools, given some time, it added. ®

Original Post URL: https://go.theregister.com/feed/www.theregister.com/2025/03/13/deepseek_malware_code/
