Source: www.csoonline.com – Author:
From abusing trusted platforms to reviving old techniques, attackers leave no stone unturned when it comes to evading security controls and targeting their victims.
CISOs have an array of ever-growing tools at their disposal to monitor networks and endpoint systems for malicious activity. But cybersecurity leaders face a growing responsibility of educating their organization’s workforce and driving cybersecurity awareness efforts.
Cybersecurity remains an ongoing battle between adversaries and defenders. As attacks become more sophisticated and evasive, it becomes paramount that security controls catch up – ideally in a proactive manner.
Here are some tactics and techniques cybercriminals are employing to cover their tracks.
Abusing trusted platforms that won’t raise alarms
In my research, I observed that in addition to using obfuscation, steganography, and malware packing techniques, threat actors today frequently take advantage of legitimate services, platforms, protocols, and tools to conduct their activities. This lets them blend in with traffic or activity that may look “clean” to human analysts and machines alike.
Most recently, threat actors have abused Google Calendar, using it as a command and control (C2) server. The Chinese hacking group, APT41 was seen using calendar events to facilitate their malware communication activities.
For defenders, this becomes a grave challenge, while it’s far easier to block traffic to certain IP addresses and domains exclusive to an attacker, blocking a legitimate service like Google Calendar, which may be in rampant use by the entire workforce, poses a far greater practical challenge, prompting defenders to explore alternative detection and mitigation strategies.
In the past, attackers have also leveraged pentesting tools and services like Cobalt Strike, Burp Collaborator, and Ngrok, to conduct their nefarious activities. In 2024, hackers targeting open source developers abused Pastebin to host next stage payload for their malware. In May 2025, cybersecurity specialist “Aux Grep” even demonstrated a fully-undetectable (FUD) ransomware that leveraged metadata in an image (JPG) file as part of its deployment. These are all examples of how threat actors may exploit familiar services and file extensions to conceal their real intentions.
Benign features like GitHub comments, have also been exploited to place malicious “attachments” that would appear to be hosted on official Microsoft GitHub repositories, misleading visitors into treating these as legitimate installers. Because such features are common among similar services, attackers can, at any time, diversify their campaign by switching between different legitimate platforms.
Typically, these services are used by legitimate parties: be it regular employees, technically savvy developers and even in-house ethical hackers, making it far more difficult to impose a blanket ban on them, such as via a web application firewall. Ultimately, their abuse warrants a much more intensive deep packet inspection (DPI) on the network and robust endpoint security rules that can differentiate between legitimate and misuse of web services.
Backdoors in legitimate software libraries
In April 2024, it was revealed that the XZ Utils library had been covertly backdoored as part of years-long supply-chain compromise effort. The widely used data compression library that ships as a part of major Linux distributions had malicious code inserted into it by a trusted maintainer.
Over the last decade, the trend of legitimate open-source libraries being tainted with malware has picked up, particularly unmaintained libraries that are hijacked by threat actors and altered to conceal malicious code.
In 2024, Lottie Player, a popular JavaScript embedded component, was modified in a supply chain attack. The incident occurred due to developer access token compromise and allowed threat actors to override Lottie’s code. Any websites using Lottie Player component had its visitors greeted with a bogus form, prompting them to login to their cryptocurrency wallets, and enable attackers to steal their funds. The same year, Rspack and Vant libraries suffered an identical compromise.
In March 2025, security researcher Ali ElShakankiry analyzed a dozen cryptocurrency libraries that had been taken over by threat actors and had their latest versions turned into info-stealers.
These attacks may typically be conducted by taking over the accounts of maintainers behind these libraries, such as via phishing, or credential stuffing. Other times, as seen with XZ Utils, one of the maintainers may be a threat actor pretending to be good-faith open-source contributor or good-faith contributors who went rogue.
Invisible AI/LLM prompt injections and pickles
Prompt injections are a significant security risk for large language models (LLMs), where malicious inputs manipulate the LLM into unknowingly executing attackers’ objectives. With AI having made its way into many tenets of our life, including software applications, prompt injections are gaining momentum among threat actors.
Carefully worded instructions can trick LLMs into ignoring previous instructions or “safeguards” and performing unintended actions, as desired by a threat actor. This may result in, for example, disclosure of sensitive data, personal information, or proprietary intellectual property. In the context of MCP servers, prompt injection and context poisoning can compromise AI agent systems by exploiting malicious inputs.
A recent Trend Micro report shed light on “Invisible Prompt Injection,” a technicality where hidden texts, that use special Unicode characters, may not readily render in the UI or be visible to a human, but can still be interpreted by LLMs that may fall victim to these covert attacks.
Attackers can, for example, embed invisible characters in web pages or documents (such as resumes) that may be parsed by automated systems (think an AI-powered Applicant Tracking Sytem analyzing resumes for keywords relevant to a job description), and end up overriding safety barriers of the LLM to exfiltrate sensitive information to attackers’ systems, as one example.
Prompt injection itself is of a versatile nature and may be repurposed for or reproduced in a variety of environments. For example, Prompt Security co-founder and CEO Itamar Golan, recently posted about a “whisper injection” variation of the attack, discovered by a red teaming expert, Johann Rehberger, who has exposed other such techniques on his blog. Whisper injection relies on renaming files and directories with instructions that will readily be executed by an AI/LLM agent.
Instead of serving malicious prompts to AI/ML engines, what about tainting a model itself?
Last year, JFrog researchers discovered AI/ML models tainted with malicious code to target data scientists with silent backdoors. Repositories like Hugging Face have frequently been called “GitHub of AI/ML” as they enable data scientists and the AI practitioner community to come together in using and share datasets and models. Many of these models, however, use Pickle for serialization. Although a popular format for serializing and deserializing data, Pickle is known to pose security risks and ‘pickled’ objects and files should not be trusted.
Hugging Face models revealed by JFrog were seen abusing Pickle functionalities to run malicious code as soon as these are spun up. “The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor’,” explains JFrog’s report.
Deploying polymorphic malware with near-zero detection
AI technologies can be abused to generate polymorphic malware — malware that alters its appearance by changing its code structure with each new iteration. This variability allows it to evade traditional signature-based antivirus solutions that rely on static file hashes or known byte patterns.
Historically, threat actors had to manually obfuscate or repack malware using tools like packers and crypters to achieve this. AI now enables this process to be automated and massively scaled, allowing attackers to quickly generate hundreds or thousands of unique, near-undetectable samples.
The primary advantage of polymorphic malware lies in its ability to bypass static detection mechanisms. On malware scanning platforms like VirusTotal, fresh polymorphic samples may initially yield low or even zero detection rates when analyzed statically, especially before AV vendors develop generic signatures or behavioral heuristics for the family. Some polymorphic variants may also introduce minor behavioral changes between executions, further complicating heuristic or behavioral analysis.
However, AI-driven security tools — such as behavior-based endpoint protection platforms (EPPs) or threat intelligence systems — are increasingly able to flag such threats through dynamic analysis and anomaly detection. That said, one trade-off with behavioral AI detection models, especially in their early deployment phases, is a higher incidence of false positives. This is partly because some legitimate software may exhibit low-level behaviors — such as unusual system calls or memory manipulation — that superficially resemble malware activity.
Threat actors may also rely on counter-antivirus (CAV) services like AVCheck, which was recently shut down by law enforcement. The service allowed users to upload their malware executables and check if existing antivirus products would be able to detect them, but it did not share these samples with security vendors, paving way for suspicious use cases, such as for threat actors to test how undetectable their payload was.
Liora Itkin, a security researcher at CardinalOps, breaks down a real world proof of concept featuring AI-generated polymorphic malware and has offered useful pointers in how to detect such samples. “Although polymorphic AI malware evades many traditional detection techniques, it still leaves behind detectable patterns,” explains Itkin. Unusual connections to AI tools like OpenAI API, Azure OpenAI, or other services with API-based code generation capabilities like Claude, are among some techniques that can be used to flag the ever-mutating samples.
Coding stealthy malware in uncommon programming languages
Threat actors are leveraging relatively new languages like Rust to write malware due to the efficiency these languages offer, along with compiler optimizations that can hinder reverse engineering efforts.
“This adoption of Rust in malware development reflects a growing trend among threat actors seeking to leverage modern language features for enhanced stealth, stability, and resilience against traditional analysis workflows and threat detection engines,” explains Jia Yu Chan, a malware research engineer at Elastic Security Labs. “A seemingly simple infostealer written in Rust often requires more dedicated analysis efforts compared to its C/C++ counterpart, owing to factors such as zero-cost abstractions, Rust’s type system, compiler optimizations, and inherent difficulties in analyzing memory-safe binaries.”
The researcher demonstrates a real-world infostealer, dubbed EDDIESTEALER, which is written in Rust and seen in use within active fake CAPTCHA campaigns.
Other examples of languages used to write stealthy malware have included Golang or Go, D, and Nim. These languages add obfuscation in multiple ways. First, rewriting malware in a new language means render signature-based detection tools momentarily useless (at least until new virus definitions are created). Further, the languages themselves may act as an obfuscation layer, as seen with Rust.
In May 2025, Socket’s research team exposed “a stealthy and highly destructive supply-chain attack targeting developers using Go modules.” As a part of the campaign, threat actors injected obfuscated code in Go modules to deliver a damaging disk-wiper payload.
Reinventing social engineering: ClickFix, FileFix, BitB attacks
While defenders may get caught up in technological nitty-gritty and pulling obfuscated code apart, sometimes all a threat actor needs to breach a system and gain initial access is to exploit the human element. No matter how hard your perimeter security controls, network monitoring and endpoint detection systems may be, all it takes is the weakest link — a human to click the wrong link and fall for a copycat webform to aid threat actors achieve their initial access.
Last year, I was tipped off on a ‘GitHub Scanner’ campaign where threat actors were abusing the platform’s ‘Issues’ feature to send official GitHub email notifications to developers and attempted to direct them to a malicious github-scanner[.]com website. This domain would then present users with bogus but real-looking popups titled “Verify you are human” or an error along the lines of: “Something went wrong, click to fix the issue.” The screen would further advise users to copy, paste, and run certain commands on their Windows system, resulting in a compromise. Such attacks, comprising bogus warning and error messages, are now categorized under the umbrella term, ClickFix.
Security researcher mr.d0x recently demonstrated a variation of this attack and called it FileFix.
Whereas ClickFix would entail users clicking on a button that would copy malicious commands onto Windows clipboard, FileFix further enhances this trick by incorporating an HTML file upload dialog box in a misleading manner. Users are prompted to paste the copied “filepath”, which is really a malicious command, into the file upload box which would end up executing the command.
Both ClickFix and FileFix attacks are browser-based attacks that exploit deficiencies in the user interface (UI) and a user’s mental model, a key human-computer interaction concept that represents a user’s internal representation of how a system works.
What may clearly be a file upload box meant to select a file, may, in a FileFix context appear to a user to be an area where they can “paste” the dummy file path shown to them, thereby facilitating the attack.
In the past, mr.d0x demonstrated a phishing technique called Browser-in-the-Browser (BitB) attack that remains an active threat. A recent Silent Push report exposed a new phishing campaign using complex BitB toolkits involving “fake but realistic-looking browser pop-up windows that serve as convincing lures to get victims to log into their scams.”
Lastly, something as simple as an apparent video (MP4) file on your Windows computer that even bears a convincing MP4 icon, may in fact be a Windows executable (EXE).
The point is clear: Rather than relying solely on highly sophisticated malware, many threat actors find greater success by refining simple social engineering techniques. By manipulating user trust and leveraging UI deception, attackers continue to bypass technical defenses, hide their tracks, and “hack” the human mind, reminding us that cybersecurity is as much about people as it is about technology.
SUBSCRIBE TO OUR NEWSLETTER
From our editors straight to your inbox
Get started by entering your email address below.
Original Post url: https://www.csoonline.com/article/570701/5-ways-hackers-hide-their-tracks.html
Category & Tags: Hacker Groups, Hacking, Security – Hacker Groups, Hacking, Security
Views: 1