By Byron V. Acohido
In cybersecurity, trust often hinges on what users think their software is doing — versus what’s actually happening under the hood.
Related: Eddy Willems’ ‘Borrowed Brains’ findings
Take antivirus, for example. Many users assume threat detection is based on proprietary research, unique signatures, and internal analysis.
But what happens when a product’s detection engine is mostly echoing what’s already out in the wild?
That’s the concern raised by Eddy Willems, an independent security evangelist with 35+ years in the anti-malware trenches. In a case study shared with Last Watchdog under embargo, Willems documents a revealing experiment tied to a quiet but sweeping shift: U.S. users being migrated from Kaspersky to a lesser-known product called Ultra AV, built on scanning tech from Max Secure Software and distributed by Pango Group.
Willems noticed something strange. Users were surprised by the swap — and he’d never even heard of Ultra AV. So he set out to learn how it actually detects malware.
What he found raises questions about transparency in AV detection — and whether some vendors are leaning heavily on shared data sources like VirusTotal, without doing much original analysis of their own.
We caught up with Willems to unpack what he saw, and what it might mean for the industry.
LW: What made you want to look closer at Ultra AV?
Willems: I started noticing online chatter from U.S. users who’d been switched from Kaspersky to Ultra AV — and they weren’t happy. There was confusion, poor communication, and complaints about how hard it was to remove. That got my attention.
What really raised flags for me, though, was the fact that I’d never heard of Ultra AV — and I’ve worked in this space for over three decades. That was odd enough to make me curious about its detection capabilities.
LW: How did you structure your test?
Willems: I wanted to answer one question: Is Ultra AV’s detection influenced by whether a file shows up on VirusTotal?
I started with real malware samples that Ultra AV already detected. From those, I made modified copies — same behavior, but a single byte changed to generate a new hash. Let’s call them set A. Then I duplicated those again, changing a different byte — that’s set B. All functionally identical, and all initially undetected by Ultra AV.
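To make the hash effect concrete, here is a minimal Python sketch of that duplication step. It is not Willems’s actual tooling: he worked with live malware in an isolated lab, whereas this uses a harmless dummy file, and the file names and byte offsets are assumptions.

```python
import hashlib
from pathlib import Path

# Stand-in for a real sample: a zero-filled file, so the sketch is
# safe to run. Willems used live malware in an isolated lab.
sample = Path("sample.bin")
sample.write_bytes(b"\x00" * 1024)

def flip_byte(src: Path, dst: Path, offset: int) -> Path:
    """Copy src to dst with exactly one byte XOR-flipped.

    For a real executable, the offset would need to land in padding
    or another region that doesn't affect execution, so behavior
    stays identical while the file hash changes.
    """
    data = bytearray(src.read_bytes())
    data[offset] ^= 0xFF  # change exactly one byte
    dst.write_bytes(bytes(data))
    return dst

# Set A: one byte changed; set B: set A duplicated with a different byte changed.
variant_a = flip_byte(sample, Path("sample_setA.bin"), offset=100)
variant_b = flip_byte(variant_a, Path("sample_setB.bin"), offset=200)

# One changed byte yields an entirely different hash, which is why
# purely hash-keyed detection misses the copies.
for p in (sample, variant_a, variant_b):
    print(p.name, hashlib.sha256(p.read_bytes()).hexdigest())
```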
I put both sets on a test machine with Ultra AV installed and scanned daily for changes. Then I uploaded five samples from set A to VirusTotal — using a totally separate, AV-free machine to keep the test clean.
After that, I kept scanning everything for several days to watch for new detections.
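Those daily visibility checks map naturally onto VirusTotal’s public v3 file-report endpoint. The sketch below polls that endpoint for each watched hash; it is an illustration rather than Willems’s script, the API key and hash list are placeholders, and the local Ultra AV scans he ran on the test machine happen separately.

```python
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder; requires a VirusTotal account
FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

# SHA-256 hashes of the variants being watched (placeholder value).
hashes_to_watch = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
]

def check_visibility(sha256: str) -> None:
    """Print whether VirusTotal knows a hash and how many engines flag it."""
    resp = requests.get(FILE_REPORT.format(sha256),
                        headers={"x-apikey": API_KEY},
                        timeout=30)
    if resp.status_code == 404:
        print(sha256[:12], "-> not on VirusTotal")
        return
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(sha256[:12],
          f"-> malicious: {stats['malicious']},",
          f"undetected: {stats['undetected']}")

# Run once a day (cron or similar) and log the output alongside the
# local Ultra AV scan results to correlate detections over time.
for h in hashes_to_watch:
    check_visibility(h)
```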
LW: What happened?
Willems: By day four, Ultra AV began flagging three of the five files I had uploaded. Day five, it flagged the other two. None of the non-uploaded files were detected — not from set A or set B.
The only difference was that the flagged files had been uploaded to VirusTotal.
This suggests Ultra AV’s detection engine wasn’t catching those files based on behavior, telemetry, or its own research — but rather reacting to signals from the broader community, specifically VirusTotal visibility.
LW: Ultra AV responded. What’s your read on their statement?
Willems: They didn’t deny it. In fact, they said, “We hope you recognize this is actually a positive for consumers.”
Look, collaborating via VirusTotal is common in security — it’s a useful platform. But if a vendor is mostly repackaging community-contributed intelligence without running its own detections or doing deeper analysis, that’s a different story. Especially if users think they’re buying original protection.
LW: Is this just one vendor’s shortcut — or something bigger?
Willems: Ultra AV stood out because of how suddenly it was pushed to users at scale. But I think this may be a broader issue.
Other vendors could be doing the same thing more quietly. If we want users to actually understand what they’re buying, we need better norms — and more transparency — about how detection is built.
LW: What should buyers take away from this?
Willems: Check whether your AV vendor is doing original research — or just responding to what others post. Look at how open they are about where their detections come from.
And I’d love to see third-party testing labs bake in these kinds of tests, to reveal which products rely too heavily on VirusTotal and which don’t.
LW: Should the industry be more explicit about how detection works?
Willems: Yes. We already have organizations like AMTSO (the Anti-Malware Testing Standards Organization) that promote good testing standards. But there’s room to go further.
Imagine a Software Bill of Materials — an SBOM — for security products. Something that shows what’s built in-house vs. what’s pulled from shared sources. It wouldn’t just build trust; it would help buyers make smarter choices.
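To make the idea concrete, here is one hypothetical shape such an entry could take. No such standard exists today, and every field name and value below is invented purely for illustration.

```python
import json

# A hypothetical "detection SBOM" entry, sketching the kind of
# provenance disclosure Willems suggests. Entirely illustrative.
detection_sbom_entry = {
    "component": "detection-engine",
    "vendor": "ExampleAV",  # fictional vendor
    "sources": [
        {"type": "in-house", "share": 0.40,
         "notes": "internal research, signatures, telemetry"},
        {"type": "community", "share": 0.60, "provider": "VirusTotal",
         "notes": "indicators ingested from shared intelligence"},
    ],
    "last_updated": "2025-09-08",
}

print(json.dumps(detection_sbom_entry, indent=2))
```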
LW: What’s your advice to vendors and CISOs navigating this gray zone?
Willems: Vendors should be honest about how their detections work. And if they’re using community intelligence, that’s fine — just say so. But don’t claim it’s proprietary magic when it isn’t.
For CISOs, it’s hard to tell from the outside who’s doing what. But tests like this — repeated across different products — could help shine a light. My hope is that it sparks more scrutiny and better standards.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(LW provides consulting services to the vendors we cover.)
September 8th, 2025
Original Post URL: https://www.lastwatchdog.com/shared-intel-qa-is-your-antivirus-catching-fresh-threats-or-just-echoing-virustotal/