Source: www.schneier.com – Author: Bruce Schneier
Comments
Clive Robinson • March 4, 2025 8:29 AM
Sad yes, but it was going to happen to someone, as the industry has made a “Target Rich Environment” that includes any and all ICT Systems that connect to external communications like the Internet.
Thus somebody’s number was going to come up, and it was this guy’s, and he’s become in effect a modern day outcast/leper.
Welcome to the modern world where you are to blame for the actions of others, because you are doubly an “easy target”. It more commonly goes by the name of “victim blaming”.
Oh and if you follow the “usual advice” it’s really not going to change anything with respect to your vulnerability.
So what can you do?
First realise that you are in a “Red Queen’s Race” where no matter how hard you try you are eventually going to lose.
Thus as expressed back in the 1983 movie WarGames, the only way to draw or win is,
“A strange game. The only winning move is not to play.”
So think on that carefully then reread the article.
Two things you should see and note,
1, Most security products are too fragile to work reliably.
2, Lack of mitigation by segregation etc allowed the attacker free rein.
But let’s be a bit more blunt,
“Adding junk software won’t noticeably strengthen a badly designed system, in fact it will probably make it break more easily”
Which is the history of most consumer and commercial security products. With even higher security products for “Government Agencies/Entities” failing on a regular basis.
In part because,
“All the consumer and commercial systems are broken by design”.
And it gets worse because,
“Most ‘security tool’ software/Apps and devices since the early AV days, back before the 1990s Internet kicked off, were ‘junk’ and still are”.
Because there was and still is little or no incentive to make them otherwise. In fact it’s easy to find reasons why they are kept at junk status by considered design.
You get told you “have to have” AV / FireWall etc etc etc. So you have to “buy it” or as you are told “be at risk”. What they don’t tell you is buying it usually does not really change your “risk profile” except adversely.
So you are in effect a “captured market” that is seen as “something to milk dry” by the producers who have no incentive to do a proper job as that would “kill the profit”.
Have a look at Alphabet/Google and the Android and the Chrome Browser products. They very deliberately stop you having any type of effective security of worth, because they make most of their revenue by selling you as a product… Because you would not be a profitable product if you had effective security.
Some do try, which is why Alphabet/Google have not just forced onto your devices identifiers you cannot change or stop being broadcast; they are yet again changing things to stop effective security products from working,
You can read more on this at,
https://www.theregister.com/2025/03/04/google_android/
So when you actually get down to it you realise the only way to improve your security is by,
“Using effective segregation mitigations”.
Anything else is just not going to work for you, long term, short term, or now…
The only things stopping you getting completely violated are,
1, Your turn has not yet come up.
2, When it does, and it will, you have ensured there is nothing to steal or ransom.
3, Anything of importance is not connected by communications thus cannot be reached by external attackers.
Which unfortunately leaves another issue,
4, Employers acting as inside attackers.
Yup due to lockdown employers forced many employees to install irremovable junk on the employees’ personal devices as an extension of the ludicrously insecure “Bring Your Own Device”(BYOD) nonsense.
Bob • March 4, 2025 9:59 AM
@Clive
Our policies forbid downloading and running random software. The sooner people get it through their heads that their work computers aren’t their personal computers, the better. This isn’t exactly a drive-by attack.
Clive Robinson • March 4, 2025 11:22 AM
@ Bob,
With regards,
“The sooner people get it through their heads that their work computers aren’t their personal computers, the better.”
The same applies the other way which is why I indicated BYOD device is such a very bad idea.
The important paragraph in the article to note is,
“During the pandemic, companies quickly made sure workers could access systems from home—and hackers soon realized home computers had become corporate back doors.”
Thus the employer became an “insider attacker” to the “employees’ personal home computers” in so many cases, as it was “the cheap option” for the employer…
So looking a little deeper, according to the article, the victim’s troubles started,
“when he downloaded free software from popular code-sharing site GitHub while trying out some new artificial intelligence technology on his home computer.”
“… the AI assistant was actually malware that gave the hacker behind it access to his [home] computer, and his entire digital life.”
With the issue being that the victim had not used a “Password Manager” correctly as further indicated,
“The hacker gained access to 1Password, a password-manager that Van Andel used to store passwords and other sensitive information, as well as “session cookies,” digital files stored on his computer that allowed him to access online resources including Disney’s Slack channel.”
The article is vague but it appears there were at least two computers, the victim’s own computer and one issued by the employer. In theory if very good “OpSec” was followed then the two computers should have been the equivalent of “air-gapped”.
However I’ve yet to meet anyone who practices “very good OpSec” and most even half brain dead hackers not just know this but exploit it…
And the article claimed the mistake the victim made was not using 2FA on the password manager,
“the 1Password account—wasn’t itself protected by a second factor. It required just a username and password by default, and he hadn’t taken the extra step of turning on two-factor authentication.”
Lack of 2FA or better is a problem I all too often see with many user accounts and other things such as “security devices, software, and applications”. Hence my dislike of human-memorable passwords/phrases and most password managers (preferring our host’s old “folded paper in your wallet” option as it’s more secure against outsider attacks).
But… Likewise inadequate authentication, of which many 2FA systems are guilty. Hence my skepticism with regard to much of what is claimed to be 2FA.
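To make concrete what a second factor on a vault login adds, here is a minimal RFC 6238 TOTP sketch in standard-library Python. The base32 secret below is the RFC test value, purely illustrative; this is not a claim about how 1Password implements its own 2FA:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive a time-based one-time code (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # -> 287082
```

The point for the attack described above: the code changes every 30 seconds and is derived from a secret the keylogger never captures, so a keylogged master password alone would not have opened the vault once the short window passed.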
The simple fact as I noted on this blog way more than a decade ago,
“You have to authenticate the transaction not the channel.”
The reason the channel all too often gets authenticated and not individual transactions is,
“User convenience”.
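A minimal sketch of the distinction (the field names and key handling are my own illustration, not any real banking or password-manager API): channel authentication means one login guards the whole session, whereas transaction authentication attaches a MAC to each individual operation, ideally computed with a key the possibly-compromised endpoint does not hold:

```python
import hashlib
import hmac
import json

def sign_transaction(key, txn):
    """MAC over a canonical encoding of the transaction fields themselves,
    so changing any field (payee, amount, ...) invalidates the tag."""
    canonical = json.dumps(txn, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_transaction(key, txn, tag):
    return hmac.compare_digest(sign_transaction(key, txn), tag)

key = b"key held on a separate signing device"
txn = {"payee": "alice", "amount": 100}
tag = sign_transaction(key, txn)
print(verify_transaction(key, txn, tag))                                  # True
print(verify_transaction(key, {"payee": "mallory", "amount": 100}, tag))  # False
```

With channel authentication a stolen session cookie lets the attacker submit anything; with per-transaction tags the attacker must also hold the signing key, which is exactly why it is less “convenient”.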
One thing is clear (if what is reported is accurate): the use of a password manager across multiple computers / devices was the “weak link” that allowed access to just about everything the victim had access to.
Arguably the password manager was the security “air gap crossing” component and its use was an OpSec failure (and such happens more often than most would think). Hence my comment about “mitigation by segregation”.
lurker • March 4, 2025 12:48 PM
@Bruce
“Sad story” yes. And MSM are having a field day mangling the facts. Apart from the victim’s poor opsec, there is a question: should he have gone to LEA first, as he did, or should he have gone to his work security team? A snag with the second option is it appears the intruders may have uploaded to his work machine some NSFW material, which could be harder to deal with than simple theft of credit card details …
Bob • March 4, 2025 1:48 PM
BYOD is dumb AF. Typically the brain child of a bean counter and an executive who can’t be trusted with an RJ45 connector. The cost savings are great until the 5th or 6th breach that results.
Anyone promoting BYOD should be retroactively fired.
MrC • March 4, 2025 7:50 PM
The article isn’t clear about whether the gen-AI nature of the software mattered. Was the malicious nature of this software somehow obscured inside its gen-AI functionality, or is this just the same plain, old “malware posing as useful software” attack that’s been around for decades?
ResearcherZero • March 4, 2025 11:03 PM
Maintaining work systems is bad enough without people monkeying around with them or polluting the environment with whatever that thing is they brought into the office.
There are those that do not care about reality and will happily pretend it does not exist.
This is the sad thing about reality, as much as I would like it to bend to my wishes, it rarely does so. Reality still requires regular maintenance, fact checking and re-analysis.
Others could not give a hoot about rules and the laws of physics and will attempt to defy them, no matter what the advice is, and they will keep on trying contrary to the danger.
Clive Robinson • March 5, 2025 2:53 AM
@ MrC
With regards the actual attack the article says,
‘Once someone has a keylogging Trojan program on his or her computer, “an attacker has nearly unrestricted access,” a 1Password spokesman said.’
So you are right the article is not clear, as it is not in quite a few other respects, but I’m guessing it’s your latter option, that it
“is just the same plain, old “malware posing as useful software” attack that’s been around for decades?”
Or a case of “new bottle, same old wine”.
My reasoning is two fold,
Firstly it’s the sort of thing certain types of attacker are well practiced in, and there is,
“A degree of truth in the old saying about dogs and vomit”.
(It’s how most “attributions” have been made in the past.)
And secondly in the article it says of the victim,
“His antivirus software hadn’t turned up anything on his PC, but he installed a second antivirus program that found the malware almost immediately.”
As far as I’m aware nobody has produced “antivirus software” that can look inside the weights of a current AI LLM’s DNN and determine it has built in malicious values.
Likewise I’ve not seen any “claims” that “antivirus software” can act as a set of “client side guide rails” that can detect “malicious communications to any current AI LLM” that a user has not produced themselves.
Whilst a lot has been said about getting “current AI LLM’s” to produce hallucinated output in source code, and some malicious code, the simple fact is that the current consensus is that as “current AI LLM’s” are just glorified “auto-complete systems”, malicious code would have to have been in the LLM “input corpus or user prompts”.
Whilst I would not rule out a possible weird combination of hallucination and risky code/prompt, I do think the probability of it is rather low, and more importantly, if someone had found a way to do it reliably they would have published a paper that the community choir would be loudly “singing about”.
Clive Robinson • March 5, 2025 3:52 AM
@ ResearcherZero, ALL,
With regards,
“Maintaining work systems is bad enough without people monkeying around with them or polluting the environment with whatever that thing is they brought into the office.”
Whilst that is true enough, I suspect the issue here was a little more subtle than most, hence my mentioning of “segregation” crossing. Which is something I looked into and talked about on this blog some time before Stuxnet came along [1].
If the WSJ article is to be believed then the attack timeline was,
1, The victim used a password manager for all his passwords.
2, He did not use 2FA on the password manager, thus it was vulnerable to “key logging”.
3, On his own computer –not his work computer– he downloaded an AI interface program that had a key logger built in.
4, The malicious attacker used the key logger to get the username and password for the password manager.
5, The malicious attacker used these to access the victim’s password manager.
6, The malicious attacker in effect downloaded all the victim’s usernames and passwords and corresponding server URLs.
7, The malicious attacker then impersonated the victim.
8, In order to be able to extort the victim the malicious attacker used a number of techniques to put pressure on the victim (some of which are mentioned indirectly in the article).
9, When the extortion failed the malicious attacker published information they had obtained by impersonating the victim.
What is less clear is how the malicious actor accessed the computers it’s said they did.
However it is also unclear what version of the password manager was in use. That is, whether it was “local” as was once available, or entirely cloud-vault subscription based, as is apparently now the only option (something the company received user complaints over),
https://en.m.wikipedia.org/wiki/1Password
The point is that the malicious attacker gained “access” to the “home computer” then, using the victim’s password manager, pivoted their attack across the segregation / gap to gain access to the victim’s work servers etc.
However how the victim’s work computer was attacked is not at all indicated in the article.
[1] Quite some years ago now, I did some independent research on how to “get malware on to voting machines that are air-gapped” as a Proof of Concept (to have the whole very bad idea of “electronic voting machines” dropped). I found several ways to “cross the air-gap” and reported one or two indirectly on this blog.
The main one I talked about was “fire and forget” software via a free-download game or similar that would infect a careless maintenance techs computer used for voting machine diagnostic / patch / update. That should always be kept isolated but in the case of many voting machine companies was effectively not.
Thinking about how to do such PoCs is an interesting “research project” in its own right, and has the advantage you only have to think about how to get it to work in one direction, not both directions as you have to do when trying to illicitly acquire protected data.
Original Post URL: https://www.schneier.com/blog/archives/2025/03/trojaned-ai-tool-leads-to-disney-hack.html