AI red-teaming tools helped X-Force break into a major tech manufacturer ‘in 8 hours’ – Source: go.theregister.com

Source: go.theregister.com – Author: Team Register

RSAC An unnamed tech business hired IBM’s X-Force penetration-testing team to break into its network to test its security. With the help of AI automation, Big Blue said it was able to do so within hours.

While he can’t name names, Chris Thompson, global head of X-Force Red, said this particular customer is “the largest manufacturer of a key computer component in the world.” 

Thompson said the senior hacking team scheduled three weeks for the project. "And that's based on going after similar technology companies," he added. "We allocated three resources to it for three weeks."

IBM’s red team (along with everyone else in the world) has been building out its AI capabilities. This includes using generative and predictive AI for penetration testing via a platform the team code-named Vivid, which it used to help with the break-in at the unnamed computer component manufacturer.

“With the automation that we’ve built out, we managed to hack into that company within eight hours,” Thompson told The Register during an interview at the RSA Conference in San Francisco last week.

“Technology’s finally caught up to where we need it to be to solve these really big data analysis problems, because that’s really what red teaming is,” he added. “You have all the data in the world, you have to collect it really quietly, but then you have to go through lines and lines of code and connect the dots.”

While AI tools can “never replace dedicated hackers, truly the most skilled people out there, we can take a load off. There’s a lot of fluff out there around AI. But there’s also a lot of really interesting things that are happening.”
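To give a sense of what that load-lifting can look like in practice, here is a minimal sketch of using a generative model to triage raw reconnaissance findings so a human operator can focus on the most promising leads. It assumes an OpenAI-compatible chat API, and the model name and findings list are hypothetical; this illustrates the general idea only and is not a description of how X-Force's Vivid platform works.

```python
# Minimal sketch: asking an LLM to triage raw reconnaissance findings so a
# human operator can focus on the most promising leads. Assumes the openai
# Python client (>=1.0) with OPENAI_API_KEY set in the environment; the model
# name and findings below are illustrative assumptions, not IBM's Vivid tooling.
from openai import OpenAI

client = OpenAI()

findings = """\
/upload.php accepts multipart POSTs without authentication
TLS certificate on hr.example.com expired 2023-01-14
Directory listing enabled on /backups/ (contains .sql files)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption for the example
    messages=[
        {"role": "system",
         "content": "You are assisting an authorized penetration test. "
                    "Rank the findings by likely exploitability and explain briefly."},
        {"role": "user", "content": findings},
    ],
)
print(response.choices[0].message.content)
```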

In this particular case, the X-Force crew and its AI tooling found a flaw in the manufacturer’s HR portal, exploited it to upload a shell, and then waited to see if they would get caught. They didn’t, so they pushed further, escalated their privileges on the host, and used a rootkit to cover their tracks and avoid detection.

“Then we just sat and waited, mapped up their internal network over time, and eventually got to the design of that key computer component,” Thompson said. 

The team is completing similar jobs for similarly huge technology providers, as well as some of the world’s biggest banks and defense manufacturers, he noted, adding that ultimately AI helps them “put the dots closer together.

“The attack paths that we needed to leverage were actually there day one, it just took us two weeks to put it together because there’s just this fire hose of information and it’s really difficult to know what to focus in on,” he explained. 

“Now that we have more tools for this offensive data analysis problem, it’s just accelerating our work so we can free up our really smart people to solve more interesting challenges instead of just doing that crazy data analysis,” Thompson added. 
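As a rough illustration of the "connect the dots" analysis Thompson describes, this kind of offensive data-analysis work is often framed as a graph problem: hosts, accounts, and weaknesses become nodes and edges, and the question becomes whether a route exists from the initial foothold to the target asset. The nodes, edge labels, and use of networkx below are assumptions for the example rather than X-Force's tooling; the path loosely mirrors the HR-portal-to-component-design chain described above.

```python
# Illustrative sketch only: modelling an environment as a directed graph and
# asking whether an attack path exists from an initial foothold to a target
# asset. Nodes, edge labels, and the networkx library are assumptions for this
# example, not a description of IBM's Vivid platform.
import networkx as nx

g = nx.DiGraph()

# Each edge is a relationship an attacker could traverse, annotated with how.
g.add_edge("internet", "hr-portal", via="public web app")
g.add_edge("hr-portal", "hr-app-server", via="file-upload flaw -> web shell")
g.add_edge("hr-app-server", "domain-user", via="local privilege escalation")
g.add_edge("domain-user", "file-share", via="readable network share")
g.add_edge("file-share", "design-repo", via="credentials found in share")

# The "connect the dots" question: is the target reachable, and by what route?
path = nx.shortest_path(g, "internet", "design-repo")
for src, dst in zip(path, path[1:]):
    print(f"{src} -> {dst}: {g.edges[src, dst]['via']}")
```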

Crims like offensive AI, too

Of course, criminals and government-backed intruders are also seeing how they can use machine-learning tools to make their jobs more efficient, and Thompson said he believes the pace at which this technology is changing and improving is only going to accelerate from here on out. 

He cited an AI security event held during this year’s RSA Conference and attended by officials from US Cyber Command and the NSA. 

“Everyone was in agreement that in two years, the models will be ten times more powerful than they are today,” Thompson said, adding that the discussion during the event centered on “how do we leverage advancements in AI security to better defend us when our adversaries are going to be using that to attack us? It’s a scary thought.”

Currently, nation-state crews are the ones investing in offensive AI tools, likely because they have deeper pockets than their criminal counterparts.

But, Thompson noted, as more open source projects and research are published, these types of penetration tools will become “more accessible to the average hacker,” who may turn around and use them for nefarious purposes without having to make the upfront financial investment.

“On the flip side: There’s a positive spin because a lot of vendors want to invest money into proactively using AI to defend themselves and proactively discover, hold and take action on weaknesses,” he opined.

“I think you will see a big shift, enterprise portfolio-wide, on proactive vulnerability management and things like that to get ahead of it. It’s not all doom and gloom.” ®

Original Post URL: https://go.theregister.com/feed/www.theregister.com/2024/05/13/ai_xforce_red_penetration/

Category & Tags: –

Views: 0

LinkedIn
Twitter
Facebook
WhatsApp
Email

advisor pick´S post

More Latest Published Posts