
I’m a security expert, and I almost fell for a North Korea-style deepfake job applicant … twice


Source: go.theregister.com – Author: Jessica Lyons

Twice, over the past two months, Dawid Moczadło has interviewed purported job seekers only to discover that these “software developers” were scammers using AI-based tools — likely to get hired at a security company also using artificial intelligence, and then steal source code or other sensitive IP.

Moczadło is a security engineer who co-founded Vidoc Security Lab, a San Francisco-based vulnerability management company, in 2021. 

“If they almost fooled me, a cybersecurity expert, they definitely fooled some people,” Moczadło told The Register.

The startup is hiring employees to build out a product that, according to Moczadło, uses machine learning to find and fix vulnerable code written by Microsoft Copilot, ChatGPT, and human developers. 

So it was a strange, Upside-Down-world experience when, in December, a job applicant made it through the first few rounds of interviews before moving on to a video call with Moczadło, during which, the co-founder says, it became very obvious that the interviewee was using software to change his appearance in real time.

“We spent and lost more than five hours on him,” Moczadło said. “And the surprising thing was, he was actually good. I kind of wanted to hire him because his responses were good; he was able to answer all of our questions.”

There were some red flags. Vidoc Security Lab was looking to hire developers in Poland, and as such had posted the ad on a Polish website. The applicant claimed to live in that country, and had a Polish name — but also had a strong Asian accent on phone calls with Moczadło and his co-founder, we’re told. 

“But I gave him the benefit of the doubt,” Moczadło said.

As soon as he turned on his camera, I instantly knew

Until the video interview, that is. “We noticed it after the third or fourth step of our interview process,” Moczadło recalled. “His camera was glitchy, you could see a person, but the person wasn’t moving like a person. We spoke internally about him, and we thought, OK, this person is not real.”

The applicant was rejected. Two months later, it happened again.

This second fake IT job candidate reached out to Moczadło and his colleagues via LinkedIn. According to the employment hopeful’s phony profile, which has since been removed, and his résumé, which Moczadło shared with The Register, a person we’ll just refer to as Bratislav claimed to be a software engineer from Serbia looking for a remote job. 

Bratislav had about 500 connections on the Microsoft-owned social network, nine years of experience, and a computer science degree from the University of Kragujevac, all of which seemed legit to the Vidoc Security Lab team.  

“His experience was decent, his surname was Slavic, his CV said he lived in Serbia and had a university degree from Serbia, but also he had a really strong Asian accent,” Moczadło said.

‘All of his answers were from ChatGPT’

During Bratislav’s first round of interviews, he told Vidoc Security Lab that his camera wasn’t working. Then on February 4, after rescheduling once with Moczadło, he agreed to an on-camera interview. “When he joined the meeting, as soon as he turned on his camera, I instantly knew,” Moczadło said.

Plus, the job seeker’s answers to interview questions seemed to be straight out of OpenAI’s ChatGPT, the co-founder added. The interviewee’s answers always had a lag time to them, and while they were “spot on,” they weren’t conversational but rather spoken in bullet points.

“ChatGPT has this style of answering in bullet points all the time, and he was answering in bullet points as well, like he was reading everything from ChatGPT,” Moczadło said.

“And it was super hilarious for me,” because for a second time he was interviewing an AI-generated face, Moczadło remembers. “So I thought, OK, this time I will record it, because so many people didn’t believe me before that we got candidates like this.”

Moczadło later posted the video on LinkedIn with the job seeker’s voice muted, and wrote: “WTF, developer used AI to alter his appearance during a technical interview with me. Yes, this is a real recording, it happened today.”

To be clear: While The Register has not had a chance to perform deep forensic analysis of the video, it does appear the person’s head doesn’t quite match up with his neck and the face image glitches more than the neck and torso. 

Moczadło also repeatedly asks the interviewee to wave his hand in front of his face — a common check for AI-generated faces, because the occlusion disrupts the face-swap model, making the image glitch as the software lags while trying to composite a real hand over a deepfake face.

The interviewee refuses to do this, and Moczadło ends the call.
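The lag Moczadło describes shows up on screen as abrupt frame-to-frame jumps in the face region while the rest of the image stays stable. As a purely hypothetical illustration (not Vidoc Security Lab's actual detection method, and deliberately simplified), a crude heuristic might flag frames whose pixel-level change spikes far above the video's running baseline:

```python
# Hypothetical, simplified heuristic: flag frames whose change vs. the
# previous frame far exceeds the average change seen so far -- the kind
# of discontinuity a lagging face-swap model produces when a real hand
# passes in front of a deepfaked face.
# Frames are modeled as flat lists of grayscale pixel values (0-255).

def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def find_glitch_frames(frames, spike_factor=5.0, min_baseline=1.0):
    """Return indices of frames whose difference from the previous frame
    exceeds spike_factor times the average difference observed so far."""
    glitches = []
    diffs = []
    for i in range(1, len(frames)):
        d = mean_abs_diff(frames[i - 1], frames[i])
        if diffs:
            baseline = max(sum(diffs) / len(diffs), min_baseline)
            if d > spike_factor * baseline:
                glitches.append(i)
        diffs.append(d)
    return glitches

# Synthetic demo: a steady video with one abrupt "glitch" frame.
steady = [[100] * 16 for _ in range(10)]
steady[6] = [200] * 16  # sudden jump, as if the model lagged for a frame
print(find_glitch_frames(steady))  # -> [6, 7] (entering and leaving the glitch)
```

Both the onset and the recovery of the glitch register as spikes, which is why two indices are flagged. A real detector would of course operate on decoded video frames (e.g. via OpenCV) and restrict the comparison to a tracked face bounding box, but the core signal — occlusion-triggered discontinuity — is the same one the hand-wave test exploits.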

IT worker scam nets Norks $88m

Moczadło suspects that both of the fake job candidates were part of a larger bogus IT worker scam, along the lines of those favored by North Korean techies that have netted Pyongyang at least $88 million over six years, according to the US Justice Department. What usually happens is that someone in or working for North Korea pretends to be a legit Western technology worker to get a remote job.

Once the fake IT workers obtain these positions in the US and elsewhere, they not only funnel their wages into Kim Jong Un’s coffers; some also use their access to steal sensitive info and even blackmail their employers, threatening to expose corporate assets if an extortion demand isn’t paid.

The Feds have repeatedly claimed these ill-gotten gains contribute to the DPRK’s illegal weapons programs.

Plus, US law enforcement and cybersecurity agencies have been warning companies for years that deepfakes pose a growing threat to corporate IP and bank accounts, as well as companies’ brand reputation.

I won’t be able to decide if the person I’m talking with is a real person or not

“Multiple” infosec researchers have reached out to Moczadło, we’re told; he said he has shared videos, screenshots, and other details with them to help attribute the activity to a particular criminal group or nation state.

“I feel kind of scared about the future,” he said. “Right now the software that the person used wasn’t that great. I was able to spot all of the artifacts and all of the glitches.

“But I’m scared that in a year, as AI advances, I won’t be able to decide if the person I’m talking with is a real person or not.” ®

Original Post URL: https://go.theregister.com/feed/www.theregister.com/2025/02/11/it_worker_scam/
