Criminals Use Deepfake Videos to Interview for Remote Work
Security experts are on the lookout for the next evolution of corporate social engineering: deepfake job interviews. The latest trend offers a glimpse into the future arsenal of criminals who use persuasive fake personas against business users to steal data and commit fraud.
The concern follows a new advisory this week from the FBI’s Internet Crime Complaint Center (IC3), which warned of increased activity by fraudsters trying to game the online interview process for remote positions. The notice says criminals are using a combination of deepfake videos and stolen personal data to misrepresent themselves and land a range of work-from-home jobs, including positions in information technology, computer programming, database maintenance, and other software-related functions.
Federal law enforcement officials said in the notice that they had received a flurry of complaints from businesses.
“In these interviews, the actions and lip movement of the person being interviewed on camera do not completely coordinate with the audio of the person speaking,” the notice said. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
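The mismatch the notice describes can be framed as a simple signal-processing check: when someone is really speaking, mouth movement should track the loudness of the audio. The toy sketch below is purely illustrative and is not drawn from the FBI notice or any named product; the function names, sample values, and the 0.5 threshold are all assumptions. Real detectors use far more sophisticated audio-visual models.

```python
# Toy illustration of the audio-visual sync heuristic: if per-frame mouth
# openness does not correlate with the audio loudness envelope, the clip
# may be synthetic. All names, values, and thresholds are illustrative.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def sync_suspicious(audio_envelope, mouth_openness, threshold=0.5):
    """Flag a clip when lip movement poorly tracks audio loudness."""
    return pearson(audio_envelope, mouth_openness) < threshold

# Well-synced speech: the mouth opens when the audio is loud.
audio = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
mouth = [0.1, 0.7, 0.8, 0.3, 0.6, 0.2]
print(sync_suspicious(audio, mouth))      # prints False

# Desynchronized clip: mouth movement unrelated to the audio.
mouth_bad = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]
print(sync_suspicious(audio, mouth_bad))  # prints True
```

In practice the mouth-openness signal would come from facial landmark tracking and the envelope from short-time audio energy, but the underlying intuition is the same one the notice leans on.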
Complaints also noted that criminals used stolen personally identifiable information (PII) in conjunction with these fake videos to better impersonate candidates, with subsequent background checks unearthing discrepancies between the interviewee and the identity presented in the application.
Potential Reasons for Deepfake Attacks
While the advisory did not specify the motives for these attacks, it did note that the positions these fraudsters applied for were those with some level of company access to sensitive data or systems.
Thus, security experts believe that one of the most obvious purposes of deepfaking a remote interview is to enable a criminal to infiltrate an organization for everything from corporate espionage to common theft.
“Notably, some flagged positions include access to customers’ personal information, financial data, company computer databases and/or proprietary information,” the notice said.
“A fraudster who snags a remote job takes a giant leap forward in stealing the organization’s data crown jewels or locking it down for ransomware,” says Gil Dabah, co-founder and CEO of Piiano. “Now they are an insider threat and much harder to detect.”
Additionally, in the shorter term, deepfake spoofing could be a way for candidates with a “tainted personal profile” to get past security checks, says DJ Sampath, co-founder and CEO of Armorblox.
“These deepfake profiles are set up to circumvent checks and balances to get through the company’s recruiting policy,” he says.
It is possible that, in addition to gaining access to steal information, foreign actors are trying to deepfake their way into US companies to fund other hacking ventures.
“This FBI security warning is one of many that have been flagged by federal agencies in recent months. Recently, the US Treasury, State Department and FBI published an official warning that companies should beware of North Korean IT workers pretending to be independent contractors to infiltrate companies and collect revenue for their country,” says Stuart Wells, CTO of Jumio. “Organizations that unknowingly pay North Korean hackers risk facing legal consequences and violating government sanctions.”
What This Means for CISOs
Many of the deepfake warnings in recent years were primarily about political or social issues. However, this latest development in the use of synthetic media by criminals underscores the growing relevance of deepfake detection in professional environments.
“I think it’s a valid concern,” says Dr. Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside. “Producing a live deepfake video for the duration of a meeting is difficult, and such fakes are relatively easy to detect. However, small businesses may not have the technology to perform this detection and therefore may be fooled by deepfake videos. Deepfakes, images in particular, can be very persuasive and, if combined with personal data, can be used to create workplace fraud.”
Sampath warns that one of the most confusing parts of this attack is the use of stolen PII to aid in identity theft.
“As the prevalence of the DarkNet with compromised credentials continues to grow, we should expect these malicious threats to continue to expand,” he says. “CISOs need to go the extra mile to improve their background check security posture when recruiting. Quite often these processes are outsourced and a stricter procedure is warranted to mitigate these risks.”
Future Concerns About Deepfakes
Previously, the most public examples of corporate criminal use of deepfakes were as a tool to support business email compromise (BEC) attacks. For instance, in 2019 an attacker used deepfake software to impersonate the voice of the CEO of a German company and convince another company executive to send an emergency wire transfer of $243,000 in support of a fabricated business emergency. More dramatically, last fall a criminal used deepfake audio and a forged email to convince an employee of a company in the United Arab Emirates to transfer $35 million to an account held by the attackers, leading the victim to believe the money was supporting a business acquisition.
According to Matthew Canham, CEO of Beyond Layer 7 and faculty member at George Mason University, attackers will increasingly use deepfake technology as a creative tool in their arsenals to make their social engineering attempts more effective.
“Synthetic media like deepfakes are just going to take social engineering to another level,” says Canham, who last year at Black Hat presented research on countermeasures to combat deepfake technology.
The good news is that researchers like Canham and Roy-Chowdhury are making progress in deepfake detection and countermeasures. In May, Roy-Chowdhury’s team developed a framework to detect manipulated facial expressions in deepfake videos with unprecedented levels of accuracy.
He believes that new detection methods like this can be implemented relatively quickly by the cybersecurity community.
“I think they can be operationalized in the short term – one or two years – with collaboration with professional software developers who can take the research to the software product phase,” he says.