
Deepfakes in cyberattacks aren’t coming. They’re already here. 

Image: Cloud blocks forming faces in sky. Credit: Colin Anderson Productions pty ltd/Getty Images



In March, the FBI released a report declaring that malicious actors almost certainly will leverage “synthetic content” for cyber and foreign influence operations in the next 12-18 months. This synthetic content includes deepfakes, audio or video that is either wholly created or altered by artificial intelligence or machine learning to convincingly misrepresent someone as doing or saying something that was not actually done or said.

We’ve all heard the story about the CEO whose voice was imitated convincingly enough to initiate a wire transfer of $243,000. Now, the constant Zoom meetings of the anywhere workforce era have created a wealth of audio and video data that can be fed into a machine learning system to create a compelling duplicate. And attackers have taken note. Deepfake technology has seen a drastic uptick across the dark web, and attacks are certainly taking place.

In my role, I work closely with incident response teams, and earlier this month I spoke with several CISOs of prominent global companies about the rise in deepfake technology they have witnessed. Here are their top concerns.

Dark web tutorials

Recorded Future, a threat intelligence firm, noted that threat actors have turned to the dark web to offer customized services and tutorials that incorporate visual and audio deepfake technologies designed to bypass and defeat security measures. Just as ransomware evolved into ransomware-as-a-service (RaaS) models, deepfakes are following the same path. This intelligence from Recorded Future shows attackers going a step further than the deepfake-fueled influence operations the FBI warned about earlier this year: the new goal is to use synthetic audio and video to evade security controls outright. Threat actors are also using the dark web, along with clearnet sources such as forums and messaging apps, to share tools and best practices for deepfake techniques and technologies aimed at compromising organizations.


Deepfake phishing

I’ve spoken with CISOs whose security teams have observed deepfakes being used in phishing attempts or to compromise business email and communication platforms like Slack and Microsoft Teams. Cybercriminals are taking advantage of the move to a distributed workforce to manipulate employees with a well-timed voicemail that mimics the same speaking cadence as their boss, or a Slack message delivering the same information. Phishing campaigns via email or business communication platforms are the perfect delivery mechanism for deepfakes, because organizations and users implicitly trust them and they operate throughout a given environment.

Bypassing biometrics

The proliferation of deepfake technology also opens a Pandora's box when it comes to identity. Identity is the common variable across networks, endpoints, and applications, and verifying who or what you are authenticating becomes pivotal to an organization's security on its journey to zero trust. But when a technology can imitate identity well enough to fool authentication factors such as biometrics, the risk of compromise grows. In a report from Experian outlining five threats facing businesses this year, synthetic identity fraud, in which cybercriminals use deepfaked faces to dupe biometric verification, was identified as the fastest-growing type of financial crime. This will inevitably create significant challenges for businesses that rely on facial recognition software as part of their identity and access management strategy.

Distortion of digital reality

In today’s world, attackers can manipulate everything. Unfortunately, they are also some of the first adopters of advanced technologies, such as deepfakes. As cybercriminals move beyond using deepfakes purely for influence operations or disinformation, they will begin to use this technology to compromise organizations and gain access to their environment. This should serve as a warning to all CISOs and security professionals that we’re entering a new reality of distrust and distortion at the hands of attackers.

Rick McElroy is principal cybersecurity strategist at VMware.
