Artificial intelligence (AI) has changed the way organizations identify, respond to and recover from cyberattacks. Concurrently, bad actors are weaponizing AI as both an attack vector and attack surface, adding to the growing list of digital vulnerabilities and blind spots in the insider risk space. In 2019, a reported 14,000 deepfake videos were found online, a 100% increase over those detected just one year prior.

One of the most prominent AI-enabled threats exploited by bad actors today is the deepfake. To put it simply, a deepfake is a type of AI-generated media that depicts a person saying or doing something they did not say or do. In the growing digital world, media (e.g., video, images, audio) are used to inform decision-making. The intention behind deepfake synthetic media is to deceive viewers, listeners and technology systems.

Business email compromise’s Gen-Z sibling

While many security leaders are aware of business email compromise (BEC) attacks, the weaponization of synthetic media like deepfakes threatens both the public and private sector through BEC’s Gen-Z sibling, business identity compromise (BIC) attacks. In a BIC attack, bad actors create fictitious synthetic personas, or personas impersonating an existing employee, and strategically deploy them via one of the many forms of synthetic media to cause maximum damage to their target.

  • Deepfake video: Deepfake videos are created using AI, machine learning and face-swapping software. These computer-generated videos combine images into new footage that depicts people, statements and/or events that never actually happened. Such videos can have wide-reaching effects: former President Donald Trump shared a video deepfake of House Speaker Nancy Pelosi on his Twitter account in which Speaker Pelosi appears to be impaired and possibly under the influence of a substance. The video picked up 2.5 million views on Facebook alone.
  • Deepfake audio: Audio deepfakes are AI systems that produce hyper-realistic but synthetically generated speech. In the first known audio deepfake attack, criminals used AI-generated speech to impersonate a chief executive’s voice and trick the CEO of a U.K.-based energy firm into transferring €220,000.
  • Textual deepfake: The early days of AI and natural language processing (NLP) painted a challenging picture for a future where machines could write like a human being. Fast-forward to 2022: language models have grown far more capable over the years, and machines can now generate text-based communications that mirror those of a human being. Former OpenAI Policy Director Jack Clark cautioned the U.S. House Permanent Select Committee on Intelligence in 2019 that textual deepfakes significantly aid the production of “fake news,” misinformation and disinformation, as well as impersonation through fictitious online personas that spread propaganda.
  • Deepfakes on social media: Synthetic media can be generated in a variety of ways. The most popular technique deployed on social media is the profile image developed via generative adversarial networks (GANs). Social media user “Katie Jones” appeared to be well connected in the Washington D.C. political scene, linked to everyone from an economist to a Deputy Assistant Secretary of State and a Senior Congressional Aide. There are two glaring issues with “Katie Jones”: first, she isn’t real, and second, the real person operating the account was determined to be a state-sponsored actor targeting the U.S. The completely synthetic image serving as the “face” of “Katie Jones” was generated by a GAN.
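
The “Katie Jones” profile picture illustrates the core GAN technique: a generator network learns to produce images while a discriminator network learns to tell them apart from real photographs, and the two improve by competing. The PyTorch sketch below is a minimal, hypothetical illustration of that adversarial loop; the layer sizes, image resolution and the random stand-in for “real” training photos are illustrative assumptions, not a production face generator.

```python
# Minimal GAN training-loop sketch (PyTorch assumed installed).
# All sizes are placeholders; a real face generator trains on a large
# photo dataset at far higher resolution.
import torch
import torch.nn as nn

LATENT_DIM = 100       # random noise vector fed to the generator
IMG_DIM = 64 * 64      # flattened grayscale image (placeholder resolution)
BATCH = 32

generator = nn.Sequential(      # noise -> synthetic image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(  # image -> probability it is real
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1_000):
    # Stand-in for a batch of real photos, scaled to match the Tanh range.
    real = torch.rand(BATCH, IMG_DIM) * 2 - 1
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Discriminator step: learn to label real images 1 and generated images 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real.
    # As the two networks compete, generated images become progressively
    # harder to distinguish from authentic photographs.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    opt_g.step()
```

At scale, this same dynamic, run with much larger networks and millions of face photos, yields the photorealistic, fully synthetic portraits seen behind accounts like “Katie Jones.”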

Not “just a prank”

Individuals and companies have already fallen victim to synthetic media-based attacks. These attacks are far more serious than the pranks deepfakes are typically associated with, like the Tom Cruise deepfakes that went viral on TikTok.

In 2018, actor and director Jordan Peele created and shared a deepfake video starring a synthetic former President Barack Obama. The convincing video makes one wonder: what happens if a deepfake of a U.S. president is released declaring war, or expressing sentiments that incite violence during times of political and/or civil unrest? Deepfakes have the power to change the course of businesses, financial markets, politics and entire countries. Ukraine’s President Volodymyr Zelenskyy was a recent target of this new generation of cyberattack: a deepfake of Zelenskyy surfaced online ordering Ukrainian troops to surrender to Russia.

A call for vigilance

The Federal Bureau of Investigation (FBI) recently drew much-needed attention to this emerging digital threat, issuing an urgent warning that bad actors are actively weaponizing deepfakes to obtain remote jobs. The agency also warned that hackers are actively deploying deepfakes, along with other AI-generated content, in furtherance of foreign influence operations.

As science and technology leaders continue to develop defenses that detect deepfakes, employees and security teams must remain hyper-observant and hyper-vigilant about the digital media put before them each day.

  1. Trust your gut. If something seems “off,” it probably is. Key indicators of a deepfake include the following (several of these cues can also be screened automatically; see the sketch after this list):
    1. Image blurring
    2. Skin tone changes
    3. Duplicative features
    4. Lower-quality sections of content
    5. Box-like shapes around facial features
    6. Unnatural movements
    7. Changes in background and/or lighting
    8. Choppy sentences
    9. Varying inflection in speech
    10. Lack of conversational flow
  2. Don’t skip identity verification. While verifying one’s identity in person is ideal, it may not always be possible. Exercise due diligence in verifying that someone is who they claim to be, and don’t be afraid to ask questions. When hiring, follow through on background checks and any other identity verification steps your company has in place before granting access.
  3. Awareness is key. The deepfake threat is still a mystery to many. Organizations and individual employees alike are far less likely to fall victim to a deepfake when everyone is informed of the threat.
  4. Adopt a zero-trust approach online. No person, piece of content, device or software is deemed trustworthy without continuous verification.
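
Several of the visual cues in step 1, notably image blurring and lower-quality sections of content, lend themselves to simple automated screening. The sketch below is a minimal, hypothetical heuristic built on OpenCV’s variance-of-the-Laplacian sharpness measure; the threshold, the crude center crop standing in for a face region and the file name are illustrative assumptions, and a flagged frame should prompt human review, not serve as a verdict.

```python
# Minimal blur-artifact screen for video frames (OpenCV assumed installed).
import cv2

BLUR_THRESHOLD = 100.0  # illustrative cutoff; tune against known-authentic footage

def sharpness(gray) -> float:
    """Variance of the Laplacian: low values indicate blur or heavy smoothing."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def frame_is_suspicious(frame) -> bool:
    """Flag frames whose center region is much softer than the frame overall.

    Localized softness around a face is a common artifact of face-swapped video.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    center = gray[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]  # crude face-region stand-in
    return sharpness(gray) > BLUR_THRESHOLD and sharpness(center) < BLUR_THRESHOLD

# Usage: scan a clip and count frames worth a closer human look.
cap = cv2.VideoCapture("incoming_video.mp4")  # hypothetical file name
hits = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hits += int(frame_is_suspicious(frame))
cap.release()
print(f"{hits} frame(s) flagged for manual review")
```

Purpose-built detectors go much further (face landmark tracking, frequency-domain artifacts, blink-rate analysis), but even a crude screen like this reinforces the habit the list encourages: treat anomalous media as a signal worth investigating.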

By implementing this calibrated and interconnected defense against deepfakes, both individuals and organizations will be well positioned to combat the threat. The public and private sectors alike must stay vigilant to the pernicious impacts of deepfake attacks, which can lead to deeply damaging outcomes for individuals and organizations.

The views and opinions expressed are that of the author and not those of the FBI or any other U.S. government agency.