US Government Agencies Issue Guidance on Detecting and Mitigating Deepfake Threats

Several US government agencies have jointly released a cybersecurity information sheet addressing the emerging threat posed by deepfakes and offering guidance to organizations on detecting and responding to this malicious synthetic media.

Deepfakes, which are fabricated audio, images, and videos generated through artificial intelligence and machine learning, have grown increasingly sophisticated and convincing. While they have been used for purposes such as propaganda and misinformation, the report emphasizes the risks they pose to organizations, including government entities, national security organizations, defense sectors, and critical infrastructure providers.

The agencies highlight that deepfakes can be employed for various malicious objectives, including social engineering tactics that involve fake online profiles, deceptive text and voice messages designed to evade technical defenses, and the dissemination of disinformation through manipulated videos. These activities create significant vulnerabilities for organizations, including executive impersonation, financial fraud, and unauthorized access to internal communications.

One concerning scenario outlined in the report involves cybercriminals using deepfakes to impersonate corporate executives, potentially for purposes like manipulating a company's brand image or influencing stock prices. Furthermore, malicious actors could utilize deepfakes in social engineering attacks, such as business email compromise (BEC) schemes and cryptocurrency scams.

Deepfakes also have the potential to enable impersonation attempts aimed at accessing user accounts and valuable data, such as proprietary information, internal security details, or financial data. To illustrate the practical threat posed by deepfakes, the agencies cited two real-world incidents that occurred in May 2023. In one case, a cybercriminal employed synthetic audio and visual techniques to impersonate a CEO and target a company's product line manager. In the second incident, profit-seeking attackers combined audio, video, and text message deepfakes to pose as an executive, attempting to convince an employee to transfer funds to their accounts.

The report offers a summary of ongoing efforts to detect deepfakes and verify media authenticity, including initiatives by organizations such as DARPA, DeepMedia, Microsoft, Intel, Google, and Adobe. Rick McElroy, Principal Cybersecurity Strategist at VMware, expressed support for the information sheet and acknowledged the challenge it lays before the cybersecurity industry:

“This information sheet brings excellent awareness to a massively growing threat. Everything from common scams and cyberattacks to political influence campaigns will become more effective as a result of this technology. Awareness is needed and it is needed now. I applaud the continued effort to make the public and organizations aware.

While it represents a great move in the right direction for awareness, there is a large challenge that already exists in training employees and citizens about the continually evolving threats in the world. More education and nuanced training will be needed to help individuals understand how to discern these types of attacks themselves. Technology to detect these types of attacks at scale is lagging, and organizations and individuals will need to be the front line of defense against these for the foreseeable future. Historically, cybersecurity has referred to the humans involved as the weakest link. The era of deepfakes represents the perfect opportunity to turn them into the strongest link and the best defense against deepfakes.”

The agencies also present recommendations for implementing technology to detect deepfakes and validate media authenticity. Additionally, they stress the importance of safeguarding the data of individuals who may be targeted, since attackers can create more convincing deepfakes when they possess personal information and unwatermarked media content of their targets. Other recommended measures include developing response plans, sharing experiences with the government, and training personnel to identify deepfakes.
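The information sheet does not prescribe specific tooling for media validation, but one baseline authenticity control many organizations already use is cryptographic hashing: a publisher records a digest of the original media file, and recipients recompute it to confirm the bytes have not been altered. The sketch below is a minimal illustration of that idea using Python's standard library; the function names and sample data are hypothetical, not drawn from the agencies' guidance, and hashing only detects modification of a known original, not AI-generated content created from scratch.

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()


def matches_known_digest(data: bytes, known_digest: str) -> bool:
    """Check media bytes against a digest published by the original source."""
    return sha256_digest(data) == known_digest.lower()


# Hypothetical example: stand-ins for an original video file and a tampered copy.
original = b"official press briefing video bytes"
tampered = b"official press briefing video bytes (edited)"

reference = sha256_digest(original)
print(matches_known_digest(original, reference))  # True: bytes unchanged
print(matches_known_digest(tampered, reference))  # False: bytes were modified
```

Approaches like the content-provenance initiatives the report mentions (for example, Adobe's and Microsoft's work) build on this same principle, binding cryptographic signatures to media at capture or publication time rather than distributing digests separately.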
