NSA, partners issue advice on deepfake threats
The National Security Agency (NSA) and U.S. federal agency partners have issued new advice on a synthetic media threat known as deepfakes, NSA announced September 12. This emerging threat could present a cybersecurity challenge for National Security Systems (NSS), the Department of Defense (DoD), and Defense Industrial Base (DIB) organizations.
The agencies released the joint Cybersecurity Information Sheet (CSI), “Contextualizing Deepfake Threats to Organizations,” to help organizations identify, defend against, and respond to deepfake threats. NSA authored the CSI with contributions from the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA).
The term “deepfake” refers to multimedia that has either been synthetically created or manipulated using some form of machine or deep learning (artificial intelligence) technology. Other terms used to describe media that have been synthetically generated and/or manipulated include Shallow/Cheap Fakes, Generative AI, and Computer Generated Imagery (CGI).
“The tools and techniques for manipulating authentic multimedia are not new, but the ease and scale with which cyber actors are using these techniques are. This creates a new set of challenges to national security,” said Candice Rockell Gerstner, NSA Applied Research Mathematician who specializes in Multimedia Forensics. “Organizations and their employees need to learn to recognize deepfake tradecraft and techniques and have a plan in place to respond and minimize impact if they come under attack.”
The CSI advises organizations to consider implementing a number of technologies to detect deepfakes and determine the provenance of multimedia. These include real-time verification capabilities, passive detection techniques, and protection of high-priority officers and their communications. The guidance also recommends ways to minimize the impact of deepfakes, including information sharing, planning and rehearsing responses to exploitation attempts, and personnel training.
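As a minimal sketch of what one piece of such a verification workflow could look like (not taken from the CSI itself), the example below checks a received media file against a cryptographic digest obtained over a separate, trusted channel before the content is treated as authentic. The file name and the trusted-digest registry are hypothetical placeholders.

```python
# Minimal sketch (not from the CSI): one simple provenance check is to hash
# received media and compare it against a digest shared out of band, e.g.
# from the original publisher or an internal asset registry. The file name
# and digest below are hypothetical placeholders.
import hashlib
import hmac
from pathlib import Path

# Digests of known-authentic media, obtained over a separate trusted channel.
TRUSTED_DIGESTS = {
    "ceo_statement.mp4": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to handle large media."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_trusted_source(path: Path) -> bool:
    """Return True only if the file's digest matches the out-of-band value."""
    expected = TRUSTED_DIGESTS.get(path.name)
    if expected is None:
        return False  # No provenance record: treat as unverified.
    # compare_digest performs a constant-time comparison.
    return hmac.compare_digest(sha256_of(path), expected)


if __name__ == "__main__":
    clip = Path("ceo_statement.mp4")
    if clip.exists():
        print("verified" if matches_trusted_source(clip) else "unverified")
```

A hash comparison like this only confirms that a file is byte-for-byte identical to a known original; it does not detect manipulation of media that was never registered, which is why the CSI pairs provenance measures with passive detection and organizational response planning.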
According to the CSI, synthetic media threats include techniques that threaten an organization’s brand, impersonate leaders and financial officers, and use fraudulent communications to gain access to an organization’s networks, communications, and sensitive information.
Technological advances in computational power and deep learning have made mass production of fake media easier and less expensive. In addition to undermining brands and finances, synthetic media can also cause public unrest through the spread of false information about political, social, military, or economic issues. In 2021, NSA’s The Next Wave journal reported that many deep learning-based algorithms are already available in open-source repositories such as GitHub. These ready-to-use repositories pose a threat to national security in that applying these technologies requires no more than a personal laptop and a minimal amount of technical skill.
The NSA, FBI, and CISA encourage security professionals to implement the strategies in the report to protect their organizations from these evolving threats.
Source: NSA