H.R.5586, the DEEPFAKES Accountability Act, also cited as the “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2023,” was introduced in the House on September 20, 2023. It passed on April 28, 2025, and was sent to President Donald Trump to be signed into law. Aptly nicknamed the “Take It Down Act,” the bill aims to criminalize deepfake pornography, an issue that is becoming increasingly problematic.
Deepfake technology, explained simply, can synthesize a convincing image or video of virtually any person from a given prompt or source footage. It is a form of generative AI, but it requires far more energy, money, and sophistication than an everyday AI tool such as ChatGPT. The underlying face-generation technology emerged from researchers’ struggles to get their AI systems to produce convincing human faces.
Ian J. Goodfellow, then a PhD student, proposed in 2014 the idea of pitting two deep learning models against each other: one network generates images while the other tries to distinguish them from real ones, each improving in response to the other. The results were immediately phenomenal. Unfortunately, Goodfellow and his colleagues’ research was quickly co-opted to create far more sinister content. In late 2017, a Redditor by the name of “Deepfakes” began creating fake porn by feeding real pornographic footage into an AI algorithm along with images of Gal Gadot. The algorithm learned to “swap” Gadot’s face into the film, and Redditors immediately began experimenting with the technology. The rest is history.
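The adversarial setup Goodfellow proposed can be illustrated with a deliberately simplified sketch. The code below is a hypothetical one-dimensional stand-in, not real deepfake software: the “generator” here is a single number (where it centers its fakes) and the “discriminator” a single threshold, but the alternating tug-of-war has the same structure that real generative adversarial networks (GANs) implement with deep neural networks.

```python
# Toy 1-D sketch of adversarial training (an illustrative assumption,
# not an actual GAN): each "player" is a single scalar parameter.

REAL_MEAN = 5.0   # the "real data" the generator tries to imitate
g_center = 0.0    # generator parameter: where it centers its fakes
d_thresh = 1.0    # discriminator parameter: its "real vs. fake" boundary
lr = 0.1          # learning rate for both players

for step in range(200):
    # Discriminator step: move the boundary toward the midpoint
    # between the real data and the generator's current fakes.
    d_thresh += lr * ((REAL_MEAN + g_center) / 2 - d_thresh)
    # Generator step: move its fakes toward whatever currently
    # sits on the "real" side of the discriminator's boundary.
    g_center += lr * (d_thresh - g_center)

# After enough rounds, the generator's output converges onto the
# real data, and the discriminator can no longer separate the two.
```

In a real GAN both players are neural networks updated by gradient descent on opposing losses, but the convergence dynamic is the same: the forger improves until the detective cannot tell the difference.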
Deepfake production received relatively little public attention until Jordan Peele’s Obama stunt, in which deepfake technology was used to create a fake video of former President Barack Obama that was then aired on BuzzFeed.1 This is perhaps why, up until 2019, only state laws prohibited the distribution of non-consensual deepfakes, specifically in Virginia and California.2
The first federal bill to criminalize the creation and distribution of certain deepfakes was introduced by Senator Ben Sasse in late 2018. It would have made it a federal felony to create or knowingly distribute a deepfake intended to facilitate criminal conduct.
The bill ended up dying in Congress, likely due to its limitations, but was followed by another effort. On December 20, 2019, President Trump signed into law the National Defense Authorization Act for Fiscal Year 2020 (NDAA), a $738 billion defense policy bill.3 While the NDAA had nothing to do with revenge porn or pornography, it did address the problem of political interference by deepfake tech.4
Consistent with prior law’s clear consternation over the potential of deepfake tech to sway elections or otherwise incite unrest through manufactured malfeasance, the 2025 law continues the mission to heavily monitor political deepfakes. It outlines penalties for any person who “knowingly alters an advanced technological false personation record . . . with the intent to distribute such altered record . . . with the intent to cause violence or physical harm, incite armed or diplomatic conflict, or interfere in an official proceeding, including an election, provided the advanced technological false personation record did in fact pose a credible threat of instigation or advancing such.”
Nevertheless, political deepfakes are actually relatively rare. Recent reports estimate that 98% of online deepfakes are sexual in nature.5 The fake celebrity porn that apparently originated with “Deepfakes” was only the beginning of a major problem. Celebrities and online personalities such as Gal Gadot and Valkyrae, among many others, have had to deal with pornographic deepfakes of themselves being made and distributed despite their attempts to block such content, and unfortunately, the majority of victims are women or female-presenting. As Scarlett Johansson explained in a 2018 interview with The Washington Post: “I have sadly been down this road many, many times. Every country has its own legalese regarding the right to your own image. So while you may be able to take down sites in the U.S. that are using your face, the same rules might not apply in Germany.”6 Even “unknown” figures have had their faces and bodies fed to such systems without their consent or knowledge, marking them indelibly with a questionable online footprint. All a deepfake creator needs, after all, is access to enough human content, a generator, and intent.
The underlying technology, however, is neither cheap nor easy to produce.7 While the average person can hop online and find dozens of deepfake generators to use for free, as MP Laura McClure recently demonstrated to New Zealand’s Parliament by producing her own deepfake nudes before a stunned house, the technology itself is complicated, largely anonymous, and consumes enormous amounts of energy.8 This means the creators of deepfake tech are often difficult, if not impossible, to locate, because they are frequently involved in other cybercrime and consequently linked to a large network of individuals who also do not want to be found. As a 2019 report explained, bringing some perpetrators of cybercrime to justice involves many moving parts and the cooperation of numerous law enforcement agencies, each of which must have the capacity and capability to contribute to a multi-agency, transnational investigation.9 This is complicated by the fact that some countries where cybercrime originates lack key procedures and capabilities to keep up with it, despite considerable efforts.
Understandably, then, enforcement priority is low for individuals seeking respite from the harms of deepfake revenge porn; thankfully, the act accounts for that. H.R.5586 establishes an “in rem” action, which makes it harder for a deepfake to escape being taken down: an in rem action targets the deepfake itself and is available when a defendant cannot be located or the courts are unable to obtain in personam jurisdiction. Even so, the act follows in the wake of cases such as New York Times Co. v. Sullivan: it requires that a deepfake creator knew what they were doing in order to be prosecuted, something a good defense lawyer could use to their advantage.
There are potential issues with the new law, however. First, it states that the Attorney General shall designate a single coordinator in each U.S. attorney’s office to handle claims, prosecution, and reporting to Congress. Critics might suggest that one person per office is simply not enough to handle all possible claims. Additionally, H.R.5586 does not specify any additional training for local or federal law enforcement to combat deepfake production, which is perhaps a flaw. Sec. 7, Detection of Deepfakes, does state that a deepfakes task force is to be established to combat the national security implications of deepfakes, but the bill so far does not provide any additional funding for the improved investigation techniques or technology that such an endeavor would require. Surely, a bill of this kind cannot be a productive measure against deepfake tyranny when the judicial system is already so far behind in terms of technology.
There are also issues inherent to any law that purports to regulate forms of political commentary. Specifically, the law would be subject to First Amendment scrutiny, meaning the government would have to show that its application is narrowly tailored to the problem of deepfake interference and reaches nothing beyond the scope of deepfake production. The law cannot, in other words, be used to obstruct free speech. The new law addresses this problem by focusing on the presentation of a deepfake. It requires (1) at least one clear verbal statement disclosing that there are altered components, along with a description of the extent of the alteration; (2) a legible written statement at the bottom of the image, displayed throughout its duration, disclosing that the record is altered and concisely describing the extent of the alteration; and (3) a link, icon, or similar tool showing that the content has been altered by, or is a product of, generative AI or similar technology. These requirements alone already appear to put Google’s Veo 3 advertisements in jeopardy of litigation.
Veo 3’s audiovisual examples, however, are not deepfakes of real people. They are deepfakes of fake people having conversations and doing people-like things. The problem with the law, consequently, is that it does not stop businesses or individuals from using various kinds of data, perhaps questionably obtained, to create fake instances of non-real entities. The technology still exists and may be used as long as it does not depict real people or create content with the intent to sway perception of, or perhaps defame, any one person. As currently written, the law suggests that deepfakes will have to be so heavily stamped with the required warnings that it will effectively tank projects of this kind unless there is some sort of loophole; Google, unlike an anonymous content creator, cannot hide from the government.
A final issue inherent to criminalizing deepfakes is the potential for bad actors in government to claim that real audiovisual content is fake. With the rise of social media, people have grown less inclined to believe everything they see on the internet. The result, of course, is that it becomes easier to convince people that genuine news is fake. Thus misinformation, at its height in the digital age, remains a threat to democracy. Hypothetically, if a defendant were unable to prove that footage of an event is real, the ramifications could be extreme.
1 Jordan Peele’s Obama PSA is a double-edged warning against fake news | Vox
2 The Problem of Deepfake Pornography — Harvard Undergraduate Law Review
3 First Federal Legislation on Deepfakes Signed Into Law
4 The Problem of Deepfake Pornography — Harvard Undergraduate Law Review
5 Characterizing the MrDeepFakes Sexual Deepfakes Marketplace
6 Schick, N. (2020). Deepfakes: The Coming Infocalypse. Hachette Book Group.
7 ‘This Person Does Not Exist’ Creator Reveals His Site’s Creepy Origin Story
8 Act MP: Why I held up ‘my’ nude photo in Parliament – NZ Herald
9 Countering the Cyber Enforcement Gap: Strengthening Global Capacity on Cybercrime.