The Take It Down Act: Constitutional or Corrosive?
By Olivia Marcoccia '28
"Deepfakes" are videos, photos, or audio that are generated with AI. The face-swapping technology that Deepfakes are known for has resulted in the creation of many explicit videos where the face of the original pornactor has been replaced with someone else's, usually without consent. The most famous example of this is when a pornographic video of "Taylor Swift" spread online in 2024. Not only does this technology affect celebrities in harmful, defamatory ways, but many other people are being targeted, too. Deepfakes are being used to alter bodies, not just faces. Recently, some social media users have shared experiences of the photos they posted to digital platforms such as Instagram being reposted by exes, scammers, or people with nefarious intent; these photos look just like the ones that the original authors posted, except for the fact that the reposted image has generated a nude body over the original. These are just two examples of how deepfake technology is being used to harass, expose, and target people.
As technology such as deepfakes evolves, legislation must evolve too. From January 2019 to July 2025, 174 deepfake-related laws were enacted in the United States, all of them state laws. The Take It Down Act is among the first federal laws aimed at combating the distribution of nonconsensual intimate deepfake images. The Act has essentially two parts. The first makes it illegal to knowingly publish an intimate picture of someone if…
The picture was created or taken in a context where the depicted individual could reasonably expect privacy,
The picture depicts something the individual did not voluntarily expose,
The depiction is not a matter of public concern, and
The publication or distribution of the image was intended to cause harm or did cause harm.
Under the Take It Down Act, distributing or posting nonconsensual intimate deepfake images of adults can warrant up to two years of imprisonment. Intimate pictures of minors are illegal under the Act if posted with the intent to harass, abuse, humiliate, or degrade the minor, or to arouse any person. The maximum punishment for such crimes involving minors is three years of imprisonment.
The second part of the Take It Down Act requires social media platforms and websites to establish a system for users to report nonconsensual intimate content, and to remove that content within 48 hours of a report being filed. On its face, this provision makes sense, since the stated goal of the Act is to "require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes."
An estimated 93% of deepfakes are nonconsensual pornography or nudity, which underscores how important an act like "Take It Down" is. Even so, many issues have been raised regarding this Act, beginning with its constitutionality. The second part of the Act, the removal provision, contains no safeguards: anyone can report a post or picture as a nonconsensual distribution of intimate images (NDII), regardless of the actual content, and the platform must remove it on the "good-faith" assumption that the reporter is being truthful. Many critics argue that this lack of protection against misuse infringes on the First Amendment.

Critics also repeatedly point out that safeguards against false complaints could easily have been added, since similar statutes already contain them. For example, many believe that the safeguards built into the Digital Millennium Copyright Act (DMCA), such as the counter-notice procedure and penalties for knowingly false claims in its notice-and-takedown process, would have worked just as well here. The problem is not that the Take It Down Act's safeguards are inadequate; it is that they are nonexistent. Beyond the absence of safeguards, critics of the Act also note the explicitly acknowledged fact that "individuals or entities who may be harmed by the removal of lawful content will have no recourse against the platforms."
Another key issue is that the second part of the Act is too broad and vague. While the first part clearly outlines what is illegal, the removal provision is ambiguous in its language: any intimate visual depiction that a reporter claims is nonconsensual must be removed. This covers a far larger range of content than what the first part of the Act defines as illegal. Under such vague language, lawful content, such as a journalist's photos of a topless protest or commercial pornography, must be removed if reported under the Act.
Aside from the possible First Amendment issues, there are also concerns about how the Act appears to expand the scope of the Federal Trade Commission (FTC) and its enforcement authority. This goes back to the vague language of the second provision: if a platform fails to remove reported content within 48 hours, regardless of the content's legality, the failure can be "treated as an unfair or deceptive act or practice subject to enforcement by the FTC." Combined with President Trump's recent firing of FTC commissioners and the ideological shifts that may follow, this expanded enforcement scope has alarmed some observers and professionals.
The first part of the Act, which criminalizes the distribution of nonconsensual deepfake and real ("revenge") pornography, is widely agreed by the public and professionals alike to be much needed. Major news stories illustrate the need for this first prong. One example is an incident in New Jersey in which a group of high school boys generated nonconsensual nude deepfakes of female classmates. Another is the attempted silencing, via deepfake pornography, of a journalist who was investigating child abuse. Unfortunately, the second part of the Act is drafted in a way that undermines and overshadows the usefulness of the Act as a whole. This could have been avoided if the removal provision had used stronger, more specific language and included safeguards.
The Take It Down Act is a good start to regulating the harms caused by deepfakes. While most of the discussion of deepfakes and their legality revolves around pornography and sexually explicit images, deepfakes are also being used to spread political misinformation and to scam citizens. Legislation of this kind is a step in the right direction, an example of law evolving alongside technology. The law's flaws remain a serious concern, though. With enough backlash and criticism, the hope is that it will be repealed and then passed again with the necessary revisions.
Olivia Marcoccia is a sophomore majoring in English.
Sources
Ortutay, B. (Associated Press). (2025). President Trump signed the Take It Down Act, addressing nonconsensual deepfakes. What is it? WHYY, via dean.house.gov. https://dean.house.gov/2025/5/whyy
Ballotpedia Staff. (2025). Ballotpedia's State of Deepfake Legislation 2025 Annual Report. Ballotpedia. https://ballotpedia.org/Ballotpedia%27s_State_of_Deepfake_Legislation_2025_Annual_Report
Cash, B. (2025). Melania Trump's Take It Down Act Signed in Washington, DC. UPI. https://www.upi.com/News_Photos/view/upi/5b48dedfac5b3bebac2861eb2b573693/Melania-Trumps-Take-It-Down-Act-Signed-in-Washington-DC/
CCRI. (2025). CCRI Statement on the Passage of the TAKE IT DOWN Act (S. 146). Cybercivilrights.org. https://cybercivilrights.org/ccri-statement-on-the-passage-of-the-take-it-down-act-s-146/
Nelson, H. (2024). Taylor Swift and the Dangers of Deepfake Pornography. NSVRC. https://www.nsvrc.org/blogs/feminism/taylor-swift-and-dangers-deepfake-pornography
Sokler, B., et al. (2025). President Trump Signs AI Deepfake Act into Law and House Passes AI Measures — AI: The Washington Report. Mintz. https://www.mintz.com/insights-center/viewpoints/54731/2025-05-22-president-trump-signs-ai-deepfake-act-law-and-house
Take It Down Act, S. 146, 119th Cong. (2025). https://www.congress.gov/bill/119th-congress/senate-bill/146