U.S. Outlaws Non-Consensual Deepfake Content with New Federal Law
A new federal law in the U.S. is taking direct aim at the harmful spread of non-consensual explicit images—both real and digitally manipulated. On Monday, President Donald Trump signed the “Take It Down Act,” making it a crime to share intimate photos or videos of someone without their permission, including those generated by artificial intelligence.
Standing in the White House Rose Garden, President Trump addressed the growing problem of deepfakes and online abuse, which disproportionately targets women. “Too many people—especially women—have had their lives turned upside down by fake and explicit images created and shared without their consent,” he said. “That ends today. Anyone found guilty will face up to three years behind bars.”
The law doesn’t stop at punishing individuals. Online platforms that fail to remove reported images within 48 hours could also face serious penalties.
Making a rare public appearance, First Lady Melania Trump, who had previously voiced support for the bill, called the legislation a crucial step toward protecting families and young people from the darker side of the internet.
“This is a win for every parent who’s worried about their child’s safety online,” she said. “We’re sending a clear message—your image is yours, and no one has the right to exploit it.”
With artificial intelligence making it easier to manipulate photos and videos, deepfake abuse has become increasingly widespread. From schoolyards to celebrity scandals, the impact is felt far and wide. High-profile figures like Taylor Swift and Alexandria Ocasio-Cortez have had their likenesses misused, but experts say everyday individuals are just as vulnerable.
One mother, Dorota Mani, whose daughter was a victim, described the law as more than just words on paper. “Now I have something I can use to fight back. It’s empowering,” she said.
However, not everyone is convinced. Digital rights groups, including the Electronic Frontier Foundation, have raised concerns that the law might give the government too much control over what gets taken down online. Critics worry it could lead to overreach or be misused to silence voices under the guise of protection.
Even so, many see the act as a much-needed response to the rising tide of AI-driven harassment, bullying, and emotional harm.
Renee Cummings, a criminologist and AI ethicist, sees it as a pivotal moment. “This is a meaningful move toward justice in the digital age,” she said. “But its real power lies in how quickly and effectively it’s enforced.”
With technology evolving faster than the rules can keep pace, lawmakers, families, and tech companies are now on notice: the game has changed.