The Rising Menace of AI-Generated Deepfake Pornography
Understanding the Threat and How to Defend Against It
Deepfake pornography, enabled by artificial intelligence, poses a significant threat to individuals' privacy and mental well-being, warns attorney Carrie Goldberg, an expert in online abuse and sex crimes.
Unlike revenge porn, which involves the unauthorized sharing of real images, deepfakes fabricate content outright, superimposing a person's face onto explicit material or altering images to appear compromising. This means even people who have never shared an explicit photo can be victimized.
High-profile figures such as Taylor Swift and Rep. Alexandria Ocasio-Cortez have been targeted, as have ordinary young people.
Victims are advised to preserve evidence by taking screenshots before attempting to remove such content.
Companies such as Google, Meta, and Snapchat offer tools for requesting the removal of such content, and organizations like StopNCII.org assist victims directly.
In 2024, a bipartisan group of US senators urged tech companies to combat nonconsensual explicit content, and a proposed bill sought to criminalize the publication of deepfake pornography.
Society bears a responsibility to act ethically and prevent misuse of this technology, Goldberg asserts; digital safety remains a collective challenge.