Ahead of the 30th anniversary of the Violence Against Women Act, the White House announced voluntary commitments from major tech companies to combat the creation, distribution and monetization of image-based sexual abuse, including artificial intelligence-generated “deepfakes.”
Tech companies including Aylo, which operates several of the largest pornography websites, Meta, Microsoft, TikTok, Bumble, Discord, Hugging Face and Match Group signed a list of principles for combating image-based sexual abuse, which includes nonconsensual sharing of nude and intimate images, sexual image extortion, the creation and distribution of child sexual abuse material and the rise of AI deepfakes. Deepfakes are deceptive, AI-manipulated media that are often sexually explicit; examples include “swapping” victims’ faces into sexually explicit videos or creating fake, AI-generated nude images.
The White House wrote that image-based sexual abuse “has skyrocketed,” disproportionately affecting women, children and LGBTQ people. The 2023-24 school year was marked by global incidents of children, overwhelmingly teenage girls, being targeted in deepfakes made and shared by their classmates. More nonconsensual sexually explicit deepfake videos were uploaded online in 2023 than all previous years combined.
“This abuse has profound consequences for individual safety and well-being, as well as societal impacts from the chilling effects on survivors’ participation in their schools, workplaces, communities, and more,” the White House’s announcement said.
Two digital rights nonprofit groups, the Center for Democracy and Technology and the Cyber Civil Rights Initiative, and the National Network to End Domestic Violence, a nonprofit anti-domestic violence organization, led the effort to create and sign principles to combat image-based sexual abuse. The principles include giving people control over whether and how their likenesses and bodies are depicted and disseminated in intimate imagery, clearly prohibiting nonconsensual intimate imagery in policy and implementing “effective, prominent, and easy-to-use tools to prevent, identify, mitigate, and respond to” image-based sexual abuse. Other principles include accessibility and inclusion, trauma-informed approaches, prevention and harm mitigation, transparency and accountability, and commitment and collaboration.
“If companies were doing their jobs, if they were being responsible, if they were being accountable, we wouldn’t have these epidemics,” said Mary Anne Franks, the president of the Cyber Civil Rights Initiative. “It’s certainly clear to say that this is progress because of where things were before. But if we had a responsible industry that was forced to be accountable for the kinds of abuses that their products and services are creating, then we wouldn’t be in the crisis we are in.”
The Cyber Civil Rights Initiative advocates for targeted federal and state legislation and reform of Section 230 of the Communications Decency Act, the legal shield that protects tech companies from lawsuits related to content users create and post on their platforms. Franks said that while the voluntary efforts the White House highlighted are welcome, she does not see them as a substitute for legislation. Still, she said, the Biden administration’s and Vice President Kamala Harris’ focus on gender violence, online harassment and image-based sexual abuse was “transformative” compared with the work of previous administrations, as was its impact on victims and their advocates.
Franks said other tech companies can sign onto the principles if they haven’t already. The principles are intended to be a guide for the industry.
In its announcement, the White House also mentioned recent efforts by tech companies like Google, which announced in July that it would derank and delist websites and content containing nonconsensual sexually explicit deepfakes from its search results. Other efforts the White House noted included Meta’s removal of around 63,000 Instagram accounts found to be engaging in sextortion, a practice of soliciting sexual images and then coercing financial payments from victims under threats of distributing the material.
Those efforts came after deepfakes were created, disseminated and monetized via the platforms, along with the perpetuation of sextortion. NBC News has reported on a massive uptick in sextortion cases since 2022, largely affecting teen boys who used Meta’s Instagram. NBC News also found that apps advertising their ability to create nude images of teen celebrities ran on Facebook and Instagram for months in late 2022 and early 2023. NBC News previously found nonconsensual sexually explicit deepfakes at the top of some search results on Google’s search engine and Microsoft’s Bing.
“These issues are not new, but there are certainly new threats brought by the emergence of generative AI and how easy it is to create fake versions of these images,” said Alexandra Reeve Givens, the CEO of the Center for Democracy and Technology. “We all agree that principles aren’t enough. It really is actually about changes in practice.”