Meta’s ‘Supreme Court’ Criticizes Instagram’s Handling of AI Deepfake Porn

Meta’s Oversight Board, known as the social media giant’s “supreme court,” has concluded that Instagram should have removed AI deepfake pornography made of an Indian public figure, highlighting significant gaps in the platform’s content moderation policies and practices.

PCMag reports that in a recent investigation, Meta’s Oversight Board found that Instagram failed to promptly remove an AI-generated nude deepfake of a real Indian public figure. This incident, along with a similar case involving an American woman on Facebook, has brought to light crucial issues in Meta’s approach to handling non-consensual intimate imagery.

The board’s investigation revealed that while the explicit deepfake of the American woman was quickly removed from Facebook and added to Meta’s database of flagged images, the case of the Indian woman was not addressed until the board’s inquiry began. This discrepancy in treatment raises concerns about the effectiveness and fairness of Meta’s content moderation practices.

Meta’s current strategy relies heavily on media reports to identify content for its internal database of flagged images. The Oversight Board criticized this approach as inherently reactive, potentially allowing harmful content to spread before action is taken. The board expressed particular concern for victims who are not in the public eye, as they may face greater challenges in getting their non-consensual depictions removed.

The investigation also highlighted issues with Meta’s policy language. The board found that both AI deepfakes violated Meta’s rule against “derogatory sexualized Photoshop or drawings.” However, they suggested that the policy would be clearer if it focused on the lack of consent and the harm caused by such content, rather than the specific technology used to create it.

Recommendations from the Oversight Board include rephrasing the rules to emphasize that non-consensual sexualized deepfakes of any kind are not allowed, and replacing the term “Photoshop” with a broader term that encompasses various methods of image alteration. Additionally, the board suggested moving this policy from the “Bullying and Harassment” section to the “Adult Sexual Exploitation Community Standard.”

Another area of concern identified by the board is Meta’s policy of automatically closing user reports within 48 hours if no staff member responds. This policy led to the delayed removal of the Indian woman’s deepfake, which was only taken down after the board brought it to Meta’s attention.

The board’s findings come at a time of increasing legislative action against deepfake pornography. The US Senate recently passed the DEFIANCE Act, which would allow victims of deepfake porn to sue the creators, distributors, and recipients of such images. Additionally, Sen. Ted Cruz (R-TX) has proposed the TAKE IT DOWN Act, aimed at criminalizing the publication of explicit non-consensual AI deepfake imagery and mandating its removal from platforms like Facebook and Instagram.

Read more at PCMag.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart, July 25th, 2024