Photographers: Mark Zuckerberg’s Meta Labels Real Pictures as AI Fakes

Mark Zuckerberg showing his tongue (Wired Photostream/Flickr)

Meta’s recent initiative to label AI-generated images on its social media platforms has ignited a heated debate within the photography community, as many professionals claim their non-AI images are being incorrectly tagged.

TechCrunch reports that in February, Mark Zuckerberg’s Meta announced plans to start labeling photos created with AI tools across its social networks, including Facebook, Instagram, and Threads. The company began rolling out the feature in May, attaching a “Made with AI” label to certain images. The rollout has been bumpy, however: numerous users and photographers have reported the label being incorrectly applied to photos that were not created with AI tools.

The controversy surrounding Meta’s labeling approach has gained significant attention, with several high-profile cases coming to light. One notable example involves a photo of the Kolkata Knight Riders winning the Indian Premier League Cricket tournament, which was erroneously tagged as AI-generated. Interestingly, the label is only visible on mobile apps and not on the web version of Meta’s platforms, adding another layer of complexity to the issue.

Photographers have been particularly vocal about their concerns, arguing that simple editing techniques should not warrant the “Made with AI” label. Former White House photographer Pete Souza found himself at the center of this debate when one of his photos was tagged with the AI label. In an email to TechCrunch, Souza explained that a change in Adobe’s cropping tool, which now requires users to “flatten the image” before saving it as a JPEG, may have triggered Meta’s algorithm to attach the label.

Souza expressed his frustration, stating, “What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it.” This sentiment is echoed by many other photographers who feel their work is being misrepresented by the labeling system.

Meta has been relatively tight-lipped about the specifics of its labeling process. In a February blog post, the company said it uses image metadata to detect and apply the label. Meta claimed to be developing “industry-leading tools that can identify invisible markers at scale,” specifically referencing the “AI generated” information in the C2PA and IPTC technical standards.
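Meta has not published its exact rules, but the C2PA and IPTC standards it cites work by embedding machine-readable provenance fields in an image’s metadata. The Python sketch below illustrates what such a metadata check could look like; it assumes the exiftool command-line utility is installed and uses the IPTC “Digital Source Type” field as an illustrative AI marker. It is not Meta’s actual detection logic.

# Minimal sketch: look for an "AI generated" provenance marker in an
# image's metadata, of the kind defined by the IPTC/C2PA standards.
# Assumes the exiftool command-line utility is installed; the field
# checked here is illustrative, not Meta's actual detection method.
import json
import subprocess

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the image's metadata carries an AI-generated marker."""
    # Dump all metadata tags as JSON using exiftool.
    result = subprocess.run(
        ["exiftool", "-json", image_path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(result.stdout)[0]

    # The IPTC "Digital Source Type" field can mark synthetic media
    # created by an AI model ("trained algorithmic media").
    source_type = str(tags.get("DigitalSourceType", ""))
    normalized = source_type.lower().replace(" ", "")
    return "trainedalgorithmicmedia" in normalized

if __name__ == "__main__":
    print(looks_ai_generated("photo.jpg"))

In practice, photo editors can write or strip these fields during routine edits, which is one way an ordinary photograph could end up carrying a marker that an automated system reads as evidence of AI use.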

However, the lack of transparency regarding when the label is automatically applied has led to confusion and speculation within the photography community. Some reports suggest that Meta may be applying the label when photographers use tools such as Adobe’s Generative AI Fill to remove objects from their images.

The debate has also revealed a divide within the photography community itself. While many photographers are upset about the incorrect labeling of their work, others have sided with Meta’s approach, arguing that any use of AI tools should be disclosed to maintain transparency.

In response to the growing controversy, Meta has acknowledged the feedback and said it is evaluating its approach. A Meta spokesperson told TechCrunch, “Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image.”

The company also emphasized its collaboration with other tech firms, stating, “We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent.”

Read more at TechCrunch here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart June 25th 2024