Study: Mark Zuckerberg’s Instagram Actively Facilitates Spread of Self-Harm Content Among Teens


Mark Zuckerberg’s Instagram is failing to remove explicit self-harm images and is even encouraging vulnerable teens to connect with each other, enabling the concerning content to proliferate on the platform, according to an alarming new study.

The Guardian reports that researchers at the Danish organization Digitalt Ansvar (Digital Accountability) set up a private Instagram network that included fake profiles of users as young as 13. Over the course of a month, the researchers shared 85 pieces of self-harm content that progressively increased in severity, including graphic images of blood and razor blades as well as messages encouraging self-harm behaviors.

The goal was to test Meta’s assertion that it had greatly improved its content moderation processes, including through the use of AI, and was now catching and removing around 99 percent of harmful posts before they were even reported. However, the study’s disturbing findings revealed that not a single one of the 85 images was taken down by Instagram over the month-long experiment.

When Digitalt Ansvar built its own basic AI tool to analyze the content, it was able to automatically flag 38 percent of the self-harm images and 88 percent of the most severe ones. This indicates, according to the research group, that while Instagram has access to technology that could help address the issue, the company has “chosen not to implement it effectively.”

The study argues that Instagram’s inadequate moderation of such content suggests it may not be in compliance with the EU’s Digital Services Act. This legislation mandates that large digital platforms identify systemic risks, including predictable negative impacts on users’ physical and mental wellbeing.

Rather than attempting to shut down the self-harm network, Instagram’s algorithm actively contributed to expanding it, the research claims. It appeared to recommend that the 13-year-old accounts become friends with all members of the self-harm group after connecting with just one of them.

“We thought that when we did this gradually, we would hit the threshold where AI or other tools would recognize or identify these images,” said Ask Hesby Holm, CEO of Digitalt Ansvar. “But big surprise – they didn’t. That was worrying because we thought that they had some kind of machinery trying to figure out and identify this content.”

Holm believes Meta chooses not to moderate small private groups like the one in the study in order to maintain high traffic and engagement. “We don’t know if they moderate bigger groups, but the problem is self-harming groups are small,” he noted. Failure to crack down on self-harm imagery can lead to “severe consequences,” Holm warned. “This is highly associated with suicide. So if there’s nobody flagging or doing anything about these groups, they go unknown to parents, authorities, those who can help support.”

Leading psychologist Lotte Rubæk, who left Meta’s global expert group on suicide prevention in March, said the company’s failure to remove explicit self-harm content from Instagram is “triggering” vulnerable young users, especially girls and women, to harm themselves further. She argues this is contributing to rising suicide rates.

“They have repeatedly said in the media that all the time they are improving their technology and that they have the best engineers in the world. This proves, even though on a small scale, that this is not true,” Rubæk stated. “Somehow that’s just collateral damage to them on the way to making money and profit on their platforms.”

In response, a Meta spokesperson said: “Content that encourages self-injury is against our policies and we remove this content when we detect it. In the first half of 2024, we removed more than 12m pieces of content related to suicide and self-injury on Instagram, 99 percent of which we proactively took down.”

However, mental health experts and child safety advocates argue the disturbing results of this study show Instagram must do much more to protect young and at-risk users from being exposed to and spreading dangerous self-harm content on its platform. For many, it is quite literally a matter of life and death.

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart, December 5th 2024