AI Censorship: OpenAI Boasts of GPT-4’s Content Moderation Capability

Jonathan Raa/NurPhoto via Getty Images

Already struggling to overcome its documented political bias, ChatGPT creator OpenAI is boasting of its AI technology's capability to power content moderation, i.e. censorship.

Praising content moderation as a tool for “sustaining the health of digital platforms,” OpenAI claims that its powerful AI systems can reduce the need for humans in the content moderation process.

Via OpenAI:

Content moderation plays a crucial role in sustaining the health of digital platforms. A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours. GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling. We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators. Anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system.
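What OpenAI is describing amounts to sending a platform's content policy and a piece of user content to GPT-4 and asking it to return a label. A minimal sketch of that approach is below, assuming the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the policy text, label set, and prompt wording are hypothetical illustrations, not OpenAI's actual moderation prompts.

```python
# Sketch of GPT-4-assisted content moderation via the OpenAI API.
# The policy below is a hypothetical example of "platform-specific policy."
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

POLICY = """\
Label the content with exactly one of: ALLOW, FLAG, REMOVE.
- REMOVE: direct threats of violence or doxxing.
- FLAG: harassment, slurs, or borderline cases needing human review.
- ALLOW: everything else, including heated political speech.
Respond with the label only."""

def moderate(content: str) -> str:
    """Ask GPT-4 to label `content` against the policy above."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep labeling as consistent as possible
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("I disagree with this politician's policies."))
```

Because the policy lives in the prompt rather than in a trained classifier, updating it is just editing the text, which is the "months to hours" iteration speed OpenAI is claiming.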

The much-talked-about tech company says it simply wants to protect the mental health of human moderators:

Content moderation demands meticulous effort, sensitivity, a profound understanding of context, as well as quick adaptation to new use cases, making it both time consuming and challenging. Traditionally, the burden of this task has fallen on human moderators sifting through large amounts of content to filter out toxic and harmful material, supported by smaller vertical-specific machine learning models. The process is inherently slow and can lead to mental stress on human moderators.

At the very end of its blog entry, OpenAI concedes that the biases of AI might be a problem when using the technology for content moderation:

Judgments by language models are vulnerable to undesired biases that might have been introduced into the model during training. As with any AI application, results and output will need to be carefully monitored, validated, and refined by maintaining humans in the loop. By reducing human involvement in some parts of the moderation process that can be handled by language models, human resources can be more focused on addressing the complex edge cases most needed for policy refinement.
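The "humans in the loop" arrangement OpenAI describes could look like the following hypothetical routing step, built on the moderate() sketch above: clear-cut labels are applied automatically, while borderline cases are queued for human reviewers. The queue and label names are illustrative assumptions, not OpenAI's implementation.

```python
# Hypothetical human-in-the-loop routing on top of moderate() above:
# the model handles clear cases; humans get the edge cases.
human_review_queue: list[str] = []

def route(content: str) -> str:
    label = moderate(content)
    if label == "FLAG":
        # Edge case: hand off to a human moderator for review
        # and possible policy refinement.
        human_review_queue.append(content)
        return "pending human review"
    return label  # ALLOW or REMOVE applied automatically
```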

As Breitbart News previously reported, the leftist bias of ChatGPT, the market-leading AI chatbot developed by OpenAI, has been well documented. OpenAI has claimed that the bias is the result of "mistakes" made by the company, although it remains to be seen whether ChatGPT's biases will be fixed.

Allum Bokhari is the senior technology correspondent at Breitbart News. He is the author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election. Follow him on Twitter @AllumBokhari

Authored by Allum Bokhari via Breitbart, August 16th, 2023