Meta’s AI assistant and Google’s search autocomplete feature have come under scrutiny for providing inaccurate information related to the recent attempted assassination of former President Donald Trump. While Google says it is working to improve its feature, Meta has offered a different explanation for its AI’s denial of the assassination attempt: the AI was “hallucinating.”
The Verge reports that Meta’s AI assistant has been caught covering up the attempted assassination of former President Donald Trump. The incident has raised concerns about the reliability of AI-generated responses to real-time events and the potential implications for public information dissemination.
Joel Kaplan, Meta’s global head of policy, addressed the issue in a company blog post published on Tuesday. Kaplan described the AI’s responses as “unfortunate” and explained that Meta had initially programmed its AI to avoid responding to questions about the assassination attempt. However, this restriction was later removed after users began noticing the AI’s silence on the matter.
Despite this adjustment, Kaplan acknowledged that “in a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen.” He assured the public that the company is “quickly working to address” these inaccuracies.
Meta AI won’t give any details on the attempted ass*ss*nation.
— Libs of TikTok (@libsoftiktok) July 28, 2024
We’re witnessing the suppression and coverup of one of the biggest most consequential stories in real time.
Simply unreal. pic.twitter.com/BoBLZILp5M
Kaplan attributed these errors to a phenomenon known in the AI industry as “hallucinations.” This term refers to instances where AI systems generate false or inaccurate information, often with a high degree of confidence. Kaplan noted that hallucinations are “an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward.”
The Meta executive also emphasized the company’s commitment to improving its AI systems, stating, “Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”
The controversy surrounding AI responses to the Trump assassination attempt is not limited to Meta. Google has also found itself embroiled in the situation, having to refute claims that its Search autocomplete feature was censoring results related to the incident. These allegations prompted a strong reaction from Trump himself, who took to his Truth Social platform to accuse both companies of attempting to rig the election.
Read more at the Verge here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.