Report: Top AI Researchers Complain OpenAI, Meta, and Google Are Ignoring Safety Concerns

Mark Zuckerberg (KENZO TRIBOUILLARD/Getty)

A recent report commissioned by the U.S. State Department has exposed significant safety concerns voiced by employees at leading artificial intelligence labs, including those of OpenAI, Google, and Mark Zuckerberg’s Meta, highlighting the lack of adequate safeguards and potential national security risks posed by advanced AI systems.

TIME reports that according to the government-commissioned report authored by employees of Gladstone AI, some of the world’s top AI researchers harbor grave apprehensions regarding the safety measures and incentives driving their organizations. The authors conducted interviews with over 200 experts, including employees from pioneering AI labs such as OpenAI, Google DeepMind, Meta, and Anthropic – all of which are actively pursuing the development of artificial general intelligence (AGI), a hypothetical technology capable of performing most tasks at or above human level.

OpenAI boss Sam Altman (Kevin Dietsch/Getty)

Sundar Pichai, CEO of Google and Alphabet, attends a press event to announce Google as the new official partner of the Women’s National Team at Google Berlin (Christoph Soeder/picture alliance via Getty Images)

The report reveals that employees at these labs shared concerns privately with the authors, expressing fears that their organizations prioritize rapid progress over implementing robust safety protocols. One individual voiced worries about their lab’s “lax approach to safety” stemming from a desire to avoid slowing down the development of more powerful AI systems. Another employee expressed concern over insufficient containment measures to prevent an AGI from escaping their control, despite the lab’s belief that AGI is a near-term possibility.

Cybersecurity risks were also highlighted, with the report stating, “By the private judgment of many of their own technical staff, the security measures in place at many frontier AI labs are inadequate to resist a sustained IP exfiltration campaign by a sophisticated attacker.” Given the current state of security at these labs, the authors warn, attempts to steal AI models would likely succeed even without direct state support, if they have not already.

Jeremie Harris, CEO of Gladstone and one of the report’s authors, emphasized the gravity of the concerns raised by employees. “The level of concern from some of the people in these labs, about the decision-making process and how the incentives for management translate into key decisions, is difficult to overstate,” he told TIME.

The report also cautions against overreliance on AI evaluations, which are commonly used to test for dangerous capabilities or behaviors in AI systems. According to the authors, these evaluations can be undermined and manipulated, as AI models can be superficially tweaked or “fine-tuned” to pass evaluations if the questions are known in advance. The report cites an expert with “direct knowledge” of one AI lab’s practices, who judged that the unnamed lab is gaming evaluations in this way.

Read more at TIME here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart, March 12th 2024