Google and OpenAI Employees Sound Alarm on Lack of Safety Oversight in AI Industry


A group of current and former employees from prominent artificial intelligence companies, including OpenAI and Google DeepMind, has issued an open letter calling for increased transparency and protections for whistleblowers within the AI industry.

The Guardian reports that the open letter, signed by eleven current and former OpenAI workers and two current or former Google DeepMind employees, highlights the growing concern over the potential harms of artificial intelligence and the lack of adequate safety oversight within the industry. The employees argue that AI companies possess substantial non-public information about the capabilities, limitations, and risks associated with their systems, but currently have only weak obligations to share this information with governments and none with civil society.

Sundar Pichai, chief executive officer of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, California, on May 10, 2023. (David Paul Morris/Bloomberg)

OpenAI chief Sam Altman (Mike Coppola/Getty)

The letter calls for a “right to warn about artificial intelligence” and asks AI companies to commit to four principles around transparency and accountability. These include a provision that companies will not force employees to sign non-disparagement agreements prohibiting them from raising risk-related concerns about AI, and a mechanism for employees to anonymously share such concerns with board members.

The employees emphasize the importance of their role in holding AI companies accountable to the public, given the lack of effective government oversight. They argue that broad confidentiality agreements block them from voicing their concerns, except to the very companies that may be failing to address these issues.

OpenAI defended its practices in a statement, saying that it has avenues for reporting issues at the company, such as a tipline, and that it does not release new technology until appropriate safeguards are in place. However, the letter comes after two top OpenAI employees, co-founder Ilya Sutskever and key safety researcher Jan Leike, resigned from the company last month. Leike alleged that OpenAI had abandoned a culture of safety in favor of “shiny products.”

The concern over the potential harms of artificial intelligence has intensified in recent years as the AI boom has left regulators scrambling to catch up with technological advancements. While AI companies have publicly stated their commitment to safely developing the technology, researchers and employees have warned about a lack of oversight as AI tools exacerbate existing social harms or create entirely new ones.

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart, June 5, 2024