Google’s AI unit, DeepMind, has reportedly rolled out a “Robot Constitution” aimed at governing AI behavior and ensuring human safety.
The Verge reports that the DeepMind robotics team at Google recently revealed three significant advances aimed at improving robots’ decision-making, particularly with regard to safety and efficiency. Among them is the Robot Constitution, a new approach to AI safety inspired by Isaac Asimov’s famous Three Laws of Robotics.
The constitution is a set of safety-focused guidelines designed to instruct large language models (LLMs) to avoid selecting tasks that involve humans, animals, sharp objects, and electrical appliances. Essentially, it acts as a moral compass for AI, steering systems away from potentially dangerous actions, which is particularly important when these AIs are integrated with physical robots. No one wants a robot shoving a fork in their toaster.
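DeepMind has not published the constitution as code, but the general idea of screening proposed tasks against a list of forbidden categories can be illustrated with a minimal sketch. The rule list, keywords, and function name below are hypothetical stand-ins for the LLM's actual safety judgment, which operates on natural-language prompts rather than keyword matching:

```python
# Hypothetical sketch of a constitution-style screening step: candidate
# tasks proposed by an LLM are rejected if they touch a forbidden category.
# Real systems would ask the LLM itself to judge, not match keywords.
FORBIDDEN_TERMS = {
    "humans": ["person", "people", "human"],
    "animals": ["dog", "cat", "animal", "pet"],
    "sharp objects": ["knife", "scissors", "fork", "blade"],
    "electrical appliances": ["toaster", "outlet", "microwave"],
}

def violates_constitution(task: str) -> str | None:
    """Return the violated category, or None if the task looks safe."""
    text = task.lower()
    for category, terms in FORBIDDEN_TERMS.items():
        if any(term in text for term in terms):
            return category
    return None

for task in ["pick up the sponge from the counter",
             "remove the fork from the toaster"]:
    reason = violates_constitution(task)
    print(f"REJECTED ({reason}): {task}" if reason else f"APPROVED: {task}")
```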
An interesting aspect of the initiative is Google’s AutoRT data-gathering system, which pairs a visual language model (VLM) with the LLM. The VLM describes what the robot’s cameras see, and the LLM uses that description to propose tasks suited to the surroundings, allowing the AI to understand and adapt to its environment. According to the company, this combination has enhanced the AI’s ability to operate autonomously in various settings.
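The shape of that perception-to-proposal loop can be sketched with placeholder functions. Everything below is an assumption about the general architecture described in the reporting, not DeepMind's published API; the stub implementations simply stand in for real VLM and LLM calls:

```python
# Hypothetical sketch of an AutoRT-style cycle: a VLM captions the scene,
# and an LLM proposes tasks based on that caption. Function names and
# return values are illustrative placeholders.
def describe_scene(camera_image: bytes) -> str:
    """Stand-in for a VLM call: captions what the robot's camera sees."""
    return "a kitchen counter with a sponge, a fork, and a toaster"

def propose_tasks(scene_description: str) -> list[str]:
    """Stand-in for an LLM call: suggests tasks for the described scene."""
    return ["wipe the counter with the sponge", "pick up the fork"]

def autort_step(camera_image: bytes) -> list[str]:
    """One perception-to-proposal cycle: the VLM's output feeds the LLM."""
    scene = describe_scene(camera_image)
    return propose_tasks(scene)

print(autort_step(b"<jpeg bytes>"))
```

In the full system, proposed tasks would then pass through a constitution-style screen like the one sketched earlier before a robot acts on them.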
In practical terms, the safety setup also includes more conventional mechanisms, such as an automatic shutdown that triggers if the force on a robot’s joints exceeds a specified threshold, and a physical kill switch that human operators can use to deactivate a robot. Over seven months, Google conducted more than 77,000 trials with a fleet of 53 AutoRT robots across different office environments, testing various degrees of autonomy and control.
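A force-threshold safeguard of this kind amounts to a simple check that runs on every tick of the robot's control loop. As a minimal sketch, assuming an illustrative torque limit and hypothetical function names (DeepMind's actual thresholds and control code are not public):

```python
# Hypothetical sketch of the force-threshold safeguard described above.
MAX_JOINT_FORCE_NM = 20.0  # illustrative limit in newton-metres, not a published figure

def joints_within_limit(joint_forces: list[float]) -> bool:
    """Return True if every joint's measured force is under the limit."""
    return all(abs(force) <= MAX_JOINT_FORCE_NM for force in joint_forces)

def control_loop_tick(joint_forces: list[float], kill_switch_pressed: bool) -> str:
    # The physical kill switch always wins; the force check runs each tick.
    if kill_switch_pressed or not joints_within_limit(joint_forces):
        return "EMERGENCY_STOP"
    return "CONTINUE"

print(control_loop_tick([5.2, 3.1, 18.9], kill_switch_pressed=False))  # CONTINUE
print(control_loop_tick([5.2, 25.0, 3.3], kill_switch_pressed=False))  # EMERGENCY_STOP
```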
Read more at The Verge here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.