Max Tegmark, a leading scientist and AI campaigner, has cautioned that the tech industry’s lobbying efforts have diverted attention from the existential threat artificial intelligence poses to humanity.
The Guardian reports that in a recent interview at the AI Summit in Seoul, South Korea, Tegmark expressed concern that the shift in focus from the potential extinction of life to a broader notion of AI safety could lead to an unacceptable delay in implementing strict regulations on the creators of the most powerful AI programs.
Tegmark, a trained physicist, drew parallels between the current state of AI and the development of nuclear weapons in the 1940s. He pointed to Enrico Fermi's creation of the first self-sustaining nuclear chain reaction in 1942, a milestone that signaled the nuclear bomb was only years away. Similarly, Tegmark believes that AI models capable of passing the Turing test, in which a human cannot tell whether they are conversing with another person or a machine, serve as a comparable warning sign that humanity risks losing control over AI.
(Photo: OpenAI founder Sam Altman, creator of ChatGPT. TechCrunch/Flickr)
The Future of Life Institute, a non-profit organization led by Tegmark, called for a six-month “pause” in advanced AI research last year due to these concerns. The launch of OpenAI’s GPT-4 model in March of that year was seen as a canary in the coalmine, indicating that the risk was unacceptably close. Despite the support of thousands of experts, including AI pioneers Geoffrey Hinton and Yoshua Bengio, no pause was agreed upon.
Instead, AI summits, such as the one in Seoul and the previous one at Bletchley Park in the UK, have taken the lead in the nascent field of AI regulation. Tegmark believes that the focus of international AI regulation has shifted away from existential risk, with only one of the three “high-level” groups at the Seoul summit directly addressing safety, and even then, it looked at a broad spectrum of risks.
Tegmark argues that the downplaying of the most severe risks is not accidental but rather the result of industry lobbying. He compares the situation to the tobacco industry's efforts in the 1950s to distract from the emerging link between smoking and lung cancer, a campaign he says delayed regulation until 1980.
Critics have accused Tegmark of focusing on hypothetical future risks to distract from concrete harms in the present. He rejects this criticism, arguing that tech leaders like OpenAI boss Sam Altman are trapped in an impossible situation: even if they wanted to stop, they could not, because their companies would simply replace them.
Read more at the Guardian here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.