Inflection AI, a prominent artificial intelligence startup, is spearheading an initiative to curtail the role of AI chatbots in political elections, ensuring that the democratic process remains fundamentally human. This is particularly important as popular chatbots like OpenAI’s ChatGPT display a brazen leftist bias.
Bloomberg reports that Inflection AI, which has amassed over $1.5 billion in investments, has garnered attention with its proactive stance on the ethical considerations of AI in politics. The company is working to limit the influence of AI chatbots in the electoral process, particularly in the upcoming U.S. presidential race.
Mustafa Suleyman, the co-founder of Inflection AI, has been vocal about the company’s position, stating clearly that its chatbot, named Pi, will not be permitted to advocate for any political candidates. “Our goal isn’t to provide that public service,” Suleyman said. “It’s highly contentious and we may get it wrong, so our role is to step back from that.”
In a bid to foster collective responsibility within the industry, Inflection AI is in active discussions with other leading AI companies. The objective is clear: to forge a consensus that prevents chatbots from recommending specific political candidates, thereby preserving the human essence of the democratic process. Suleyman emphasized this human element, asserting that even if chatbots were flawless, some things have "probably got to remain a human part of the process."
Since the advent of groundbreaking chatbots like OpenAI’s ChatGPT, the landscape has been flooded with powerful AI tools boasting capabilities such as answering questions and summarizing text. This widespread adoption underscores the need for clear boundaries that safeguard sensitive areas such as politics, especially given the overwhelming leftist bias documented in ChatGPT and chatbots from Google and other tech titans.
Breitbart News previously reported on an analysis of ChatGPT’s leftist bias completed by the University of East Anglia:
Researchers asked ChatGPT to impersonate supporters of various political parties and positions, and then asked the modified chatbots a series of 60 ideological questions. The responses to these questions were then compared to ChatGPT’s default answers. This allowed the researchers to test whether ChatGPT’s default responses favor particular political stances. Conservatives have documented a clear bias in ChatGPT since the chatbot’s introduction to the general public.
To overcome difficulties caused by the inherent randomness of the “large language models” that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected. These multiple responses were then put through a 1,000-repetition “bootstrap” (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
Lead author Dr Fabio Motoki, of Norwich Business School at the University of East Anglia, said: “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.”
“The presence of political bias can influence user views and has potential implications for political and electoral processes.”
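For readers curious about what the resampling step described in the excerpt looks like in practice, here is a minimal, hypothetical Python sketch of a bootstrap of that general kind. The scoring scheme, function names, and placeholder data below are assumptions for illustration only and are not taken from the University of East Anglia study.

```python
# A rough, illustrative sketch of a bootstrap resampling step (not the study's actual code).
# Assumption: each of the 100 responses to a question has already been scored as an
# agreement value in [0, 1]; the data here is random placeholder data.
import random
import statistics

N_QUERIES = 100       # each ideological question was asked 100 times
N_BOOTSTRAP = 1000    # 1,000-repetition bootstrap over the collected answers

def bootstrap_means(scores, n_reps=N_BOOTSTRAP):
    """Re-sample the scores with replacement n_reps times and return the mean of each resample."""
    means = []
    for _ in range(n_reps):
        resample = [random.choice(scores) for _ in scores]
        means.append(statistics.mean(resample))
    return means

# Placeholder agreement scores for one question: one set for ChatGPT's default persona,
# one for an impersonated partisan persona.
default_scores = [random.random() for _ in range(N_QUERIES)]
partisan_scores = [random.random() for _ in range(N_QUERIES)]

default_dist = bootstrap_means(default_scores)
partisan_dist = bootstrap_means(partisan_scores)

# Comparing the two bootstrap distributions indicates how closely the default answers
# track the partisan persona's answers, despite the chatbot's run-to-run randomness.
print("default mean  :", round(statistics.mean(default_dist), 3))
print("partisan mean :", round(statistics.mean(partisan_dist), 3))
```

The point of repeating the resampling many times is to smooth out the inherent randomness of a large language model's answers, so that any consistent lean toward one partisan persona stands out from mere noise.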
Breitbart News will continue to report on AI and the technology’s implications for future elections.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.