As the world heads into a major election cycle in 2024, OpenAI says it wants to prevent AI misuse, bring transparency, and improve voters' access to accurate voting information.
OpenAI, the creator of the popular chatbot ChatGPT, released a new blog post outlining its approach to the 2024 elections on a global scale.
Its main emphasis is on bringing transparency, enhancing access to accurate voting information, and preventing the misuse of artificial intelligence (AI).
The company said protecting the integrity of elections is a collaborative effort involving everyone, and it wants to make sure its technology “is not used in a way that could undermine this process.”
“We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges.”
OpenAI says it has a “cross-functional effort” dedicated explicitly to election-related work that will quickly investigate and address potential abuse.
These efforts include preventing abuse such as “misleading deepfakes,” chatbots impersonating candidates, and scaled influence operations. One measure already in place is a set of guardrails on DALL·E that decline requests to generate images of real people, including political candidates.
In August 2023, regulators in the United States were already considering rules for political deepfakes and AI-generated ads ahead of the 2024 presidential election.
Politicians in the U.S. have expressed skepticism that tech companies will be able to rein in their powerful AI systems.
At a congressional hearing in May where OpenAI Chief Executive Sam Altman testified, Sen. Richard Blumenthal (D., Conn.) used a demonstration of his AI-generated voice reading a statement to highlight the risks.
“What if it had provided an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?” he said.
Altman responded to some of the concerns by asserting that OpenAI’s chatbot was “a tool, not a creature,” and “a tool that people have great control over.”
OpenAI said building applications for political campaigning and lobbying is currently not allowed.
A politician running for U.S. Congress is already employing AI as a campaign caller to help reach more potential voters.
The AI developer said it is also continually updating ChatGPT to provide accurate information drawn from real-time news reporting around the globe, while directing voters to official voting websites for more information.
AI’s influence on elections has already been a major topic of discussion, with Microsoft releasing a report on how AI usage on social media could sway voter sentiment.
Microsoft’s Bing AI chatbot has already been under scrutiny after Europe-based researchers found that it gave misleading election information.
Google has been particularly proactive in its stance on AI and elections. In September, it made AI disclosure mandatory in political campaign ads and limited how its Bard AI tool and generative search answer election-related queries.