Sen. Pete Ricketts warns of 'weaponized disinformation'
FIRST ON FOX: A new Senate Republican-led bill aims to make sure Americans are well aware of what is real online and how to spot content generated by artificial intelligence (AI).
Sen. Pete Ricketts, R-Neb., is introducing legislation on Tuesday to direct relevant federal agencies to coordinate on the creation of a watermark for AI-made content, including enforcement rules. That watermark would then be required on any publicly distributed AI images, videos and other materials.
"With Americans consuming more media than ever before, the threat of weaponized disinformation confusing and dividing Americans is real," Ricketts told Fox News Digital.
"Deepfakes generated by artificial intelligence can ruin lives, impact markets and even influence elections. We must take these threats seriously."
Nebraska Sen. Pete Ricketts' new bill is aimed at setting a federal regulatory standard for AI-made content. (Celal Gunes / Anadolu Agency via Getty Images / File)
Ricketts said his bill "would give Americans a tool to understand what is real and what is made-up."
Officials in the Department of Homeland Security, Department of Justice, Federal Communications Commission and Federal Trade Commission would be tasked with laying out the guidelines.
Earlier this month, search giant Google unveiled SynthID, a technology that permanently embeds a watermark in AI-generated images.
It comes amid concern over the pitfalls of AI’s rapid advancement as increasingly sophisticated technology becomes more accessible.
Financial markets briefly dipped in May when an image of what appeared to be an explosion at the Pentagon circulated on the internet. It turned out to be AI-generated.
The new legislation comes after a fake image of the Pentagon briefly sent markets into a tailspin in May. (Alex Wong / Getty Images / File)
There is also growing concern that hostile actors could wreak havoc on the 2024 U.S. elections by using fake AI content.
It’s part of what has prompted a flurry of AI hearings and legislation in Congress as lawmakers scramble to get ahead of the rapidly advancing technology.
But at least one expert told senators at an Energy Committee hearing last week that watermarks, while helpful to an extent, will likely not be enough to stop malign foreign actors from injecting fake AI content into American information channels.
Congress is racing to get ahead of AI technology's rapid advancement. (AP Photo / Mariam Zuhaib / File)
"There will be many open [AI] models produced outside the United States and produced elsewhere that, of course, wouldn't be bound by U.S. regulation," said professor Rick Stevens of the Argonne National Laboratory in Illinois.
"We can have a law that says ‘watermark AI-generated content,’ but a rogue player outside the [country] operating in Russia or China or somewhere wouldn't be bound by that and could produce a ton of material that wouldn't actually have those watermarks. And so it could pass a test, perhaps."
Elizabeth Elkind is a reporter for Fox News Digital focused on Congress as well as the intersection of artificial intelligence and politics. Previous digital bylines seen at Daily Mail and CBS News.