Fake articles on the internet have increased by 1,000% since May
The rise of artificial intelligence has fueled the spread of fake news, with the internet seeing a surge in websites devoted to disseminating misinformation.
"Artificial intelligence tools to propagate fake news are going to snowball out of control quickly," Ziven Havens, the policy director at the Bull Moose Project, a nonprofit "dedicated to building the conservative populist movement," told Fox News Digital.
The comments come as websites hosting AI-created false articles have increased across the internet by 1,000% since May, growing from 49 sites to more than 600 in that span, according to a report from the Washington Post.
The report notes that AI has made it easier than ever to disseminate fake news, an operation that once depended on large groups of low-wage workers to pump out articles that are hard to distinguish from legitimate news sources. In one example cited by the Washington Post, a story about Israeli Prime Minister Benjamin Netanyahu alleged that the world leader's psychiatrist had died and hinted that the prime minister may have been involved.
Israeli Prime Minister Benjamin Netanyahu (Abir Sultan/Pool Photo via AP/File)
Despite being fake, the article recirculated across the internet in multiple countries and languages and was shared widely on social media.
The misinformation often seeks to damage the reputation of world leaders or political figures, something Havens believes will be particularly troublesome for Republicans moving forward.
"Ever since the start of the Trump campaign in 2016, the right has been under constant attack by fake news and false narratives," Havens said. "AI will compound this problem 10-fold. Instead of combating a liberal talk show host, Republicans will have the herculean task of combating hundreds, if not thousands, of articles wrought with false information daily."
In response, Havens said Congress should step in to regulate AI, being careful not to infringe upon the free-speech rights of Americans while also putting up guardrails to keep them safe from fake content.
Samuel Mangold-Lenett, a staff editor at The Federalist, a conservative online magazine, voiced similar concerns, telling Fox News Digital that it is important to safeguard free speech when considering how to combat misinformation.
"We must remain committed to preserving free speech online. This includes so-called misinformation," Mangold-Lenett said. "The use of AI for disseminating deepfake pornography and for exploiting people in other ways, of course, ought to be illegal. But we have to become more AI-literate as a society, and we have to become more skeptical of the narratives we are being fed from every direction."
President Biden signs an executive order focused on government regulations on artificial intelligence in the East Room of the White House on Oct. 30, 2023. (Demetrius Freeman/Washington Post via Getty Images)
The problem isn't limited to the words generated for an article, according to the Washington Post report, which noted that AI can be used for everything from generating fake imagery to creating fake news anchors in videos. The content can look so real that users may not know they are being duped.
But Jon Schweppe, the policy director of American Principles Project, an organization that advocates on behalf of American families, doesn't believe the problem is particularly new.
"Misinformation isn’t a new problem. We’ve always had lies and the lying liars who tell them," Schweppe told Fox News Digital. "But in America, those people can be held legally responsible when they libel someone else and damage their reputation."
To combat the use of AI to create such fake content, Schweppe argued for updated laws that would allow those harmed by AI to hold others accountable.
"AI creators should be held legally responsible for their creations. If an AI hallucinates and consistently lies and defames someone, even a public figure, that person should have a right to their day in court," Schweppe said.
As fake news has proliferated across the internet, the people behind such websites have often found ways to manufacture a sense of legitimacy. In some instances, real writers produce legitimate content that appears alongside articles generated by AI. In other cases, human writers and AI team up to build a site devoid of any legitimate content.
Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital that while the fake news phenomenon is nothing new, AI can make it easier to churn out more content.
"This is going to get worse until rules are in place to manage it. And it probably can’t be stopped, but it can be slowed by having laws, regulations and requirements for using AI in news," Siegel said. "Think about theft … you can’t stop it, but you can set punishments that manage it. For these types of misinformation, watermarking of content, rules by the FTC and FEC with teeth and fines, and criminal charges if people are cheated or physically harmed because of this fraud."
Meanwhile, Pioneer Development Group Chief Analytics Officer Christopher Alexander pointed out that AI cannot create misinformation on its own.
"The notion that AI inherently spreads disinformation is a ridiculous assertion. AI is prompted by human beings and produces material. Like a firearm, AI can be a tool or a threat, depending on who is using it," Alexander told Fox News Digital. "This cartoonish, black-and-white view that 'AI is bad' presumes that all humans now are suddenly so truthful and reliable that AI is a unique problem."
Alexander said that humans working for what are typically viewed as legitimate news sources have also propagated misinformation, something that has drawn less attention and ire from lawmakers.
"While deepfakes are frustrating, the only way to stop AI-generated content is authoritarian censorship, which then makes us even worse than the propaganda we are trying to combat," Alexander said.