In an era where artificial intelligence (AI) and digital manipulation are advancing at an unprecedented pace, the term "deepfake" has become a significant part of the industry lexicon. Deepfakes, highly realistic yet false depictions created using AI, represent a profound challenge for both technology and society.
While these tools can be used for education, creativity and even mental health support, their misuse can lead to misinformation, fraud and abuse. However, incorrectly using the term "deepfake" itself can also have dangerous consequences, as illustrated by President Biden's press secretary, Karine Jean-Pierre.
Jean-Pierre recently labeled a series of real, viral videos of Biden as "deepfakes." The move was met with a wave of criticism, highlighting a significant issue: genuine footage is not a deepfake, and this was a sloppy attempt by the White House to discredit the videos.
Such mischaracterizations can erode public trust and undermine the government’s efforts to combat actual deepfake technology. As Fox News contributor Guy Benson pointed out, while the videos of Biden may be characterized as unflattering or taken out of context, labeling them as deepfakes is misleading and constitutes misinformation.
Deepfakes are hard to define precisely, but they are essentially AI-altered images or recordings that deceptively depict someone doing or saying something they never actually did or said. What deepfakes certainly are not is real videos that make a politician look bad.
Calling a real video a deepfake is a dangerous tactic by the White House, one that can backfire by fostering cynicism and distrust among the public. When officials mislabel real footage as manipulated, it can appear as though they are attempting to obscure or deflect from legitimate issues.
As Sen. Mike Lee, R-Utah, and other critics have noted, transparency and honesty are crucial in maintaining public trust, especially in an age where such trust is at a low point.
Moreover, the mischaracterization of real videos as deepfakes undermines the serious work being done to address the real dangers posed by true deepfakes. For instance, deepfakes have been used to create misleading political content, impersonate individuals for financial scams and even produce non-consensual explicit material.
Addressing these threats requires precise definitions and targeted legislative action, not the dilution of the term to save political skin.
Unfortunately, mislabeling real content as fake is not a new practice among Democrats. We saw this with the Hunter Biden laptop story, initially dismissed as Russian disinformation, only to later be verified.
This repeated pattern of labeling inconvenient truths as "fake" erodes public trust. The recent examples of government pressure on social media platforms to block content, as seen with the Twitter Files revelations, further illustrate the dangers of such tactics. When the government intimidates online services to silence certain narratives, it undermines free expression and the public's ability to make informed decisions.
Adding to the complexity, pending AI legislation in Congress, along with Biden's broad executive order on AI, threatens to overregulate this burgeoning field. Biden’s executive order risks stifling innovation through heavy-handed regulations. It is critical that any new legislation carefully balances the need for oversight with the imperative to foster innovation.
Fortunately, existing laws already provide a strong foundation for addressing AI-related harms. We don't need redundant regulations that could hamper America's leadership in AI development.
The Federal Trade Commission (FTC) can look out for consumer welfare by enforcing laws against unfair and deceptive practices, including those involving AI and digital manipulation. Laws against fraud, harassment and election interference apply equally to malicious uses of deepfake technology.
However, there are areas where legal updates are necessary. For example, the Stop Deepfake CSAM Act would clarify that AI-manipulated sexual images exploiting real minors are illegal under existing federal child pornography statutes. The Stop Non-Consensual Distribution of Intimate Deepfake Media Act would additionally protect Americans from having their likenesses used in fabricated sexual content without consent.
Public officials should not carelessly throw around the term "deepfake" to distract the public and undermine critical legislative efforts. It is essential that policymakers maintain clear and precise language when discussing digital manipulation technologies to ensure that our policies effectively address the real threats without stifling innovation.
Carl Szabo is vice president and general counsel for NetChoice, and a professor of internet law at George Mason University's Antonin Scalia Law School.