Government must hold accountable those who weaponize tools, including AI, against us
Today’s hot trend for policymakers is talking about artificial intelligence. This incredibly powerful technology is here to stay, and new research shows that most of us are optimistic about how generative AI will be able to improve our lives.
But there are some new and concerning threats to which policymakers must pay attention. This includes a horrific misuse of this positive tech: bad actors abusing AI to put real people in sexually explicit situations, including minors.
This criminal use of AI tools is not just reprehensible; it could destroy a person’s life and dignity. There is currently a gap in the law addressing such conduct that may allow bad actors to go free, and policymakers need to address it now to ensure the culprits can be appropriately held to account.
Hundreds of existing laws already criminalize many of the ways bad actors can abuse AI tools. If AI is used to commit fraud, we have laws against fraud. The same is true for using AI for illegal discrimination, and the list goes on.
Every existing law applies to AI like it does to its offline corollaries. But gaps do exist, and Americans are getting hurt.
One such gap involves prosecuting predators who handle sexually explicit images of minors. Today’s laws require such images to be real, live-shot photos. Abusers are exploiting this to escape justice, claiming in court that because a sexually explicit image of a real minor was "created by AI," it’s not "real," and thus not criminal. The letter of the law is allowing abusers to defeat its purpose.
At the same time, other bad actors leverage AI to generate "deepfakes" that place innocent people in compromising, sexually explicit positions. Think of this as a modern-day photoshopping of one person’s head onto another’s body – except now, the fake is far more difficult to distinguish.
Once again, the law here is murky at best. Prosecutors can pursue harassment or defamation charges, but because the image is "AI-generated" rather than a real photo, a legal gap remains that must be filled.
These holes in our laws could provide a dangerous haven for criminals, allowing them to hide behind the letter of the law while eviscerating its spirit. Legislative attention is urgently needed. Lawmakers should act, not to ban or overregulate AI with a red-tape wish list, but to patch these holes in existing law so criminals can’t work around it to abuse innocent people.
The first step for policymakers is to enact the Stop Deepfake CSAM Act. This simple bill updates existing child protection laws to make clear that using AI to "modify" sexual images of children is illegal. If a real child is depicted, even if the rest of the image is AI-generated, it is child pornography. It is illegal, and policymakers must ensure these predators go to prison.
Next, lawmakers should approve the Stop Non-Consensual Distribution of Intimate Deepfake Media Act. This updates existing state privacy laws to make clear that distributing AI-generated intimate images of an identifiable person with the intent to cause harm is illegal – addressing serious concerns about the horrific harassment enabled by "deepfakes."
These changes empower law enforcement to punish bad actors who use AI for nefarious purposes. Making something illegal will not by itself stop it, but it is a positive step toward securing recompense for harmed parties.
The pursuit of justice in the digital age must be dynamic and recognize the new dimensions AI introduces into criminal activity. This isn't merely a legislative update; it's a moral imperative.
The government must hold accountable those who weaponize tools, including AI, against us, ensuring that the digital world is an extension of our commitment to dignity, safety and justice.
AI presents us with a great opportunity to improve our lives in so many ways. Our leaders must make sure that optimism and opportunity are allowed to thrive. Lawmakers must act now to protect the innocent and affirm that in our digital experience, America carries forward the principles that define us as a civilized society.
AI must remain a force for good, not a tool to be illegally abused by the reprehensible.
Carl Szabo is vice president and general counsel for NetChoice, and professor of internet law at George Mason University's Antonin Scalia Law School.