Call of Duty, a shooter video game published by Activision, has started using artificial intelligence to monitor what players say during online matches, flagging and cracking down on "toxic speech" more effectively as online gaming looks poised to become the newest frontier of censorship.
Activision said recently on its blog that Call of Duty is doubling down on its fight against "hate speech" and other types of "toxic and disruptive behavior" among players in online chatrooms by enlisting the help of AI to identify and police player conduct.
"Call of Duty’s new voice chat moderation system utilizes ToxMod, the AI-Powered voice chat moderation technology from Modulate, to identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more," the company said in the post.
The speech policing algorithms, which online players have no ability to turn off, will monitor and record what they say in order to identify speech that the company deems unfit for its virtual game spaces.
Strict penalties await violators of Activision's online speech rules, which bar derogatory comments based on race, sexual orientation, or "gender identity or expression."
Punishment for rule-breakers ranges from temporary suspensions and account renaming to permanent bans and stat resets.
"Any user who is found to use aggressive, offensive, derogatory, or culturally charged language is subject to penalty," the company warns in its Call of Duty Security and Enforcement Policy.
"Cyber-bullying and other forms of harassment are considered extreme offenses and will result in the harshest penalty," it added.
AI-Powered Censorship
Around 60 million people—mostly men—play Call of Duty every month, according to activeplayer.io, and a common part of the multiplayer online gaming experience is banter.
But Activision has been keen to keep that banter within guardrails, as stipulated in its Code of Conduct.
"We do not tolerate bullying or harassment, including derogatory comments based on race, gender identity or expression, sexual orientation, age, culture, faith, mental or physical abilities, or country of origin," the rules stipulate.
"Communication with others, whether using text or voice chat, must be free of offensive or harmful language. Hate speech and discriminatory language is offensive and unacceptable, as is harassment and threatening another player," the Code of Conduct further states.
Brandon "Dashy" Otell of OpTic Texas during the Call of Duty League Pro-Am Classic in Columbus, Ohio on May 5, 2022. (Joe Brady/Getty Images)
A beta version of the speech policing system has already been rolled out in North America in Call of Duty: Modern Warfare II and Call of Duty: Warzone.
A full version of the AI-powered speech enforcer will be rolled out worldwide along with the upcoming release of Call of Duty: Modern Warfare III on Nov. 10, the company said.
The launch of Modern Warfare II last year saw the introduction of an overhauled in-game reporting system that provided more ways to report offensive behavior and gave new tools to Activision's moderation team to "combat toxicity," the company said in an earlier post.
The upgraded system allowed moderation teams to respond to reported violations of its Code of Conduct by, for example, restricting player features or muting them globally from all in-game voice chat features.
Since introducing its enhanced speech policing features last year, Activision says it has (as of Aug. 30) restricted voice and/or text chat on over 1 million accounts.
Data from the company indicates that around 20 percent of players did not re-offend after receiving a first warning.
Activision's move to use AI to turbo-boost its crackdown on "toxic speech" comes amid a growing appetite in corporate America to suppress expression deemed too offensive and out of sync with the norms of the day.
That, in turn, comes as recent polling shows a growing trend in the share of Americans who favor government restrictions on false information online.
Fifty-five percent of Americans in 2023 say the U.S. government "should take steps to restrict false information online, even if it limits freedom of information," according to a July 20 Pew Research Center survey.
"Fifty-five percent is a majority and is a milestone that shouldn't be ignored," James Gorrie, author of "The China Crisis," wrote in a recent op-ed in The Epoch Times.
"If more than half of adults can’t think for themselves and prefer censorship to free speech, it seems apparent that the reeducation of America over the past several decades is nearly complete," he added.
In 2021, the share of adults who said the government should censor fake news, even at the cost of freedom of expression, was 48 percent. And in 2018, that share was just 39 percent, suggesting that support for censorship has risen steadily in recent years.