The evolving landscape of artificial intelligence in journalism and content creation has sparked a heated debate, with media outlets accusing AI companies of unfairly leveraging their content. Now, legislation may be on the way to regulate the space.
The Hollywood Reporter reports that the media industry has recently expressed serious concerns over AI companies using its content to train chatbots such as OpenAI’s ChatGPT. Many worry that training chatbots on this data could harm publishers’ financial stability and editorial integrity. The issue can be traced back more than a decade, to when tech companies began sharing news content without compensating its creators, a practice many publishers claim led to a decline in media industry revenues.
This issue recently came to a head when The New York Times filed a lawsuit against AI chatbot makers OpenAI and Microsoft. The lawsuit, grounded in allegations of copyright infringement, accuses the companies of using the Times’ content without permission to train their AI tools, including ChatGPT and Microsoft’s Copilot. The NYT argues that this has diverted web traffic away from its site and, consequently, caused a substantial loss in advertising, licensing, and subscription revenue.
However, not everyone is convinced that the use of news content in AI training data is a problem. Some argue that the practice is protected under the “fair use” doctrine of copyright law, while others consider it infringement.
During a Senate Judiciary Committee hearing, lawmakers supported proposals requiring AI companies to establish licensing agreements with news organizations, a move aimed at safeguarding the intellectual property of media outlets. Senator Richard Blumenthal stressed the need for a licensing regime, stating, “We need to learn from the mistakes of our failure to oversee social media and adopt standards.”
Roger Lynch, CEO of Condé Nast, made a compelling case for legislative clarity during the hearing. He urged Congress to “clarify that the use of our content and other publications’ content for training and output of AI models is not fair use,” saying he believes that resolving this issue would allow the free market to facilitate appropriate licensing deals. Echoing this sentiment, Senator Josh Hawley described the need for compensation as “eminently sensible,” adding: “Why shouldn’t we expand the regime outward to say anyone whose data is ingested and regurgitated by generative AI — whether in name, image or likeness — has the right to compensation?”
A major point of discussion in the hearing was the necessity of new legislation to address AI’s impact on journalism. Curtis LeGeyt, CEO of the National Association of Broadcasters, opined, “I think it’s premature. If we have clarity that the current laws apply to generative AI, the market will work.” However, Lynch expressed frustration over the prolonged legal processes, stating: “A major concern is the amount of time to litigate, appeal, go back to the courts, appeal, and maybe make it to the Supreme Court to settle. Between now and then, many media companies will go out of business.”
Jeff Jarvis, a journalism professor, argued against “protectionist legislation for a struggling industry,” instead advocating a broader interpretation of the fair use doctrine. The Chamber of Progress, an organization representing tech giants, has argued that Section 230 of the Communications Decency Act should be interpreted to shield AI companies from copyright infringement claims. However, Senator Blumenthal has argued that AI firms should not be protected under Section 230 if they are sued over content produced by AI tools.
Read more at The Hollywood Reporter here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.