Featured

‘Nauseatingly Frightening:’ Major Law Firm Warns Lawyers of the Dangers of Using AI in Legal Filings

A judge pounding a gavel (Wasan Tita/Getty)

Morgan & Morgan, one of the largest injury law firms in the United States, has issued a stern warning to its attorneys about the dangers of relying on AI tools when citing case law in court filings. One of the firm’s lawyers filed a legal brief filled with fake citations that ChatGPT had “hallucinated.”

Ars Technica reports that in an internal letter shared in a court filing, Morgan & Morgan’s chief transformation officer cautioned the firm’s more than 1,000 attorneys that citing fake AI-generated cases in court documents could lead to serious consequences, including potential termination. This warning comes after one of the firm’s lead attorneys, Rudwin Ayala, cited eight cases in a lawsuit against Walmart that were later discovered to have been generated by ChatGPT, an AI chatbot.

The incident has raised concerns about the growing use of AI tools in the legal profession and the potential risks associated with relying on these tools without proper verification. Walmart’s lawyers urged the court to consider sanctions against Morgan & Morgan, arguing that the cited cases “seemingly do not exist anywhere other than in the world of Artificial Intelligence.”

In response to the incident, Ayala was immediately removed from the case and replaced by his supervisor, T. Michael Morgan, Esq. Morgan expressed “great embarrassment” over the fake citations and agreed to pay all fees and expenses related to Walmart’s reply to the erroneous court filing. He emphasized that this incident should serve as a “cautionary tale” for both his firm and the legal community as a whole.

Morgan added, “The risk that a Court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought.” He later admitted that AI can be “dangerous when used carelessly.”

The use of AI in the legal field has become increasingly common, with a July 2024 Reuters survey revealing that 63 percent of lawyers have used AI and 12 percent use it regularly. However, the incident at Morgan & Morgan highlights the importance of responsible AI use and the need for lawyers to independently verify the information generated by these tools.

To prevent similar incidents from occurring in the future, Morgan & Morgan has implemented new policies and safeguards. The firm’s technology team and risk management members have met to discuss and implement further measures to ensure the proper use of AI. Additionally, a checkbox acknowledging AI’s potential for hallucinations has been added to the firm’s internal AI platform, requiring attorneys to acknowledge this risk before accessing the tool.

Breitbart News previously reported on a lawyer in New York who faced sanctions after filing a legal brief filled with false citations generated by ChatGPT.

Read more at Ars Technica here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

February 20th 2025