
Apparent AI mistakes force two judges to retract separate rulings
Two U.S. judges in separate federal courts scrapped their rulings last week after lawyers alerted them to filings that contained inaccurate case details or seemingly "hallucinated" quotes misattributed to cited cases — the latest in a string of errors suggesting the growing use of artificial intelligence in legal research and submissions.
In New Jersey, U.S. District Judge Julien Neals withdrew his denial of a motion to dismiss a securities fraud case after lawyers revealed the decision relied on filings with “pervasive and material inaccuracies.”
The filing pointed to "numerous instances" of made-up quotes submitted by attorneys, as well as three separate instances in which the outcomes of cited lawsuits appeared to be misstated, prompting Neals to withdraw his decision.
In Mississippi, U.S. District Judge Henry Wingate replaced his original July 20 temporary restraining order, which paused enforcement of a state law blocking diversity, equity and inclusion programs in public schools, after lawyers notified him of serious errors in the order.
They informed the court that the decision “relie[d] upon the purported declaration testimony of four individuals whose declarations do not appear in the record for this case.”
Wingate subsequently issued a new ruling, though lawyers for the state have asked that his original order be placed back on the docket.
“All parties are entitled to a complete and accurate record of all papers filed and orders entered in this action, for the benefit of the Fifth Circuit’s appellate review,” the state attorney general said in a filing.
A person familiar with Wingate’s temporary order in Mississippi confirmed to Fox News Digital that the erroneous filing submitted to the court had used AI, adding that they had “never seen anything like this” in court before.
Neither the judge's office nor the lawyers in question immediately responded to Fox News Digital's requests for comment on the retracted New Jersey order, which was first reported by Reuters. It was not immediately clear whether AI was behind the erroneous court submission in that case.
However, the errors in both cases — which were quickly flagged by attorneys and prompted the judges to revise or retract their orders — come as the use of generative AI continues to skyrocket in almost every profession, especially among younger workers.
In at least one of the cases, the errors bear similarities to AI-style inaccuracies, which include "ghost" or "hallucinated" quotes in filings and citations to incorrect or even nonexistent cases.
For bar-admitted attorneys, these erroneous court submissions are not taken lightly. Lawyers are responsible for the veracity of all information included in court filings, even when it is AI-generated, according to guidance from the American Bar Association.
In May, a federal judge in California slapped law firms with $31,000 in sanctions for using AI in court filings, saying at the time that “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
Last week, a federal judge in Alabama sanctioned three attorneys for submitting erroneous court filings that were later revealed to have been generated by ChatGPT.
Among other things, the filings in question included AI-generated quote "hallucinations," U.S. District Judge Anna Manasco said in her order, which also referred the lawyers to the state bar for further disciplinary proceedings.
“Fabricating legal authority is serious misconduct that demands a serious sanction,” she said in the filing.
New data from the Pew Research Center underscores the rise of AI tools among younger users.
According to a June survey, roughly 34% of U.S. adults say they have used ChatGPT, the artificial intelligence chatbot — roughly double the share who said the same in 2023.
The share of employed adults who use ChatGPT for work has risen by 20 percentage points since June 2023. Among adults under 30, adoption is even more widespread: 58% say they have used the chatbot.