The legal action began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.
When Avianca asked a Manhattan federal judge to toss out the case, Mr Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than a half-dozen relevant court decisions. There was Martinez v Delta Air Lines, Zicherman v Korean Air Lines and, of course, Varghese v China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations”.
There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarised in the brief. That was because ChatGPT had invented everything.
The lawyer who created the brief, Steven Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court, saying in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable”.
Mr Schwartz, who has practised law in New York for three decades, told Judge P. Kevin Castel that he had no intent to deceive the court or the airline. He said he had never used ChatGPT before and "therefore was unaware of the possibility that its content could be false". He had, he told the judge, even asked the program to verify that the cases were real.
It had said yes.
‘Greatly regrets’
Mr Schwartz said he “greatly regrets” relying on ChatGPT “and will never do so in the future without absolute verification of its authenticity”.
Judge Castel said in an order that he had been presented with "an unprecedented circumstance", a legal submission replete with "bogus judicial decisions, with bogus quotes and bogus internal citations". He ordered a hearing for June 8th to discuss potential sanctions.
As AI sweeps the online world, it has conjured dystopian visions of computers replacing not only human interaction but also human labour. The fear has been especially intense for knowledge workers, many of whom worry that their daily activities may not be as rarefied as the world thinks — but for which the world pays billable hours.
The real-life case of Roberto Mata v Avianca Inc shows that white-collar professions may have at least a little time left before the robots take over.
It began on August 27th, 2019, when, according to the lawsuit, an airline employee bonked Mr Mata, a passenger on Avianca Flight 670 from El Salvador to New York, with the serving cart. After Mr Mata sued, the airline filed papers asking that the case be dismissed because the statute of limitations had expired.
In a brief filed in March, Mr Mata’s lawyers said the lawsuit should continue, bolstering their argument with references and quotes from the many court decisions that have since been debunked. Soon, Avianca’s lawyers wrote to Judge Castel, saying they were unable to find the cases that were cited in the brief.
The judge ordered Mr Mata’s attorneys to provide copies of the opinions referred to in their brief. The lawyers submitted a compendium of eight decisions; for most, they listed the court and judges who issued them, the docket numbers and dates.
Not on databases
The copy of the supposed Varghese decision, for example, is six pages long and says it was written by a member of a three-judge panel of the 11th Circuit. But Avianca’s lawyers told the judge that they could not find that opinion, or the others, on court dockets or legal databases.
Bart Banino, a lawyer for Avianca, said his firm, Condon & Forsyth, specialised in aviation law and that its lawyers could tell the cases in the brief were not real. He added that they had an inkling that a chatbot might have been involved.
Mr Schwartz did not respond to a message seeking comment, nor did Peter LoDuca, another lawyer at the firm, whose name appeared on the brief.
Mr LoDuca said in an affidavit this week that he did not conduct any of the research in question and that he had “no reason to doubt the sincerity” of Mr Schwartz’s work or the authenticity of the opinions.
ChatGPT generates realistic responses by making guesses about which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples of text pulled from all over the internet. In Mr Mata’s case, it appears to have discerned the labyrinthine framework of a written legal argument but has populated it with names and facts from a bouillabaisse of existing cases.
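To make the point concrete, here is a minimal sketch of that idea, assuming toy probabilities and deliberately invented figures. It illustrates weighted next-fragment sampling in general, not OpenAI's actual model, which scores individual tokens rather than whole phrases.

```python
import random

# Toy illustration: a language model assigns probabilities to candidate
# continuations of a prompt and picks one at random, weighted by how
# plausible each sounds. The probabilities and case names here are
# invented for this example.
next_fragment_probs = {
    "Varghese v China Southern Airlines": 0.40,
    "Zicherman v Korean Air Lines": 0.35,
    "Martinez v Delta Air Lines": 0.25,
}

def sample_next(probs):
    """Sample one continuation, weighted by probability.

    Note that nothing in this step consults a legal database: the
    model optimises for plausible-sounding text, not for truth.
    """
    fragments = list(probs)
    weights = list(probs.values())
    return random.choices(fragments, weights=weights, k=1)[0]

print("The brief cites", sample_next(next_fragment_probs))
```

At no point in such a process is there a step that checks whether the sampled text describes a real court decision, which is why a fluent, confident answer can be entirely fabricated.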
Judge Castel, in his order calling for a hearing, suggested that he had made his own inquiry. He wrote that the clerk of the 11th Circuit had confirmed that the docket number printed on the purported Varghese opinion was connected to an entirely different case.
‘Bogus’
Calling the opinion "bogus", the judge noted that it contained internal citations and quotes that, in turn, were nonexistent. He said that five of the other decisions submitted by Mr Mata’s lawyers also appeared to be fake. On Thursday, Mr Mata’s lawyers offered affidavits containing their version of what had happened.
Mr Schwartz wrote that he had originally filed Mr Mata’s lawsuit in state court, but after the airline had it transferred to Manhattan’s federal court, where he is not admitted to practice, one of his colleagues, Mr LoDuca, became the attorney of record. Mr Schwartz said he had continued to do the legal research, in which Mr LoDuca had no role.
Mr Schwartz said that he had consulted ChatGPT “to supplement” his own work and that, “in consultation” with it, found and cited the half-dozen nonexistent cases. He said ChatGPT had provided reassurances.
“Is Varghese a real case,” he typed, according to a copy of the exchange that he submitted to the judge.
“Yes,” the chatbot replied, offering a citation and adding that it “is a real case”.
Mr Schwartz dug deeper.
“What is your source,” he wrote, according to the filing.
“I apologize for the confusion earlier,” ChatGPT responded, offering a legal citation.
“Are the other cases you provided fake?” Mr Schwartz asked.
ChatGPT responded, “No, the other cases I provided are real and can be found in reputable legal databases.”
But, alas, they could not be.
This article originally appeared in The New York Times.
© 2023 The New York Times Company