Crikey
Cam Wilson

People are citing non-existent cases and giving rambling statements in court. Judges blame AI

Australians are filing false or misleading evidence in court cases, material that judges suspect is being produced by ChatGPT and other generative artificial intelligence products.

In cases before Australian supreme courts, family courts and tribunals, judgements seen by Crikey show judicial officers slamming people, companies and legal representatives for filing submissions that cite dozens of cases that don’t exist or aren’t relevant, or that are bizarrely written and completely nonsensical.

In many cases, judges are unable to say definitively how the submissions were written, but suggest that AI may have played a role because of tell-tale signs such as outright fabrication of legal material and the distinctive language used.

Lawyers and other legal representatives have admitted to using AI technology incorrectly. In a family law case about property orders, a lawyer named as “Mr B” told Judge Amanda Humphreys that he had used the “legal software package” LEAP to produce a list of cases that, as it turns out, did not exist.

“I asked if LEAP relies on artificial intelligence. He indicated that it does, answering ‘there is an artificial intelligence for LEAP,'” Humphreys wrote in her judgement.

She gave Mr B a month to respond to her proposal to refer him to the Legal Services Board for investigation over the filing of false information, but a subsequent judgement has not been published.

In another case, a legal agent filed 600 pages of “rambling, repetitive, nonsensical” material at a Queensland Civil and Administrative Tribunal hearing about a dispute over a strata contract; the presiding officer, Member Robert King-Scott, suspected the material was written by artificial intelligence.

Sometimes, it is ordinary people who appear to be using the technology in court. In an appeal against failed defamation proceedings in the Supreme Court of Tasmania, Justice Alan Michael Blow said the submissions made by the appellant, Dr Natasha Lakaev, were “surprising” because they cited a case that does not exist: “Hewitt v Omari [2015] NSWCA 175”. Another submission incorrectly (and ironically) claimed a case was about false evidence (it was actually about international child abduction law).

In his judgement, Justice Blow suggested that these errors may have been the result of AI: “When artificial intelligence is used to generate submissions for use in court proceedings, there is a risk that the submissions that are produced will be affected by a phenomenon known as ‘hallucination’.” Lakaev did not respond to a request for comment.

In a long-running wrongful termination case, an application by Jo-Anne Finch to stop law firm MinterEllison from representing her opponent was denied after she provided a list of 24 previous cases in which the firm had supposedly been restrained from acting, all of which were non-existent.

“I asked Ms Finch if she had used ChatGPT to generate the names, citations and summaries of authorities that she claimed were examples of cases where MinterEllison had been restrained from acting for a client. She said that she did not know what ChatGPT was,” Judge Heather Riley wrote. The judgement says that Finch, who did not respond to an Instagram message, apologised for the “serious issue” of misleading the court.

Judges have also disregarded personal references they suggest were written by AI. ACT Supreme Court Justice David Mossop decided he could give “little weight” to a reference provided by the defendant’s brother because he believed its writing style was consistent with the use of AI-generated text.

“One would expect in a reference written by his brother that the reference would say that the author was his brother and would explain his association with the offender by reference to that fact, rather than by having known him ‘personally and professionally for an extended period’,” Mossop said. The defendant’s lawyer did not respond to a request for comment.

There have even been cases where ChatGPT prompts appear to have been left in documents tendered to the court. In a proceeding filed by the ACCC, the consumer watchdog pointed out that a resolution from a company, DG Institute, ended with “use British spelling please ChatGPT”. 

Justice Ian Jackman responded by saying he did not believe the use of artificial intelligence was a “significant matter” in the proceedings, but offered some advice for people considering using the technology to help with legal matters.

“I recognise that artificial intelligence does have a role to play in certain aspects of legal drafting,” his judgement said.

“In my view, the important aspect, in circumstances where artificial intelligence is used, is that any such draft is scrutinised and settled by a legal practitioner, and there is nothing to suggest that that was not done in the present case.”
