The Guardian - US
World
Canada lawyer under fire for submitting fake cases created by AI chatbot

Leyland Cecco in Toronto

The judge found Ke took steps to correct her errors. Photograph: Michael Dwyer/AP

A lawyer in Canada is under fire after the artificial intelligence chatbot she used for legal research created “fictitious” cases, in the latest episode to expose the perils of untested technologies in the courtroom.

The Vancouver lawyer Chong Ke, who now faces an investigation into her conduct, allegedly used ChatGPT to develop legal submissions during a child custody case at the British Columbia supreme court.

According to court documents, Ke was representing a father who wanted to take his children overseas on a trip but was locked in a separation dispute with the children’s mother. Ke is alleged to have asked ChatGPT for instances of previous case law that might apply to her client’s circumstances. The chatbot, developed by OpenAI, produced three results, two of which she submitted to the court.

The lawyers for the children’s mother, however, could not find any record of the cases, despite multiple requests.

When confronted with the discrepancies, Ke backtracked.

“I had no idea that these two cases could be erroneous. After my colleague pointed out the fact that these could not be located, I did research of my own and could not detect the issues either,” Ke wrote in an email to the court. “I had no intention to mislead the opposing counsel or the court and sincerely apologize for the mistake that I made.”

Chatbots such as ChatGPT are trained on extensive troves of data, but despite their popularity the programs are prone to errors known as “hallucinations”, in which they confidently produce false or fabricated information.

Lawyers representing the mother called Ke’s conduct “reprehensible and deserving of rebuke” because it led to “considerable time and expense” to determine if the cases she cited were real.

They asked for special costs to be awarded, but the judge overseeing the case rejected the request, saying such an “extraordinary step” would require “a finding of reprehensible conduct or an abuse of process” by the lawyer.

“Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court,” wrote Justice David Masuhara. “Unchecked, it can lead to a miscarriage of justice.”

He found that opposing counsel was “well-resourced” and had already produced “volumes” of materials in the case. “There was no chance here that the two fake cases would have slipped through.”

Masuhara said Ke’s actions produced “significant negative publicity” and she was “naive about the risks of using ChatGPT”, but he found she took steps to correct her errors.

“I do not find that she had the intention to deceive or misdirect. I accept the sincerity of Ms Ke’s apology to counsel and the court. Her regret was clearly evident during her appearance and oral submissions in court.”

Despite Masuhara’s refusal to award special costs, the Law Society of British Columbia is now investigating Ke’s conduct.

“While recognizing the potential benefits of using AI in the delivery of legal services, the Law Society has also issued guidance to lawyers on the appropriate use of AI, and expects lawyers to comply with the standards of conduct expected of a competent lawyer if they do rely on AI in serving their clients,” a spokesperson, Christine Tam, said in a statement.
