A family law dispute was unfolding in a Vancouver courtroom in late 2023 the way these cases usually do: documents were being exchanged, arguments were being prepared, and both sides were working to present their client’s case in the strongest possible light. Chong Ke, a lawyer at Westside Family Law, needed case law to support her claims. Significant assets were at stake in the dispute, the kind of high-net-worth family case where legal precedent matters and the quality of the research can affect the outcome. To find relevant cases, she turned to ChatGPT. It gave her two. She filed them with the Supreme Court of British Columbia. Neither existed.
What happened next has become, in Canadian legal circles, one of the most frequently cited examples of what occurs when a professional delegates professional work to a tool that does not know its own limitations. While preparing their response, the opposing lawyers, Lauren and Fraser MacLean of MacLean Law, noticed a problem with the citations. Legal citations can be independently verified: their case names, reporting series, and paragraph numbers can be looked up in databases such as CanLII or Westlaw. The MacLeans searched and found nothing. Not outdated cases, not obscure precedents, nothing. The cases Ke had submitted to the court simply did not appear in any legal record. ChatGPT had fabricated them with total assurance, in the authoritative, fluent style that makes large language model outputs so easy to mistake for fact.
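The verification step the MacLeans performed is, at bottom, a lookup: does the cited case resolve to a real record in an authoritative database? As a rough illustration only, the Python sketch below checks a citation against CanLII. CanLII does publish a REST API at api.canlii.org that requires a registered key, but the exact path shape, the 404-on-missing-case behaviour, and the `database_id`/`case_id` parameters shown here are assumptions made for the sake of the example, not a verified integration.

```python
import os

import requests

# Hedged sketch, not a verified integration: CanLII's REST API requires a
# registered key, and the caseBrowse path shape and 404 behaviour assumed
# below are illustrative assumptions for this example.
API_KEY = os.environ["CANLII_API_KEY"]  # assumed env var holding a CanLII key
BASE_URL = "https://api.canlii.org/v1"


def citation_resolves(database_id: str, case_id: str) -> bool:
    """Return True if the case lookup succeeds, False if it returns 404.

    database_id and case_id are illustrative parameters derived from the
    citation being checked, e.g. a court database such as "bcsc" and a
    neutral-citation-style case identifier.
    """
    url = f"{BASE_URL}/caseBrowse/en/{database_id}/{case_id}/"
    response = requests.get(url, params={"api_key": API_KEY}, timeout=10)
    if response.status_code == 404:
        return False  # the cited case does not exist in this database
    response.raise_for_status()  # surface auth or rate-limit errors loudly
    return True  # the citation resolves to real case metadata
```

The point is not this particular API: any authoritative source that lets a citation be resolved to a real record, Westlaw included, supports the same seconds-long check that was never run before the fabricated cases were filed.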
**The Chong Ke AI Hallucination Case: Key Facts**

| Item | Details |
|---|---|
| Lawyer Involved | Chong Ke of Westside Family Law, Vancouver, who used ChatGPT to research case law for a high-net-worth family dispute heard in the BC Supreme Court in late 2023 |
| What Happened | ChatGPT “hallucinated” (fabricated) two case law citations that did not exist; Ke submitted both to the BC Supreme Court as legitimate legal authority without independently verifying their existence |
| Who Caught It | Lawyers Lauren and Fraser MacLean of MacLean Law, who identified the fictitious citations while preparing their opposing case and formally challenged the references |
| Court Ruling (Feb 2024) | Justice David Masuhara reprimanded Ke, describing the submission of fabricated cases as an abuse of process “tantamount to making a false statement to the court”; Ke was ordered to personally pay costs to the opposing party |
| Court’s Key Statement | “Generative AI is still no substitute for the professional expertise that the justice system requires of lawyers”; the court found no intentional malice but criticized Ke’s lack of professional diligence in verifying AI outputs |
| Ke’s Response | Publicly described the experience as a “living nightmare”; one of the first Canadian lawyers to face formal court sanction specifically for submitting AI-generated fabricated legal citations |

**Related BC Legal AI Cases (2024–2026)**

| Case | Summary |
|---|---|
| CanLII v. Caseway AI | The Canadian Legal Information Institute sued legal tech startup Caseway AI in 2024 for allegedly scraping its legal database; the case settled in March 2026 |
| Vancouver Author Class Action | A Vancouver author filed a 2025 class action against Nvidia, Meta, and others, alleging the unauthorized use of copyrighted books to train large language models |
In his February 2024 ruling, Justice David Masuhara did not treat the incident as a technicality. He characterized the filing of fictitious cases as an abuse of process, calling it “tantamount to making a false statement to the court”, language that carries considerable weight in a legal culture built on professional duties of thoroughness and honesty. Ke was ordered to pay the opposing party’s costs. The court noted that there was no evidence of intentional malice, and that this was a catastrophic failure of verification rather than an attempt to deceive.
The decision made plain, however, that ignorance of an AI tool’s capabilities and limitations does not relieve a lawyer of the professional obligation to ensure that the authorities presented to a court are genuine. Ke later described the episode as a “living nightmare”, which is understandable but understates the impact it had on her career. About six months before the Canadian case, a strikingly similar scenario had played out in a federal court in New York, where two attorneys filed ChatGPT-generated case citations in a personal injury lawsuit and were sanctioned once the fictitious cases were discovered.
The Canadian legal community noticed the parallel. These were not isolated incidents in jurisdictions with peculiar legal cultures; they were symptoms of a profession that had, in places, adopted powerful AI tools without building equivalent frameworks for handling their known failure modes. Hallucination, the tendency of language models to produce information that sounds plausible but is entirely fabricated, is one of the most widely documented limitations of contemporary AI systems. And yet it evidently still catches skilled professionals off guard when they are working fast, under pressure, and without enough skepticism.

The courts in British Columbia have since been working through a series of AI-related legal disputes. In 2024, CanLII, the Canadian Legal Information Institute, which provides free access to Canadian legal databases and serves as a vital research tool for lawyers nationwide, sued a legal technology startup named Caseway AI. The lawsuit alleged that Caseway AI had scraped CanLII’s database without permission to train its legal AI product; the case settled in March 2026. Separately, in 2025, a Vancouver author filed a class action against Nvidia, Meta, and other technology companies, alleging that copyrighted books had been used without permission to train large language models, joining a broad and still-growing global wave of copyright litigation against AI developers.
Looking at this cluster of cases out of British Columbia and Vancouver, the region appears to be confronting a set of questions that any jurisdiction with a mature legal system will eventually face, just sooner and more publicly than some. The Chong Ke case is not primarily about one lawyer’s error. It is about what happens when a profession’s tools change faster than the norms for their responsible use. The BC Supreme Court’s declaration that “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers” reads less as a criticism of Ke in particular than as a principle the legal profession as a whole is still working to operationalize.
Alongside the worry, it’s hard not to feel some sympathy. The tools are genuinely useful. The failure mode is genuinely dangerous. And the profession is plainly still figuring out where to draw the line.