Dame Victoria Sharp, President of the King’s Bench Division of the High Court of England and Wales, sat with Mr Justice Jeremy Johnson on a Friday afternoon in June 2025 and handed down a judgment that was, in its own cautious way, a warning to the entire legal system. The two judges had been alerted to a pair of cases in which lawyers had presented legal arguments built on nonexistent case citations. One, a £90 million lawsuit against Qatar National Bank, contained eighteen entirely fabricated case references. The other, a housing claim against the London Borough of Haringey, cited five fictitious cases. In both matters, the lawyers were referred to their professional regulators. And Sharp made clear in her ruling that guidance alone was no longer sufficient.
The debate in the nation’s legislative and judicial chambers has intensified since that June ruling. Labour MPs and cross-party organizations have been pushing for something stronger than a judicial warning: an emergency audit of AI use in the drafting of legal documents and, increasingly, statutory penalties. The issue is not hypothetical. In early 2025, researchers tracking AI hallucinations in court filings found fake citations appearing at a rate of about two per week. By the spring of that year, the rate had climbed to two or three per day. The problem was not getting better. It was accelerating.
Key Information: AI Misuse in UK Legal Proceedings
| Field | Details |
|---|---|
| Country | United Kingdom |
| Key Ruling | High Court warning — Dame Victoria Sharp, June 6, 2025 |
| Cases Involved | Ayinde v London Borough of Haringey (5 fake citations); Al-Haroun v Qatar National Bank (18 fake citations) |
| Value at Stake (QNB Case) | £90 million lawsuit |
| Penalty Range Identified | Public admonishment → wasted costs → contempt of court → perverting the course of justice (maximum: life imprisonment) |
| Referrals Made | Lawyers in both cases referred to Solicitors Regulation Authority and Bar Standards Board |
| Dame Sharp’s Key Statement | “Guidance on its own is insufficient to address the misuse of artificial intelligence” |
| Parliamentary Response | Cross-party group of 48 MPs called for urgent AI legislation in early 2026 |
| Labour Party Position | Pushing for a “statutory code” for AI — moving away from voluntary compliance |
| Hallucination Rate Growth | AI fake citations rising from 2 per week to 2–3 per day by spring 2025 (Sterne Kessler, Jan 2026) |
| UK Courts Using AI | Crown Prosecution Service, police forces, case management — AI deployed at every stage of the criminal process |
| Reference — IBA Analysis | International Bar Association on UK AI court risks |
The High Court judgment described the pattern starkly. Sharp wrote that freely accessible generative AI tools such as ChatGPT “are not capable of conducting reliable legal research.” They produce “apparently coherent and plausible responses to prompts,” yet those responses “may turn out to be entirely incorrect.” They cite sources that do not exist, and quote passages that do not appear in genuine ones. And because the output looks professional—formatted like real legal citations, with names that sound like real judges—busy junior lawyers under pressure can miss the fabrications entirely. In the Qatar National Bank case, the lawyer reportedly relied on a client’s research instead of conducting his own. Sharp called that “extraordinary”: the responsibility, she wrote, ought to run the other way.
The growing parliamentary pressure over AI in UK courts reflects a larger shift in how Labour and its supporters have begun to talk about AI governance. The party has been moving away from language about voluntary industry codes toward a statutory framework that would hold AI companies accountable for the harm their products cause and require them to disclose test data. In early 2026, a cross-party group of 48 MPs urged immediate AI legislation, citing what they called the “astronomic” increase in internet harms—a category that now includes the creation of fake legal content. The perception in Westminster is that judicial warnings alone have not changed behavior: lawyers are still reaching for the quick fix, and AI tools are still producing convincing-sounding fiction on demand.

The environment in which AI is already deployed across the UK court system makes the call for an emergency audit more pressing. This goes beyond lawyers cutting corners on research. The Crown Prosecution Service has been trialing Microsoft Copilot for staff use. Police forces are using AI for facial recognition and predictive analytics. In 2026, HM Prison and Probation Service will launch a new AI tool for sentence planning and risk assessment. AI is now present at every stage of the criminal justice process. A fabricated case citation is a serious danger, but it is not the only one. The deeper risk is that, without sufficient human scrutiny between the algorithm and the outcome, AI-generated content could feed directly into sentencing recommendations, risk assessments, and bail determinations.
It is difficult to ignore how much of the problem comes down to verification—or the lack of it. In the Haringey case, a junior barrister acknowledged that, after the fake citations were identified, a senior colleague had suggested she check her list of authorities against a reputable legal search engine. She could not explain where the citations had come from; in Sharp’s words, she had “not provided to the court a coherent explanation for what happened.” That gap is what worries people most: not the malicious use of AI, but its careless use. The tool that invents a case in assured prose, the junior lawyer who trusts the results, the partner who assumes the research was done properly. Three silent steps to a filing full of cases that never existed.
Whether Parliament will act swiftly enough to establish formal audit requirements before further harm is done remains an open question. The UK’s most significant proposed AI governance framework, the Artificial Intelligence and Data Act, is still in draft form. The Bar Council and the Law Society have issued guidelines. The SRA is considering mandatory disclosure rules. But as Dame Victoria Sharp herself put it in June 2025, guidance on its own is insufficient. The courts have begun to say so publicly. The question now is whether Parliament will listen before the next batch of fictitious cases surfaces in a filing far more consequential than a bank dispute or a housing claim.