Ontario Court to Rule on Whether AI Chat Logs Can Be Evidence in Fraud Case

It has an almost cinematic quality. A man under federal investigation sits down at his computer and types his most private legal concerns into an AI chatbot instead of calling his attorney or speaking with a paralegal. He poses queries. The AI responds. He saves the documents. Maybe he believes this is personal. Even cautious. Then the FBI arrives with a warrant.

That is essentially what happened to Bradley Heppner, a Dallas financial services executive accused of securities and wire fraud in the Southern District of New York. Before his arrest, Heppner had been researching the government’s investigation into his conduct using Anthropic’s Claude, a consumer AI chatbot rather than a secure legal platform. He fed the AI material he had received from his own defense lawyers, producing 31 documents. At some level, he thought he was being strategic. On February 10, 2026, Judge Jed Rakoff essentially informed him that he had been compiling his prosecutor’s list of exhibits.

Key Information Table

Case Reference: United States v. Heppner (SDNY, Feb. 10, 2026)
Key Figure: Bradley Heppner, former Dallas financial services executive
Charges: Securities fraud and wire fraud
Presiding Judge: U.S. District Judge Jed Rakoff, Southern District of New York
AI Tool Used: Anthropic’s Claude (consumer version)
Documents Seized: 31 AI prompt-and-response documents
Core Legal Question: Whether AI chatbot conversations qualify for attorney-client privilege or work-product protection
Court’s Ruling: AI chat logs are neither privileged nor work-product protected
Canadian Parallel: Toronto lawyer Jisuh Lee referred to Ontario Attorney General for criminal contempt over AI-hallucinated case law
Relevance to Ontario: Ontario courts increasingly confronting AI evidence questions in fraud and civil matters
Reference Website: National Law Review – Bonnie and Claude Ruling

The decision, the first of its kind to address the issue directly, found that AI chatbot conversations carry neither attorney-client privilege nor work-product protection. Claude does not practice law. By any reasonable legal definition, the conversations were not confidential. And Heppner’s later sharing of those documents with his actual lawyers gave them no retroactive protection. Judge Rakoff, one of the nation’s most closely watched federal judges, was not inclined to pretend otherwise; the law does not work that way.

That logic is now migrating to Canada. Ontario courts face the same uncomfortable frontier, having already been shaken by a different and embarrassing incident in which a Toronto lawyer submitted AI-hallucinated case citations and then lied about it. Whether AI chat logs can be used as evidence in fraud cases is no longer a speculative question. It is landing on dockets. And so far, the answer does not favor the people who typed the questions.

There is a certain irony in all of this that is difficult to ignore. People use AI because it feels like a private space where they can test theories, think aloud, and ask questions they’re too shy or embarrassed to ask anyone else. It’s a conversational interface. It reacts quickly. It doesn’t pass judgment. However, Anthropic’s own terms of service explicitly state that user inputs and outputs may be shared with government regulatory bodies. Whisper-in-a-confession-booth privacy does not exist here. No such thing ever existed. The majority of users either didn’t read the fine print or didn’t think it applied to them.

Each of the four reasons Judge Rakoff gave for why privilege failed in Heppner’s case is worth considering. First, no attorney was involved: an AI has no legal license, no duty of loyalty, and no professional responsibility obligations. Second, the tool itself disclaimed offering legal advice, so Heppner could not claim he was seeking legal advice from a platform that expressly declines to provide it. Third, there was no confidentiality: Anthropic’s policy permits user data to be shared with regulators and used for model training. Fourth, the work-product doctrine requires attorney direction, and Heppner’s defense team acknowledged that he had acted entirely on his own initiative.

The circumstances in Ontario add another layer. Toronto lawyer Jisuh Lee not only used AI carelessly in preparing court filings; she submitted documents with fake case citations, claimed not to have used ChatGPT, and, months later, sent an unsolicited letter acknowledging that she had lied out of “fear and sheer embarrassment.” Justice Frederick Myers of the Ontario Superior Court referred the matter to the province’s attorney general for criminal contempt proceedings, characterizing it as a “very unusual case” that could have long-term effects on the administration of justice. It may be the first time in Canadian legal history that a lawyer faces a criminal contempt charge over AI-generated hallucinations in court documents. That no one seems entirely certain is telling in itself.

The legal profession has been slow to prepare for the reckoning now emerging on both sides of the border. Lawyers have used AI. Clients have used AI. The courts are now methodically dismantling the vague assumption of privacy that both groups have been operating under. According to the McGuireWoods analysis that followed the Heppner ruling, asking an AI chatbot legal questions is legally equivalent to searching Google or checking a book out of the library. Perhaps useful. Protected, no. The practical ramifications are substantial for anyone navigating fraud litigation in Ontario or elsewhere. Discovery requests will now routinely seek all communications with AI-based tools, including prompts, inputs, and outputs. Deposition questions will probe whether defendants used AI to plan their defenses or to research their schemes. And if a debtor or defendant enlisted a chatbot as a thinking partner to plan strategy, research legal exposure, or organize asset transfers, courts are increasingly willing to treat the chatbot’s memory as a witness. A very literal and thorough one.

It remains unclear whether Canadian judges will adopt Rakoff’s logic wholesale, or how far Ontario courts will push this. The Canadian legal system has its own evidentiary framework, its own privacy expectations, and its own ongoing debates about integrating AI into courtroom procedure. But the direction of travel appears clear. The era of treating AI conversations as consequence-free private thinking is ending, not with a big announcement but with a series of quiet rulings most people won’t hear about until it’s too late.

Watching all of this play out, the legal profession seems only now to be realizing what it has allowed into the room. AI tools are conversational enough to invite candor, specific enough to look like research, and sophisticated enough to feel like advisors. But the courts are drawing a sharp line between private and private-feeling. In a fraud case, that distinction can be decisive.
