The Holding: On February 17, 2026, Judge Jed Rakoff of the Southern District of New York ruled in U.S. v. Bradley Heppner that conversations with an AI chatbot are not protected by attorney-client or work product privileges.
The Facts: Mr. Heppner, the former CEO of a publicly traded company, was indicted on charges of making false representations to investors and defrauding them of $150 million. Before his arrest, Mr. Heppner held several conversations with Claude, an AI chatbot, to discuss his defense strategy and prepare for his grand jury testimony. He argued that these conversations were privileged because he drew on information learned from his counsel, generated the chats to facilitate his receipt of legal advice, and later provided the chats to his counsel. His counsel conceded, however, that Mr. Heppner generated the chats on his own initiative.
The Court’s Rationale: The Court ruled that Mr. Heppner’s conversations with Claude were not privileged because the communications were not between Mr. Heppner and his counsel. The Court further held that Mr. Heppner had no reasonable expectation of confidentiality when using Claude, as its privacy policy warns users that their inputs and outputs may be used to train its machine learning models and that their data may be disclosed to third parties. The Court also rejected Mr. Heppner’s contention that he used Claude for the purpose of obtaining legal counsel, as he admitted that he did not use Claude at his counsel’s direction. Moreover, when federal prosecutors asked Claude whether it could provide legal advice, it replied: “I’m not a lawyer and can’t provide formal legal advice or recommendations.” The Court accordingly denied Mr. Heppner’s claims of attorney-client privilege and work product protection.
The Implications: Although Mr. Heppner’s matter is a case of first impression, this issue will recur as litigants increasingly rely on generative AI. Workers experiencing discrimination or wage theft should be wary of using generative AI for legal advice, as the resulting chats may be discoverable and weaponized in litigation.