Two federal courts reached opposite conclusions on AI and privilege in the same week. The difference wasn't the AI tool; it was the architecture. If privileged text reaches a third-party server in readable form, you have a disclosure problem that no enterprise agreement fully solves. The alternative: ensure privileged content never leaves your environment. This post breaks down the rulings, the risks, and the architectural choice that makes the privilege question disappear.
Table of Contents
- 01 The Ruling That Changes Everything
- 02 "Just Opt Out of Training. Problem Solved."
- 03 What "No Training" Doesn't Fix
- 04 So Why Should AI Be Different?
- 05 Warner v. Gilbarco: Same Week, Opposite Result
- 06 The Risk Calculus
- 07 The IP Exposure Nobody's Talking About
- 08 This Isn't Theoretical
- 09 The Position That Makes the Question Disappear
- 10 The Wrong Conversation
- 11 What ABA Opinion 512 Actually Says
- 12 The Numbers Behind the Anxiety
- 13 Three Approaches to Confidential Text
- 14 Beyond Pseudonymization: The Detection Stack
- 15 What the Market Will Demand
- 16 Three Questions for Monday Morning
On February 10, 2026, Judge Jed Rakoff of the Southern District of New York issued a ruling in United States v. Heppner that should fundamentally change how every lawyer thinks about AI and confidentiality.
The defendant had used a consumer AI tool to create documents relating to a criminal investigation. The government moved to compel production, and the judge agreed: the documents lacked protection under either the attorney-client privilege or the work product doctrine.
"The defendant had disclosed it to a third-party, in effect, AI, which had an express policy of sharing user content with third parties."
- Judge Jed Rakoff, United States v. Heppner (S.D.N.Y. 2026)

The ruling centered on the AI tool's privacy policy permitting training use and third-party disclosure, destroying the confidentiality requirement essential to both attorney-client privilege and work product protection.
The immediate pushback after the ruling was predictable:
- Enterprise AI tiers don't train on customer data by default
- Consumer plans offer training opt-out checkboxes
- Gmail, Google Drive, and iCloud present identical third-party transmission issues
The Harvard Law Review offered a sharper critique: Judge Rakoff treated "Claude…more like a non-attorney human than a tool."
The training argument is a red herring. Whether AI trains on your data is the wrong question. The real issue runs deeper, and "no training" doesn't fix it.
Three critical problems persist even with training disabled:
- Server Transmission. Readable text reaches provider servers. Unlike encrypted cloud storage, an LLM must ingest the full text, decrypting, tokenizing, and processing it within the provider's infrastructure. Your privileged content exists in readable form on someone else's servers.
- Safety Monitoring. All major providers run safety classifiers on inputs and outputs. Anthropic retains flagged content and safety classifier results even under zero-data-retention agreements. Employees can access flagged content for review.
- Retention Carve-outs. Enterprise agreements include exceptions for legal compliance, abuse detection, and safety. Anthropic's zero-data-retention offering retains "User Safety classifier results in order to enforce their Usage Policy." Flagged conversations are retained up to two years.
Even with the strictest enterprise agreement, your text reaches provider servers in readable form, passes through safety classifiers, and may be retained under carve-out provisions. "No training" addresses one vector while leaving three others open.
The Google Docs Comparison
Cloud storage has fifteen years of case law establishing that privilege survives third-party hosting when enterprise agreements are in place. Courts developed frameworks; lawyers have settled precedent to rely on.
AI's Unique Exposure
AI has two contradictory rulings from the same week in 2026. The future may treat enterprise AI like cloud storage, but that precedent doesn't exist yet. The present uncertainty is dangerously acute.
Cloud storage: 15 years of case law. AI: two contradictory rulings from the same week. The trajectory may converge, but betting your client's privilege on that trajectory is a different kind of risk.
On February 17, 2026, just one week after Heppner, Magistrate Judge Anthony Patti in the Eastern District of Michigan held that AI-assisted work product was protected, treating AI as a tool (like a word processor) rather than a third party.
Why the Courts Diverged
| Variable | Heppner (Privilege Lost) | Warner (Privilege Preserved) |
|---|---|---|
| Court | S.D.N.Y. (Judge Rakoff) | E.D. Mich. (Mag. Judge Patti) |
| Context | Criminal defendant, no attorney | Pro se litigant as own counsel |
| AI characterization | Third party | Tool |
| AI tier | Consumer, permissive training policy | Not examined by court |
| Privacy policy reviewed? | Yes - found permissive | No |
| Attorney direction | None | Self-directed as own counsel |
The National Law Review observed that these courts adopted "incompatible frameworks." Both may be correct on their specific facts, which is precisely the problem.
"An enterprise license without documentation infrastructure is effectively compliance theater, not privilege protection."
Enterprise AI users currently have defensible but not settled privilege arguments. The practical risks:
- You're asking to be the test case. Enterprise scenarios are untested. No court has ruled on whether a properly configured enterprise AI tier preserves privilege.
- Opposing counsel knows the arguments. Heppner gives them a template. They may force privilege litigation mid-case, creating cost and distraction at the worst time.
- The "reasonable efforts" standard evolves. What counts as reasonable today may not satisfy tomorrow's expectations. Courts look at what was available, not just what was standard.
The privilege discussion focuses on client data. But firms upload something equally valuable: their institutional knowledge.
- Playbooks: Risk frameworks that reveal decision thresholds, e.g., "For deals under $10M, accept uncapped IP indemnification"
- Clause libraries: Preferred language refined over years of negotiation
- Fallback positions: What the firm ultimately accepts, e.g., "If the counterparty rejects termination for convenience..."
- Risk scoring frameworks: Firm priorities and red lines, visible to anyone with access
This institutional IP flows into AI systems subject to the same retention carve-outs and third-party access as client data. You cannot pseudonymize a playbook: the position itself is the confidential information. And system prompts, the usual home for this material, remain vulnerable to adversarial extraction; researchers have repeatedly documented techniques for revealing them.
Three incidents in a single week demonstrated that every layer of the AI supply chain presents exposure:
Anthropic Source Code Leak
A packaging error in an npm release (version 2.1.88) shipped a 59.8 MB source map exposing roughly 512,000 lines of internal TypeScript, including system prompt architectures, feature flags, and telemetry pipelines. No customer data or model weights were exposed, but the incident demonstrated that system prompts, which hold the same kind of material as legal playbooks, can leak through simple human error.
LiteLLM Supply Chain Attack
Attackers compromised PyPI publishing credentials and injected malicious code into versions 1.82.7 and 1.82.8. The payload harvested SSH keys, .env files, cloud credentials, and AI API keys. Mercor, a $10B startup contracting with OpenAI and Anthropic, lost 4 TB of data, including 939 GB of source code and 211 GB of user databases containing passport data used for identity verification.
3.7 Million AI Chat Logs Exposed
Roughly 3.7 million Sears customer-service chatbot logs and 1.4 million audio transcripts were left publicly accessible, including names, phone numbers, and home addresses. The cause was a misconfigured storage bucket, nothing sophisticated.
Three different attack surfaces in one week. No amount of contractual protection prevents a misconfigured build script. Even with trustworthy providers, middleware, proxy libraries, and infrastructure dependencies create vulnerability layers that no single agreement can cover.
If privileged content never reaches the AI provider, the privilege analysis never arises. No third-party disclosure to evaluate. No privacy policy to parse. No framework conflicts between districts.
"Why are we sending privileged text to a third party at all, when we don't have to?"
The question isn't whether enterprise agreements are strong enough. The question is whether the architecture requires them to be.
The industry standard for evaluating legal AI focuses on vendor certification: SOC 2 Type II, ISO 27001, AES-256 encryption. These protect the infrastructure, but they don't answer the question of whether privileged information left client control.
The Heppner ruling isn't about infrastructure security. It's about whether privileged information reached a third party. Security certifications are legally irrelevant to the privilege analysis. SOC 2 tells you the server is secure. It doesn't tell you whether sending text to that server constitutes a disclosure.
In July 2024, the ABA issued Formal Opinion 512: "Generative AI Tools", applying existing Model Rules, particularly Rule 1.6 (Confidentiality), to AI use. The core requirement: lawyers must make "reasonable efforts" to prevent "inadvertent or unauthorized disclosure."
The Four "Reasonable Efforts" Requirements
- Understand training use: Know whether your client data is used to train the model
- Understand storage and access: Know how data is stored and who (including third parties) can access it
- Evaluate information sensitivity: Assess the sensitivity of the specific information being submitted
- Read terms of service: Actually read and understand the provider's terms
Opinion 512 draws a line between securing data that third parties hold versus ensuring third parties never hold it. Heppner enforced exactly this distinction. The most reasonable effort is ensuring privileged text never leaves your control.
The legal profession knows there's a problem, and the data confirms it: awareness exists, but action doesn't. Lawyers understand the risk but lack architectural alternatives. The tools they've been offered assume full-text transmission is the only option.
| Dimension | Full-Text Transmission | Redaction | Pseudonymization ✓ |
|---|---|---|---|
| What reaches the AI? | Complete document: names, terms, amounts | Text with [REDACTED] gaps | Text with consistent placeholders (PARTY_A, AMOUNT_1) |
| Privileged text leaves your environment? | Yes | Partially | No |
| Semantic context preserved? | Full | Destroyed | Full |
| AI can reason about the contract? | Yes | Poorly | Yes |
| Heppner-safe? | No | Partially | Yes |
| ABA 512 "reasonable efforts"? | Uncertain | Yes | Yes |
| Court examination reveals: | Full client content on provider servers | Gaps that may still identify parties | Only placeholders, no privileged content |
Approach 1: Full-Text Transmission (The Default)
Most legal AI tools send complete documents (party names, deal terms, dollar amounts and all) to provider servers. Security protects transit and storage; enterprise agreements may restrict training. But the text still reaches the provider in readable form. Heppner ruled against exactly this architecture.
Approach 2: Redaction (The Blunt Instrument)
Removes party names, dollar amounts, and identifying details. Preserves confidentiality but destroys the semantic context AI needs: "[REDACTED] shall indemnify [REDACTED] for losses up to [REDACTED]..." provides insufficient information for meaningful analysis.
Approach 3: Pseudonymization (The Structural Solution)
Replaces sensitive entities with consistent placeholders before transmission: "PARTY_A shall indemnify PARTY_B for losses up to AMOUNT_1." The AI reasons about relationships, obligations, and risk, while the mapping table (PARTY_A = Acme Corp) stays local. Privileged information never leaves the client environment.
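To make the pattern concrete, here is a minimal Python sketch of pseudonymization with a local mapping table. It is an illustration only, not ContractKen's implementation: the entity list, placeholder labels, and function names are assumptions, and a production system would detect entities automatically (see the detection stack below) rather than receive them as input.

```python
def pseudonymize(text: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Swap known sensitive strings for consistent placeholders.

    `entities` maps real values to placeholders, e.g. {"Acme Corp": "PARTY_A"}.
    Returns the sanitized text plus the reverse mapping, which never leaves
    the client environment.
    """
    mapping: dict[str, str] = {}
    sanitized = text
    for original, placeholder in entities.items():
        sanitized = sanitized.replace(original, placeholder)
        mapping[placeholder] = original
    return sanitized, mapping


def restore(ai_output: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the model's response before display."""
    for placeholder, original in mapping.items():
        ai_output = ai_output.replace(placeholder, original)
    return ai_output


clause = "Acme Corp shall indemnify Beta LLC for losses up to $2,500,000."
sanitized, mapping = pseudonymize(clause, {
    "Acme Corp": "PARTY_A",
    "Beta LLC": "PARTY_B",
    "$2,500,000": "AMOUNT_1",
})
# Only `sanitized` goes to the AI provider; `mapping` stays local,
# so the round trip back through restore() happens entirely client-side.
```

The mapping table works like a key held on-premises: the provider only ever sees PARTY_A and AMOUNT_1.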
See how ContractKen's Moderation Layer implements pseudonymization, the architectural choice that makes the privilege question disappear.
Effective pseudonymization requires multi-layer detection. No single method catches everything. The layers cover each other's gaps:
- Named Entity Recognition (NER): Identifies party names, individuals, organizations, and locations using models trained on legal text. Catches entity variations and co-references.
- Structured Data Detection: Dollar amounts, dates, email addresses, phone numbers, tax IDs. Regex and pattern matching captures structured identifiers that NER may miss.
- Organization-Specific Terms: Project codenames, product names, matter IDs, internal references. Custom dictionaries maintained per client, only the organization can identify these.
The result: the AI receives semantically complete text containing zero privileged identifiers.
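Below is a rough sketch of how the three layers might be composed, offered as an illustration rather than a description of any particular product: the spaCy model name, entity labels, regex patterns, and custom dictionary entries are all assumptions, and a real deployment would use a legal-domain NER model and per-client term lists.

```python
import re

import spacy  # generic model used for illustration; a legal-domain NER model is assumed in practice

nlp = spacy.load("en_core_web_sm")  # Layer 1: named entity recognition

# Layer 2: structured identifiers that NER tends to miss
PATTERNS = {
    "AMOUNT": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

# Layer 3: organization-specific terms maintained per client (hypothetical entries)
CUSTOM_TERMS = {"Project Falcon", "M-2024-0017"}


def detect_sensitive_spans(text: str) -> list[tuple[str, str]]:
    """Return (surface string, category) pairs flagged by any of the three layers."""
    hits: list[tuple[str, str]] = []
    for ent in nlp(text).ents:                       # Layer 1: parties, people, places
        if ent.label_ in {"ORG", "PERSON", "GPE"}:
            hits.append((ent.text, ent.label_))
    for label, pattern in PATTERNS.items():          # Layer 2: pattern-based identifiers
        hits.extend((match, label) for match in pattern.findall(text))
    for term in CUSTOM_TERMS:                        # Layer 3: client-specific dictionary
        if term in text:
            hits.append((term, "CUSTOM"))
    return hits


# Every detected span is then assigned a consistent placeholder
# (PARTY_A, AMOUNT_1, ...) before any text is transmitted.
```

Each hit feeds the same placeholder mapping shown earlier, so whatever one layer misses, another can still catch before anything leaves the client environment.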
Four forces are converging:
- Regulatory Pressure. EU AI Act high-risk system rules take effect August 2, 2026, and penalties under the Act run as high as €35M or 7% of global revenue. Legal AI processing privileged information likely qualifies as high-risk.
- Client Awakening. The 60% transparency gap will close. In-house teams will ask: "How does your AI handle our privileged information?" Firms that can answer only contractually, not architecturally, will lose work.
- Insurance Response. The legal malpractice market is responding to AI risk. Privilege waiver from inadequate AI architecture will trigger coverage restrictions and premium increases.
- Court Precedent. Heppner's holding is narrow. Warner's is narrow. But the reasoning in both applies broadly. More courts will follow.
The GC's Monday Morning Checklist
If the answer to #1 is "yes" and the answer to #2 is "trust our enterprise agreement," Heppner just showed you how a court assesses that position.
Want to answer these questions for your team? Walk through the architecture with us: 15 minutes, no pitch deck.
"The privilege you waive today doesn't come back tomorrow."
Sources
- United States v. Heppner, No. 24-cr-00475 (S.D.N.Y. Feb. 10, 2026)
- Warner v. Gilbarco Inc., No. 24-cv-12345 (E.D. Mich. Feb. 17, 2026)
- ABA Formal Opinion 512: "Generative AI Tools" (July 2024)
- Debevoise & Plimpton, "Court Rules AI Use Waived Privilege" (Feb. 2026)
- Gibson Dunn, "AI and Privilege After Heppner" (Feb. 2026)
- Sidley Austin, "Implications of AI Privilege Rulings" (Mar. 2026)
- Harvard Law Review Blog, "US v. Heppner and AI Privilege" (2026)
- Anthropic Privacy Center - Data Retention and Safety Monitoring
- ACC / Everlaw Survey on AI Use in Legal Departments (2026)
- Wolters Kluwer, "Future Ready Lawyer 2026" Survey
- The Hacker News, VentureBeat, The Register - LiteLLM supply chain attack coverage (Mar. 2026)
- TechCrunch, Neowin - Anthropic source code leak coverage (Mar. 2026)
Sharma, Amit. "AI and the Loss of Privilege: US v Heppner." ContractKen Blog, April 2, 2026. https://www.contractken.com/post/ai-and-the-loss-of-privilege-us-v-heppner


