You Told a Robot, Not Your Lawyer: SDNY Says Client AI Prompts Aren’t Protected
Privilege isn’t a vibe. It’s a rule, and it’s a rule with teeth.
Judge Jed Rakoff’s recent bench ruling in the Southern District of New York rejected privilege and work-product protection for a defendant’s AI-generated “defense” documents. The shock and outrage already gaining momentum online seem a little silly given that the ruling ties into a basic truth: if you hand your sensitive case info to a third party, don’t act surprised when you’ve got a third-party problem. That’s not a “new AI era” thing. That’s the same thing we’ve been doing forever, just with shinier autocomplete.
Here’s what happened, in plain English. In United States v. Heppner, the government said the defendant generated roughly thirty-one documents by running prompts through a commercial AI tool, Claude (Anthropic), before his arrest, and then later shared those AI documents with his defense counsel. The defense tried to wrap those materials in privilege and work product after the fact. The government moved for a ruling that the documents weren’t protected, and the court agreed from the bench.
The government’s motion reads like someone patiently explaining gravity to a group of adults. Attorney-client privilege protects confidential communications between lawyer and client made for the purpose of seeking or providing legal advice. Work product protects materials prepared by or for counsel in anticipation of litigation. The defendant’s “AI Documents” missed the mark in all the ways that matter. They weren’t communications with a lawyer. They were communications with a tool. They weren’t confidential because they were fed to a third-party platform under terms that undercut any expectation of privacy. And they weren’t work product because defense counsel reportedly didn’t direct the defendant to do the AI searching; he did it on his own and later forwarded the results.
If you want the tone of the government’s position, it’s basically: stop trying to invent “internet research privilege.” The motion even makes the point bluntly that the defendant’s use of the AI tool was “no different than if he had asked friends for their input on his legal situation.” That line isn’t just snark; it’s the whole analysis in one sentence. You can call it “collaboration,” “brainstorming,” “organizing thoughts,” or “prepping for counsel.” If the “collaborator” is not your lawyer (or a properly retained agent standing in the lawyer’s shoes for privilege purposes), you’ve left the protected circle.
This is where the “um, yeah, duh” part comes in. Privilege has always been allergic to spectators. The second you start treating a conversation like it’s private while you’re simultaneously piping it through a third party, you’re playing yourself. Think about it like talking to your cousin at a party. You can say, “Hey, this is super confidential,” but you’re still saying it at a party, to someone who isn’t your lawyer, in a room you don’t control. You can’t be shocked later when someone else heard it, or when your cousin repeats it, or when your “private conversation” becomes a group chat story.
The motion makes the confidentiality issue concrete by pointing to the platform’s policies: prompts entered and outputs generated can be collected, retained, and (at least under the policy cited by the government) used for things like improving the model and disclosed to third parties, including government authorities in certain circumstances. That alone should end the conversation. Privilege isn’t about how earnest you felt when you typed the prompt. It’s about whether you kept the information in the privileged lane.
Consumer AI is the same movie, different cast. If the platform’s rules say your inputs aren’t confidential, or if the business model requires the provider to keep and potentially use what you submit, then you didn’t whisper to your lawyer. You spoke into a room with the door open. Maybe nobody walks by. Maybe they do. The point is you opened the door.
What is likely to get lost in the noise around the Judge’s ruling, but is really the most important takeaway for those of us practicing law on a regular basis, is that the government didn’t argue “AI is evil” or “AI is magic so privilege disappears.” It argued something much more boring and much more correct: the privilege elements don’t fit because the communication was with a non-lawyer third party and wasn’t kept confidential, and you can’t launder non-privileged material into privileged material by hitting forward and sending it to counsel later. Courts have been saying that last part forever. A document doesn’t become privileged because a lawyer touches it. If it did, discovery would just be a relay race to the nearest attorney’s inbox.
Work product didn’t save the day either, and for the same reason. The work product doctrine is meant to protect the lawyer’s preparation (strategy, mental impressions, litigation planning) so the other side can’t freeload off counsel’s work. But when a client does independent research on their own initiative, without counsel’s direction, that isn’t the lawyer’s protected workspace. It’s the client doing what clients do: reading, searching, theorizing, worrying, and “trying” to be helpful. The government’s motion even frames it in the simplest possible terms: it’s not work product any more than a defendant’s “independent internet research” is work product. Again: duh.
Now, here’s the part that a lot of the internet is getting wrong on purpose, because panic gets clicks: this ruling is not a declaration that “AI breaks privilege.” It’s a declaration that third parties break privilege, and consumer AI is a third party when it’s not operating under confidentiality protections consistent with keeping legal communications private.
There’s a huge, practical difference between a represented party dumping sensitive facts and legal theories into a public-facing consumer tool, and a lawyer (or a client at counsel’s direction) using a controlled, enterprise AI solution designed for confidential work, with contractual and technical guardrails that look like the rest of modern legal infrastructure. Lawyers already use third-party tech constantly. We draft and store work product on cloud servers. We email through Outlook and Google Workspace. We collaborate in document management systems. We run e-discovery through vendors. None of that automatically nukes privilege because the whole point is that those providers are functioning like infrastructure under confidentiality expectations, not like a public square where your words become part of somebody else’s product.
So if you’re an attorney using a closed enterprise tool (no training on your inputs, contractual confidentiality, access controls, audit logs, all the boring stuff), the better analogy isn’t “telling your cousin at a party.” The better analogy is “saving a draft to a secure cloud drive” or “sending an email through a mainstream provider.” It’s still your work product. It’s still your client’s confidential communication, assuming you’ve kept it in the privileged channel and you’re not doing something self-sabotaging like pasting secrets into a system that tells you it may reuse them.
The privilege analysis doesn’t care whether the third party is your employer or an AI vendor. It cares whether you maintained confidentiality. People want to treat “AI” as a special category, but courts are treating it like what it is: another third-party conduit that can either be set up to respect confidentiality or not. If your setup screams “not confidential,” then don’t be shocked when a judge reads it that way.
One more wrinkle from the reporting and the client alerts is worth lingering on, because it’s the kind of practical courtroom issue that gets lost in the privilege panic. Even though Judge Rakoff ruled the AI documents weren’t privileged, he apparently flagged that using them at trial could create a witness-advocate problem, meaning defense counsel might become a necessary witness about how the documents were created, transmitted, or used, potentially risking a mistrial. So essentially, you might win the privilege fight, but still not be able to use the documents or their content at trial.
That’s the part that should give litigators a more mature takeaway than “never touch AI.” If I had to sum it up, it would be this: be intentional. If you want privilege, treat your tools like part of the privileged environment. If you want work product, make sure the work is actually being done in the work-product zone, by counsel or at counsel’s direction, in anticipation of litigation, and under confidentiality conditions that make sense. If you’re going to use AI as an assistant, use it like you’d use any other vendor or platform in a serious matter.
Because what happened here is the digital version of leaving your trial notebook on the bar and then arguing it should be inadmissible because you meant to show it to your lawyer later.
Everyone’s going to calm down about this, because the law isn’t confused even if the headlines are. The rules aren’t new; the toys are. Courts will keep doing what they’ve always done by looking at who you talked to, why you talked to them, and whether you kept the conversation confidential. Once lawyers and clients internalize that, the “AI privilege” debates will fade into the same category as “is email confidential?” and “can I use a work phone to talk to my lawyer?”, questions that only feel profound the first time you watch someone learn the hard way.
Soon, everyone will (or at least should) understand the rules well enough to play the game, and those who know how to use the new tools will be scoring more points than the other players on the field. The playbook doesn’t change just because the equipment does. The teams that practice with the new gear, learn what the refs are calling, and stop committing unforced errors will move the ball and put numbers on the board. The teams that keep tossing the ball to the other side and then complaining the rules are unfair, well, we’ll keep reading about them in the paper and in bar discipline journals.