Can What You Type Into AI Be Used Against You in a Lawsuit?

AI can be useful for brainstorming, organizing information, and speeding up routine tasks.
But when a problem has real legal consequences, using AI instead of a lawyer can create a second problem on top of the first one: you may be creating a written record that is inaccurate, damaging, and potentially discoverable. Recent court decisions make clear that judges are not treating AI chats as some special protected zone. They are applying the usual rules about privilege, confidentiality, work product, relevance, and discovery.
The real risk is not just “Can they get the prompt?”
That is part of the issue, but it is not the whole issue.
The bigger risk is that people use AI the way they should be using a lawyer: to test facts, explore liability, assess strategy, and decide what to do next. Once that happens, the prompt history itself can become a problem.
A person dealing with a business dispute, employee issue, contract problem, or threatened claim may think they are being efficient by pasting facts into an AI tool and asking what the law says. In reality, they may be creating material that is harder to protect, easier to misinterpret, and more dangerous than they realize.
What the recent cases show
The most concerning recent decision for New York readers is United States v. Heppner from the Southern District of New York. There, the court held that the defendant’s written exchanges with Claude were not protected by attorney-client privilege and were not protected work product. The court emphasized that the AI tool was not a lawyer, the exchanges were not confidential in the way privilege requires, and the materials were not prepared by or at counsel’s direction. The fact that the defendant later gave the materials to his lawyers did not retroactively make them privileged.
That does not mean every AI prompt is always discoverable. In Warner v. Gilbarco, a federal court in Michigan refused to compel production of a pro se plaintiff’s ChatGPT-assisted materials and treated them as protected work product, in part because they reflected the plaintiff’s mental impressions prepared for litigation. The court also rejected the idea that use of ChatGPT automatically waived work-product protection.
And in Concord Music Group v. Anthropic, the court addressed prompt-output discovery in a different context and ordered production of a large sample of prompt-output pairs under a discovery protocol. That case was not a privilege ruling, but it is another reminder that AI interactions can become ordinary discovery material when they are relevant to the claims or defenses in the case.
What this means in practice
The lesson is not that AI should never be used.
The lesson is that AI is not a safe replacement for confidential legal advice, especially when the issue involves risk, exposure, or strategy.
That matters because people often type things into AI that they would never say as casually in an email to counsel. They speculate. They overstate. They use bad facts. They ask the wrong question. They test aggressive ideas. Then all of that sits in a written record.
If litigation or an investigation follows, that record may become relevant for at least three reasons.
First, it may show what the person knew, believed, intended, or was worried about. Second, it may contain admissions, inconsistencies, or sloppy wording that an adversary would love to see. Third, it may be harder to shield than a true attorney-client communication.
Why using AI instead of counsel can backfire
Here is the practical problem. A lawyer does more than answer a question. A lawyer spots the issue behind the issue, filters facts, protects confidentiality where possible, frames advice in light of risk, and helps avoid creating bad evidence. AI does not do that. AI may generate a polished-looking answer, but it does not create privilege just because the subject is legal, and it does not exercise legal judgment. The Heppner decision is a strong reminder of that distinction.
That means the person who relies on AI instead of counsel may be exposed in two ways at once: they may get a bad or incomplete answer, and they may create a discoverable record of the problem while doing it.
A few examples
- A business owner thinks a partner is taking money without permission. Instead of calling counsel, they paste financial facts into an AI tool and ask whether the conduct is fraud, breach of fiduciary duty, or embezzlement.
- An executive considering a new job pastes an employment agreement into AI and asks whether the non-compete is enforceable and whether leaving could cost them equity.
- A company manager dealing with a complaint uses AI to draft an internal response and includes details about personnel issues, prior conduct, and what the company is “really worried about.”
In all three situations, the person may think they are just getting organized. But they may also be generating a written trail that could later become part of the case.
The most important risks
1. You may lose the confidentiality you thought you had
Recent decisions show real skepticism toward claims that AI chats are automatically confidential or privileged, particularly where a third-party platform is involved and counsel was not directing the work.
2. You may create harmful evidence
Even if a prompt is never admitted into evidence, it may still become part of a discovery fight, shape the other side’s strategy, or reveal more than you intended.
3. You may rely on an answer that misses the real issue
Legal problems usually turn on nuance, timing, documents, and strategy, and they call for judgment, not just a quick answer that sounds plausible.
4. Your employees may be creating risk without realizing it
If team members are feeding contracts, dispute facts, employment issues, or internal concerns into AI without guardrails, the business may be creating avoidable discovery and confidentiality problems.
Bottom line
Recent cases do not say that every AI prompt is automatically discoverable. But they do say something important: courts are not giving people a free pass because the information was typed into an AI tool instead of sent in an ordinary document.
So the safest practical takeaway is this:
If the issue is important enough that you are tempted to ask AI what your legal risk is, it is probably important enough to ask a lawyer instead.
This article is for general informational purposes only and is not legal advice. The law in this area is developing, and outcomes depend on the facts, the platform used, the claims at issue, and the court involved.