Your chatbot keeps receipts
Everything you tell AI can be used against you



After finance executive Bradley Heppner was accused of looting $150 million from a company he led into bankruptcy, he did what any of us would do: ask Claude to prepare a legal defense.
We don't know exactly how much he confided in the chatbot. He might have therapeutically confessed.
But even if he proclaimed his innocence, chat logs can be damning. Prosecutors could use them to highlight inconsistencies in his narrative, for example, or show that he understood the laws he was breaking.
AI chats are timestamped records of your thinking, which is not something you want read back to you in a court of law.
So when investigators found 30 pages of defense strategy Claude had drafted for Heppner, prosecutors moved to obtain them and his defense fought to keep them out.
This raised a novel legal question.
The defense argued that the documents should be protected by attorney-client privilege, noting that Heppner had subsequently sent them to his attorneys, thereby, they hoped, making them confidential communications.
That did not hold up in court. "Because Claude is not an attorney," Judge Jed S. Rakoff ruled, "that alone disposes of Heppner's claim of privilege."
Claude can pass the bar, but it can't be your lawyer.
The judge ruled further that because his chats were recorded by a third party (Anthropic), Heppner had no "reasonable expectation of confidentiality" in his communications with Claude.
Had he read Claude's terms and conditions, he would have known this: "We may also disclose personal data to third parties in connection with claims, disputes or litigation, when otherwise permitted or required by law."
No one reads terms and conditions, of course, so I don't blame him for missing that.
Also, the experience of using Claude makes the opposite impression. Chatbot prompts and responses feel private in a way that emails, text messages, or Google searches do not: less like you're using a third-party service and more like you're talking to a friend.
Feelings matter in this case.
In Katz v. United States, Justice John M. Harlan II articulated a "twofold requirement" for when our privacy is protected by the Fourth Amendment: "First that a person have an actual (subjective) expectation of privacy and, second, that the expectation be one that society is prepared to recognize as 'reasonable.'"
Claude seems to fulfill the first requirement: Chatting with Claude feels private, maybe even more so than talking to a friend (because surely machines are better at keeping secrets than humans).
But that appears to be a moot point, because the courts say it does not fulfill Harlan's second requirement: It's not reasonable to think your chats won't be used against you.
We learn that from another precedent-setting case, in which Judge Sidney Stein ruled that chatbot conversations are afforded less privacy than wiretapped phone calls, because chatbot users have "voluntarily disclosed" their conversations to the provider of the chatbot.
This is like the Bank Secrecy Act, but for AI prompts: You choose to share your Claude chats with Anthropic, so they can do whatever they want with them.
In Heppner's case, investigators seized his Claude-generated defense notes when they searched his home. (I'm guessing he's over 50, because he seems to have printed them out.)
Next time, they might not have to. The cases cited here suggest that prosecutors can probably obtain your chat logs from Anthropic or OpenAI just by asking for them, without even a court order.
This seems at least as invasive as the Bank Secrecy Act giving the government access to our bank records.
The Bank Secrecy Act is part of the inspiration for crypto, which attempts to make money private by cutting out the third-party middlemen that answer requests from law enforcement.
Now, Vitalik suggests we do something similar for chatbots.
One option is to run your chatbots locally. With a high-end laptop, you can download an open-source large language model that will generate its answers on your own device.
Anthropic can't share chat logs it doesn't have.
Vitalik reports, however, that these local models can only do basic tasks, like summarizing a PDF or searching this newsletter for typos.
For more advanced tasks, like prepping your defense against allegations of financial fraud, you'd have to harness the computing power of an Anthropic or OpenAI data center.
He therefore proposes the development of a "multi-layered defense" for our chats with remote LLMs.
This could start with "zero-knowledge proof APIs" that prevent Anthropic or OpenAI from knowing who we are. "Mixnets" could shuffle IP addresses, obscuring the origin of each individual request we send. Computation could be run in "trusted execution environments" (TEEs) to ensure there's no malicious code snooping on your queries. And local LLMs could provide "input sanitation" by scrubbing any personal data from our prompts before sending them out to a data center.
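To make the "input sanitation" layer concrete: the idea is that something running on your own machine strips identifying details before a prompt leaves it. Vitalik envisions a local LLM doing this; the toy sketch below uses regex patterns instead, which is far cruder but shows the shape of the step (the patterns and placeholder labels here are illustrative, not from any real product):

```python
import re

# Toy "input sanitation" pass: scrub obvious personal data from a prompt
# before it is sent to a remote data center. A regex list like this is a
# crude stand-in for the local-LLM scrubber described in the proposal.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(prompt: str) -> str:
    """Replace each detected item with a placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(sanitize("Contact bradley@example.com or 555-867-5309 about the case."))
# -> Contact [EMAIL] or [PHONE] about the case.
```

A fixed pattern list can never catch context-dependent details (names, case numbers, street addresses), which is exactly why the proposal calls for a local model to do the scrubbing.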
"If done well," Vitalik concludes, "AI can actually create a future with much stronger privacy and security."
But is there enough of a market for anyone to bother developing these things?
One lesson we've learned from crypto is that people don't care very much about financial privacy.
But chatbots capture something more intimate than money: our thoughts. So people may come to demand stronger privacy for their language models than for their bank accounts.
In the meantime, remember:
A chatbot might be your friend. But it's not your lawyer.
— Byron Gilliam

Introducing Blockworks Investor Relations, an IR platform built for onchain businesses.
The latest Blockworks offering brings together analytics, a branded investor relations site, and integrated advisory support into a single platform. The result is a more efficient way to share your story, build trust with investors, and engage a global audience from day one.
Check out our cofounder Michael Ippolito's keynote at DAS NYC launching the new IR platform.


