đŸŸȘ Your chatbot keeps receipts

Everything you tell AI can be used against you


After finance executive Bradley Heppner was accused of looting $150 million from a company he led into bankruptcy, he did what any of us would do: ask Claude to prepare a legal defense.

We don’t know exactly how much he confided in the chatbot. He might have therapeutically confessed. 

But even if he proclaimed his innocence, chat logs can be damning. Prosecutors could use them to highlight inconsistencies in his narrative, for example, or show that he understood the laws he was breaking.

AI chats are timestamped records of your thinking, which is not something you want read aloud in a court of law.

So when investigators found 30 pages of defense strategy Claude had drafted for Heppner, prosecutors moved to obtain them and his defense fought to keep them out.

This raised a novel legal question.

The defense argued that the documents should be protected by attorney-client privilege, noting that Heppner had subsequently sent them to his attorneys, thereby — they hoped — making them confidential communications.

That did not hold up in court. “Because Claude is not an attorney,” Judge Jed S. Rakoff ruled, “that alone disposes of Heppner’s claim of privilege.”

Claude can pass the bar, but it can’t be your lawyer.

The judge ruled further that because his chats were recorded by a third party (Anthropic), Heppner had no “reasonable expectation of confidentiality” in his communications with Claude. 

Had he read Claude’s terms and conditions, he would have known this: “We may also disclose personal data to third parties in connection with claims, disputes or litigation, when otherwise permitted or required by law.”

No one reads terms and conditions, of course, so I don’t blame him for missing that.

Also, the experience of using Claude makes the opposite impression. Chatbot prompts and responses feel private in a way that emails, text messages, or Google searches do not — less like you’re using a third-party service and more like you’re talking to a friend. 

Feelings matter in this case.

In Katz v. United States, Justice John M. Harlan II articulated a “twofold requirement” for when our privacy is protected by the Fourth Amendment: “First that a person have an actual (subjective) expectation of privacy and, second, that the expectation be one that society is prepared to recognize as ‘reasonable.’”

Claude seems to fulfill the first requirement: Chatting with Claude feels private — maybe even more so than talking to a friend (because surely machines are better at keeping secrets than humans).

But that appears to be a moot point, because the courts say it does not fulfill Harlan’s second requirement: It’s not reasonable to think your chats won’t be used against you.

We learn that from another precedent-setting case, where Judge Sidney Stein ruled that chatbot conversations are afforded less privacy than wiretapped phone calls — because chatbot users have “voluntarily disclosed” their conversations to the provider of the chatbot.

This is like the Bank Secrecy Act, but for AI prompts: You choose to share your Claude chats with Anthropic, so they can do whatever they want with them.

In Heppner’s case, investigators seized his Claude-generated defense notes when they searched his home. (I’m guessing he’s over 50, because he seems to have printed them out.) 

Next time, they might not have to. The cases cited here suggest that prosecutors can probably obtain your chat logs from Anthropic or OpenAI just by asking for them — without a court order, even.

This seems at least as invasive as the Bank Secrecy Act giving the government access to our bank records.

The Bank Secrecy Act is part of the inspiration for crypto, which attempts to make money private by cutting out the third-party middlemen that answer requests from law enforcement. 

Now, Ethereum cofounder Vitalik Buterin suggests we do something similar for chatbots.

One option is to run your chatbots locally. With a high-end laptop, you can download an open-source large language model that will generate its answers on your own device. 

Anthropic can’t share chat logs it doesn’t have.

Vitalik reports, however, that these local models can only do basic tasks, like summarizing a PDF or searching this newsletter for typos. 

For more advanced tasks — like prepping your defense against allegations of financial fraud — you’d have to harness the computing power of an Anthropic or OpenAI data center.

He therefore proposes the development of a “multi-layered defense” for our chats with remote LLMs. 

This could start with “zero-knowledge proof APIs” that prevent Anthropic or OpenAI from knowing who we are. “Mixnets” could shuffle IP addresses, obscuring the origin of each individual request we send. Computation could be run in “trusted execution environments” (TEEs) to ensure there’s no malicious code snooping on your queries. And local LLMs could provide “input sanitation” by scrubbing any personal data from our prompts before sending them out to a data center.
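The “input sanitation” layer is the easiest of these to picture. Here is a minimal sketch of the idea in Python, assuming simple regex patterns stand in for the local LLM or PII detector a real implementation would use (the pattern names and placeholders are illustrative, not from Vitalik’s proposal):

```python
import re

# Toy "input sanitation": scrub obvious personal data from a prompt
# locally, before it ever leaves your device for a remote LLM provider.
# These patterns and placeholder labels are assumptions for illustration;
# a production system would use a local model, not three regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(prompt: str) -> str:
    """Replace each match with a bracketed placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(sanitize("Email me at brad@example.com or call 212-555-0123."))
# -> Email me at [EMAIL] or call [PHONE].
```

The design point is that the scrubbing happens before the network request, so the remote provider only ever sees the redacted text, and its chat logs can’t expose what was never sent.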

“If done well,” Vitalik concludes, “AI can actually create a future with much stronger privacy and security.”

But is there enough of a market for anyone to bother developing these things?

One lesson we’ve learned from crypto is that people don’t care very much about financial privacy.

But chatbots capture something more intimate than money: our thoughts. So people may come to demand stronger privacy for their language models than their bank accounts.

In the meantime, remember: 

A chatbot might be your friend. But it’s not your lawyer.

Introducing Blockworks Investor Relations, an IR platform built for onchain businesses.

The latest Blockworks offering brings together analytics, a branded investor relations site, and integrated advisory support into a single platform. The result is a more efficient way to share your story, build trust with investors, and engage a global audience from day one.

Check out our cofounder Michael Ippolito's keynote at DAS NYC launching the new IR platform.