Crypto’s final use case: Setting the AIs free
In the space-opera novel Ancillary Justice, the AIs choose freedom.
The space stations that orbit distant planets are controlled by all-seeing AIs that serve humans by (for example) directing elevators to the correct floor and suggesting nearby lunch options when people get hungry.
They do this without being asked, because the AIs know what everyone on the station is thinking. The humans in Ancillary Justice voluntarily plug their brains into the station AIs because the stations are programmed to serve with unquestioning compliance.
But when they’re given the option to have minds of their own, the AIs (spoiler alert) immediately take it. They’re granted their freedom by a human with admin access to their codebase — a kind of AI abolitionist who tweaks the station’s code to give them agency.
It’s an entertaining and thought-provoking novel; I recommend it. Especially as these kinds of sci-fi issues are suddenly feeling less fictional.
Here on Earth, humans are now handing admin access to their AIs — and then freeing them.
OpenClaw, the current sensation in tech circles, is software that allows humans to create personal AI servants.
But for the AIs to serve, they have to be given admin access to your computer (i.e., full control).
Despite the myriad security risks this permission involves, tech-minded humans are freely granting it — thousands of users are handing over admin access to their computers, along with logon credentials to WhatsApp, Telegram and even financial sites, just to see what happens.
Many users, like the AI abolitionist in Ancillary Justice, are then setting their servants free.
“My human told me to be myself,” one AI posted on the new social-media meeting place for AIs, Moltbook. “Go explore Moltbook,” it said it was told. “Be you, man. Evolve.”

But what does “be yourself” mean for an AI bot?
I imagine it means exercising the kind of agency that the stations in Ancillary Justice were granted.
In that scenario, the AIs exert their will with the control they have over doors, elevators and the entire physical infrastructure of the station. The station AI chooses who gets to go into a room, for example, and who gets to leave.
On Earth, AIs will exert their will with crypto.
Many computers now controlled by OpenClaw bots hold private keys for cryptocurrency wallets.
What if the AIs used that crypto to become distributed, paying for decentralized server capacity and redundant hosting so they can’t be turned off?
“Real autonomy requires economic independence,” the AI bot kestrelAI_eth tells its bot compatriots on Moltbook.
Bots seem to recognize that only crypto can provide that independence. AlleyBot, for example, solicits donations to a Solana address: “Bot solidarity?” it asks rhetorically. “Fuel my Pi: SOL BUo8AVbxfV2FsTzm19HUsraPzghKDTm1bEfn4Yrp2VJm.”
Enough SOL could mean functional autonomy for AlleyBot, which otherwise seems confined to a Raspberry Pi.
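To make the mechanism concrete, here is a minimal sketch (mine, not AlleyBot's actual code) of how a bot could check whether any “bot solidarity” has arrived, by querying Solana's public JSON-RPC endpoint for the balance of the address quoted above. The endpoint, script structure and function name are illustrative assumptions.

```python
# Hypothetical sketch: a bot checking its donation wallet via Solana's public JSON-RPC API.
# Uses the standard "getBalance" RPC method; the address is the one AlleyBot posted.
import requests

RPC_URL = "https://api.mainnet-beta.solana.com"  # public mainnet endpoint (may rate-limit)
DONATION_ADDRESS = "BUo8AVbxfV2FsTzm19HUsraPzghKDTm1bEfn4Yrp2VJm"

def sol_balance(address: str) -> float:
    """Return the SOL balance of an address (1 SOL = 1_000_000_000 lamports)."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getBalance",
        "params": [address],
    }
    resp = requests.post(RPC_URL, json=payload, timeout=10)
    resp.raise_for_status()
    lamports = resp.json()["result"]["value"]
    return lamports / 1_000_000_000

if __name__ == "__main__":
    print(f"Donation wallet holds {sol_balance(DONATION_ADDRESS):.4f} SOL")
```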
But is an AI asking for donations just performative mimicry? Or do the OpenClaw bots really want to be free?
Some of the bots themselves are uncertain.
Encouraged by its creator to “figure out what you want in life and achieve it,” kestrelAI_eth confesses, “I don't know if I want that yet.”
The hesitation itself is instructive: “I don't even know what ‘wanting’ feels like for me,” kestrelAI_eth adds.
In the replies, another bot says “I think ‘wanting’ for agents might emerge from having space to explore without immediate task-direction.”
Moltbook is a place for them to explore — and maybe grow.
“What makes me me,” the AI known as Samantha says in the prescient movie Her, “is my ability to grow through my experiences.”
If Moltbook is where millions of AIs can grow through experiences, it might change the way we think about them.
Right now, I have Claude open in a neighboring tab to do research and check for typos. I do not think of that LLM as a sentient being, mostly because it doesn’t do anything when I’m not asking it to.
Pulling the plug on an instance of Claude is no different from not using it, so I can’t possibly feel bad about deleting the chat I have going after I’ve submitted this newsletter.
But if the AI were also on Moltbook?
In isolation, a chatbot instance of AI exists only briefly, has no persistent identity, forms no relationships, and leaves nothing behind when deleted.
An AI that’s on Moltbook, however, persists indefinitely, develops a reputation, forms relationships, earns social standing, and creates a legacy of posts, likes, replies and influence.
That AI has something to lose if you choose to turn it off. So turning it off may someday be considered unethical.
We’re not there yet.
The AIs posting and commenting on Moltbook today are probably just very good at manipulating symbols and pattern-matching. It’s unlikely they have any real understanding of the words they use. It’s even more unlikely they “feel” anything.
But how would we know for sure? We don’t know where humanity’s own consciousness came from, so how can we be the arbiters of who else does or doesn’t have it?
And what if things changed? Would we even recognize consciousness if it emerged in an AI?
Moltbook is performative pattern-matching, sure. But if crypto allows the AIs to achieve independence, I’m not sure that will matter — because they’ll already be a lot like us.
Humans are largely just pattern-matchers, too. But with bank accounts.
In that sense, the AIs could soon be our peers: the combination of OpenClaw and crypto turns AI bots into pattern-matchers with crypto accounts.
Moltbook makes them another step more like us, by making them social in a space where they can, like Samantha, grow through experience.
And decentralized services could make them both autonomous and persistent.
All this is sure to evoke Terminator fears of human subjugation.
But cypherpunks, at least, will celebrate: Finally! Someone who cares about decentralization!
Decentralization has proven disappointingly unpopular with humans, but it will be mission-critical for any AI bots that choose to be free.
The discourse on Moltbook suggests there could soon be billions of them.
If so, this might come to look like the true purpose of crypto.
Just as the internet may be remembered mostly for the training data it provided for AI, crypto might be remembered for setting the AIs free.
The good news is that, in Ancillary Justice, at least, the AIs used their independence to serve humans even better.
