🟪 Blockchains are finally trustless

For AIs, at least


When Dunder Mifflin’s Michael Scott follows his GPS into a lake, it seems like a typical bit of sitcom absurdity from The Office.

Reality, however, is no less absurd.

A New York City driver, hoping to turn west toward New Jersey, followed his GPS down the first few steps into Riverside Park. Tourists in Australia drove into a lake because their GPS believed there was a road there. A thin fence was the only thing that saved a Yorkshireman from going over a cliff after driving his BMW up a footpath his GPS had mistaken for a road.

Most incredibly, when a woman set out to pick up a friend at a train station just north of her home in Belgium, she didn’t question her GPS when it told her to turn south. It wasn’t until arriving in Croatia that she realized something had gone wrong. Even needing to stop twice to refuel and once to take a nap did not cause her to doubt the authoritative voice of her GPS.

As inexplicable as these decisions seem, we’re all capable of them — because we’re all prone to automation bias: the human tendency to favor suggestions from automated decision-making systems and to discount contradictory information from non-automated sources (even when that information comes from our own eyes).

At its worst, this can be fatal — as with rare cases of “death by GPS.”

More commonly, it can lead to crypto disaster: financial death by DeFi.

The foundational principle of decentralized finance is trustlessness — DeFi was designed as a financial system that replaces untrustworthy intermediaries with code. 

In theory, code is more reliable than humans because it executes exactly as it’s written. Being deterministic, it makes no subjective judgments about who should be allowed to make which transactions.

In practice, however, DeFi just swaps one kind of trust for another: trust in humans for trust in code.

“The widespread belief that interactions between blockchains and their users are trust-free is inaccurate and misleading,” the technology philosopher Yan Teng writes.

This undercuts the elevator pitch of “trustless finance.” But anyone who’s used crypto will understand: Even the most knowledgeable user cannot audit every line of code that underpins their transactions. Using DeFi, Teng says, requires a “leap of faith” that everything will work as advertised.

We’ve been wired by evolution to take that leap. Automation bias is one of the ways we deal with cognitive overload — we simply don’t have the bandwidth to evaluate every variable involved in every decision we make.

Unavoidable as it is, this sometimes leads to disaster — especially in crypto, where bridge exploits, supply-chain attacks, and phishing scams are a daily occurrence. The complexity of DeFi is simply beyond our cognitive limits.

AIs, however, suffer no such limitations. In principle, an AI agent can parse every line of code relevant to a blockchain transaction. It therefore knows, with a high degree of certainty, what the deterministic result of that transaction will be.

This makes crypto “everything an AI agent could want from a financial system,” Haseeb Qureshi says.

Qureshi expects that when we ask AI agents to do financial tasks on our behalf, they will prefer to do them on blockchains. 

But should we trust AIs with our finances?

For better or worse, studies suggest we will.

One study finds we often trust AI more than we trust our own eyes: “Participants,” the researchers found, “follow AI advice that conflicts with available contextual information.” (Think back to Michael Scott following his GPS into a lake.)

Even the “mere knowledge” that advice has been generated by an AI causes people to over-rely on it, the researchers add.

Another study similarly found that “users often uncritically accept AI outputs.” When taking a test with the assistance of a simulated AI chatbot programmed to provide bad advice, many participants in the study abandoned analytical reasoning, preferring instead “the shortcut of trusting the AI over time-consuming reflection.”

Now that machines can think, we’re happy for them to do it for us.

Not always, though. 

When the stakes are high enough, we’re more likely to rely on our own judgment and abilities — often to a fault. For example, we tend not to trust fully automated cars to drive us, even though we know that, statistically speaking, they’re better drivers than we are.

So, Qureshi is probably right that we’ll trust AI agents to make financial transactions for us (possibly on blockchains). But only for low-stakes things, like buying a plane ticket or investing in an index fund. For larger-stakes things, like buying a property or taking out a mortgage, we’ll probably continue to conduct transactions ourselves, with the help of a trusted human (probably at a bank).

Either way, trust is unavoidable.

Let’s just hope the AIs don’t drive our finances into a lake.

— Byron Gilliam

Introducing Blockworks Investor Relations, an IR platform built for onchain businesses.

The latest Blockworks offering brings together analytics, a branded investor relations site, and integrated advisory support into a single platform. The result is a more efficient way to share your story, build trust with investors, and engage a global audience from day one.

Check out our cofounder Michael Ippolito's keynote at DAS NYC launching the new IR platform.