On Trust Infrastructure

Trust underpins civilisation. It expands the scope and complexity of the pursuits within our collective reach by allowing us to act as a group, to align, and even to coordinate over large temporal and geographic distances.

Unfortunately, the more adept we are at leveraging it, the more expensive it is to establish and maintain: as people’s social contexts expand and turn over ever more rapidly, the foundations on which trust is built get eroded.

So we (gradually and iteratively) found ways to decrease our dependence on it. We did what we could to replace trust with formalised, rigid bureaucracies and “blind” processes, which we tasked with the risk assessments and rule-policing we no longer had the capacity to take on ourselves.

The internet made things worse. Its digital nature shrank the surface area for trust-building while further expanding our social context, and—for reasons inherent to both its design and its ideological foundations—it eroded the legitimacy of the imperfect trust-substitutes we’d devised. So over the past three decades, instead of fighting back, we gave in: if we could no longer effectively build trust for ourselves, nor delegate the work to institutions, we would just become more individualistic.

Out of necessity, we elevated radical and absolute transparency to a sacred status, hoping to gain non-interactive ways to evaluate social situations. We made all of our social spaces open and global, feeds where strangers with little to no goodwill can peer into our lives, and we into theirs. We’ll all be the better for it, we told ourselves.

Even countercultural movements like the early second-generation cypherpunks moved past the ideals and rituals of their forefathers in order to build systems aimed at being maximally trustless and transparent. The aspiration was coordination without risk. You shouldn’t even need to trust the systems built to facilitate cooperative action: if you can observe everything, including all the immutable rules you will subject yourself to, and if you can see what everyone else is doing, you can independently verify you won’t get screwed, without relying on anyone else.

But this turns out to be too restrictive. Too few of the interactions we want or need to have with others can be formalised up front. In practice, crypto—in theory the scene most dedicated to these new ideals—is rife with backroom deals and insiders organising in private Telegram groups, even though blockchains remain completely legible to anyone who cares to look.

The problem isn’t that private coordination is inherently suspicious or bad. The problem is that radical transparency as a sacred value doesn’t effectively constrain bad behaviour, and frequently constrains good and well-intentioned behaviour. The earnest and principled get hamstrung, while the powerful and sociopathic happily break all of your taboos while demanding strict adherence from you. We seem to have memed ourselves into a distributed version of the dictator’s trap, which incumbents are only too happy to exploit. We have become anti-competence because the competent cannot act without trust, and we have become anti-trust.

Ever more enforced transparency cannot be the solution. It’s eventually self-defeating: opportunities for obfuscation and misrepresentation grow as the volume of information grows, and this is bound to become worse in a world of generative AI. It’s telling that the (still essential) groups which do emerge in this environment do so initially out of the limelight, leaning on fragile and quickly evaporating local signals of trustworthiness. In other words: what emerges does so within small clusters reliant on the remnants of the very cultural foundations we find ourselves no longer able to reliably build anew and maintain at scale. Transparency and trustlessness are no alternatives. They’re red herring memes, parasitic.

We also cannot go back, nor should we want to. Revelation is a movement which only ever goes in one direction. The institutions and structures which have gotten us this far are no longer fit for purpose and have become bottlenecks, but it would require dismantling too much of modern society’s achievements to change this fact.

Instead, our goal should be to find ways to increase the surface area for trust building once again. We need new trust infrastructure, adapted to the world as it is.

**

Trust is built gradually, over multiple rounds of an iterated game. You are progressively granted more trust as people’s confidence in their ability to predict your behaviour improves, and as you demonstrate alignment with their preferences and values.

Starting from a completely blank slate is too costly, so we rely on shortcuts. Credible commitments help—if you make the penalties for untrustworthy behaviour and the likelihood of enforcement high, you can often bypass many early rounds of this game. (“You should trust me because I’ve put the money in escrow with a neutral third party.” Or “You have the right to ask the police to evict me if I don’t pay on time.”)

But as we’ve seen above, not every interaction can be structured in this way, and for those set on malfeasance, formal constraints are often porous.

So instead, we tend to use some fuzzy combination of established common context and proof of historical behaviour as groundwork. The more relevant examples someone can point to, the more attributes they share with you, and the more embedded they are within the same networks as you, the easier it is for you to assume alignment.

Unfortunately, it’s precisely this second category of mechanisms which has become harder to leverage.

The supply chain of information about historical behaviour has broken down and become quite weird: the information you need is often locked away in contexts distinct from the one where interactions are happening, and you are often not connected with the people who have the information you want. Knowing someone no longer reliably implies knowing people who know them, a historical oddity.

The ability to establish common context (i.e. the ability to establish common knowledge) has also degraded in our online-mediated communities. Signals of in-group membership get eroded quickly, with mops and sociopaths having a full view of them from the start. Common knowledge also depends on a certain amount of synchronicity, and a shared space. If I am to know that you know that I know, we need to be able to “see” each other, at the very least. Think about browsing Twitter: no two feeds are the same, and while likes, retweets, username emojis, and mutual follows give some room for signalling, it is lossy and constraining. Have you seen my retweet of the post you liked yesterday?

In the physical world, it’s easier to draw some conclusions about who you’re sharing a space with, even when you don’t personally know anyone else around. These assumptions allow you to infer norms, and ultimately help foster deeper cultures. On the internet, spaces which want to achieve the same thing have to hide themselves completely, or risk suffering some version of an Eternal September.

At best, the above restricts the kind and the quality of trust which can emerge; at worst it creates more opportunities for the ill-intentioned to lie or affinity scam you, explicitly creating distrust.

We are all itinerant salesmen, putting up our own signs about our own trustworthiness. I don’t want to live in that world. I want to live in a world where we maximise the surface area available for trust-building, and where we collectively regain access to and mastery over the above fundamental building blocks.

What we’ve lost–or more accurately, what we’ve outgrown–is infrastructure which created some sort of connective tissue between personal or social data and related useful metadata. Without it, the raw information (which is obviously still there and still being produced) became detached from its contextual moorings, and we find ourselves no longer able to derive the same benefits from it as we used to. Fixing our trust infrastructure, in other words, requires fixing our data infrastructure.

To do this, we must solve three problems: a lack of contextual richness, a lack of expressibility, and a lack of strict and powerful audience controls.

Contextual richness

It’s rarely raw information you want to share. Almost always, you also want to share associated metadata. The numbers 1 9 9 4 have less value than knowing 1994 is my birth year, which has less value than a government document certifying that fact.
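
To make the distinction concrete, here is a minimal sketch of the gap between a raw value and an attested one, using Ed25519 signatures from Python’s `cryptography` package. The issuer is a hypothetical stand-in for something like a government registry:

```python
# A minimal sketch: raw data vs. data bound to an issuer's authority.
# The "issuer" here is hypothetical; in practice it would be a registry
# or other party whose public key verifiers already trust.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

raw_claim = b"birth_year=1994"  # the bare fact, worth little on its own

# The issuer signs the claim, attaching provenance to it.
issuer_key = Ed25519PrivateKey.generate()
signature = issuer_key.sign(raw_claim)

# Anyone holding the issuer's public key can now check where the claim
# came from, without contacting the issuer again.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, raw_claim)
    print("claim is certified by the issuer")
except InvalidSignature:
    print("claim is just four digits")
```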

And yet, it’s surprisingly difficult to convey this kind of context for most of the information we care about today, especially across environments.

Say you want to share some key fact about your Twitter/X network graph on Reddit, or proof of your physical address on Signal. How do you do it?

If you’re very lucky, the service or platform where these data exist and the one where you want to make use of them are deeply integrated (or provide some kind of attestation service), enabling trusted transfer of information between them. More often than not, though, you have no better option than ever-easier-to-fake images or (often authentication-gated) links liable to rot.

In practice, this means you constantly have to ask for permission to make use of information which, for all intents and purposes, should be yours to use as you please.

Expressibility

The same things which limit your ability to share information in a contextually rich way also limit your ability to shape and transform it prior to sharing it, while retaining its meaning.

It’s easy to imagine situations where you want to prove something derived from some data, without revealing the data itself. You may want, for example, to prove to your new landlord that you have always paid your rent on time, without handing over the details of every financial transaction to this near stranger. Or you may want to signal expertise on a topic by proving a certain amount of Reddit karma in a particular subreddit without revealing your username and post history.

Unless your bank or Reddit runs a service for these very specific use cases, you will have no way to express this information about yourself in the way you would prefer to. Its value is fundamentally limited by this gap in expressibility: if you want to use it outside of the silo it’s housed in, you need to share either all of it or none of it. It’s useless to you for anything which requires something in between.
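
The shape of what’s missing is easy to state, even ahead of the tools discussed further below: you want to prove that a predicate holds over data you keep private. A sketch in Python, where the predicate is real but the zk_prove/zk_verify interface is a hypothetical stand-in for an actual proving system:

```python
# The rent example as a predicate over private data. The commented-out
# zk_prove/zk_verify calls are hypothetical; real systems compile a
# predicate like this into a circuit and prove it was satisfied.
from dataclasses import dataclass

@dataclass
class RentPayment:
    due_date: int   # unix timestamps, for simplicity
    paid_date: int

def always_paid_on_time(payments: list[RentPayment]) -> bool:
    """What the tenant wants to prove, over data that stays private."""
    return all(p.paid_date <= p.due_date for p in payments)

# Tenant's device: the payment history never leaves it.
#   proof = zk_prove(always_paid_on_time, private_input=my_payments)
# Landlord: checks the proof and learns only that the predicate holds,
# not the amounts, the dates, or the counterparties.
#   assert zk_verify(proof, statement="always_paid_on_time")
```

Note that for such a proof to mean anything, the payment records themselves need provenance (a bank signature, say), which is exactly the contextual richness problem above.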

Strict and powerful audience control

Of course, what you want to express and how you want to express it depends on your audience. The same information may need to be presented in different ways to different people.

It’s striking to me how crude our audience-control levers are nowadays. You can make something public, or you can share it with some predefined set of emails. Within silos, you may get something somewhat more nuanced, but not by much.

I often find myself wanting to share something with groups which are at once more fluid and more strictly scoped. Why can’t I post a blog post which can only be seen by people who’ve corresponded with Tyler Cowen over email? Why can’t I filter Amazon reviews by people in my income bracket?

We haven’t ended up in this inadequate equilibrium through ill intent or thoughtlessness; we’ve just not had access to the right levers to pull. The root of the problem is easy to find and gesture at: a lack of tools that give us more control over what information we share, when, and with whom, and which allow for expressive, context-rich messages. These are topics which have received an enormous amount of attention over the years, but most proposed solutions have clustered at the cultural or regulatory layer. So far they’ve all failed, and made our lives worse in the process.

Because this is the kind of messy problem which requires coordination among an enormous number of unaligned players, we’ve ended up trying either to force companies to behave in certain ways (GDPR’s right of access, for example), or to choose “exit” by building ideological competitors to established networks. Of course, the former seems to just entrench current market leaders, and the latter ends up being adopted almost exclusively by people whose main shared interest is the implementation details of their social network (historically, an insufficiently sticky glue).

To generalise: the failure of these proposed solutions has largely been caused by a focus on mandating certain behaviours, instead of augmenting our social toolkit.

Thankfully it seems we may have a way out, through some pretty clever maths.

Two excellent articulations of the promise of this maths come courtesy of 0xPARC’s Gubsheep. In short: ZK-SNARKs and programmable cryptography allow us to escape the constraints which have so far hampered our efforts, by giving us a coordination-light and permissionless way to rebuild the connective tissue between data and metadata which we’ve lost.

It’s worth emphasising: through zero-knowledge cryptography, we can get contextual richness, expressibility, and strict audience controls without having to align the whole of the world to a standard, or even ask for the permission of the current guardians of our data. It gives individuals the ability to transmit any data (even transformed) along with proof of its context, whether it’s delivered over HTTPS, via email, or via NFC chips embedded in physical objects.

Zero-knowledge cryptography creates affordances for trust, in other words.

These tools and technologies are still early and fledgling. They enable things closer to the earliest Apple ][s than to the iPhone, to CACHE’s CBBS than to Twitter. But even at this stage, they let us glimpse a way forward, towards a more trust-full future.

Maybe you would like to ensure your local exchange/barter community is only accessible to residents of a particular neighbourhood, but you feel awkward asking people to share any private information. Programmable cryptography means you can ask aspiring members to prove they’ve received an email containing their address from a defined list of sources (their credit card statement or their utility bill) without having to share private correspondence with anyone. It’s all local to their devices.
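
A sketch of what that gate could look like. It leans on the fact that most email is already DKIM-signed by the sending domain, which is what projects like zk-email exploit to prove statements about an email without revealing it. Every name below is illustrative, and the two helpers are stand-ins for verification a real circuit performs:

```python
# Illustrative neighbourhood gate. All names are hypothetical; the two
# helpers stand in for work a real circuit does on the raw, DKIM-signed
# email, entirely on the applicant's device.
TRUSTED_SENDERS = {"statements.examplebank.com", "billing.exampleutility.com"}
ACCEPTED_POSTCODE_PREFIXES = ("SE15",)  # however the community defines itself

def dkim_signing_domain(raw_email: bytes) -> str:
    """Stand-in: a real circuit verifies the DKIM signature itself."""
    raise NotImplementedError

def extract_service_address(raw_email: bytes) -> str:
    """Stand-in: a real circuit parses the address out of the signed body."""
    raise NotImplementedError

def membership_statement(raw_email: bytes) -> bool:
    """What the proof asserts: a trusted sender mailed this person a
    document bearing an address inside the neighbourhood."""
    return (
        dkim_signing_domain(raw_email) in TRUSTED_SENDERS
        and extract_service_address(raw_email).startswith(ACCEPTED_POSTCODE_PREFIXES)
    )

# The community only ever sees a proof that membership_statement holds;
# the email itself never leaves the applicant's device.
#   proof = zk_prove(membership_statement, private_input=my_utility_bill)
```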

You can run this local exchange/barter community on a hosted forum or (if you’re idealistic and technically savvy) on a local server. But these zero-knowledge proofs are portable, and you can choose to use them in different contexts, too. If you want to plug into the much-maligned financial infrastructure which has been built over the past five years (in order to lend money to members of this community at preferential rates, while still being able to enshrine an interest rate or a repayment schedule, for example), you can. Depending on your needs, you may be able to navigate the tradeoff space of blockchains more effectively: because this is a smaller, inherently higher-trust environment, you can choose to prioritise speed without worrying about the costs that normally introduces.

As a nice bonus, creating these tighter networks gives you more opportunities for bootstrapping tools and platforms which depend on a certain level of user density.

Most importantly, these zero-knowledge proofs allow you to extend all of the above behaviours to digital neighbours: mutuals on Twitter, people you’ve exchanged multiple emails back and forth with, members of your [insert closed platform] group chat. By giving you the ability to carry over rich context from one platform to another, they effectively make it possible to shape and bend the entire infrastructure of the internet into something more local.

You can merge the physical and digital easily: create group chats open only to people who live in your city and have been subscribed to a particular Substack for over a year. Or a forum only accessible to eligible US voters who have donated to your favoured candidate. Or a Signal group only visible and accessible to people who can prove they are employees at a particular company (through neutral infrastructure, i.e. without revealing anything to said company).
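
The mechanics of such a gate are worth sketching. Each requirement is an independent proof produced in a different context, and the space admits whoever presents all of them; the names are illustrative, and zk_verify is the same hypothetical interface as in the earlier sketches, stubbed out so only the composition logic is real:

```python
# Composing independently produced proofs into one admission rule.
# zk_verify is the same hypothetical interface as in earlier sketches.
def zk_verify(proof: bytes, statement: str) -> bool:
    raise NotImplementedError("stand-in for a real proof verifier")

REQUIRED_CLAIMS = [
    "lives_in_city",            # e.g. proven from a DKIM-signed utility bill
    "substack_subscriber_1yr",  # e.g. proven from a year of newsletter emails
]

def admit(proofs: dict[str, bytes]) -> bool:
    """Admit a stranger iff every claim verifies. The gate learns that
    the claims hold, and nothing about who the person is elsewhere."""
    return all(
        claim in proofs and zk_verify(proofs[claim], statement=claim)
        for claim in REQUIRED_CLAIMS
    )
```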

Zero-knowledge cryptography, in other words, helps transition the internet and blockchains from being tools for activism towards being tools for conspiracy. It enables more spaces which are visible to some, but private or illegible to most. Recall the toy example of a blog post visible and accessible only to people who’ve had a number of back-and-forths with Tyler Cowen via email.

In practice, this world that we can just about glimpse is one where behaviours which are currently only possible among close friends become accessible to groups of strangers. It’s not only serious conspiracies which are enabled by better trust infrastructure, but also small and silly ones.

I’m enamoured with the prospect of being able to import game items unlocked through effort and skill to other, unrelated contexts (even homebrewed games). I would love to see games which can only be played by people who’ve reached a certain level in another game, developed by someone else. I want to see what level of cultural depth we can reach if we no longer have to constantly be building for some Generalised Other, and instead feel comfortable building for someone we know.

It’s a world which is permissionless, but not trustless. More social, but also more private. Deeper and richer… and maybe a little bit more fun, too.

