
The Death of DAOs

Today’s Decentralized Autonomous Organizations (DAOs) might be more accurately called CMDs: Centralized Manual Disorganizations. Despite promises to do the opposite, DAOs have managed to recreate all the weaknesses of both democracy and dictatorship in a single institution. Unlike a dictatorship, DAOs lumber along at glacial decision-making speeds, demonstrate little unique insight or domain understanding, spend for marginal returns, and are incapable of synthesizing, much less executing against, a coherent strategy. Meanwhile, despite exhibiting all the maladies of democracy, they hardly deliver its purported benefits. DAOs are infamous for passing decisions with the support of powerful participants while ignoring entirely the preferences or information held by the long tail of members. In fewer words, they are centralized and cloistered while still managing to be inefficient, uninsightful, unoriginal, and uncreative.
This matters not because DAOs owe us anything, but because DAOs represent a hope for the world, really a hope for me, that governance can be done differently. That power can be apportioned bottom-up. That the plurality of perspectives can be taken into account. That decisions can be made as fast and parallel as private markets while remaining as transparent and accountable as a public blockchain. That surprising and even unpopular insights can receive consideration. That the best idea can win.
These failures are not to be blamed on DAO founders. In very few cases was their intention to create a centralized entity which they control. In fact, in many cases a founder has tried to further decentralize their DAO only to find that it comes at too steep a cost to the coordinative capacity of the organization. Despite our industry’s great hopes to the contrary, it seems empirically true that decentralization trades off steeply and non-convexly with coordination.
Need this be the case? Is there a reason to believe that some law of nature banishes us to a mostly feudal world where only a few small groups exert the lion’s share of grip on the steering wheel while the rest must be content receiving The Path from on-high? Is this the best that we can do?
Probably not.
We are surrounded by highly efficient, decentralized, collectively intelligent systems. Nature is suffused with them: from beehives to mycelial networks to bird murmurations to the bodymind you use to read this article and amble about. Every living organism is evidence that we’ve yet to even glimpse the frontier of coordination and decentralization in human systems.
(Wait, I’m more decentralized than a bird? Well yes, you’re bigger and you’re integrating more information in a highly parallel way. Wait, a bird is decentralized? Well yes, more than a single-celled organism like a bacterium.)
Clearly, far from there being a natural law holding us back, nature knows something we don’t.

Toward Decentralized Autonomous Organizations

Toward Distributed Autonomic Intelligences

Toward Emergent Inferential Organisms

So what is it, what are we missing? Is there like one simple idea that can tell us how to build better governance? One magic key that will unlock the design for unbounded, fluid, symbiotic, emergent complexity as we see in nature?
Actually yeah, there is.
A growing synthesis across many fields as disparate as machine learning, economics, neuroscience, biology, and even military theory points toward a unifying framework through which to view what an organism made of many individual elements must do in order to render itself collectively intelligent. The idea moonlights under different names in each field (Bayesian inference, decision theory, predictive coding, good regulator theorem, niche construction, OODA loop) but the core idea remains the same:
A good organism is one that encodes a model of the world (sometimes it simply is a model of the world, as in the case of a virus), uses that model to make predictions about how the world will look in the future, and either updates its beliefs about the world when it’s surprised by what it sees, or takes actions in the world to make sure it doesn’t experience anything surprising.
One clever group has found a way to express the entirety of this behavior in a single quantity: a measure called free energy.
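For the curious, the quantity is the expected free energy of a course of action. One common way it is written in the active-inference literature (the notation below is the standard textbook form, not anything DAO-specific) is:

$$
G(\pi) \;=\; \underbrace{\mathbb{E}_{q(s \mid \pi)}\!\left[ H\!\left[ p(o \mid s) \right] \right]}_{\text{ambiguity}} \;+\; \underbrace{D_{\mathrm{KL}}\!\left[ q(o \mid \pi) \,\middle\|\, p(o) \right]}_{\text{risk}}
$$

where π is a candidate course of action, s are hidden states of the world, o are observations, q is the organism’s model of what it will encounter under π, and p(o) encodes the observations it prefers (i.e. expects) to see.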
The details of this equation don’t matter too much for our purposes. Essentially it says exactly what we said before:
A good organism is one that encodes a model of the world (sometimes it simply is a model of the world, as in the case of a virus), uses that model to make predictions about how the world will look in the future, and either updates its beliefs about the world when it’s surprised by what it sees, or takes actions in the world to make sure it doesn’t experience anything surprising.
(Wait, but is surprise really enough to incorporate all the preferences that an organism may have? Well, you’ll notice that you’re rarely surprised by sipping tepid tea; it’s near-boiling water that evokes a reaction from you. This helps us see that our preferences express themselves as expectations about the future, and deviations from those expectations are surprising.)
Specifically, it says that a successful organism is one that’s capable of making that quantity smaller, i.e. minimizing the free energy, and the way it does so is by reducing both ambiguity (the left term) and risk (the right term). It’s able to do so by learning about the world (“exploring”) and by taking actions to make its preferred world more likely (“exploiting”).
Let’s say you’re willing to entertain this free energy mumbo jumbo: what does it teach us about DAOs? Perhaps there are many lessons to learn from this paradigm that apply to the creation of DAOs, but at the very least one lesson stands out starkly: more intelligent organisms like us don’t make rigid models of the world, as viruses do, but are capable of dynamically creating and modifying a living model of the world. And so far, none of our DAOs have a way to do this. They don’t create shared world models which then autonomously produce predictions and allocate energy (like money) throughout the organization. They don’t permit asynchronous contributions from many players to update the DAO’s collective belief. They don’t enable the expression of preferred outcomes which are aggregated and attended to in the final decision. The closest that we get to this today is through the use of forums like Discourse and Discord to have conversations that potentially change the minds (the “world models”) of participants in the DAO, after which those participants vote on proposals to inform the DAO’s actions. But we’ve yet to see a way in which the world model of the DAO itself is modified by the participants, nor one where that world model directs the actions of the DAO autonomously.
What might this look like if we were to marry the conversation had within the DAO directly to the actions it takes? At the very least it would mean that the standard way of interacting with DAOs would have to change. The job of the participant would become less about approving explicit proposals, and much more about tending to a garden of evidence, stories, and beliefs that the DAO is using to take actions. This transforms participation in a DAO from being a game primarily about power — who has the most tokens? — into a game of curiosity and experimentation. That sounds more fun anyway.

So What?

So what? Does this mean we should put an AI in charge of the treasury, just copy-paste the forum conversation into its context window, and let it automatically spend money? Well, that’s an idea. The problem such an idea faces is that not all information is created equal. As soon as forum posts begin dictating treasury allocations, clever spammers and scammers will figure out how to manipulate the AI so as to allocate the treasury to themselves. In these contexts where data is sometimes untrustworthy, most of the effort has to be allocated to figuring out whose contributions to trust, and by extension, whom to trust. You might think to set your AI to the task of figuring out whom to trust. But you can already know that anyone who doesn’t receive the AI’s favor will claim it’s rigged, and you won’t be able to tell them why exactly it’s fair because the AI is a black box. To answer the question of trust, a protocol has to be able to blend knowledge of the data (the normal checks of internal and external consistency) with the incentives of players, and it must do so in a transparent and credibly neutral way.
This is a fascinating design space, one which could incorporate ideas from information theory and incentive design and machine learning all at once. There’s one approach which in particular seems interesting to me; let’s call it SMC, after its three coarse-grained components: conviction staking, prediction markets, and consensus mechanisms.
SMC describes a high-level scheme for collective intelligence. The key idea it proposes is that a DAO governed by an SMC would first see its members stake against the outcomes they want for the DAO and the actions they believe the DAO could take to achieve those outcomes; then members bet in relevant prediction markets, which inform the DAO which actions to fund; and finally the prediction markets are resolved by way of a consensus mechanism.
Of course, this is not quite a new idea. Several prediction market based governance systems have been proposed previously, all with varying degrees of success. Worse still, what I actually have in mind is not quite a prediction market, and not quite a traditional consensus mechanism. But this will do pedagogically.
At the top of the stack is a mechanism called conviction staking. Put simply, a contributor to the DAO stakes against a proposal / statement / policy / desired outcome and in return increases the influence of that outcome or the likelihood that the action is executed. Holding the stake for longer lets the time-weighted average of the stake climb steadily toward the threshold at which the DAO enacts it.
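As a rough illustration, here is a minimal sketch of how such time-weighted accumulation could work, assuming an exponential moving average as the weighting; the decay rate, threshold, and function names are illustrative assumptions, not a specified protocol.

```python
# Minimal sketch of conviction accumulation (illustrative only).
# Assumption: the "time weighted average" is an exponential moving average;
# the decay rate and threshold below are hypothetical parameters.

def update_conviction(prev_conviction: float, staked: float, decay: float = 0.9) -> float:
    """One time step: decay the old conviction and add the currently held stake."""
    return decay * prev_conviction + staked


def steps_to_enact(staked: float, threshold: float, decay: float = 0.9, max_steps: int = 1000):
    """Return the step at which a constantly held stake crosses the threshold, or None."""
    conviction = 0.0
    for step in range(max_steps):
        conviction = update_conviction(conviction, staked, decay)
        if conviction >= threshold:
            return step
    return None  # stake too small: conviction converges to staked / (1 - decay)


if __name__ == "__main__":
    print(steps_to_enact(staked=100, threshold=600))   # 8 -- crosses after holding a while
    print(steps_to_enact(staked=100, threshold=2000))  # None -- ceiling is 1000, never enacted
```

The point of the time weighting is visible in the two runs: a stake held patiently accumulates influence, but no amount of patience lets a small stake force through a decision far above its steady-state conviction.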
The score of these stakes can be deflected by prediction markets. Prediction markets link relevant proposals and outcomes to one another, permitting participants to bet on the future outcome of an action.
This only can work if:
there’s a game theoretic reason why players will link relevant (and not irrelevant) prediction markets
the prediction markets can be high level enough that they’re informative (i.e. non-Tetlockian resolution criteria)
These are technical problems that will need to be solved, and we believe there are solutions within reach, namely epistemic leverage plus iterative deepening.
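To make “deflection” a little more concrete, here is one toy way a linked market could scale a stake’s effective score; the multiplicative rule, the relevance weight, and the names are purely illustrative assumptions on my part, not the actual mechanism.

```python
# Toy illustration of a prediction market "deflecting" a stake score.
# Assumption (mine, not the protocol's): the effective score of a staked policy
# is its raw conviction scaled by the linked market's probability that the
# policy achieves the outcome it is linked to, weighted by link relevance.

def deflected_score(raw_conviction: float, market_prob: float, relevance: float = 1.0) -> float:
    """relevance = 0 ignores the market entirely; relevance = 1 applies full deflection."""
    return raw_conviction * ((1 - relevance) + relevance * market_prob)


if __name__ == "__main__":
    # A policy with conviction 500, linked to a market that gives it only a
    # 30% chance of producing the desired outcome, loses most of its pull.
    print(deflected_score(500, market_prob=0.3))                 # 150.0
    print(deflected_score(500, market_prob=0.3, relevance=0.5))  # 325.0
```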
Finally, there will be certain prediction markets (called “observables”) that make predictions about variables that have sufficiently low ambiguity that they can be resolved by a Schelling point based consensus mechanism. For those unfamiliar, a Schelling point based consensus mechanism simply asks participants what outcome they believe everyone else is going to pick, and those that pick differently from the consensus lose some of their stake. Once the conclusion is chosen, thanks to the relevance encoded between this and other markets, the signal from that point can propagate through the rest of the graph and directly influence other markets and the policies they affect.
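For readers who want to see the shape of that resolution step, here is a minimal sketch; the simple plurality rule and the slash fraction are illustrative assumptions rather than a spec.

```python
# Minimal sketch of Schelling-point resolution (illustrative only).
# Each voter reports the outcome they expect everyone else to report; voters
# who deviate from the plurality outcome lose a fraction of their stake.

from collections import Counter


def resolve(votes: dict, stakes: dict, slash_fraction: float = 0.2):
    """votes: voter -> reported outcome; stakes: voter -> staked amount."""
    consensus, _ = Counter(votes.values()).most_common(1)[0]
    new_stakes = {}
    for voter, outcome in votes.items():
        stake = stakes[voter]
        if outcome != consensus:
            stake *= 1 - slash_fraction  # slashed for deviating from the consensus
        new_stakes[voter] = stake
    return consensus, new_stakes


if __name__ == "__main__":
    votes = {"alice": "yes", "bob": "yes", "carol": "no"}
    stakes = {"alice": 100.0, "bob": 50.0, "carol": 80.0}
    print(resolve(votes, stakes))  # ('yes', {'alice': 100.0, 'bob': 50.0, 'carol': 64.0})
```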
Potentially, this consensus mechanism need not directly slash players for being wrong, and could instead provide players with tokens in each of the outcomes, and the players that disagreed with one another would only be “intersubjectively” slashed (contextually soft forked), meaning they’d still have influence with those that agreed with them. This would permit more plurality, while still localizing incentives (because, presumably, accurate beliefs outcompete inaccurate ones).
For such a consensus mechanism to work, two technical innovations are required.
it must be possible to detect sincere signals of whether a variable is expected to be low ambiguity
players must be able to initiate the resolution when the observable has been seen
(I’m told EigenTOKEN has some potentially interesting ideas on this front.)
This brings us to the bottom of the mechanism set, the leaves on the tree. You can see how the consensus mechanism at the bottom of the stack grounds the governance in real world observables; how prediction markets allow the mechanism to bring future expectations to influence the present; and how conviction staking allows participants to sincerely (in Sybil and bribe resistant ways) express preferences for certain outcomes and to execute policies.
You might have some important remaining questions: if we’re going to use staking to influence the execution of a proposal, won’t that unfairly favor the richest participants in the DAO? How do you actually ensure that the observations of the consensus mechanism are properly connected to the right prediction markets? How do you ensure the right prediction markets inform relevant policies? And how do we do all that while making the overall system epistemically sensitive (sensitive to good information) instead of intrinsically plutocratic and wealth-centralizing? For this we need an additional component.

Epistemic Leverage

For this we reach for a novel mechanism called epistemic leverage. In short, the purpose of epistemic leverage is to give players additional influence in the game in exchange for making statements about how they can find out that they’re wrong.
You can think of this as an attempt at a mechanistic implementation of Karl Popper’s answer to the Problem of Demarcation: a theory is only worth trusting if it’s capable of being falsified.
The way this story plays out in a mechanism is like so:
Player A stakes a statement with $100, gets 100 score. e.g. “We should do something to stop climate change.”
Player A really cares about this but they don’t have more money to spend, so they take leverage: they stake the blue issue to say that if it were true, it would change their mind a bit. In exchange for taking on this additional risk, they get additional influence. Leverage for 10, gets a bonus 10 score. e.g. “Animals will adapt to the changing climate easily, without causing mass loss of species.”
Player B doubts that Player A is actually going to change their mind; they think Player A is just doing it for the additional influence. Player B doubts the leverage for 3, reducing Player A’s score by 3^2 = 9. (Why is Player B doubting? They earn yield on their doubts.)
It turns out Player A was being honest about being willing to change their mind. They learn some new information which prompts them to partially change their mind. Of course, self-slashing causes them to lose money, but by slashing themselves they also slash their doubter. In this particular case, by self-slashing, their score actually rises.
(figure: visualizing epistemic leverage)
Need something more tactile? Here’s a Colab Notebook where you can play with the numbers yourself.
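In the meantime, here is a toy run of the arithmetic in the story above. The exact scoring and slashing rules below (quadratic doubt, burning part of the stake on a self-slash, fully slashing the doubter) are illustrative assumptions chosen so the numbers behave as in the story, not the actual mechanism.

```python
# Toy arithmetic for the epistemic-leverage story (illustrative only).
# Assumed scoring rule: score = base stake + leverage bonus - (doubt)^2.
# Assumed self-slash rule: the player burns part of their stake (losing money),
# the doubter's position is slashed away, and the doubt penalty is removed.

def score(stake: float, leverage: float, doubt: float) -> float:
    return stake + leverage - doubt ** 2

# Step 1: Player A stakes $100 -> 100 score.
stake, leverage, doubt = 100.0, 0.0, 0.0
print(score(stake, leverage, doubt))  # 100.0

# Step 2: Player A takes leverage of 10 -> bonus 10 score.
leverage = 10.0
print(score(stake, leverage, doubt))  # 110.0

# Step 3: Player B doubts the leverage for 3 -> A's score drops by 3^2 = 9.
doubt = 3.0
print(score(stake, leverage, doubt))  # 101.0

# Step 4: Player A sees new information and self-slashes.
# Hypothetically: A burns $4 of stake, B's doubt is slashed, the penalty is removed.
stake -= 4.0
doubt = 0.0
print(score(stake, leverage, doubt))  # 106.0 -- A lost money, yet the score rose.
```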
So, you can see, this game is giving us a host of interesting properties:
it’s systematically favoring those players that are believed to be intellectually honest. After all, if you think someone really will change their mind as a result of new information then you’ll be much more nervous to doubt them — you could get slashed!
it’s providing an incentive for players to work to be seen as intellectually honest. That’s how all the other successful players are getting their influence.
it addresses our concerns about plutocracy. Now there are two ways to have influence in the game: one is to stake more money (after all, skin-in-the-game is a sincere signal) and the other is to be known as an intellectually honest player while making falsifiable arguments.
finally, what shall we do with the slashed funds? In general, funding epistemic infrastructure is hard. Journalism, data collection, education, etc. Now, every time there’s surprising information, a player has a selfish incentive to slash their stake, and that money has to go somewhere. Some of it will go to the protocol as revenue (which is good incentive alignment, because the protocol now makes money when people change their mind, i.e. when it exports the social norm of intellectual humility). The rest of it can be allocated among those who were already party to the informative position. Potentially, this can mean a new kind of revenue stream for journalists and analysts and bullshit detectors.
(There’s actually a fifth thing this does for us, which has to do with it being the primary driving force in giving our network of arguments connectivity. But I’m finding I’m not sure how to articulate it succinctly, so I’ll leave it for now.)
To be clear, this is not a complete mechanism; it’s merely a gadget that would be a component in a larger mechanism set. Hopefully you can see why it seems promising as part of a system that can attend to voices for epistemic reasons, rather than merely for charisma or money.
Together, these mechanisms sketch an intriguing DAO governance system. One that can be sensitive to epistemic information. One that aggregates preferences over outcomes to inform action based on settled beliefs (“exploitation”) as well as to forage for information (“exploration”), and is grounded in an internal oracle by way of a consensus mechanism. The DAO’s internal world model is encoded in prediction markets and their relationships, permitting DAO members to attend profitably to maintaining the structure of the beliefs of the DAO because they know it will go on to inform the decisions it makes. In this way, DAO members are providing the elements necessary to solve simultaneously for ambiguity and risk, thereby approximating a model-based, free energy minimizing approach to governance. This is much closer to a nature inspired, unbounded, parallelizable, incentive compatible infinite game for governance. One that has promise to remain credibly neutral, inspectable, decentralized, autonomous and even organized while nonetheless enabling a high degree of coordinative capacity and individual sovereignty. This, or something like it, is the future of DAOs.

To find more and follow along:
