
Epistemic Leverage Writeup

Epistemic leverage is one of three gadgets in our mechanism set:
A modified LMSR, basically just a typical AMM which allows you to buy shares (as is expected) as well as dispute the relevance between shares (what we call Carroll Mechanisms in the doc I shared). A minimal sketch of the underlying LMSR follows this list.
Epistemic leverage, basically a way to increase the system's registered number of sold units of your preferred shares by disclosing model information (what would be informative to you). It's like financial leverage, but you're risking reputation. The action of using epistemic leverage is called restaking, and it's essentially doubling down on information that would change your mind. You are incentivised to be an honest actor because there is a counterparty to the trade (called the doubter) who is calling your bluff. So in this game you want to have a reputation for being someone who changes their mind.
A consensus mechanism. We haven't settled on one yet. For a while we were planning to use EigenLayer's intersubjective staking, though you could imagine that might not be sufficient due to lock-in effects; in any case there are many consensus mechanisms to choose from, including production-ready APIs like Kleros and UMA. Consensus is actually quite easy; it's dissensus that's hard: what do you do with disagreement?
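To make the first gadget concrete, here's a minimal sketch of the plain LMSR that the modified version builds on. It only shows the "buy shares" half, none of the Carroll relevance machinery, and the function names and liquidity parameter are illustrative rather than the actual implementation.

```python
import math

# Minimal sketch of a plain LMSR (Logarithmic Market Scoring Rule) market
# for a single binary point. The Carroll modification (disputing relevance
# between shares) is not shown; names and the liquidity parameter are
# illustrative, not the actual implementation.

B = 100.0  # liquidity parameter: higher B means a deeper, slower-moving market

def cost(q_yes: float, q_no: float, b: float = B) -> float:
    """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes: float, q_no: float, b: float = B) -> float:
    """Instantaneous YES price, which doubles as the market's probability."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def buy_yes(q_yes: float, q_no: float, amount: float, b: float = B):
    """Charge for buying `amount` YES shares, plus the new outstanding totals."""
    charge = cost(q_yes + amount, q_no, b) - cost(q_yes, q_no, b)
    return charge, (q_yes + amount, q_no)

# Example: staking on "We should build a bridge over the Riva river."
charge, (q_yes, q_no) = buy_yes(0.0, 0.0, 50.0)
print(f"paid {charge:.2f}, new YES price {price_yes(q_yes, q_no):.3f}")
```

The point of showing this is just that the YES price doubles as the market's probability for the statement, which is what the restakes and resolutions further down the cascade end up moving.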
In fact, this is the key idea to understanding why this cascade is trustworthy: consensus mechanisms are really powerful. They underlie Proof of Stake, Proof of Work, Polymarket's market-resolution system (UMA), and many other systems.
So, why can't we use them for all truth? Why can't we just say, "Is the Earth flat?" or "Should Trump be president?" and then play a consensus game? The problem (and the solution) is that consensus mechanisms are highly sensitive to higher-order beliefs. If you can shift what people think most other people think, then you can manipulate the market.
See for example this thread about the problems with UMA:
So, why doesn't this problem plague Ethereum's Proof of Stake system?
What differentiates PoS and PoW from something like this is that PoS deals with low-ambiguity questions: "Is this hash the same as that hash?" The low ambiguity of the question (the low likelihood that someone will disagree) permits you to play a consensus-mechanism game without worrying about possible manipulation (this property is sometimes also called attributability, because you're able to attribute errors or manipulation to a particular player).
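To make "low ambiguity" concrete: it's the kind of check any honest participant can rerun and get the same answer, so a dissenting vote is attributable. A toy version:

```python
import hashlib

# A low-ambiguity question: "Is this hash the same as that hash?"
# Anyone who checks honestly computes the same answer, so a wrong vote is
# attributable to the voter rather than to genuine disagreement.
claimed = hashlib.sha256(b"block payload").hexdigest()
recomputed = hashlib.sha256(b"block payload").hexdigest()
print(recomputed == claimed)  # True for every honest checker
```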
If you accept this argument, then you should also accept that it simplifies our problem. We no longer need to find a mechanism that allows us to converge on the truth; we already have one of those, so long as we have low-ambiguity questions. Instead we merely need to convert a high-ambiguity question into low-ambiguity questions that we can feed into the consensus mechanism.
So, for the rest of this I'll assume you've bought that argument, which permits me to boldly claim that you should expect this mechanism to resolve to true and accurate conclusions because of the properties of consensus mechanisms; and the great news about this claim is that there are already 2+ trillion dollars (the value secured by existing consensus mechanisms) to be hacked if this statement could be shown false.

So then the question is:
Can you detect high ambiguity in questions and then distill concrete, resolvable subquestions from them?
My claim is that that's what the first two gadgets, Carroll LMSRs and epistemic leverage, do together.
I'll make a high level argument for it and then I'll walk you through a concrete cascade.
At a high level, epistemic leverage gives participants more power (for lack of a better word) in exchange for them saying what would change their mind.
We can trust that the signals from epistemic leverage are sincere because:
participants have no reason to point to counterinformation that isn't actually relevant, because they'll lose in the Carroll markets
if participants are thought to be obstinate about relevant and true counterinformation then there's upside for counterparties who can 'doubt' them (call their bluff).
participants don't want to self-slash, because they lose their shares to the people holding the counterparty shares (and they don't want to hold the counterparty shares because they are in conflict with the preferred shares)
This therefore gives us an ambiguity pump: a way to start with a high-level statement and then steadily reveal the constituent definitions and indicators of that statement by way of the topology of the market graph built by participants playing game-theoretically optimal moves (there may be multiple and overlapping definitions, but that's fine, welcome to humanity). This is why I sometimes say epistocracy is like markets meeting category theory. We don't actually have the ability, from the governance market's perspective, to peer into the meaning of each statement (that's for AI to do, which we don't want to trust in this context because we will want to govern AI with this), but we can look at the topology of incentives and glean the relevant information.
Once we have played that game to its terminus we'll have generated many, many low-ambiguity statements which we can feed to consensus mechanisms. As those consensus mechanisms converge, that changes where you can find "valleys" in our incentive landscape (stable attractors in topology space), which changes the price of the shares asking the highest-level questions of the market.
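As a sketch of the interface we're assuming at that final step (and emphatically not Kleros' or UMA's actual API), here's a toy stake-weighted resolution of a single low-ambiguity statement:

```python
# Toy stand-in for the consensus step (not Kleros' or UMA's API): a
# hypothetical stake-weighted vote over one low-ambiguity statement.
# All the rest of the mechanism assumes is this interface:
# low-ambiguity statement in, resolution out.

def resolve(votes: dict[str, tuple[bool, float]]) -> bool:
    """votes maps voter -> (answer, stake); returns the stake-weighted answer."""
    yes_stake = sum(stake for answer, stake in votes.values() if answer)
    no_stake = sum(stake for answer, stake in votes.values() if not answer)
    return yes_stake >= no_stake

votes = {
    "alice": (True, 40.0),   # stakes are illustrative numbers
    "bob": (True, 25.0),
    "carol": (False, 10.0),
}
print("statement resolves:", resolve(votes))  # True, so the leaf market settles
```

Whatever mechanism we end up choosing, this is the only shape the rest of the cascade needs: a low-ambiguity statement goes in, a resolution comes out, and everything staked or restaked against that statement gets repriced.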
Ok, now to the concrete example:
participant creates a new market statement, which says, "We should build a bridge over the Riva river."
then, to gain more favor, the participant 100% restakes on the statement, "The commerce that passes over the Riva river would not change if there were a bridge." They are indicating that they would change their mind if this were true.
the shareholders in "commerce won't change" restake on the statement, "a month-long experiment where ferry transfers are subsidized to be free saw commerce increase by less than 2%" (remember, this doesn't actually mean they believe this statement is true, just that they would change their mind if it were).
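To keep the moving parts straight, here's one way to represent that setup as data. The field names, prices, and restake amounts are made up for illustration; only the three statements come from the example above.

```python
from dataclasses import dataclass

# Illustrative data shapes for the bridge setup; field names, prices, and
# restake amounts are invented. Only the three statements come from the example.

@dataclass
class Point:
    statement: str
    price: float  # current YES price in that point's market

@dataclass
class Restake:
    holder: str         # who is using epistemic leverage
    on: Point           # their preferred point
    informed_by: Point  # the point that would change their mind
    amount: float       # how much influence/reputation is doubled down

bridge = Point("We should build a bridge over the Riva river.", price=0.55)
commerce = Point("Commerce over the Riva would not change if there were a bridge.", price=0.40)
ferry = Point("A month of subsidized free ferry transfers raised commerce by less than 2%.", price=0.50)

restakes = [
    Restake("proposer", on=bridge, informed_by=commerce, amount=1.0),         # the 100% restake
    Restake("commerce holders", on=commerce, informed_by=ferry, amount=0.5),
]
```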
Now let's walk through one of the cascades (it's 1:30 am here and I'm not making anyone happy by staying up this late on my computer, so I'll leave it to you to generalize to other scenarios):
Participants place their bets on how the month-long experiment will resolve; this moves the market prices around
If enough people have placed their bets and enough people have restaked, then there will be lots of connectivity in the markets. Let's assume the liquidity from that disagreement is enough to fund the month-long subsidy for the experiment. In general, because of the structure of this governance market, disagreement is a subsidy for experimentation and information foraging
Finally, the experiment is run, a generalized consensus starts to emerge, and the markets begin to resolve. The lowest markets resolve first (these would be markets even deeper than the market about the experiment), and as they resolve a small drama plays out at each level of the market. Let's look at just the interaction between the shares associated with "commerce won't change" and "experimental subsidies"; we'll call them C and S.
C shareholders begin to feel the price of their shares falling, and therefore the price for "build the bridge" rising, which is what they don't want. They have two options: change their positions, or change their epistemic leverage. Let's say they choose to increase their epistemic leverage to keep the price of "build the bridge" low
Now, people begin to see that the favor of S is quite high and that it's considered tightly coupled to C, but that the holders of C are not yet willing to self-slash. So S holders take the opportunity to 'doubt' the C holders, to take the counterposition, which pays out more yield the higher the favor and relevance of S to C. In this case both are high, so the yield is quite sweet so long as the C holders don't self-slash (a rough payoff sketch follows this walkthrough).
The C holders experience that as both a drain on their position and a weakening of the price of their shares, the exact thing they were trying to avoid. They can now do one of two things: acquiesce or protest.
If they acquiesce, the game is over: they pay out to the counterparty players (hopefully the ones that funded the month-long experiment were wise enough to also buy shares, so they receive the payout from the C holders).
If they choose to protest, they have two options. They can dispute the relevance of S, participating in the Carroll Mechanism to essentially say, "These two shouldn't have been tightly coupled after all." However, at this point disputing the relevance generally won't be a profitable pursuit, since they sat on their epistemic leverage for so long. So really their only option to stem the bleeding is to dispute the veracity of S. They can do this by buying more shares in S or by using epistemic leverage on counterparty shares to S. The thing to notice is this: that's great news! We've just moved one level deeper in the stack, and now we're likely dealing with even more concrete claims (they'll have to be, if they've proven informative to the S holders).
So, we've succeeded in ratcheting once more toward low-ambiguity questions; eventually one of them will be so low-ambiguity that we can resolve it with a consensus mechanism. We have our ambiguity pump!
Therefore, we get one of two outcomes every time we have a conflict like this: either we get funding for information foraging (running experiments, collecting data, doing research, collecting opinions), or we reduce the ambiguity of our question, getting one step closer to resolving our markets and taking action at the top level (in this case, building our bridge). We're surfing the explore-exploit tradeoff frontier using markets!
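For the C-versus-S drama above, here's a rough sketch of the doubter's payoff. The formulas are invented for illustration; only the structure follows the writeup: yield scales with the favor and relevance of S to C, accrues while the C holders hold out, is capped by what was restaked, and the doubt stake is lost if the restaker self-slashes.

```python
# Rough sketch of the doubter's payoff in the C-versus-S drama. The formulas
# are invented; only the structure follows the writeup: yield scales with the
# favor and relevance of S to C, accrues while the restaker holds out, is
# capped by the amount restaked, and the doubt stake is lost if the restaker
# self-slashes (i.e. genuinely changes their mind).

def doubter_payoff(doubt_stake: float,
                   restaked: float,
                   favor_s: float,            # 0..1, market favor of S
                   relevance_s_to_c: float,   # 0..1, coupling set by the Carroll market
                   periods_held_out: int,
                   restaker_self_slashes: bool) -> float:
    yield_rate = 0.05 * favor_s * relevance_s_to_c          # hypothetical per-period yield
    accrued = min(restaked, doubt_stake * yield_rate * periods_held_out)
    if restaker_self_slashes:
        return accrued - doubt_stake   # the bluff call failed: the doubt stake is lost
    return accrued                     # C held out (or acquiesced): the yield is kept

# S holders doubt the C holders' restake while C refuses to self-slash:
print(doubter_payoff(doubt_stake=20, restaked=50, favor_s=0.8,
                     relevance_s_to_c=0.9, periods_held_out=6,
                     restaker_self_slashes=False))
```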
Now consider what this would have been like for questions like these:
Does sugar consumption cause obesity and type 2 diabetes?
Do vaccines cause autism?
Do opioids cause suicides and deaths of despair?
Do cigarettes cause cancer?
Is climate change human caused?
It's not that this sort of mechanism changes anything about the science that must be done in order to answer these questions. Instead, it increases the credible neutrality of the process by which we reach answers: it creates balanced incentives, rewards accuracy, makes reasoning transparent, and is permissionless to participate in (no pedigree required).
It's not just that this mechanism increases the credible neutrality of the conclusion (by modulating the influence of money); it also directs funding to wherever there is disagreement, because that disagreement essentially represents a bounty which can be claimed for reducing ambiguity or risk.
That's helpful because it means disagreement becomes the heat that powers an engine for generating insights. This means that on net, we should expect people that use this to have better bridging behaviors: they'll converge faster to agreement and at a lower cost.
In a near-term world, this will become especially important because it won't just be humans we want to incentive-align and transparently trust, but agents, too.
Language summary:
point — a statement with associated shares; points can be linked to one another with the Carroll mechanism
stake — purchasing the shares of a particular point. It's a bit more accurate to call it staking than purchasing, since shares seem to imply you want some upside, when with these markets you might just be buying (staking) to increase the likelihood that some event will occur
restake — use epistemic leverage to get more influence by indicating that another particular point would be informative
slash/self-slash — give up a position you took by restaking, i.e., concede the point you doubled down on, at the cost of some money, influence, and reputation
doubt — take the counterparty position to a restake. A doubter is calling the bluff of the restaker. A doubter gets two things for doubting: they get paid by the $$ that was restaked, and they get to decrease the extra influence a point got from the player restaking for it. However, the doubter loses the $$ they doubted if the restaker slashes