Some people believe that there is such a thing as a “fact”: a thing that is unarguably true and has been definitively established.
This mechanism’s approach to discovery assumes that progress is the discovery of error.
If something really is a “fact,” there will be no further error to discover.
Seen this way, there is no definitive answer, only the current answer. It also means anyone with new ideas can reignite the discovery process by doubting the consensus. Since disagreement in this forum is costly and creates funding for research, doubt of the consensus is a very valuable signal.
How do I HODL and moon (how do I get rich)?
At this point, you don’t.
There’s no financial incentive for making accurate claims. There are payments offered to verifiers, but that’s only natural: you’re paying someone to do work. You can only lose money by intrusting and being wrong, or avoid losing money by not being wrong.
Then why would anyone ever intrust? Because they want to modify the collective policy. E.g., Philip Morris wants cigarettes to be less regulated and is willing to pay to influence the collective decision. However, in this system, unlike the current one, money flows are transparent, funding is independent, and decisions are ultimately based on evidence, not influence.
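To make that incentive shape concrete, here’s a toy payoff model in Python. Every name and number in it is my own illustration, not the mechanism’s actual accounting; the point is only the asymmetry: being right returns your stake at best, being wrong forfeits it, and verifiers are paid for work either way.

```python
# Toy payoff model for a single claim at resolution time.
# All names and numbers are illustrative assumptions, not
# the mechanism's actual accounting.

def intrustor_payoff(stake: float, resolved_in_favor: bool) -> float:
    """What an intrustor nets when a claim resolves.

    There is no upside for being right: at best the stake
    comes back. Being wrong forfeits the stake.
    """
    return 0.0 if resolved_in_favor else -stake

def verifier_payoff(verification_fee: float) -> float:
    """Verifiers are simply paid for doing work, regardless
    of which way the claim resolves."""
    return verification_fee

# You can only lose money by intrusting and being wrong,
# or avoid losing money by not being wrong.
assert intrustor_payoff(100.0, resolved_in_favor=True) == 0.0
assert intrustor_payoff(100.0, resolved_in_favor=False) == -100.0
assert verifier_payoff(5.0) == 5.0
```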
Doesn’t this mean I could lose money on a claim I didn’t make?
Yes. Someone can pay you with tokens intrusted in a claim that resolves against you. That would cause you to lose money.
This is highly likely to happen and it’s an intentional part of the design. It invites you to care about the claims with which you’re paid.
Furthermore, when you benefit from the public goods that arise from belief alignment with a community, it also entangles you with the consequences of that community’s agreements and beliefs.
This is also the purpose of discounts. As you saw earlier, discounting a claim allows you to hedge (at some cost to yourself) against that claim.
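Here’s a minimal sketch of what that hedge could look like. The text doesn’t spell out the instrument, so the cost-and-offset structure below is my assumption, meant only to show the shape of the trade.

```python
# Toy hedge on tokens you were paid with. The cost/offset
# structure of a "discount" is assumed for illustration.

def net_exposure(tokens_in_claim: float,
                 discount_cost: float,
                 discount_offset: float,
                 resolves_against_you: bool) -> float:
    """Net outcome on tokens intrusted in a claim someone paid
    you with, with an optional discount acting as a hedge."""
    if resolves_against_you:
        return -tokens_in_claim + discount_offset - discount_cost
    return -discount_cost  # the hedge wasn't needed, but you paid for it

print(net_exposure(100.0, 0.0, 0.0, True))     # -100.0 (unhedged loss)
print(net_exposure(100.0, 10.0, 60.0, True))   # -50.0  (loss softened)
print(net_exposure(100.0, 10.0, 60.0, False))  # -10.0  (cost of the hedge)
```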
Can this be used for price discrimination based on beliefs?
Imagine a company discovers that people who intrust in claim Y tend to have more money than people who do not. They examine claim Y and conclude that it won’t resolve against them (it’s durable in their favor, and perhaps it’s symmetrically durable). This presents an opportunity: discount claim Y with a stance that increases the cost of goods for the people who intrusted in claim Y but leaves the price otherwise unchanged.
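Here’s roughly what that pricing rule might look like. How a stance attaches to a price isn’t specified here, so I’m modeling it as a simple per-claim surcharge with invented numbers.

```python
# Sketch of belief-conditional pricing. The surcharge model
# and all numbers are hypothetical.

BASE_PRICE = 50.0
STANCE_PREMIUMS = {"claim_Y": 5.0}  # the company's priced stance against Y

def quote(customer_claims: set[str]) -> float:
    """The price is unchanged unless the buyer has intrusted
    in a claim the company has taken a stance against."""
    premium = sum(STANCE_PREMIUMS.get(c, 0.0) for c in customer_claims)
    return BASE_PRICE + premium

print(quote(set()))        # 50.0, price otherwise unchanged
print(quote({"claim_Y"}))  # 55.0, intrustors of Y pay more
```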
Is this belief-based price discrimination? Yes. Is it a problem? No. The company is either:
Contributing to sincere inference about claim Y being non-durable.
Arbitraging a vanity claim: people intrust in Y because it looks nice, not because it’s meaningful.
In both cases we get something we want.
In the first case, we might imagine Y is a claim that might resolve. For the company to believe this, they must have performed inference on the claim. They’re accepting some cost (there will be less demand for their goods), but in exchange they get to charge this select group more. Essentially, their bet is, “Claim Y is wrong, and by recognizing that we can get paid by the group that intrusts in it.” From the network’s perspective this is healthy behavior: they are taxing some wealthier category of people in order to fund research via disagreement funding.
In the second case, we might imagine that Y is durable but so is ¬Y. Here Y might be something like the claim, “Flonktoids are non-immertozic.” Fascinating, perhaps, but ultimately meaningless. If some group of people is buying claim Y anyway, it’s a vanity claim: they hold it purely for signal. It’s fine to charge people more for something they buy for signal purposes; that’s called a luxury good.
Will this make my family reunions worse?
Won’t this make for:
harder compensation conversations?
unendurable political conversations at the dinner table?
a more complex shopping experience?
Possibly, and it will also make each of these better in one specific way.
Money conversations are always hard. But claims control for a variable in the conversation. Before priced disagreement, you had to think:
What are the economics? How much can I afford for this role? How will people evaluate whether their pay is fair?
What are the optics? What idiocy (from my perspective) am I implicitly supporting by having this person on my payroll?
Priced disagreement dampens concerns about ideological alignment by making them explicit. Everyone already selects their employees and financial partners based on largely implicit tests of ideological alignment that are easily gameable (they suffer from inauthenticity) and non-specific (net fishing vs. spear fishing).
What claims allow is a particularist approach to the price of our disagreement.
Something similar is true for the discomfort of disagreement at family dinners. Has your family ever been relationally riven by a bet that 2025 will experience 4% inflation instead of 7%? I’d guess not.
There’s something about concrete bets that enables clear thinking and alleviates disagreement. A bet ensures you’re not in disagreement with one another but with something real in the world. It makes tangible the fact that rhetoric will not persuade reality, unlike your relatives.
My bet (while we’re at it) is that it’s those who don’t intrust in claims who will prove the most unendurable at those dinners.
Finally, purchasing goods remains largely unchanged. A product will have a price + tax + some disagreement premium. However, that disagreement premium can be viewed as a credible signal. If you disagree with a stance that a company you’re purchasing from has priced, you’ll both feel it in the purchase price. It will also mean that companies can be more substantial and less performative about their beliefs. Instead of the brand positioning of Corporate Social Responsibility, they can intrust and price claims like anyone else: a much more credible signal of their values because it actually costs them something.
In all these cases, making the price of disagreement explicit helps to ease these conversations. After all, disagreement is always expensive, whether we pay the tab productively or fold it into unquestionable political stances, ambiguous selection processes, and unproductive disagreements.
What about redemption?
Inevitably, intrustors will be found to be wrong. They’ll have intrusted some claim, agreed to some resolution terms, and then after the work was done, the verifier will essentially tell them, “so sorry, you got it wrong this time.”
But clearly, the intrustors who are found to be wrong contributed something valuable to the discovery process. Is it really right to just say, “they were wrong, they lose, too bad”?
One possible direction for recognizing their contribution is to reward the recognition of error. In some sense, this would be saying, “Hey, thank you for funding the discovery that this doesn’t work.” Too much of science is focused on getting “positive results.” This tendency is well known to incentivize false positives, but there are likely even greater scientific costs in the opportunity cost of not mapping the negative space. At the very least, a lack of a map of the negative space (experiments that didn’t work out) probably results in repeat work that is otherwise invisible to us, in addition to a reduction in the rate of new insight that comes from asking, “We tried this and it didn’t work. Why not?”
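As one hypothetical shape this could take (the rebate fraction and the split below are mine, purely for illustration): part of a forfeited stake returns to the wrong intrustor as a credit for having funded a negative result, and the rest funds further discovery.

```python
# Hypothetical "redemption" rule: a wrong intrustor recovers
# a fraction of the forfeited stake as a negative-result
# credit. The fraction is invented for illustration.

NEGATIVE_RESULT_REBATE = 0.15

def settle_wrong_intrustor(stake: float) -> tuple[float, float]:
    """Split a forfeited stake into a rebate for the intrustor
    ("thank you for funding the discovery that this doesn't
    work") and the portion that funds further discovery."""
    rebate = stake * NEGATIVE_RESULT_REBATE
    return rebate, stake - rebate

rebate, funded = settle_wrong_intrustor(100.0)
print(rebate, funded)  # 15.0 85.0
```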
But how do I completely protect myself from the downsides of claims that resolve against me?
You cannot. The cost of erroneous beliefs is not properly measured in dollars but in human experience. There is no proper hedge against global nuclear annihilation (although global annihilation insurance would be a great product to sell: either you have no claims to pay out, or you’re exempt by virtue of your non-existence). In the scenario where the rest of the world believes something you know with certainty is erroneous, your best strategy is to try to change their minds, either by pricing their claims with a stance, making counter-claims, or forming a partially disconnected society that is independent of the financial and social costs of greater society’s mistakes. This is exactly the behavior we want to elicit from someone who has unique insight on some widely subscribed erroneous belief: tell us.
Now what?
Now go make up your own scenarios and play with the mechanisms in the