
How not to solve governance

A story of failure and mechanisms
For those of you that know me, you’ve watched me work obsessively these last seven years on the problem of governance. Today I’ll unveil the master plan, including why it doesn’t work. Despite my best efforts, I’m stuck. And the best thing to do when you’ve failed is to admit it publicly and in excruciating detail.
For those who don’t know me, hi, I’m Connor McCormick. I first became interested in the question of governance in 2013 when I briefly moved to Ecuador to live with a local family. While I lived there I saw incredible industry and creativity that operated against a backdrop of extreme corruption. And that corruption repeatedly undermined and sapped energy from the creative work of our community. Experiencing corruption first-hand convinced me that there is no problem more important to solve.
My Ecuadorian family owned a factory manufacturing water filters which we would sell (and donate thanks to support from The Red Cross) to nearby communities. In Ecuador, waterborne illnesses are a common cause of disease, and especially of childhood mortality. In a process that almost felt like magic, using our hands and a hot kiln we were able to transform clay and wood chippings into clean water. The filters that came out of our process have since been awarded the highest water-purity rating by the European Union. It’s some of the most rewarding work I’ve done.
The labor was exhausting and manual, but no part was more taxing than working the press which squeezed the clay mixture into the flower-pot shape of the finished water filter. The press had to be operated by hand by pumping a long, broken metal rod, and so we were extremely excited when finally a newly commissioned hydraulic press arrived, a beautiful bright red construction built in a nearby metal shop that would allow us to double the output of the factory with no more than the press of a button.
It lived two days. That night, while the groundskeeper was away, a huge circular saw chopped through the cinderblocks that made up the wall of the factory. By the next morning, the only remaining sign of our hydraulic press was the furrow it left in the dirt and the tracks of the truck that carried it away. Later we would learn that the thieves had stolen it for nothing more than to melt it down and sell the metal scraps.
I’ve never felt more indignation. My frustration was only exacerbated by the fact that everyone else was so equanimous as we went back to pumping the press by hand. This sort of thing just happened here, they told me. What would the police do? I demanded. They just laughed. Pulling the thread of the police’s inaction caused people to check their surroundings. In the privacy of our car bouncing down the highway I finally coaxed an answer. Over the thunder of the engine they whispered an explanation of corruption, and incompetence, and a president that couldn’t be criticized. And I wasn’t advised, so much as could sense, that this was the sort of topic to avoid in mixed company.
Our beautiful red hydraulic press was one more casual casualty of corruption. The $10,000 machine — no small amount of money, least of all in Ecuador where the average annual salary is barely double that — never even got a paint chip in the course of making a filter before it was once again an ingot for sale on some shelf.
Of course, this story is not unique to Ecuador. This archetype plays out in all countries, on all different magnitudes and domains. Whether it’s insider trading in Congress, or kickbacks to regulators, or banks too big to fail, or apathy on climate change, or companies rushing out misaligned AI in a bid for market share, or the mounting risk of a nuclear war that threatens to leave nothing more to posterity than frozen shadows on the wall. The prize for solving the problem of governance is the ability to address all of those big problems, but my vendetta is much simpler. For me, governance is about getting my red hydraulic press fucking back.
And yet, as I’ve already spoiled, I don’t know how to solve governance. It’s a somewhat difficult problem that we as humanity have yet to solve after several tens of thousands of years of experimentation. Instead, all I have to offer you are a handful of mechanisms that you’ve never seen before, tessellated in ways you haven’t imagined, to achieve emergent results that are tantalizing yet not quite sufficient. As we proceed, I’m going to describe some mechanisms to you, and I’ll also describe the kinds of behavior I think they produce. As much as possible, I’ll try to restrain myself from justifying the mechanisms to you, but sometimes I just won’t be able to help myself. By the way, this is about mechanisms, so it’s going to require a little bit of thinking energy, and we’ll need to both think about the math, as well as switch to thinking about the emotional experience of the people in the system. If you’re looking for a rah-rah manifesto or dollar signs, go speculate on a token or something. Let’s begin.
Governance is primarily about how a community sets policy. Let’s imagine you’re a person who is concerned about AI. You worry that AI is going to be hugely detrimental to humanity, and you believe that AI enablement research (like ChatGPT or Bing Search) should stop, at least until we have better theories of how to ensure AI is safe and aligned with the goals of humanity. If you believe that today, what can you do to express that preference in terms of policy on the global stage? There’s lots you can do: you can protest, you can create interest groups, you can build coalitions, you can work on AI safety projects, you can try to influence policies or legislators, etc. What I’ll be offering here is one additional option on that list of ways you get your way.
The first mechanism we’ll enable is what we’ll call a “market for policy”. Let’s imagine there’s a policy that says, “We should stop AI enablement research.” The first thing we’ll set up is a way for you to pay money to influence the adoption of that policy. For simplicity, let’s consider the case where whichever side stakes the most money gets the policy enacted. So, if your side stakes $1m in favor and the opposing side only stakes $900k against, then the policy is immediately enacted. If later the opposing side stakes more, pushing it up to $1.1m against, then the policy automatically flips, and whatever fines or compulsion mechanisms were preventing AI enablement melt away. For now, we’ll imagine that these staking mechanisms are non-reversible: once you’ve staked, you’ve staked permanently.
Warning: justification follows. I’m worried I’ve already lost you. It’s quite obviously a bad idea to create a market that sells policy to the highest bidder like we would for, what, apricots, right? Is there any idea more crypto-bro-libertarian than “we’ll just use markets”? What will putting policy on sale do other than to naturally result in more corruption, more plutocracy, where those with the most get more? What I want to offer as counter is that we already live in a world where policy is on sale. I’m not claiming it’s desirable, but that as far as I can tell it’s the first thing you must accept. Need I remind you that you live in the world where the place called the town square is for sale, and the person who bought it under the auspices of enabling free speech can silence you for telling people about other town squares they might like. And these sorts of distortions happen not just in the private sector but also in public ones. Money can get its way no matter your governance system, whether through bribery, or marketing, or media. If building governance systems is like designing a rocket, then “money gets its way” is the tyranny of the rocket equation. Achieving orbit is not a matter of wishing our gravity well away, but rather deliberately engineering around its constraints. By starting with a market for policy, we make the risk of capture and plutocracy explicit. And if despite this challenging start we can achieve a design that’s non-plutocratic then it’s truly promising.
Our first step in attempting to reduce wealth effects will be to equip our market for policy with a mechanism whereby it’s free to change policy as long as your community agrees with you. This will require us to answer three questions: 1) how can we make it free to stake, 2) what counts as “your community”, and 3) how expensive should it be to disagree?
There’s a single mechanism set we’ll reach for to address all of these problems, which builds on the staking market for policy. Let me introduce you to the first component.
To modify our policy market so that it’s free to stake, when someone stakes a policy they receive in return a set of tokens that represent partial ownership of the assets staked in that policy.
So, for example, when the policy, “We should stop AI enablement research.” already has $1000 staked in favor of it, and you stake an additional $100, you receive a token worth $100/$1100 ≈ 9.09% of the staked value of the policy (which is to say, $100 worth). If you’re familiar with crypto mechanisms, we’re wrapping the original currency in a new token. We’ll call this pattern, whereby you stake money in exchange for partial ownership of a policy, “intrusting”. What differentiates intrusting from investing is that with investing you’re expecting financial returns, whereas with intrusting you’re expecting “policy returns” — influence over policy.
To grok this, here’s a worked example. Imagine there’s a policy that already has $1000 in intrustments. Now you come along and intrust $100 (the policy is now worth $1100). You now own 9.09% of the policy, so your part is worth $100. Now someone else comes along and they intrust $100 (it’s now worth $1200). They now own 8.33% of the policy, so their part is worth $100. And you now only own 8.33% of the policy, but your part is still worth $100.
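The arithmetic above is simple enough to sketch in a few lines of Python (a sketch of my own; the function name and interface are illustrative, not part of any protocol):

```python
def intrust(pool_total: float, amount: float) -> tuple[float, float]:
    """Stake `amount` into a policy already holding `pool_total`.

    Returns (new_pool_total, share), where `share` is the fraction of
    the policy the new intrustor owns. Note that share * new_pool_total
    equals `amount`: staking is value-neutral at the moment you stake.
    """
    new_total = pool_total + amount
    return new_total, amount / new_total

# The worked example: a policy holding $1000, then two $100 intrustments.
total, your_share = intrust(1000, 100)    # total = 1100, your share ~ 9.09%
total, their_share = intrust(total, 100)  # total = 1200, their share ~ 8.33%

your_share_now = 100 / total              # you're diluted to ~ 8.33% ...
print(round(your_share_now * total, 2))   # ... but still worth 100.0
```

Later intrustments dilute your percentage but not your dollar value, which is what makes it coherent to call staking "free".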
This is a promising start, and the first instance of a recurring pattern we’ll come back to for tamping down wealth effects: you can give people default, costless permission to modify a policy and then allow their community to evaluate the choice. In the crypto world, this is called an “optimistic” pattern because you trust first and verify later. However, clearly what we have isn’t sufficient because we have yet to define what counts as a community.
Next, I’m going to introduce two components to what we’ve so far built that have a remarkable property of eliciting self-assembling communities. I think you’ll find this to be quite cool.
First we’ve got to handle the fact that we’re creating a Cambrian explosion of new currencies here, one for each possible variation of policy that anyone could ever prefer. What we know in practice is that humans very rarely prefer to deal with this sort of complexity. Instead, we tend to anchor to one individual index currency, and compare everything against that. In fact, this dynamic is one of the leading explanations for what initially caused money to emerge: it’s cognitively infeasible to keep up with all the pairwise prices for bartered goods. How many chickens is a cow worth? How much of a cow is a loaf of bread worth? Etc. Instead, if we just measure everything in the amount of grain that a cow, chicken, and loaf of bread is worth then it’s much easier to keep up with these prices. From a computer science perspective your quadratic memory requirements have just collapsed to merely linear.
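To make that collapse concrete, here’s a quick count of the prices each scheme needs (my own illustration): n goods bartered pairwise need n(n−1)/2 exchange rates, while a single unit of account needs only n−1.

```python
def pairwise_prices(n: int) -> int:
    """Exchange rates needed when every good is priced against every other."""
    return n * (n - 1) // 2

def index_prices(n: int) -> int:
    """Prices needed once one good (say, grain) is the unit of account."""
    return n - 1

# Four goods: cow, chicken, bread, grain.
print(pairwise_prices(4), index_prices(4))        # 6 3
# The gap explodes as the economy grows:
print(pairwise_prices(1000), index_prices(1000))  # 499500 999
```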
We can engage with the problem more empathetically if we put ourselves in the shoes of someone who might be paid by one of these currencies. Let’s say you’re selling a product to someone and they have only USD to pay you with. This is no problem for you: if they’re willing to pay your price, you accept. But what if they have a bunch of other currencies as well? They have some Yen, some USD, some Bitcoin, and some Ethereum. They’re willing to pay your price, but they’d ideally like to pay you in a mixture of these currencies. How do you feel? It’s complicated, right? Now you have to evaluate each of those currencies against one another. Do you need to factor in your beliefs about the long term economic prospects of Japan to get paid? What if you have a moral objection to the energy usage of Bitcoin? And worse, when it comes time for you to pay, you’ll have to do this same dance but from the other end of the transaction. If you accept Yen, will anyone else accept it? Now you can see why in practice economies tend to collapse to only a single medium of exchange. You know that they know that you’ll accept USD. And they know that you know that they’ll accept USD. So everyone is comfortable accepting USD.
If you’re paying attention, this is a coordination problem, and we can actually solve it. It’s pretty simple once it’s broken down: the question is, how can we establish an expectation that if you receive a strange currency you’ll likely be able to pass it off? Very simply, we design a mechanism that forces you to pay with all the currencies in your wallet. This mechanism will also end up creating some important opportunities down the line.
When you transfer payment to someone the recipient of the money gets a piece of each of your claims.
Here’s a visual way to explain the mechanism. Imagine this is someone’s wallet, where each bar represents a token of some value:
[figure: a wallet drawn as a stack of bars, one per token]
You can see a $1 payment to the left of the line. It’s made of a piece of each token in the wallet based on token value.
[figure: the payment slicing a proportional piece from each bar]
A concrete example with numbers helps. Let’s imagine someone has a wallet with these tokens, all denominated in USD:

$100 USD, $400 Ethereum, $300 “Stop AI Enablement Research”

If they’re going to pay you $100, you’ll receive this as the composition of your payment: 12.5% USD, 50% Ethereum, and 37.5% Stop AI Enablement Research. Which is: $12.50 USD, $50 Ethereum, and $37.50 Stop AI Enablement Research. In total, you receive $100, just like we expect.
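A minimal sketch of an index payment, assuming every balance in the wallet is already marked to a common unit (the function name and wallet layout are mine):

```python
def index_payment(wallet: dict, amount: float) -> dict:
    """Split `amount` across every token in `wallet`, pro rata by value."""
    total = sum(wallet.values())
    if amount > total:
        raise ValueError("insufficient funds")
    return {token: amount * value / total for token, value in wallet.items()}

wallet = {"USD": 100, "Ethereum": 400, "Stop AI Enablement Research": 300}
payment = index_payment(wallet, 100)
print(payment)
# {'USD': 12.5, 'Ethereum': 50.0, 'Stop AI Enablement Research': 37.5}
```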
I’ve been encouraged by some people to call this a “proportional payment”, but I quite like the term “index payment”. For one, just like an index fund buys stocks proportional to market cap, an index payment transfers tokens proportional to value. Second, the many-to-many problem in barter is commonly solved by an index currency (or index good, like grain). So, it makes sense that there’s a dual way to solve this problem by intervening on the transaction, rather than the currency. To motivate this for my computer-sciency friends: consider how you can equivalently represent the same information in a database either by tracking state snapshots or deltas. Something similar is at play here: you can either ensure acceptability of transactions by indexing to a currency (the stateful solution) or you can ensure acceptability of currency by indexing the transactions (the delta solution).
For a more intuitive explanation of what this does to solve the problem, once again put yourself in the shoes of someone getting paid. If you receive the above payment and you know that people you’ll want to pay have adopted index payments how are you going to feel? Well, assuming you don’t have any objections to the market value of the tokens you’re receiving then you’ll feel fine. You know that when you go to pay someone they’ll accept the strange tokens in your wallet, because after all, you accepted the strange tokens in someone else’s wallet.
To be totally explicit: this means that if you had one of these wallets you wouldn’t be able to sell any individual token in it. You could only make transfers according to the rules of the index payment. You could buy new tokens, sure, but once they were in your wallet you couldn’t get rid of them (by any means other than slow attrition where over multiple payments the concentration of that particular token as a percentage of your wallet would go down because you were spending and receiving new payments with different tokens). Some people find this to be a really strange thing to do, and make moral arguments like, “but it’s wrong to take away people’s ability to sell the tokens in their wallet. After all, the ability to sell is one of the core rights that ownership conveys. You’re constraining people’s freedom.” But what this argument misses is that all emergence is the result of constraint. You’re only reading this right now, instead of dissolving into a bubbling pink stain on your chair, because of all the things your cells are not doing — because of all the reactions the chemicals that make up your cells are not initiating. It’s thanks to the fact that the space of expression of the components of your body is constrained that the beautiful emergent thing called you can amble about. If we want to enable the ambling emergence that will one day be to us as we are to our cells then we’ll need similar self-constraint.
Beyond arguments about emergence, this is a really interesting mechanism for increasing the fungibility (read: ability to spend) of alt tokens, and it likely applies outside of the domain of this particular governance project, so if anyone wants to explore it theoretically or to go and implement it into your own wallet feel free to rip, riff, repurpose, and redesign, just be beneficial with it. I do have a math friend that wants to explore the Schelling points this creates more theoretically but I can’t afford to hire him right now. Another cool property of index payments that’s also likely worth exploring is their ability to enable negative money that’s not debt.
After all, what if you do object to the value of the tokens in the wallet? What if they have Bitcoin in their wallet and you have an environmental opposition to it? What if you think that USD is overvalued compared to the current market value? What if they have the policy, “AI Enablement Research Should be Expedited” in their incoming payment and you disagree?
It’s this question that gets us those self-assembling communities I hinted at. Almost magically, autopoietic value-aligned economic communities are going to pop out of the mechanism that we employ to solve this problem of disagreement. That’s the sort of sentence I never thought I would write with so much gusto. Here’s how the mechanism works:
When someone pays you with some currency that you don’t consider to be properly valued, you’ll be able to choose how much you endorse it. Simply put, if someone is paying you, and they have in their wallet the policy, “AI Enablement Research Should be Expedited.” you can set an endorsement; the lower the endorsement, the more they need to pay, because, at least from your perspective, there’s a hole in their incoming payment.
We call the additional money that the payer must fork over the “conciliation”.
In general, this is the equation for the conciliation they must pay:

conciliation = base × (f × d) / (1 − f × d)

where f is the fraction of the payment made up of the discounted token and d is the discount your endorsement applies to it.
For now, I’ll skip motivating the equation and its derivation in detail. The key idea is that if a payer has a token that’s discounted by a low endorsement, we make them make a second smaller payment, still using the index payment mechanism, to account for the shortfall. That second payment will also be a little short (because it too has the discounted token in it) and so they must make another payment, and so on. It turns out this infinite series has a closed form expression (yay WolframAlpha!) which is what you see above. If you’d like a more fun motivation for this mechanism, there’s also a story version of it, if you’re into that sort of thing.
(What is worthy of note, however, is that this mechanism naturally offers sub-linear economic influence for non-plural economic actions while being motivated from first principles. Move over quadratic funding.)
Here’s the worked example again with concrete numbers so you can grok it. Let’s say the AI accelerationist agreed to pay you $100.00, and you’ve set an endorsement that discounts the claim “AI Enablement Research Should be Expedited.” at 80%. If $25 of the $100.00 payment was in the policy “AI Enablement Research Should be Expedited.”, the payment to you would be:

base: $100.00 + conciliation: $25.00 = total: $125.00


From a high level, this is pretty simple. The mechanism is allowing you to protect yourself against needing to accept payment in the form of currencies (including policies) that you don’t believe in.
You can see that as the discount set by your endorsement increases, the total payment increases non-linearly. Similarly, as the percentage of discounted dollars in the payment increases, the total payment also increases non-linearly.
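Here’s a sketch of that closed form in code, reconstructed from the worked example and the series argument above (the parameter names are mine; `discount` is the discount your endorsement applies to the token):

```python
def conciliation(base: float, frac: float, discount: float) -> float:
    """Closed form of the infinite series of shortfall payments.

    base:     the agreed price
    frac:     fraction of each payment made up of the discounted token
    discount: how deeply the receiver discounts it (0.8 means they value
              it at 20 cents on the dollar)

    Each payment P leaves a hole of P * frac * discount, which must be
    covered by another index payment with the same hole, and so on:
        base * sum_{k>=1} h**k = base * h / (1 - h),  where h = frac * discount
    """
    h = frac * discount
    return base * h / (1 - h)

# The worked example: $100 payment, 25% of it in a policy discounted at 80%.
c = conciliation(100, 0.25, 0.8)
print(round(c, 2), round(100 + c, 2))  # 25.0 125.0

# The cost of stridency grows non-linearly with the discount:
for d in (0.2, 0.5, 0.8, 0.95):
    print(d, round(conciliation(100, 0.25, d), 2))
```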
A graph of how the total payment grows as the discount increases gives you even more of a feel for it. You can see how when two people with radically different beliefs encounter one another it can get very expensive. But when two people with low disagreement encounter one another the conciliation is negligible.
Something to clarify here is that when someone pays you, if 10% of their wallet is made up of the policy “AI Enablement Research Should be Expedited.” then no matter how big of an endorsement you set, the payment you receive will always still be 10% that policy. You don’t avoid holding a currency by setting an endorsement. The only reason to set that endorsement is either because you don’t like that policy or currency, and so want to incent people to give it up, or because someone you’re later going to pay has set an endorsement and you don’t want to be the one that pays that cost, so you make the person paying you pay it.
This means that for many transactions you’ll be able to factor out of the conciliation the amount that you already charged someone else for that currency. Let’s say you’re paying someone $100 with a $10 conciliation. Let’s also say that when that money was initially paid to you the person who paid you sent you a $10 conciliation on their transaction. (For those paying keen attention, a $10 conciliation you receive that covers a $10 conciliation payment wouldn’t be seen very often because you’d have to start with an empty wallet, and you’d have to spend all of the money in it. But this was just to get the idea across in a simple case, not to cover the general case.) This means that it’s not really you that’s paying this new conciliation, someone else already paid it for you. You haven’t been counting the currency as worth its face value ever since you got it, so sending it off with this endorsement should feel like nothing. Within communities of like belief, people will always be paying a conciliation, but it just won’t feel like it because they’re not really the ones covering the conciliation, that was already paid by someone else.
This presents an interesting user experience opportunity. If wallets wanted, they could keep track of the conciliation already covered for a certain currency and only display the conciliation that would be carried specifically by the user upon payment. Similarly, wallets could only show the value of the user’s assets according to their own endorsement, not according to the underlying value of the assets. This would mean that when the user reduced their stridency on a particular topic, it would give them the experience of becoming wealthier, since their discounts would disappear. Although this wallet would be mathematically identical, this design approach would likely have huge psychological saliency.
Depending on someone’s privacy settings for users’ endorsements, it would also be useful to allow users to simulate transactions to see how much their economic neighbors discount certain currencies so that they can set their endorsements accordingly.
What’s still missing from our analysis is what it’s like for the person who does have to cover the conciliation themselves. What does it feel like for the AI accelerationist when they pay you? Well, they’re probably not too happy. First, they’re going to say, “can I just pay you in cash?” and you might say yes, which is fine. But if you two are in extreme disagreement about something that really matters to you, like AI safety, you might say, “No. Sorry, I only take payment in-network.” Or maybe you don’t really care that much, but you have an important supplier that insists on receiving payment through the network, and so receiving cash is just not worth much to you, so you insist on payment in-network.
Next, the AI accelerationist is going to say, “Well, could you just lower your conciliation? Seems steep.” And you’ll maybe say, “Sure.” because you really want that sale, which is fine. Or you might say, “No, I’m sorry, this is what I need.” And you might do that because you really care, or maybe it’s because one of your employees has a really steep endorsement set which means that if you accept this payment now without the conciliation you’ll be losing a bunch to your employee’s conciliation when it finally comes time to pay her.
Finally, the AI accelerationist is going to say, “Ok well that’s just too expensive. Can we do anything about the price?” And maybe you’ll say, “Yeah no worries. How about we split the conciliation fifty-fifty?” Or maybe you’ll say, “Sorry, the price is the price.” and they leave in a huff. But you already know how to negotiate prices, I don’t need to explain that to you.
But how will this play out in contexts without negotiation? You’ve never bid in an apricot auction at Albertsons, nor haggled for hazelnuts at H-E-B. So what happens in those cases? At one end of the spectrum, your grocery store might just ignore the small additional loss they incur when they pay people out of their wallets, so they also won’t bother setting an endorsement. Somewhere in the middle of the adoption spectrum, the store sets an endorsement and charges you a conciliation at checkout, but you just have to guess ahead of time how much it’s going to be, and that’s the cost of paying in-network. At the far end of the spectrum of adoption, the store has installed digital signs and QR codes which allow you to see how much buying that particular item will cost you. In fact, the store manages a separate wallet for each supplier, so the store is just a passthrough, and the price of your goods will depend on how much you agree / disagree with each supplier, allowing you to pick out goods that align with your endorsement (which will also happen to be the cheapest goods).
As these sorts of interactions play out, what will eventually happen to the macroeconomic dynamics? Well, for one thing, people who disagree more will transact less because this disagreement is expensive. Maybe at first they’ll just avoid buying from one another, but given enough time they’ll likely begin to move their residence to wherever those transactions can be cheaper because there’s better alignment between their preferences + beliefs and those of their community. Something lovely for them as they look for where to move is that this alignment will be explicitly visible in their cost of living. I’m sure there will be many popular websites that help you find a community where you pay the smallest average conciliation given your endorsement.
So, it should make sense that this changes people’s incentives so that they’re better off transacting with people that they agree with. But wait, isn’t that a problem more than a solution? Don’t we already live in a world where people prefer to transact with those that they agree with? Isn’t that part of the problem?
The good news is that this cost for disagreement actually creates a net incentive for everyone to lower their endorsements. Remember, it’s free to agree. If you reduce the stridency of your endorsement, transacting gets cheaper, and as we’ve seen, it gets cheaper non-linearly! So, built into this mechanism is a native incentive for conciliation.
What this also means is that we should expect that relatively small levels of disagreement can result in fairly large changes in policy. This is because we’re normally super price sensitive, so small changes in your policy preferences can quickly shift many people’s endorsements if your policy isn’t objected to. This is a desirable property because it means that you get more responsive governance. Furthermore, the smaller the group is, the faster that policy change can propagate.
The shared cost of disagreement creates a shared incentive for funding conciliation work along the disputed border. If agreement could be found, transaction costs would be lower and so there could be trade. While of course this wouldn’t in any way get rid of disagreement, we could expect an increase in the velocity with which we moved on to new disagreements.
One way to think about the rationale for charging for disagreement is by analogy to parking in a big city. What should you do if you run a city where everyone complains about not being able to find a parking space? The economic answer is that you should charge for parking. People will now be more sensitive to the cost, and they’ll stay shorter times, or they’ll find alternative transport downtown. Based on this, what should you do if you live in a world where everyone complains about political disagreement that never leads to synthesis or change? If you choose to make it non-free to disagree then in exchange you get more responsive governance and an environment with less meaningless disagreement.
Another way to think about what endorsements are doing is that they are moving the ability to impose tariffs down from being a sovereign right of governments to being an ability of individuals and communities. When you set an endorsement you’re in effect setting a tariff that outsiders first must pay in order to transact with your community. Right now, it would be somewhat hard to do this because, “you believe in AI acceleration” is a bit difficult to really nail down. Instead, you go for all or nothing strategies, you say, “I’m not going to work with that person or buy from that company because we’re not aligned.” Now, since policies have been economically embedded it’s much easier to take a more nuanced approach.
Another interesting and important attribute of this mechanism is that when someone pays you, you learn very little about their private endorsement. If they have lots of “AI Enablement Research Should be Expedited” in their wallet it doesn’t mean that they’re pro AI Research. It could just be that they’re paid a lot by AI researchers, or someone they’re paid by is paid a lot by AI researchers, and so on. It could be that they have a really high endorsement despite the fact that they’re paid by these people. So, all this to say, if someone pays you with a lot of a currency you don’t like it doesn’t necessarily mean they disagree with you. Their privacy is preserved. That said, yours is not, some information about your endorsement is leaking out due to the fact that they know the conciliation they’re paying and the composition of their own wallet. There may be mechanisms for hiding some of that information, but I doubt you’ll ever entirely get rid of it, so receiving a payment will require you to reveal some information about your endorsement. In principle, you could reverse this relationship if you made it so that the person receiving the payment would pay the conciliation. In that case, the payer would expose their endorsement. I’m not sure if I know ahead of time which one to prefer.
In addition to being able to follow others for their endorsements, it would also be nice if you could automatically set your endorsement on something based on its logical relationship with another thing. For example, if you could automatically set your endorsement on ChatGPT based on your endorsement on AI Enablement. We’ll explore a mechanism that can enable exactly that in the next section.
If you remember, this mechanism was motivated by trying to answer these three questions: 1) how can we make it free to stake, 2) what counts as “your community”, and 3) how expensive should it be to disagree?
We’ve now explored the mechanism set that answers all of those questions. We made it free to stake with “intrusting”, which returns a token of the same value as the amount that you staked in your policy. We enabled you to meaningfully define your community by making it non-free to disagree, and thereby incenting you to a) not disagree unless you really mean it and b) find and prefer to transact with people that agree with you. Finally, we answered the question of how expensive it should be to disagree by way of the disagreement conciliation, naturally motivating a non-linearly increasing cost to stridency and exploring its emergent consequences.
If you remember from before, we were also trying to solve the problem of enabling strange currencies to have some default spendability. We said we were going to do that by answering this question: how can we establish an expectation that if you receive a strange currency you’ll likely be able to pass it off? Now that we’ve seen index payments and endorsements it’s easy to see how we’ve achieved that. Index payments allowed us to have certainty that if we get a strange currency someone else will accept it because the next person will also accept it. Endorsements allow people to discount tokens they disagree with, and importantly we get to set the default endorsement, which we’ll put close to 0%. Thanks to this, everyone can know that if they receive a token they can still get rid of it without too much risk, and now we can have many fungible currencies in a single economy.
This gets us to the one final thing that’s clearly not going to work in practice. We’re talking about creating a Cambrian explosion of new currencies, one for each possible policy that could be enacted. And then we’re going to let people set endorsements on all of those policies depending on what they believe in. But how the hell is someone supposed to evaluate that legion of currencies? It’s just not possible, not with how much is going on in their lives, not even if they dedicated themselves to it full time.
The good news is that because of how we’ve set up the incentives, we can fall back on some institutions that are already familiar. Instead of each individual evaluating all currencies, they can find an advisor who will help them set those endorsements. They’ll be willing to pay this advisor because a good advisor will help them save money on their daily transactions. There’s a possibility for a type of liquid democracy here, where a user can follow another user’s endorsement, letting their personal endorsement evolve with the endorsement of the person they follow. These could also be automated by index endorsements (like index funds) which are set to the average endorsement of the community. As far as I can tell, these are both safe mechanisms to implement in order to majorly reduce the cognitive burden of decision making for an average user. That said, I would avoid mechanisms that allow a user to inherit an endorsement from another user which is itself inherited from another user. I believe this sort of chaining can have detrimental effects because it will tend to centralize too much power in one place, and due to the way endorsements work it will allow a single user to make sweeping changes to policy with no local risk to themselves.
So this should put a wrap on the first mechanism set I wanted to introduce you to. I’ve called this set “claim currencies” because of how they allow you to embed propositions in money. Nothing here explicitly brings about our downfall, that’s still to come, but it is important to understand for when we get there. To review, this mechanism is trying to enable a market for policy that is non-plutocratic. Along the way we tessellated three mechanisms together: 1) intrustments 2) index payments 3) endorsements. All together, they had surprising and promising emergent properties: self-assembling economic communities, incentives for conciliation, values and beliefs made legible to economics, among others.
There’s one final way we can build on this idea which is very important. Let’s look at how this mechanism might play a role in funding for public goods (and pricing for public bads).
A classic example of a public good is open source software. The whole point of this software is that it’s available for anyone to copy, change, and redistribute. So, how can this mechanism allow the people that create and maintain this software to get paid?
One way we might imagine implementing this using the mechanism set you’ve seen so far is by creating a policy like this: “$5,000 of new currency should be emitted to the maintainer of xyz software.” For simplicity let’s assume that xyz software is maintained by a single person. We’ll relax this assumption later, but not using the mechanisms we’ve seen so far. Furthermore, let’s assume that “Mint $5000 to account 0x123” can be understood by a smart contract of some sort.
There’s one final piece of magic we need to add here, which isn’t technically new, but will feel different because of how we’re implementing it.
What we’ll do is give a smart contract we write control over a wallet. Then, we’ll mint shares that represent partial ownership of that wallet. When the policy “Mint $5000 to account 0x123” has more intrustments in favor than against, the smart contract mints $5k and puts it in the wallet. It then sends all the shares in that wallet to account 0x123, the maintainer’s.
Now, when the maintainer (let’s call her Marie) checks the value of her account, she’ll see x number of shares of ownership in the policy wallet. But because we’re nice, why don’t we look up the amount of tokens in the policy wallet and calculate for her the value of that ownership. We can render that as simply $5,000 worth of shares in the policy wallet. You might already see where this is going.
What happens when someone intrusts against the policy “Mint $5000 to account 0x123” so much that support dips below 50%? Now, the smart contract that governs the policy wallet just burns that money by sending it to a black hole. When Marie checks her account, she’ll see that she still has x number of shares, but when her wallet renders the value of those shares it will show up as $0, because the underlying collateral that made the wallet valuable is now gone. This is how we can get “reversible” transactions even with our append-only chain.
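As a minimal sketch of that flow (the class and method names here are hypothetical, not a real smart-contract API, and support is simplified to a bare majority of intrustments):

```python
class PolicyWallet:
    """Sketch of a wallet controlled by the policy's smart contract."""

    def __init__(self, policy: str, grant: float):
        self.policy = policy
        self.grant = grant
        self.collateral = 0.0       # dollars the contract holds against the shares
        self.shares_issued = False  # whether shares were sent to the maintainer

    def on_tally(self, pro: float, against: float) -> None:
        """Called whenever the intrustment tally for the policy changes."""
        if not self.shares_issued and pro > against:
            self.collateral = self.grant    # mint $5k into the wallet...
            self.shares_issued = True       # ...and send all shares to 0x123
        elif self.shares_issued and pro <= against:
            self.collateral = 0.0           # "burn" by sending to a black hole

    def rendered_value(self) -> float:
        # What Marie's wallet renders: shares are worth the collateral backing them
        return self.collateral


wallet = PolicyWallet("Mint $5000 to account 0x123", 5000.0)
wallet.on_tally(pro=600, against=400)
print(wallet.rendered_value())  # 5000.0 while the policy is supported
wallet.on_tally(pro=450, against=550)
print(wallet.rendered_value())  # 0.0 once support dips below 50%
```

The shares themselves never move once issued; only the collateral behind them appears and disappears, which is what makes the “reversal” compatible with an append-only chain.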
So that we can ensure we really understand how this is working, the next thing to imagine is how this plays out with our claim currency mechanism set. Our maintainer has just received a bunch of tokens that are nominally worth $5,000. What happens when she tries to go and spend them? Well, it will depend on people’s endorsements, won’t it?
For someone who agrees that she should be paid $5000 for her work, they’ll set their discount to 0% (a full endorsement), and she’ll be able to pay them as if she were spending the underlying collateral itself. But you might notice that some people might not be so keen to accept her currency. For one thing, just as with all policies that are represented with claim currencies, people might just not believe in the policy. Maybe the open source software she maintains is Bitcoin, and they’re not interested in accepting it for environmental reasons. We’re already used to thinking about that dynamic. But there’s a new dynamic here, too. Now, when you consider whether to accept her payment you could be perfectly comfortable with her being paid that much, but you might worry that other people might intrust against the policy enough that the underlying collateral is destroyed and the tokens that end up in your account also collapse in value.
Does it make sense why you might worry about this? Let’s look at a concrete example. Let’s say that the maintainer has an account that looks like this:
[USD: $100] [Ethereum: $400] [Stop AI Enablement Research: $300] [“Print $5k and send to 0x123”: $5,000]
If she’s going to pay you $100, you’ll receive this as the composition of your payment:
1.72% USD
6.90% Ethereum
5.17% Stop AI Enablement Research
86.21% "Print $5k and send to 0x123"
Which is:
[USD: $1.72] [Ethereum: $6.90] [Stop AI Enablement Research: $5.17] [“Print $5k and send to 0x123”: $86.21]
In the final payment you’ll receive more than 86% of your payment in that policy token. That means if ever that policy were to be intrusted against enough that “against” overwhelmed “pro”, 86% of your payment’s value would disappear down the drain. Regardless of whether you believe the maintainer should get that kind of payout, you would be silly to accept that risk, and so you’ll likely apply a discount.
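The arithmetic of that example can be sketched in a few lines; `index_payment` is an illustrative helper (not part of any real system), assuming an index payment simply splits the amount in proportion to the payer’s wallet:

```python
def index_payment(wallet: dict, amount: float) -> dict:
    """Split a payment so its composition mirrors the payer's wallet."""
    total = sum(wallet.values())
    return {token: amount * balance / total for token, balance in wallet.items()}

maintainer_wallet = {
    "USD": 100,
    "Ethereum": 400,
    "Stop AI Enablement Research": 300,
    "Print $5k and send to 0x123": 5000,
}

payment = index_payment(maintainer_wallet, 100)
for token, part in payment.items():
    print(f"{token}: ${part:.2f}")
# USD: $1.72, Ethereum: $6.90, Stop AI Enablement Research: $5.17,
# Print $5k and send to 0x123: $86.21
```

Because the grant dominates the wallet, it dominates every payment out of it, which is exactly why recipients care about the policy’s survival.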
Now the wheels should start turning for you. See, normally printing new money represents an inflation risk. When newly printed money enters someone’s account they go out and spend it on new things, which spurs on the economy. But after a bit of doing this, things can get a bit insane. The new money that’s being printed causes willingness to pay to go up, which sellers catch on to and so begin raising their prices, which makes everything more expensive for the same amount of money, which causes people to ask for more money in their salary, and so we end up with permanently higher prices. You know how inflation works. But this is different. Because in this case we’ve tagged every mint event with the reason for minting, we can go back in time and “unmint” that currency if we no longer think it was a smart decision. This is an unprecedented ability.
In the traditional monetary policy paradigm, there’s really only one thing the central bank can do: start printing new money. It might do that for a handful of reasons, but oftentimes it is to pull the economy out of a nosedive. See, when economic conditions start to deteriorate, people get scared and start to pull their money from the stock market. In response, the stock market contracts and prices fall, so now more people get scared, because the price is falling, and they pull out too. The strange thing about this cycle is that there is no natural point where it should stop. Since people are pulling their money out of the market, there’s less capital available, so businesses begin to struggle and fail more. The increased failure rate of businesses causes loan interest rates to go up because risk is higher. So now more businesses fail. The failing businesses cause banks to suffer: they have lent out some of that money and it’s no longer coming back. So, people who put their money in the bank begin to worry about this and try to take their cash out before the bank runs out. The worry that there won’t be cash by the time you get to the bank causes more people to want to take their cash out. This is a bank run. Now the bank has no cash to loan, you don’t have your cash back from the bank, the businesses around you have failed because they can’t buy materials, you don’t have any work, and at this point you might begin to feel a bit depressed. In fact, you might feel a great amount of depression.
This is exactly the process that many economists believe led to the Great Depression. And as you look at that cycle, the remedy becomes clear: if at any point everyone could have been confident that prices would stabilize then they would have. But no one could have any confidence that the firesale of stocks would end any time soon, and so the tragedy intensified. That’s where inflation can come in to save the day.
Let’s say you’re one of the lucky few who managed to pull your money out of the stock market (and the bank) before everything crashed. You’re definitely not going to reinvest that money any time soon, the prices of stocks are still plummeting and everything is still getting worse. So, what do you do? You sit on that cash and wait. Of course, if the economy could speak it would beg you to reinvest your money, because if you and all the other people like you could reinvest at the same time then people might have some hope that the economy was returning to normal, and they might also reinvest, and then the economy *would* return to normal. Unfortunately, economies cannot speak. But Central Banks can.
It turns out that Central Banks need only speak one word: brrrrr. That’s the sound of the money printer.
To understand why money printing matters, it’s worth thinking about what it’s like for you. How do you feel sitting on your mattress stuffed with cash as the inflation starts to kick in? You start to get worried: sure, the stock market is going down, but the value of your cash is also going down. It would be better if you could just get your money into some sort of investment, even if that investment returned very little. Plus... you start to realize, now that you’re looking for an investment, you’re probably not alone. Other people are too. This starts to give you a bit more confidence that the market will rebound, and everyone else feels it, too. And because of this dynamic, the market does rebound. This is the part where monetary policy gets to be a bit weird, because sometimes the Fed doesn’t actually need to print the money, they can just threaten to (or, in their parlance, threaten a rate cut) and it will cause the effects they want without them having to do anything. It’s a magical coordinating signal.
So, what does this have to do with us?
Just as we earlier gave individuals the ability to impose tariffs, this mechanism gives individuals the ability to print money. You might notice that much of this mechanism set is about moving the abilities of nation states down to the level of individuals and communities. We might worry at first that people will abuse their ability to mint currency; after all, if I could print myself some crisp $100 bills by hitting the big red button, you better believe I would be pressing it. But then we notice that endorsements put a limit on this bad behavior. Sure, you could print yourself some $100s, but if at any point people thought you shouldn’t have done that then the value of that money will disappear. And because of the risk that it will disappear, people around you will set a steep discount on that money. So, unless you have buy-in that the reason you printed that money is a good one, it will be as if you hadn’t printed any new money in the first place.
Now with the ability to unmint currency, monetary policy need not be controlled by a highly regulated and expertise laden central bank. The decision to print is now no longer an irreversible choice with long term macro consequences which must be carefully modeled. It’s now a local choice with consequences that will eventually diffuse to other communities, but that can be undone if it proves to have been an error.
To borrow a framework from Jeff Bezos, with the advent of unminting we’ve moved monetary policy from the domain of Type 1 decisions to Type 2 decisions. In his own words:
Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that – they are changeable, reversible – they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.
Now that the decisions around printing new money can be made by “high judgment individuals or small groups” we have a completely new economic dynamic to explore. It’s hard to speculate exactly on the kinds of properties this environment will have, but I’ll aim to make a case for some of the ones that seem plausible so that they can be tested.
First, let’s talk about the part I’m more confident in: funding public and common goods. When we explored Marie’s open source maintenance we got a bit of a sense for this effect already. Marie requested a grant from the system, and then since her policy was sufficiently supported that grant was emitted to her. Her incentives are good because if she doesn’t deliver on her promise then the money she received will disappear both for her and for the people that accepted it. But grants aren’t the only way you can support your favorite creator like Marie.
These dynamics, by the way, have the lovely property of encouraging Marie to take small payments in the beginning so as to support herself while she works on the technology, and then to request large retroactive funding payments when she’s completed her work, since at that point people will be able to have more confidence around the total impact of Marie’s contributions.
In addition to grants, we could imagine several other mechanisms that would allow you to support Marie’s work. One simple way would be to perform a token swap. You might send Marie $500, and in exchange she’ll send you $500 back. Let’s say your endorsement says that Marie’s work is super valuable (a discount of 0%). So you get a mix of her tokens, including a bunch of the tokens from the grant. You’re taking on some of the risk for her. It might even be that it costs you a bit to perform this swap, because other people still heavily discount this token. Alternatively, maybe you live in a community that highly values open source maintenance work, so when you go to buy things in your community it won’t cost you anything extra because the discounts are so low. This means that for you it’s free* to benefit from and support Marie’s work on maintenance.
* free but for the inflation caused by the new money that’s entering your local economy.
Beyond grants and token swaps, Marie could set up a special kind of policy that matches every dollar you put into it. So, if you intrust $100 in the policy, then Marie gets $100 in her account. Remember, because of how intrusting works you get those $100 back; they’re now just “wrapped” in the policy that is being used to pay Marie. What this means is that when you go to pay with these newly intrusted tokens it might be a bit expensive for you if people in your community aren’t supporters of open source. However, similar to token swaps, if you do live in a community that supports open source work this might be free for you. The difference between this and a token swap is that you don’t have to take on the personal risk of the policy later failing and being reversed while you’re still holding those tokens. That should mean that on average it’s less risky (and therefore cheaper) to support public goods with 1:1 granting policies.
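A tiny sketch of that matching flow, with plain dictionaries standing in for wallets and purely illustrative names (assuming a strict 1:1 match in a single base currency):

```python
def intrust_match(supporter: dict, marie: dict, policy: str, amount: float) -> None:
    """Wrap the supporter's tokens in the policy and mint a matching grant."""
    supporter["USD"] -= amount                             # tokens go into the policy...
    supporter[policy] = supporter.get(policy, 0) + amount  # ...and come back "wrapped"
    marie[policy] = marie.get(policy, 0) + amount          # newly minted matching grant

you = {"USD": 500}
marie_wallet = {}
intrust_match(you, marie_wallet, "Match for xyz maintenance", 100)
print(you)           # {'USD': 400, 'Match for xyz maintenance': 100}
print(marie_wallet)  # {'Match for xyz maintenance': 100}
```

Note that the supporter’s nominal balance is unchanged at $500; only its composition shifts, which is where the spending friction (or lack of it) comes from.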
Finally, a very simple way to support Marie would be to set a lenient endorsement. By setting a very small discount you allow Marie’s tokens to have currency with you. If you just stick to transacting with people who share that endorsement then it won’t cost you anything (other than the inflation incurred). But if you try to go outside of your economic bubble you’ll have to start paying some disagreement conciliation, since those people do discount Marie’s tokens.
These mechanisms for funding public goods give you a sense of the two key components of public goods funding:
People that directly benefit from the public good should pay for it
People that don’t directly benefit from the public good should benefit from supporting it
This gets us to a core principle of claim currencies. Just as the early United States had the rallying cry “no taxation without representation,” a core principle encoded into claim currencies is “no taxation without compensation.” There are two kinds of compensation you might enjoy. Either you should directly benefit from the public good you’re being taxed for, or it should be more profitable for you to support the public good with your taxes than it would be if you didn’t. These should happen automatically, by merely following your personal incentives. Let’s look at both of these.
In our story about Marie, you were likely supporting her work on the open source library because you personally use that library. You’re benefiting from her work. Instead of donating directly, you’re getting to mint new currency. But who else ends up having to pay for this?
As it stands, Marie has some newly minted tokens. And you also have some newly wrapped tokens, because to mint the tokens for Marie you had to intrust. If you’re both primarily spending these in an economic environment where discounts on your tokens are 0, then you don’t need to have any worries. But, what happens when you try to pay someone who didn’t benefit from the public good, and therefore doesn’t want to pay for it by accepting the new inflation?
To explore this, we’re going to consider a simplified example. We’ll pretend that the person who is considering accepting the payment is like the dictator of their economic community, we’ll call him Charlie. When Charlie changes his endorsement, everyone else in the economy also changes their endorsement to match. This is a highly unrealistic situation, but it will prove informative.
To simplify even further, we’ll imagine that there are only two tokens in circulation. One is token A, which is what everyone is already used to using, and the other is token B, which is the token that was newly minted in order to pay Marie.
In this story, Charlie runs a business selling fine rugs and Marie wants to buy one of these rugs. When Marie approaches Charlie with an offer to pay $1,000 for a particularly fine rug, Charlie has to pause. Marie’s account is filled with a bunch of token B. Let’s say that 10% of her account is token B. If Charlie were to accept this payment, then it would mean that he’ll experience inflation. How much that hurts him depends on how much token A he already has saved up and how much total inflation accepting token B will cause.
Let’s imagine that Charlie has $500,000 in savings. So that we have some simple numbers to use, we’ll also imagine that when Marie was paid, her newly printed currency made up 5% of the new total monetary supply of token As and Bs. Here’s a concrete example to make it clear what that means: if there were $100m worth of token As in circulation and Marie was paid 5% of the monetary supply, she must have received $5.26m worth of tokens, simply because $5.26 / ($5.26 + $100) = 5%. For our purposes, we can ignore the numerical size of the initial monetary supply, and just work with the percentage of new money.
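That arithmetic can be sketched directly (the helper name is mine; the only constraint is that the minted amount must equal m of the post-mint supply):

```python
def mint_for_share(old_supply: float, m: float) -> float:
    """Newly minted amount n such that n / (n + old_supply) == m."""
    return old_supply * m / (1 - m)

minted = mint_for_share(100, 0.05)        # $100m of token As, 5% target share
print(round(minted, 2))                   # 5.26, i.e. $5.26m of token Bs
print(round(minted / (minted + 100), 4))  # 0.05, i.e. 5% of the new supply
```

This is where the $5.26m figure in the example comes from.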
For Charlie, what’s really concerning him is how much of Marie’s money to accept. Remember, 10% of Marie’s payment is made up of token B; that’s $100 worth of token B. Charlie has a delicate balancing act to perform. He’d like to sell the rug; there’s $250 of profit in it for him. But if he values Marie’s newly printed tokens, then all the money that he’s been saving up for years will suddenly be less valuable. (Remember, this assumption only holds for Charlie because he’s an economic dictator: everyone is following his lead.) For our story, we’ll give Charlie the ability to endorse token B at some percentage of its nominal value. An endorsement of 100% means that $1 of token B is, well, $1. But Charlie can set that endorsement to whatever he’d like. For example, if he sets an endorsement of 0.3, then every $1 worth of token B will only be worth 30 cents, and a payment of $1,000 from Marie would be worth only $930 because all of her token Bs are now worth less.
Let’s look at what happens when Charlie sets his endorsement of Marie’s payment to 0. Obviously, this means that Charlie experiences no inflation, which is nice for him. However, Marie is not happy about this. Since Charlie now considers 10% of her account worthless, she’ll now have to pay what to her looks like $1,111 in order for Charlie to receive what he considers to be $1,000. This is because when Marie pays Charlie, she’ll pay with an index payment, which guarantees that what Charlie receives as payment is still comprised of 10% token B, so only 90% of Marie’s payment has any value to him. By paying $1,111 Marie ensures that Charlie gets what he considers to be $1,000 (because $1,111 × 90% = $1,000). For Marie, because of Charlie’s low endorsement of token B, this payment has just become $111 more expensive.
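Here is a minimal sketch of that index-payment rule (the function name is illustrative), assuming the recipient values each $1 of the payment at (1 − p) + p·e, where p is the fraction of the payer’s wallet in token B and e is the endorsement:

```python
def payment_required(target: float, p: float, e: float) -> float:
    """What the payer must send so the recipient nets `target` by their own lights."""
    return target / ((1 - p) + p * e)

print(round(payment_required(1000, p=0.10, e=0.0)))  # 1111: 10% of it is worthless
print(round(payment_required(1000, p=0.10, e=1.0)))  # 1000: token B fully endorsed
```

The same formula reappears later in the story: at an endorsement of 0.5% it gives roughly $1,110.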
What if we go to the other end of the spectrum and Charlie endorses token B at 100%? In that case, Marie would be thrilled! Now her payment to Charlie will be only $1,000, just like she expected. Also, it’s great news for her that someone as important as Charlie is accepting her token Bs. However, things are not so great for Charlie. By telling his community that all of these token Bs are worth 100% of their face value, he has massively increased the total monetary supply. What he used to be able to buy with his saved $500,000 he would now need $526,316 to buy. You can see this is the case because if those new token Bs were to be burned, and the monetary supply shrunk back down to 95% of its expanded size, he’d be back to being able to buy the same amount of stuff, because $526,316 × 0.95 = $500,000. This means that in order for Charlie to agree to this deal, he needs $26,316 in compensation. He looks at the fine rug Marie is going to sell him. Of the $1,000 she’s going to pay him, $250 will be profit. That hardly gets him to the $26,316 he needs. But maybe there’s a solution in between? Maybe Charlie can carefully set his endorsement of token B so that they’re both happy.
Let’s imagine that Charlie decides to endorse Marie’s token Bs at 0.5%. This time, now that Charlie is being more careful, by setting his endorsement to 0.5%, it’s like this new token B only makes up 0.03% of the new monetary supply (instead of 5% from before). We can see that this is the case if we once again imagine that there were $100m worth of token As and Marie received $5.26m worth of token Bs, which makes up 5% of the monetary supply. But now Charlie is only treating Marie’s token Bs as worth 0.5% of $5.26m, which is $0.03m. And $0.03m is only 0.03% of the monetary supply, as we expect, which we can easily see because $0.03 ÷ ($0.03 + $100) = 0.03%.
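That shrinkage can be sketched with one function, assuming only the endorsed value of the new tokens counts toward the supply:

```python
def effective_share(m: float, e: float) -> float:
    """Effective supply share of new money (nominal share m) endorsed at e."""
    return e * m / (e * m + (1 - m))

print(round(effective_share(0.05, 0.005), 6))  # 0.000263, i.e. about 0.03%
print(round(effective_share(0.05, 1.0), 6))    # 0.05: full endorsement, full share
```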
This means that Charlie will now need $500,132 to buy what $500,000 bought him before, and since he previously expected to earn $250 in profit, now he’ll at least take home $118 in profit, with $132 disappearing to inflation. As they ink the deal, Marie prepares to complete the transaction and notices that she still has to pay more than $1,000. Because Charlie is only considering token B to be worth 0.5% of its nominal value, and because it makes up 10% of Marie’s wallet, Marie will still have to follow the rules of index payments in order to compensate Charlie for all the token B that he doesn’t want to accept. In total, she’ll have to pay $1,110 to make them even. Let’s say Marie is ok with this; she sends the payment and both Charlie and Marie are happy.
But you shouldn’t be happy. Charlie might be ok with a 0.5% endorsement, but given that they’re both incurring costs (Charlie paid $132 in inflation and Marie paid $110 in conciliation), there’s $242 of total loss. Charlie had to worry that Marie might just walk away because of the additional cost. What endorsement could he have chosen in order to minimize that loss, thereby having the lowest price for customers like Marie?
Under this economic dictator assumption, Charlie should accept the payment as long as this condition is met:

r ≥ s · e · m / (1 − m)

Where
e is the amount of endorsement by Charlie (in the range 0 to 1)
m is the percent of the new monetary supply that token B will be (e.g. ~30% if there were 100 tokens in circulation and you minted 43)
s is the amount of savings that Charlie has (in dollars or some base currency)
r is the profit that Charlie will make on the sale
Here’s where this equation comes from:
Charlie should say yes as long as his profit is greater than his loss due to inflation:

r ≥ loss due to inflation

Where r is profit. But what is the loss due to inflation? Well, we know that in the case that he’s 100% endorsing it, he’ll need this much in savings to buy the same amount that his previous savings could:

s / (1 − m)

Where s is his savings and m is the percent of the new monetary supply that the newly minted money will be. Which means he’ll need this much compensation:

s / (1 − m) − s

What about the case where he sets an endorsement that’s not 100%? For that, we’ll need to swap out our m for this much longer equation:

e · m / (e · m + (1 − m))

Where e is the amount of endorsement. Which leaves us with this monstrosity:

r ≥ s / (1 − e · m / (e · m + (1 − m))) − s

Which we can simplify to:

r ≥ s · e · m / (1 − m)

Which means that we can rearrange this equation to find the maximum endorsement that Charlie should accept:

e ≤ r · (1 − m) / (s · m)

(assuming s > 0; 0 < m < 1; r > 0, which are all reasonable assumptions)
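Both formulas can be checked against Charlie’s numbers from the story; the function names below are mine, and this is only a sketch of the algebra above:

```python
def inflation_loss(s: float, m: float, e: float) -> float:
    """Loss to savings s when new money (nominal share m of supply) is endorsed at e."""
    return s * e * m / (1 - m)

def max_endorsement(r: float, s: float, m: float) -> float:
    """Largest endorsement before the inflation loss exceeds profit r."""
    return r * (1 - m) / (s * m)

# Charlie: $500,000 in savings, new money is 5% of supply, endorsement 0.5%
print(round(inflation_loss(500_000, 0.05, 0.005)))    # 132, as in the story
print(round(max_endorsement(250, 500_000, 0.05), 4))  # 0.0095
```

At the maximum endorsement the loss exactly equals the $250 profit, which is the break-even point the derivation solves for.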
Now let’s look at the prices that Marie will subjectively pay depending on whether she’s paying Charlie or Charity.
The difference between their situations is:
Charlie’s savings are $100,000, whereas Charity’s savings are $19,000.
Everything else about their situations is the same:
The price they’ve set for the sale is $1,000.
The profit they both expect from the sale is $250.
20% of Marie’s wallet is made up of the newly minted tokens.
Now that we have this, we can calculate the maximum endorsement that Charity and Charlie can set, respectively. On the x axis we show m, the percent of the post-mint monetary supply that’s newly minted. Notice that Charity’s max endorsement can always be bigger than Charlie’s.
[Interactive graph: Maximum Endorsement vs m. I’ve made some parameter assumptions for you here, which you can adjust if you want but don’t need to in order to understand: profit r = 250, percent of wallet p = 0.2, quantity of payment q = 1000, Charlie’s savings s = 100,000, Charity’s savings s = 19,000, with an option to cap the endorsement at 100%.]
Their endorsement has an impact on the amount that Marie has to pay them. We can think about their endorsement as a reverse discount, and we end up with this equation:

payment = q / (1 − p · (1 − e))

Where q is the quantity of the payment, p is the percent of Marie’s wallet made up of the newly minted tokens, and e is the seller’s endorsement.
The total conciliation they have to pay has the same behavior as before; we’ve now just given ourselves the ability to figure out what Marie’s payment will have to be. As you can probably guess, since Charity has a higher endorsement, she can charge less in conciliation, and so the total price that Marie has to pay Charity is going to be lower across the board than paying Charlie.
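Putting the two pieces together gives a sketch of the comparison (helper names are mine; I assume each seller sets exactly the maximum endorsement their savings allow, capped at 100%, and that index payments make Marie cover the shortfall):

```python
def capped_endorsement(r: float, s: float, m: float) -> float:
    """Max endorsement before inflation loss exceeds profit, capped at 100%."""
    return min(1.0, r * (1 - m) / (s * m))

def marie_pays(target: float, p: float, e: float) -> float:
    """Index-payment rule: pay enough that the seller nets `target`."""
    return target / ((1 - p) + p * e)

m, p, r, target = 0.05, 0.20, 250, 1000
for name, savings in [("Charlie", 100_000), ("Charity", 19_000)]:
    e = capped_endorsement(r, savings, m)
    print(f"{name}: endorsement {e:.4f}, Marie pays ${marie_pays(target, p, e):.0f}")
# Charlie: endorsement 0.0475, Marie pays $1235
# Charity: endorsement 0.2500, Marie pays $1176
```

With less savings at risk, Charity can afford a far higher endorsement, so Marie’s all-in price is lower when she pays Charity.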
[Interactive graph: Marie’s Payment vs m when paying Charlie vs paying Charity.]
This graph is showing you that no matter how many new tokens are printed (m), it’s always cheaper for Marie to pay Charity than it is to pay Charlie. This is entirely due to the difference between their respective wealth. Remember, from the perspective of both Charity and Charlie, Marie is paying what looks to them like $1,000; the additional cost from Marie’s perspective is entirely due to the fact that both Charity and Charlie are charging Marie extra to account for their loss due to inflation. Those losses are simply larger for Charlie because he’s wealthier.
This is a very interesting mechanism in its own right. It seems to cause wealth equalizing dynamics. However, it’s only able to do that because it is minting new currency. It seems prudent then to consider the incentives of those who mint that new currency, as well as the expected impact that this sort of economic environment would have on individual and group incentives.
[Author’s note] I ended up breaking off here and doing a standalone writeup of this which you can find here:
This was a kinda fun example of how the process of exploring why something doesn’t work can be the window to seeing something that does.









2 biggest weaknesses of this:
The issue around echo chambers is truly concerning, and I’m not sure it’s sufficiently dealt with by the fact that there’s always an incentive to defect, especially due to social context concerns.
Communicating why the money-disappearing thing is fair is more difficult and involved than I thought it would be.

Outline:
AI Enablement Prevention Policy
Staking a Policy to change it as a token
Disagreement premium
Premium algorithm (will motivate later)
Consequence: move to economic communities in agreement: autopoiesis
Underlying incentive for agreement: it’s expensive to spuriously disagree: if you want more available parking spaces charge for parking
Nothing new: it’s just moving the ability to impose a tariff down to the level of individual
An interesting opportunity is the ability to fund public goods with this scheme
Fund AI Safety Policy
positive preference for that policy
sufficient support for the print?
no taxation without compensation
Risk of plutocracy: why won’t money just win here
Walk through the billionaire case: a town with a manipulable system
the need for a system where in principle the person with $5 can beat the billionaire
mechanism for enabling $5 to win vs billionaire, but under what conditions should they win?
delicate surgery to extract safely the incentives
Einstein story of prediction
Scout mindset story of enumerating how you could be wrong
Incentive for self-constraint, self-invalidation
Popper’s Falsifiability
discovery in arbitration
enumerating categories in categorical bayesian inference
epistemic leverage
example with AI enablement be wrong statements
Get concrete
specific algorithm, walkthrough of how it maps onto the claim mechanism, how epistemic leverage might work with a particular example.
Why isn’t this enough? We have an algorithm, right?
the problem of relevance
speed of light > 5kph
Solution: relevance linking intermediate claims
Will this algorithm be enough? It seems like it, explain why.

Nor gate which is universal, now enables disconnection
from a category theoretic perspective this is giving us collective control over the morphisms, not just the objects (or, equivalently, treating the morphisms exactly like objects, which is also what makes category theory so powerful)
Why haven’t I found the solution? Technicals:
Linking claims must be able to have subclaims and epistemic leverage
Need linking to have an effect on the first iteration.
Has to be able to handle cycles (including some rather complex cycles)
Ability to handle exclusive and inclusive outcomes
The need for the algorithm to be guaranteed to terminate (Ethereum solved this with gas)
is there anything to say about payouts for resolution? the incentive for self-invalidation both wrt the claim and for the close-community?
Should I talk about what my corner of preference and belief space will look like?