Governance via Distributed Adversarial Inference

Proof of X
New technologies like programmable money, machine intelligence, and mechanism design will soon enable a web-based system that acts like a court of law. Here we’ll explore one method of implementing this web-based governance system: Distributed Adversarial Inference.
This new method of governance will be so much better than what we currently have that one day we’ll look back on our existing legal and governance system in the same way we look back on other 1700s inventions like steamboats and sextants: marvels made quaint in light of what we now know.
Today, we have fundamentally new primitives for designing systems of governance. How might governance look different if we were to redesign it?
From Helms to Hubcaps
In the early days of the United States, ships with legal permission from the British government would seize American cargo vessels as part of the British government’s method of enforcing taxation on the colonists; these private pirates were called privateers. When war broke out between the nascent U.S. and England, the U.S. responded by giving its own ships permission to capture privateers. This established a strange precedent in American case law: you can punish property for a crime.
In the context of the Revolutionary War, this made sense. The United States had no legal recourse against the British persons who owned the privateering vessels; their only option for punishing the owners was to confiscate their assets. After taking their ship and cargo in hand, it was only natural to turn around and sell the goods in order to fund the ongoing war.
However, as covered by Mike Ellis, this precedent eventually became twisted into a law that allows police departments to confiscate vehicles without due process. Like a bureaucratic game of telephone, over 200 years of case law winding through epochs of prohibition and mafia activity, that simple precedent metastasized into legal permission, and a perverse incentive, to confiscate and auction people’s vehicles without due process, in some cases without even a hearing.
Initially, the knowledge that we couldn’t prosecute people living in another country, coupled with the fact that the U.S. no longer wanted its cargo seized by privateers, resulted in a mitigation policy: reciprocate by seizing the privateers’ ships, thereby making privateering expensive; if it’s expensive, they’ll do less of it. However, this policy was encoded as law, dropping the underlying causal model in favor of treating it as a new tool: permission to take people’s stuff as a source of funding.
Fundamentally, this is the weakness of a legal system: it prioritizes the law itself, when what matters is the story and context that gave rise to the law. Distributed Adversarial Inference offers an alternative.
Governance As Inference
A powerful simplifying lens through which to view the issue of governance is to consider the whole job as inference: inference about what people want, how things work, and what policies to enact. It’s inference all the way down, including inference about preferences.
What’s economical about this framing is that it implies that if we can induce the right incentives around the process of collective inference, we can create a scalable approach to performing inference.
Imagine if minor credible signals of preference + compatibility with reality could be sufficient to modify policy.
On the small side, tweeting about a pothole gets it fixed.
On the large side, a preference that species not go extinct, plus the inference that species are going extinct, could automatically shift policy toward interventions believed to be effective (and those policies can be dynamically updated as we try them).
A possible approach to enabling governance via inference is Distributed Adversarial Inference, so named because it gives everyone the same power potential (distributed), recognizes that not all preferences are aligned (adversarial), and aggregates information about what’s happening, what people want, and what to do about it (inference).
Implementing Distributed Adversarial Inference
I know of two promising approaches to Distributed Adversarial Inference:
1. Algorithmic (via open modeling)
2. Game theoretic (via claim markets)
Algorithmic open modeling is one way to enable Distributed Adversarial Inference.
It works by allowing anyone to upload computational models and data to an oracle which scores internal consistency of the models with the data. That oracle then rewards participants for their contributions insofar as they are internally consistent.
For example, the most popular and promising oracle at the moment is one based on Active Inference, which can return a score in the form of the reduction of free energy.
That network of models and data can then incent certain behavior (e.g. inducing policy by distributing bounties).
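A minimal sketch of what such an oracle could look like, with illustrative assumptions: "models" are just predictive distributions over shared observations, negative log-likelihood stands in for (variational) free energy, and a bounty is split in proportion to how well each contribution fits the data. None of these names or rules come from an existing system.

```python
import math

# Hypothetical oracle sketch: score uploaded models by how well they
# explain shared data, then split a bounty in proportion to fit.
# Negative log-likelihood is used as a stand-in for free energy.

def gaussian_nll(mean, std, observations):
    """Negative log-likelihood of observations under a Gaussian model."""
    return sum(
        0.5 * math.log(2 * math.pi * std**2) + (x - mean) ** 2 / (2 * std**2)
        for x in observations
    )

def score_and_reward(models, data, bounty):
    """Lower free energy (better fit to the data) -> larger bounty share."""
    scores = {name: gaussian_nll(m["mean"], m["std"], data) for name, m in models.items()}
    # Softmax over negative free energy turns scores into reward weights.
    weights = {name: math.exp(-s) for name, s in scores.items()}
    total = sum(weights.values())
    return {name: bounty * w / total for name, w in weights.items()}

data = [1.9, 2.1, 2.0, 1.8, 2.2]
models = {
    "alice": {"mean": 2.0, "std": 0.2},  # consistent with the data
    "bob":   {"mean": 5.0, "std": 0.2},  # inconsistent with the data
}
rewards = score_and_reward(models, data, bounty=100.0)
```

The design point is the incentive gradient: a model that contradicts the shared data earns essentially nothing, so the profitable move is to upload models (and data) that cohere.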
Game theoretic
Game theoretic distributed adversarial inference allows individuals to make discrete claims about what they believe to be true, and about how their claims relate to or refute one another. Whereas algorithmic open modeling is primarily a method for automatic measurement of internal consistency via a trusted algorithm with an added incentive layer, claim making is primarily an incentive layer (causal inference as a game) which then induces a model ensemble and the data it is to comply with.
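To make the claim-making structure concrete, here is a toy sketch under assumed rules: participants stake on discrete claims and declare "refutes" relations between them, and a claim's standing is its own stake minus the standing of whatever refutes it. The data model and scoring rule are illustrative inventions, not a description of any deployed claim market.

```python
# Hypothetical claim-market sketch: claims, stakes, and refutation edges.
# A refuting claim only counts for as much standing as it itself has,
# so refuting a refutation restores the original claim.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    stake: float = 0.0
    refuted_by: list = field(default_factory=list)  # ids of refuting claims

def standing(claims, claim_id, depth=3):
    """Net stake behind a claim after recursively discounting refutations."""
    c = claims[claim_id]
    if depth == 0:
        return c.stake
    against = sum(max(standing(claims, r, depth - 1), 0.0) for r in c.refuted_by)
    return c.stake - against

claims = {
    "a": Claim("Species X is going extinct", stake=10.0, refuted_by=["b"]),
    "b": Claim("The survey behind claim a was miscounted", stake=3.0, refuted_by=["c"]),
    "c": Claim("A recount confirmed the original survey", stake=8.0),
}
```

Here claim "b" attacks "a", but "c" attacks "b" with more stake behind it, so "b" ends up with negative standing and "a" keeps its full weight: the adversarial structure of the game is what does the inference.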
There are some fascinating things to notice about what happens if we’re successful with either of these approaches:
A new type of consensus protocol is born: Proof of X. It’s the superset of all Byzantine Fault Tolerance schemes because it can accept PoW, PoS, PoH, PoB, Po[A-Z] as claims about which it possesses models (probabilistic proofs), which allows it to perform inference and allocate trust.
Open science is solved. Inference and governance become unified in this framework.
It might be supposed that because PoX is so powerful it is guaranteed to be much harder to build than, say, Proof of Stake, which has infamously taken a long time to be delivered. In fact, the opposite is true. Because PoX can rely on any possible signal to inform its inference, it has a much larger evidence space, which makes offering tight confidences on claims easier than when only paying attention to a single thing (e.g. how much someone has staked / hashed).
PoX is not zero sum with PoS or PoW. It can treat anything as a signal, it can trust (with some confidence) the claims of alternative platforms. If the Bitcoin network says you own a bitcoin a PoX network will likely agree as long as it accepts the precepts on which Bitcoin is predicated (namely that hash power is limited and hashing is difficult). Similarly, other PoX systems will also be able to reference one another. Each PoX can possess a “Theory of Mind” for the reliability and functionality of other PoX and weight their attestations accordingly, performing inference on the inferences of one another (much like you’re doing to my claims right now).
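The "Theory of Mind" weighting described above can be sketched as ordinary probabilistic evidence-combination. In this illustrative assumption, each peer network's attestation about a fact is weighted by an estimated reliability and folded into a posterior via naive-Bayes log-odds updates; the reliabilities and the independence assumption are simplifications, not a protocol specification.

```python
import math

# Hypothetical sketch: a PoX network combining attestations from peer
# networks, each weighted by an estimated reliability (its "theory of
# mind" for that peer). Independence of attestations is assumed.

def combine_attestations(prior, attestations):
    """
    prior: prior probability that the claim is true.
    attestations: list of (says_true, reliability) pairs, where
    reliability = P(peer reports true | true) = P(peer reports false | false).
    Returns the posterior probability of the claim.
    """
    log_odds = math.log(prior / (1 - prior))
    for says_true, r in attestations:
        lr = math.log(r / (1 - r))  # log likelihood ratio of a "true" report
        log_odds += lr if says_true else -lr
    return 1 / (1 + math.exp(-log_odds))

# Two networks it trusts highly attest "true"; one it barely trusts says "false".
posterior = combine_attestations(
    prior=0.5,
    attestations=[(True, 0.95), (True, 0.9), (False, 0.6)],
)
```

The unreliable dissenter barely moves the posterior, while a peer modeled at reliability 0.5 would contribute nothing at all: trust is allocated by inference rather than granted wholesale.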
Finally, you might begin to think that this sounds a bit like AGI: it is not and does not require cracking AGI. It is really quite banal at the bottom of it all: humans make claims about how things work, and make claims about what they’ve seen. Generative models + data. It doesn’t have to be terribly smart to be incredibly useful, especially initially.
And the project is already underway!
For example, our beachhead problem is to solve climate change by paying stewards to care for ecosystems, funded by selling impact offsets into existing carbon markets.
Meanwhile, claim markets are our solution to collective governance. We’re exploring using claim markets as the game theoretic alternative to algorithmic induction. There are probably many other ways to skin the game-theoretic-distributed-adversarial-inference cat; this is just the one I’m familiar with because I’ve worked on it.
If you know other people working on distributed adversarial inference, please let me know!
Hint: most people working on DAOs are trying to solve this problem in one way or another.