
Sketch Inference in the Negation Game

The negation game is an experiment in collective intelligence. If you’d like to play an early prototype, you can find it here:
In this short post I’m going to try to communicate the key ideas of how we’re trying to coax collective intelligence from individual contributions.
As brief motivation: we know that it’s possible to create systems of collective intelligence out of individual incentives. Markets are one such mechanism: they aggregate individual moves to achieve collective outcomes. We also know that markets aren’t sufficient on their own, as they don’t self-regulate (cf. externalities).
Perhaps we can do better.

Science’s Good Idea

In order to approach collective intelligence we’re going to steal a key idea from the philosophy of science: making statements about how you could find out that you’re wrong improves the credibility of your ideas.
The development of the General Theory of Relativity is frequently offered as the prototypical example of how a theory’s falsifiability is an important part of evaluating it.
As the story goes, Einstein’s theory made some surprising predictions. Among them was the prediction that the curvature of spacetime around massive objects would cause gravitational lensing: light would literally appear to be coming from the “wrong spot” compared to the usual map of the locations of stars in our night sky.
According to the theory, even our sun should be capable of producing this sort of lensing effect. But it was too hard to tell just by looking at the sun, because its brightness drowns out the stars around it (scientists call this effect “daytime”). It was speculated, however, that during a solar eclipse it should be possible to photograph the lensing effect. So, for the solar eclipse of 1919, the astronomer Arthur Eddington travelled to the island of Príncipe, off the west coast of Africa, to photograph the stars near the eclipsed sun (a companion expedition observed from Sobral, Brazil).
Later that year the Royal Society of London accepted the observations as evidence of the veracity of Einstein’s theories.
[Images of the 1919 eclipse observations, courtesy of Wikipedia]
This story has become a prototype of the scientific process: publish a theory, see what observations it predicts, and then check those predictions. It’s the story told by the philosopher Karl Popper to motivate his theory of what is and what is not science (his answer to the Problem of Demarcation). And it also manifests in a theory of intelligence and consciousness called the Bayesian Brain hypothesis, which posits that the brain maintains a model of the world, runs that model to make predictions about what it will observe, and updates the model when it encounters surprising observations, i.e. observations it did not predict.
It’s worth noting that many aspects of this story are simplified here to make a point, and are the subject of ongoing debate. For example, there is some debate as to whether Eddington’s observations were actually precise enough to count, or whether it was only in subsequent observations that the deflection was clearly distinguishable. Those debates don’t change the overall message of the story.
Here is where I’ll begin to opine. Often, when this story is retold, Eddington’s experiment is described as a “confirmation” of Einstein’s theory. As we build the negation game, the core perspective we’ll take is not that Eddington confirmed Einstein’s theory but rather, at the risk of a mouthful, that Einstein’s theory offered surface area for invalidation, and then Eddington’s observations failed to invalidate the theory. Similarly, some accounts place the emphasis on the “predictions” coming from Einstein’s theory, with some arguing that a theory isn’t scientific unless it makes predictions. Here, we instead say that Einstein’s theory was more precarious because the predictions it made were surprising, and that made the failure to invalidate all the more persuasive. These concepts of surface area for invalidation, failure to invalidate, and the principle of precarity (the more precarious a claim, the more persuasive its survival) will become key ideas as we describe the negation game.

The Mechanisms

And now, without further ado, let us begin to discuss the structure of the mechanisms in the negation game. The first stop on our journey is the mechanism of staking, shortly followed by epistemic leverage. For many readers, the concept of staking is likely to be familiar from its role in securing the Ethereum chain via Proof of Stake. The essential idea behind Ethereum’s system is that the person who stakes earns a reward for doing so, but they can also lose some of their stake (called getting “slashed”) if they disagree with the consensus. The job of the person staking in this context is to predict which choice everyone else is going to make, so they get their reward and avoid the punishment. Since the decision they have to make is quite simple, it’s easy for everyone to stake accurately.
Similar to Proof of Stake, the objective of the negation game is to create an incentive landscape that can achieve a collective view by weighing the contributions of various players. Unfortunately, unlike the questions the Ethereum network has to answer (essentially, “in what order did I receive these messages?”), the questions that communities have to answer are plagued by ambiguity, and therefore automated slashing as a result of disagreeing with the consensus is impractical.
In its stead, the negation game reaches for voluntary slashing as the means of achieving consensus, and does so by making it rewarding for a player to slash themselves. But why would someone slash their own stake, causing them to lose money? There are two kinds of incentives the system can give to make this self-slashing a desirable move:
- additional influence: as a result of slashing oneself, a player gains (or regains) influence
- long-term earnings: as a new consensus forms, players that slashed sooner than others receive financial rewards for informing the consensus
The former of these two incentives we call epistemic leverage. It permits a player to make statements about how they could find out that they’re wrong and, in exchange, gain more influence in the network; for example, they might do this to increase the likelihood their preferred policy is enacted.
Concretely, this is how epistemic leverage works:
1. a player stakes against a policy outcome they prefer
2. the player extends that stake by also staking that a contradictory argument is not true, i.e. an argument that, if true, would erode their position (potentially prompting them to partially slash their earlier stake)
3. other players place a secondary stake betting that the player will never slash
[Image: a visualization of epistemic leverage]
The purpose of the secondary market is to identify surprising signals. What makes a signal surprising is that it comes from a player who is believed to be honest and who is providing information the network believes to be wrong. It’s not possible to infer either of those properties directly, but we can know (given certain assumptions) that if a player slashes their stake it must be because they were honest and they thought they were wrong.
In other words, this market is estimating the joint probability of wrong and honest. You’ll notice that these are exactly the conditions that preceded the empirical tests of General Relativity in the earlier story: predictions that could be falsified by observation and were expected to be. This is an incentive-based falsifiability detector.
What’s potentially compelling about this formulation is that it offers closure within the mechanism. It’s not possible for any game-theoretic mechanism to directly sense the outside world (e.g. see the stars); it can only know the bets players make and the stakes they slash, and it must use those as inputs to its internal algorithm for estimating the trustworthiness of information.
It’s worth explicitly stating a non-goal: this mechanism is not trying to estimate whether a proposal is true or good. I don’t know how to directly extract that information; that has to be done through the normal methods of conversation and debate. Rather, this mechanism is useful for training collective attention onto surprising information so that the conversation can continue.
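To sketch what “training attention” could mean mechanically: if the secondary market’s price can be read as an implied probability that the staker will eventually slash, then ranking claims by that probability yields a crude attention queue. The interpretation and names below are assumptions for illustration, not the implemented algorithm:

```python
def surprise(p_slash_implied: float) -> float:
    # Under the argument above, the market-implied chance of a voluntary
    # self-slash proxies the joint probability of (wrong AND honest).
    return p_slash_implied

def attention_queue(markets: dict) -> list:
    # Rank claims so collective attention lands on the most surprising
    # signals first.
    return sorted(markets, key=lambda claim: -surprise(markets[claim]))

print(attention_queue({
    "claim A": 0.10,  # trusted player, probably right: low surprise
    "claim B": 0.65,  # trusted player, likely wrong: high surprise
}))
# -> ['claim B', 'claim A']
```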

The Properties of the Mechanism

There are two properties to highlight that this mechanism engenders:
1. it rewards being seen as an intellectually honest player, i.e. a player who can find out that they’re wrong
2. it does not overpunish error
Intellectual honesty is perhaps the more obviously good of the two, so let’s start there.
Existing social networks are a masterclass in incentive design, proving to each of us that, independent of our underlying personality, our incentive environment has significant influence over our resulting behavior. The holy grail of social networking has therefore been to find ways to engender prosocial and epistemic behaviors while still managing to attract and retain a userbase. Unique to epistemic leverage as a mechanism set is that players gain influence in the domains they care about in proportion to the degree that they are believed to be trustworthy players.
In the political sciences this problem is notoriously hard. The problem of deciding who should have voice in matters of public controversy is often framed as “judge selection”, and what makes it so hard is that there are no good solutions. On the one hand, if you grant judges life tenure they are free from political oversight; this gives them latitude to vote their conscience without the concerns of reelection, but it also means that whatever mechanism places them on the bench becomes a hotly contested political battleground. If instead of life tenure they face a vote for their seat, whether from the public or an oversight committee, they are incentivised to cater to the whims of those voting parties instead of to the legality or facts of the case. Typically it’s not possible to sample from the various “judges” in proportion to the degree to which they are viewed as honest actors even by their adversaries; this mechanism offers that tantalizing possibility.
Social following algorithms can be thought of as a kind of solution to the judge selection problem. Your feed is populated with “judges” of varying repute whom you have chosen to follow for their takes. This allows for a more bottom-up voice-discovery mechanism than judge selection, but it suffers from indiscretion. Nothing about a follow or a like implies the informativeness of an idea, nor the reliability of its author. An account is just as likely to achieve stratospheric success for flouting prosocial norms as for following them. This isn’t to say that those other mechanisms of influence and reach shouldn’t exist, just that epistemic leverage offers an alternative with stronger epistemic properties.
Markets are also bottom-up mechanisms that have little interest in the individual reputations of participants. This is fine; they serve a different purpose. The improvement epistemic leverage offers is amending markets’ incentive to hoard private information. Private information is the substance of market movements. If a player has a valuable source of information, their best incentive has them privately trading on that information, certainly not disclosing it, as that would allow other participants to capitalize on the insights. This is generally fine for markets, but is terrible for other domains. Imagine scientists refusing to publish their insights, data, or methods. Or the marketing department refusing to tell sales about their intended strategy. Sensemaking is best done, perhaps can only be done, in the context of shared information. This mechanism doesn’t disincentivize the sharing of information, and as we’ll see, it can be easily augmented to significantly reward it.
While highlighting the fair-minded, intellectually honest player is an easily laudable goal, it’s forgivable to view the permitting of error with some doubt. After all, do we really want to have our attention drawn to ideas of dubious merit? Rather than argue on behalf of error, it’s more important to linger on the consequences of suppressing it. Breakthrough, paradigm-shifting insight is highly aleatoric. It cannot be predicted, literally because the prevailing paradigm cannot conceive of it. It’s therefore impossible to meander down a wide road of new breakthroughs; instead, it must be done the opposite way, by mapping the negative space. If it’s expensive to explore moonshot ideas then most effort will be spent on incrementalism, which has orders-of-magnitude smaller returns. For a much more thorough exploration of this topic please see . In the design of sensemaking systems it’s desirable to reward both exploration and exploitation; overpunishment of error suppresses exploratory behavior.
As generators of new social norms, these two properties offer a picture of a world where many contributors asynchronously identify manifold possibilities, their attention trained on the theories from trustworthy proposers that are most likely to be wrong, which are quickly resolved by those proposers so as to retain their good reputation for intellectual honesty. Strange ideas can quickly bubble to the top, receive attention, and then recede back into the darkness. Occasionally, one won’t recede; instead it will earn converts and further staking, catapulting its circle of attention and critique even wider.

What to do with the stake?

As the idea’s reach grows we can easily imagine its stake growing, too. Until now we haven’t discussed where the stake goes when a player voluntarily slashes it. Perhaps with the foreshadowing of the problems of private information you too can see the shining opportunity for what to do with those freed funds.
The Riemann Hypothesis and Google’s digital infrastructure have something in common: their ostensible impenetrability, and a large bounty for anyone who can make progress against it.
The bounties offered for progress on these problems themselves encode significant information. The willingness to offer the bounty tells you that the issue is important. The continued presence of the bounty tells you that it hasn’t yet been solved. The size of the bounty and the duration it remains open tell you that the problem is hard, resilient in some way to attack. These signals can be incredibly valuable. Imagine software repositories that stake that their code is secure, creating a bounty for engineers to double-check. Imagine insurance companies that stake the assertion that your data isn’t sold to third parties, creating a bounty for employees to leak receipts. Imagine manufacturers that confidently stake statements of their ethical manufacturing practices, creating a bounty for journalists to discover the alternative. Imagine scientists that stake the likelihood of replication, and the integrity of their datasets, creating bounties for other labs to double-check.
Despite the significant income stream created for these professions, the bounty itself isn’t the whole point. In addition to spawning an industry, it also provides valuable information to decision makers. Just as 100 reviews with an average of 4.9 stars is a signal of quality, standing bounties, measured in their size and their duration, will become a signal of integrity.
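As a toy illustration, one could read a standing bounty’s size and duration together as a single signal. The log weighting below is an assumption chosen for exposition, not a derived formula:

```python
import math
from datetime import date

def bounty_signal(amount: float, opened: date, today: date) -> float:
    # Size suggests importance and confidence; duration suggests the
    # problem is resilient to attack. Combine them (illustratively) as
    # a product of log-scaled terms.
    days_open = (today - opened).days
    return math.log1p(amount) * math.log1p(days_open)

# A large, long-standing bounty says more than a small, fresh one:
print(bounty_signal(50_000, date(2021, 1, 1), date(2024, 1, 1)))  # ~75.7
print(bounty_signal(50, date(2023, 12, 31), date(2024, 1, 1)))    # ~2.7
```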
So far, we’ve been seeking to paint a picture of how this mechanism of epistemic leverage might be useful. But as yet we’ve not touched the details of implementation. There are many details to attend to. For now, we’ll briefly touch on those most begging for attention.
Foremost among the dynamics to be tuned is the relationship between the amount staked and the magnitude of influence given in return. Without trying to cover all cases, here are some general things to consider (a toy sketch follows the list):
- if the incentive of influence is to matter, it must be tied to policy outcomes. Total influence (meant as a technical quantity that aggregates a player’s stakes, the community’s voice, etc.) should directly select the policies that are implemented
- staking should return approximately the amount of additional influence that could be earned by playing more clandestine influence games with the same amount of money. This way it doesn’t incent Sybil attacks, and it brings the player into the game rather than around it
- staking counterarguments is akin to saying that they are observations that would be surprising, and so should still yield some additional influence, though it’s not clear how much
- the signal value of the standing bounty would ideally be factored in as part of the reward the player is receiving, and concomitantly reduce their influence, as it’s one of the outcomes they’re paying for. In practice this may be infeasible
- the likelihood of slashing is likely non-monotonic in stake size. I.e. if someone bets $1 maybe you don’t think they’re honest, if they bet $50 you do, and if they stake their life savings the likelihood they’re going to slash the whole thing has fallen off. Perhaps address this by designing the market mechanism to estimate the amount that will be slashed by a certain timestamp, as with a bond
- reputation management will be an important part of these networks, and stakers will be attentive to the systems that bettors use to evaluate their honesty. This is an opportunity to set good social norms with information visualization
- when disagreements arise, the mechanism will have to be used recursively, expanding the number of people that see it
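Here is one possible shape for that influence function, as a toy sketch of the first few notes above. Every constant and name is an assumption made for exposition, not a tuned design:

```python
def influence(primary_stake: float,
              counter_stake: float,
              bounty_signal_value: float,
              outside_rate: float = 1.0,        # influence per dollar in clandestine games
              counter_multiplier: float = 0.4,  # "some additional influence, unclear how much"
              signal_discount: float = 0.1) -> float:
    # Roughly linear in stake, matching the outside option, so splitting
    # a stake across Sybil identities gains nothing.
    raw = outside_rate * (primary_stake + counter_multiplier * counter_stake)
    # The standing bounty's signal value is one of the outcomes the player
    # is paying for, so it concomitantly reduces influence (in practice
    # this may be infeasible to measure).
    return max(0.0, raw - signal_discount * bounty_signal_value)

print(influence(primary_stake=100, counter_stake=50, bounty_signal_value=75))
# -> 112.5
```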
So far, we’ve only discussed the first of the two kinds of incentives that the system can offer the player: additional influence through epistemic leverage. This is sensible because the second one, financial rewards, is much more difficult to imagine. One possibility is to find a mechanism that sensibly emits financial rewards to players for slashing their stake before other people do. This does serve a practical purpose: it incentivizes players to reward information provision. Imagine, for example, a thoughtful professor who stakes a statement (P=NP) and a couple of possible theorems that would invalidate the statement if proven. Now there is a reward for a grad student to claim if they can prove the theorems. If they do, the professor slashes her stake, the student earns the reward, and then eventually, when the result reaches a broader consensus as valuable and correct, the professor’s creation of the incentive is also rewarded for its role in directing the attention of the student. While this is a sensible story to tell and a nice dynamic if it can be achieved, it’s highly doubtful that it’s possible to do this in a mechanized way without first solving generalized retroactive funding for public goods. Maybe if Index Wallets were further developed.
Thanks for reading! It’s now 2:13am on December 24th, Merry Christmas Eve, I’m going to bed.
P.S. One idea that seems important to mention is that there are two possible ways to reward contributors. When the collective is confident that they want a certain outcome, they can reward bounty fulfillers with a liquid asset (e.g. USDC, ETH, etc.), whereas when an individual is confident that an outcome will one day be valued by the collective, they can claim an illiquid asset (e.g. a special token) that could one day appreciate if the outcome becomes valued. A mixture of these can also be used in cases where the collective wants to award a bounty to an individual while simultaneously aligning their incentives with delivering the intent (not merely the letter) of the bounty; one day, when the impact of the work is realized, the token can be bought back from the bounty fulfiller as a form of retroactive funding.
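A minimal sketch of that mixed payout, with the split parameter and token names as illustrative assumptions:

```python
def bounty_payout(total: float, collective_confidence: float) -> dict:
    # High collective confidence pays mostly liquid assets now; low
    # confidence pays mostly an illiquid token that appreciates only if
    # the outcome is later valued (and can be bought back as a form of
    # retroactive funding).
    liquid = total * collective_confidence
    return {"USDC": liquid, "outcome_token": total - liquid}

print(bounty_payout(10_000, 0.8))
# -> {'USDC': 8000.0, 'outcome_token': 2000.0}
```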
