The Algorithm

At a high level, the goal here is to create a tool for collective inference. The approach is to take a graph describing invalidating relationships between points that people have staked, and then render a final score for each point. Here's an example to give you a flavor of what it might look like.
Let's imagine we have a network of points related to one another through arrows that act as "negations". The arrows indicate that the points are contrary, or constitute contradicting evidence.
[image: example network of points connected by negation arrows]
We're going to implement an algorithm that produces this final score:
[image: the same network with final scores]
In other words, point 0 has become worth 200 while everything else is worth 0.
Another way to frame this: we're looking for an algorithm that transforms the first matrix into the second one:
[image: the initial and final score matrices]

The game is really as simple as, "have each point subtract its value from the value of each other point until there is no value left to subtract." A friend likes to describe it as a turn-based RPG where each point has health that doubles as its attack power, and the points attack one another until the health is gone and there's a single winner (or none).
So that's the game. Here's another example just to let it sink in:
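As a concrete sketch, here is one way the game could look in Python. This is my own minimal simplification for illustration, not the notebook's actual credence code: `negates[i]` lists the points that point `i` attacks, each attack deals up to `step` damage per round (capped by the attacker's own value), and attackers only lose value when they themselves are attacked.

```python
# Minimal sketch of the subtraction game described above.
# A simplification for illustration, not the notebook's code.

def subtraction_game(stake, negates, step=1.0):
    """Run the turn-based game: every live point simultaneously deals
    up to `step` damage (capped by its own value) to each point it
    negates, until no attacks remain. Returns the final values."""
    value = list(stake)
    while True:
        damage = [0.0] * len(value)
        for i, targets in enumerate(negates):
            if value[i] <= 0:
                continue  # dead points can no longer attack
            for j in targets:
                if value[j] > 0:
                    damage[j] += min(step, value[i])
        if not any(damage):
            return value  # no value left to subtract: terminate
        for j, d in enumerate(damage):
            value[j] = max(0.0, value[j] - d)

# Two mutually negating points: the larger stake survives with the
# difference, the smaller is driven to zero.
print(subtraction_game([3.0, 1.0], [[1], [0]]))  # [2.0, 0.0]
```

Termination follows because total value only ever decreases, and every round in which anything happens removes a fixed positive amount from some point.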
[images: a second worked example of the game, step by step]

I'm tempted at this point to put in an example of claims that invalidate one another, but I don't think that's the best move at the moment. You'd just be left wondering why it isn't the case that the person who puts in the most money wins. And of course it is the case that the person who puts in the most money wins, because nothing about this scoring algorithm ensures that "good behavior" (whatever we mean by that) earns additional influence. That's a problem for .
But we're going to have to do an example, because I need to show you a problem.
Let’s say this is an invalidation graph for the shape of the Earth:
[image: invalidation graph for the shape of the Earth]
Each arrow points from a point to the points it invalidates. So, "The Earth is flat" has an arrow to "The Earth is round", which has an arrow right back. But "The Earth is hyperbolic" doesn't point to "The Earth is not flat", because those two are in agreement with each other; they don't invalidate.
Now imagine that each of these claims has some amount staked on it, and so, depending on how much is staked on each, you get a final score, called the "credence". Here's the problem:
[image: the same graph with an irrelevant point added]
Now someone has added the point, "The speed of light is greater than 15kph". This point is obviously true, and equally obviously irrelevant. This means that in order to adjudicate the veracity of a point, we must also be able to adjudicate the relevance between points.
To accomplish this, we make the relevance between points itself a node that can be disputed; this we call a negation. Only if the value of the negation is above 0 do we allow the invalidation signal to propagate to the target point.
[image: a negation node gating the invalidation edge]
This way we don't need to reach outside the system to resolve the relevance question; everything can be done by recursively relying on the same elements.
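To make that concrete, here is a hedged sketch (again my own simplification, not the notebook's code) in which each invalidation edge is itself a point with a stake. An attack only lands while its edge-point is above zero, so staking against the edge-point cuts off an irrelevant attack:

```python
# Sketch: invalidation edges are themselves stakeable, attackable points.
# `edges` maps an edge-point's index to its (source, target) pair; the
# target of an edge may itself be another edge-point.

def gated_game(stake, edges, step=1.0):
    value = list(stake)
    while True:
        damage = [0.0] * len(value)
        for r, (src, tgt) in edges.items():
            # The attack propagates only while the edge-point r is alive.
            if value[r] > 0 and value[src] > 0 and value[tgt] > 0:
                damage[tgt] += min(step, value[src])
        if not any(damage):
            return value
        for j, d in enumerate(damage):
            value[j] = max(0.0, value[j] - d)

# Hypothetical stakes for the example above:
# 0: "The Earth is round" (stake 3)
# 1: "The speed of light is greater than 15kph" (stake 100, irrelevant)
# 2: edge-point for 1 -> 0 (the disputed relevance claim)
# 3: a point disputing that relevance (stake 5)
# 4: edge-point for 3 -> 2
final = gated_game([3.0, 100.0, 1.0, 5.0, 1.0], {2: (1, 0), 4: (3, 2)})
print(final)  # [2.0, 100.0, 0.0, 5.0, 1.0]
```

Once the relevance edge-point is zeroed, the irrelevant point's huge stake stops mattering; without the dispute (drop edge-point 4), point 0 would be ground down to zero instead.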

There are some other constraints to care about:
calculating the credence must terminate
it must not favor any particular point a priori (e.g. PageRank favored certain well-known nodes like ABC.com; fine for them, not fine for us)

This can be achieved! Check out this Colab notebook to play around with the algorithm in your browser:
To entice you to open it, let me promise you that I'm extremely embarrassed by the code quality, and I apologize profusely.

In the notebook you can build invalidation graphs with simple code:
[image: code that builds an invalidation graph]
Visualize them:
[image: visualization of the graph]
And then view their result when they’ve finished calculating the credence:
[image: final credences after the calculation]

You might be interested to know that it’s possible to build NOR gates using these structures:
[images: NOR gates built from invalidation structures]
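Here is a hedged sketch of how a NOR might fall out of the subtraction dynamics. This is my own construction using the same simplified game as the earlier sketch, not necessarily the gadget in the images: inputs A and B both negate an output point O, so O survives only when neither input carries any value.

```python
# A NOR gate from mutual-invalidation primitives (illustrative sketch,
# using the same simplified subtraction dynamics as the earlier example).

def subtraction_game(stake, negates, step=1.0):
    value = list(stake)
    while True:
        damage = [0.0] * len(value)
        for i, targets in enumerate(negates):
            if value[i] > 0:
                for j in targets:
                    if value[j] > 0:
                        damage[j] += min(step, value[i])
        if not any(damage):
            return value
        for j, d in enumerate(damage):
            value[j] = max(0.0, value[j] - d)

def nor(a, b):
    # Points: 0 = input A, 1 = input B, 2 = output O (stake 2.0).
    # Both inputs negate the output; any live input grinds O to zero.
    final = subtraction_game([a, b, 2.0], [[2], [2], []])
    return final[2] > 0

print(nor(0.0, 0.0))  # True
print(nor(1.0, 0.0))  # False
print(nor(1.0, 1.0))  # False
```

Since NOR is functionally complete, wiring enough of these together yields arbitrary boolean circuits, which is the intuition behind the Turing-machine claim.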

This means that you should be able to use these primitives to build a Turing machine. The reason it can simultaneously be Turing complete and certain to terminate is that at each step it subtracts some of the value in each point. (It's not actually burning the staked currency in anyone's account; it just takes the staked value as input, converts it to an internal value, and uses that as a variable for the calculation.) This is somewhat analogous to gas in Ethereum, where if you don't provide enough gas the computation errors out and returns nothing. In our case, though, we keep the final result whatever it might be, even if it didn't "complete", whatever that might mean.

Please ask me to provide an intuitive motivation of the credence function; it's probably much easier to approach with that in hand than by looking at the code alone.

Open questions here:
can the credence code be made simpler (ignoring performance characteristics for now)?
can we use this to learn simple systems, make predictions, make decisions? E.g. the classic mouse in a T-maze
[image: mouse in a T-maze]
Are there any obvious “higher order” properties of a player’s behavior that seem interesting? E.g. if they stake on a negation that makes one of their preferred points weaker, can that be used to reward them with a higher score in exchange for being intellectually honest?
What if the linking point used a different activation function, such as a ReLU, rather than an on/off switch?
Are duplicate points a problem? Can they be easily detected and factored out?
Can you have "personal" scores, e.g. "according to what I've staked the final prediction is X, but the collective prediction is Y"? Does that open up any opportunities for making "worldview prediction accuracy" into its own score, like you might do with a calibration chart?
