The Bribery Problem

The first key principle of a crypto governance system is this: bribery is extremely easy to perform, and difficult, bordering on impossible, to detect and disincent.
Bribery already happens in many governance systems, and it takes a range of forms. The most egregious kind is when a judge or a voter accepts money in exchange for using their influence in favor of the person who bribed them. On the other end of the spectrum, offering free bus rides to the polls, handing out sandwiches at the ballot box, or leaning on a long-time relationship with the judge are more benign ways to swing a decision, but they nonetheless have the flavor of using money or resources to achieve a desired end.
Detecting this sort of behavior and preventing it is extremely challenging, especially in crypto systems.
To drive this point home, let's imagine that we tried to solve the problem of bribery by offering a bounty to anyone who had information about a bribe taking place. In this way, we're trying to bribe people to tell us about bribery. Notice that this doesn't even vaguely get us to our goal of detecting and penalizing bribery, because:
Who evaluates that information to determine if it's actually bribery? Another governance mechanism that itself could be subject to bribery?
If you reward detection of bribery, then people will learn to manufacture it. This is known as the Cobra Effect, named after a time when the British government in India offered a reward for the heads of cobras in an effort to kill off all the feral cobras in the cities. In response, people began to breed snakes, chop off their heads, and sell them to the government. Particularly funny is that when the government finally caught on to what was happening and changed the rule, a bunch of people were left holding a worthless stock of cobras, which they released onto the streets. The same thing would occur here: instead of detecting conspiracies, the bounty would become excellent at detecting conspiracies that were created precisely to be detected, so that the bounty could be claimed by the conspirators.
By the way, there's a sense in which the bribery problem is a more general statement of the Sybil problem. In the classic framing of the Sybil problem, we're trying to prevent people from creating multiple accounts which they can use to take some sort of "policy action", like voting or liking a post. A concrete and canonical example of hard work on preventing Sybil issues is the Gitcoin Grants platform, which puts a lot of effort into preventing people from creating multiple accounts. This matters because Gitcoin uses quadratic funding for its projects, which means (roughly) that one person gets $1 of matching funds for donating $1, but only $2 of matching funds for donating $4, and only $3 of matching funds for donating $9. This creates an incentive to create multiple accounts: if you had 9 accounts, they could each donate $1, which would net you $9 in matching funds instead of the $3 you would have gotten from donating the same $9 from a single account.
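Here's a toy sketch of that arithmetic, using the simplified model above where matching funds grow with the square root of each account's donation (the function names are just for illustration):

```python
import math

def match_for_account(donation: float) -> float:
    """Simplified model from above: an account's matching funds grow with the
    square root of its donation ($1 -> $1, $4 -> $2, $9 -> $3)."""
    return math.sqrt(donation)

def total_match(donations: list[float]) -> float:
    """Total matching funds across a set of per-account donations."""
    return sum(match_for_account(d) for d in donations)

# One account donating $9 vs. nine Sybil accounts donating $1 each.
print(total_match([9]))      # 3.0
print(total_match([1] * 9))  # 9.0 -- three times the matching funds
```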
Gitcoin makes a valiant effort at solving this problem. However, notice that there's an alternative defection strategy to creating Sybil accounts: bribery (or collusion). It's profitable for a nefarious donor to pay verified Gitcoin accounts to donate on their behalf. This means that if the donor was going to give $1000, where they would only expect to get $31 in matching funds, they can instead pay a bunch of individuals to donate on their behalf. In fact, even if the nefarious donor must pay 90% of the donation to participants, it would still be profitable to split it up this way as long as the donor can find at least 11 accounts to bribe. This is true of all donation sizes.
In general, we can calculate the profit from bribery (relative to donating directly) with this formula:

profit(x, n, f) = (n − 1)(1 − f)x − fx = x(n(1 − f) − 1)

Here, x is the planned donation, n is the number of accounts the donor can bribe, and f is the fee (as a fraction of the donation) that the bribed accounts collectively charge. Given how much money there is to be earned here, I would be shocked if this isn't already happening. The fee that a collusion group can charge is proportional to their size: if the group is 100 people, they can charge any fee less than 99% of the donation! (In general, it's profitable for a nefarious donor to prefer collusion in any case where the fee f < (n − 1) / n, where n is the number of bribed accounts the donation is split among.) Clearly, this is bad news for the collusion resistance of quadratic funding.
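And a quick numerical check of that formula (again, just an illustrative sketch), reproducing the 90% fee / 11 accounts example and the (n − 1)/n threshold:

```python
def bribery_profit(x: float, n: int, f: float) -> float:
    """Profit vs. donating x directly, when the donation is split across n bribed
    accounts that collectively keep a fee f of the donation:
    profit = (n - 1)(1 - f)x - f*x = x * (n*(1 - f) - 1)."""
    return x * (n * (1 - f) - 1)

def max_profitable_fee(n: int) -> float:
    """Collusion is profitable whenever f < (n - 1) / n."""
    return (n - 1) / n

print(bribery_profit(1000, 10, 0.90))  # ~0: break-even at 10 accounts
print(bribery_profit(1000, 11, 0.90))  # ~100: profitable with 11 accounts
print(max_profitable_fee(100))         # 0.99: a 100-person group can charge up to 99%
```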
Is it pragmatic to collude, or are there reasonable defenses against it? The question essentially comes down to whether it's possible for verified accounts to:
prove to nefarious donors that they actually donated
avoid detection by Gitcoin
Both of these are unfortunately easy; they are *enabled* by the norms and technology of crypto, not hindered by them.
In the simplest form, we can imagine that someone might build a smart contract, let's call it the "Colusseun" (since it's a collusion battlefield), that allows donors to specify the project they want to fund and automatically coordinates the collusion among a network of bribe-friendly accounts. Aren't smart contracts great?
The Colusseun smart contract can request the name of the project (in an encrypted form) and then request the funds from the donor. Then, Colusseun privately messages its bribe-friendly accounts a list of projects to donate to, along with the recommended donation quantities. Astoundingly, creating a list with the exact quantities to donate is natively supported in Gitcoin.
Separately, the bribe-friendly accounts can register with Colusseun, indicating 1) which account to watch for the donations on-chain, and 2) what address to send the bribe to upon completion. The bribe-receiving account can be brand new and completely separate from the account used for the Gitcoin donation.
Once the donations are complete, Colusseun can emit the bribes. In principle, you might believe it's possible to easily pair the bribes emitted by Colusseun with the donations that were given to projects, but it's trivial for Colusseun to obscure this information. For example, instead of promising to give exactly x% of the donation to each participant, it could promise to give x% *on average*, randomizing the payouts over each donation so as to obscure exactly which donations they relate to. A second strategy it could employ is to ask for multiple bribery accounts from each agent, and then to emit payments in random fractions over those accounts. To further confuse things, it could emit payments to randomly selected accounts that have nothing to do with the collusion in order to muddy the signal of which accounts are being bribed (e.g. emit payments to the accounts of existing Gitcoin donors). All of this loss is worth it from the perspective of the donor; it's just part of the fee they're paying to get more of a return.
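To make that concrete, here's a rough sketch of the payout logic just described, in Python rather than an actual smart contract and with invented names: randomize each payout around the promised average, split it in random fractions across each participant's registered payout addresses, and mix in decoy payments to unrelated accounts.

```python
import random

def obfuscated_payouts(
    payout_addresses: dict[str, list[str]],  # donating account -> its registered payout addresses
    donations: dict[str, float],             # donating account -> the donation it actually made
    avg_fee: float,                          # fee promised *on average*, e.g. 0.10
    decoy_accounts: list[str],               # unrelated accounts used to muddy the signal
    decoy_budget: float,
) -> list[tuple[str, float]]:
    """Sketch: pay each briber roughly avg_fee of their donation, randomized and
    spread over multiple addresses, plus small payments to decoy accounts."""
    payments: list[tuple[str, float]] = []
    for account, donated in donations.items():
        # Randomize around the promised average so payouts don't map 1:1 to donations.
        amount = donated * avg_fee * random.uniform(0.5, 1.5)
        addresses = payout_addresses[account]
        # Split the payment in random fractions over the registered addresses.
        weights = [random.random() for _ in addresses]
        for addr, w in zip(addresses, weights):
            payments.append((addr, amount * w / sum(weights)))
    # Emit payments to accounts that had nothing to do with the collusion
    # (e.g. existing Gitcoin donors) to obscure which accounts are being bribed.
    for addr in decoy_accounts:
        payments.append((addr, random.uniform(0.0, decoy_budget / max(len(decoy_accounts), 1))))
    random.shuffle(payments)
    return payments
```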
You might imagine that, as part of The Game, Gitcoin might begin to intercept these recommended donation lists emitted from Colusseun and write code to automatically penalize projects for their affiliation with the smart contract. Reflexively, this means that Gitcoin has just handed the users of Colusseun a way to nuke grants they don't like. Just make a small donation through Colusseun to the grant you don't like and, for just that small fee, Gitcoin will allocate less matching funds to the project that's competing with you.
But wait! It turns out that Vitalik thought about this bribery problem and proposed a way to prevent this sort of coordination. It's called pairwise coordination subsidies, and the basic idea is that you can check whether players A and B both contributed to the same project and then proportionately discount their contributions if they did. This scheme offers an improved bound on the total loss due to coordination among agents: roughly M times the square of the number of colluders (where M is configurable). This is certainly an improvement (the profit from coordination is now polynomial with respect to the number of colluders, instead of the colluding group being able to charge a fee growing toward (n − 1)/n of an arbitrarily large donation), but it hasn't solved the problem, because in principle the whole point of quadratic funding *is* to create a gradient where smaller donations from multiple individuals to the same project are more valuable than large donations from a single individual. If you sufficiently modify that mechanism such that it's no longer profitable to split up your donations, then you must have also converted quadratic funding into linear funding. In other words, the profitability of bribery is baked into Quadratic Funding.
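Here's a simplified illustration of the pairwise idea (not the exact formula from the proposal): QF's matching comes from cross-terms between pairs of donors, and if each pair's cross-term is capped at some configurable M, then a colluding group of k accounts can add at most on the order of k²·M of matching, no matter how much money it pushes through.

```python
import math
from itertools import combinations

def pairwise_capped_match(donations: list[float], M: float) -> float:
    """Simplified illustration (not the exact proposal): QF matching is
    proportional to the sum of cross-terms sqrt(ci * cj) over pairs of donors.
    Capping each pair's term at M bounds what any k colluders can add to ~k^2 * M."""
    return sum(min(math.sqrt(ci * cj), M) for ci, cj in combinations(donations, 2))

# Ten colluding accounts pushing $100 each through one project:
print(pairwise_capped_match([100.0] * 10, M=float("inf")))  # 4500.0 -- uncapped, grows with the money
print(pairwise_capped_match([100.0] * 10, M=5.0))           # 225.0  -- bounded by the 45 pairs * M
```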
As just one more nail in the coffin of the trustfulness of Quadratic Funding, notice that if pairwise coordination subsidies are implemented (which in Gitcoin they are, and the code is public), then Colusseun merely has to solve an optimization problem: given k projects p_1, …, p_k that have d_1, …, d_k dollars in "please help me find people I can bribe" money each, find the covering of the graph of all projects such that the sum of losses from the discoordination coefficient is minimized. What is that algorithm? It doesn't matter. Why? Because honest donors are not going to be employing it, so once again dishonest donors have a major advantage and honest donors are chumps for not colluding.
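Just to underline how little effort it would take, here's one naive greedy heuristic, entirely made up for illustration: spread the bribed accounts across projects so that the same pair of accounts co-occurs as rarely as possible, keeping the pairwise discount from biting.

```python
from itertools import combinations

def assign_bribed_accounts(
    project_slots: dict[str, int],  # project -> how many bribed donations it has budget for
    accounts: list[str],
) -> dict[str, list[str]]:
    """Naive greedy heuristic: for each project, pick the accounts that have been
    paired together the least so far, so pairwise discounts lose as little as possible."""
    pair_uses: dict[frozenset, int] = {}
    assignment: dict[str, list[str]] = {}
    for project, slots in project_slots.items():
        # Prefer accounts whose pairings with everyone else have been reused least.
        ranked = sorted(
            accounts,
            key=lambda a: sum(pair_uses.get(frozenset((a, b)), 0) for b in accounts if b != a),
        )
        chosen = ranked[:slots]
        for a, b in combinations(chosen, 2):
            pair_uses[frozenset((a, b))] = pair_uses.get(frozenset((a, b)), 0) + 1
        assignment[project] = chosen
    return assignment
```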
So, here we can see that crypto primitives actually make it extremely easy to verify the specific performance of a bribed account and to hide that performance from would-be detectors. This proves that Quadratic Funding is not trustful. Trustfulness is when the optimal strategy is identical to the desired outcome. Honesty (donating only to the projects you want) is the desired outcome, but here only chumps would actually be honest, because most likely everyone else is circumventing the Quadratic Funding mechanism with bribery.

Note, this was written before I knew about and another project called . In my opinion, these both still have shortcomings. I cover that in ?

Solving the Bribery Problem

Now, let's reverse the inference.
Let's say that we solved the bribery problem: what would that do to the Sybil problem?
What's remarkable is that if you can solve the bribery problem then there's a domain in which you have automatically solved the Sybil problem. What's even more amazing is that it allows you to solve the Sybil problem *without having to identify anyone as an individual* (something that's currently ). The heart of the Sybil problem is that people are capable of creating duplicate accounts in order to get more influence; it turns out that if you solve the bribery problem, solving the Sybil problem isn't even necessary.
It should be obvious why this is the case. If you have a system where it's not possible to bribe someone in order to influence policy, then it must be the case that either 1) having partial or complete control of their account isn't valuable in principle, or 2) it's expensive to bribe them, and the cost of the bribe is larger than the cost of the alternative ways to influence the system.
There's a counterexample that comes to mind almost immediately: Twitter doesn't seem to have a bribery problem, but it does have a Sybil problem (what Elon would call a bot problem). People create thousands of accounts in order to create fake followings, fake engagement, etc. But notice that these are still just bribery problems in disguise: even if Twitter didn't do anything to prevent inauthentic behavior, it's still costly for someone to create these accounts, create passwords, log in, and take actions. That friction can be overcome with a combination of technology and hiring real people to do some kind of work. So, in the end, whether you're bribing people to write software to sign up for accounts (in the form of a salary), or directly paying them to manually sign up for accounts, it's still "bribery". (Here we're rolling all forms of payment into the category of bribery. Some people might be hesitant to do this, but I feel pretty comfortable saying that I'm "bribed" to do my job.) (Of course, you could do the sign-up yourself, but there's an opportunity cost to your time and energy, etc.) In this way, the Sybil problem is just a special case of the bribery problem.
It might even be interesting to think about how solving the Sybil problem just puts a lower bound on the domain within which bribery is useful. In the Gitcoin case, a nefarious actor's best move is to create new Sybils until the cost of a new Sybil exceeds the cost of just renting someone’s account by bribing them temporarily; past that point, bribery takes over.
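As a toy illustration of that marginal decision (numbers and names hypothetical):

```python
def attacker_next_account(sybil_cost: float, rental_cost: float) -> str:
    """The marginal decision described above: mint new Sybil accounts while they're
    cheaper than renting (bribing) an existing verified account; otherwise rent."""
    return "mint a new Sybil" if sybil_cost < rental_cost else "rent (bribe) an existing account"

print(attacker_next_account(sybil_cost=2.0, rental_cost=5.0))   # mint a new Sybil
print(attacker_next_account(sybil_cost=12.0, rental_cost=5.0))  # rent (bribe) an existing account
```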
Another interesting thing to consider in the Gitcoin case is that because it's profitable to accept bribes (a.k.a. own a verified account), it's also more profitable to create new Sybil accounts. In other words, people creating accounts in order to collect bribes will be willing to endure more friction to create an account than honest account creators. What's insidious about this is that if everyone follows their incentives:
Gitcoin will increase the account creation friction to prevent Sybiling
Bribe-friendly accounts will be willing to endure more friction to get an account and fulfill donation bounties
Honest players creating accounts will endure lots of signup pain, possibly stopping them from signing up
Anecdotally, I tried to get my mom on Gitcoin and completely gave up. Not worth the pain of explaining what a wallet is, what Github is, what QF is, and why she needs to log in with 10 different accounts.
We should expect, in the limit, that the friction of creating an account will always be high enough to prevent honest players from completing signup: a bribe-friendly account has far more resources and incentive to endure the signup process than an honest one, which pushes up the cost Gitcoin must place on the process, which disincents honest players, which disincents honest projects, and the spiral continues. The bribery problem is a bitch.
Hopefully, this has given you a sense of just how important the bribery problem is. But, unless you were paying close attention, it might not be obvious how to address it. How might we solve the bribery problem?
The compelling alternative to explore is to just accept the constraint. Money can be used to buy policy, linearly. Wherever there is an attempt to constrain the bribery problem, people will find ways to bribe their way around those constraints. What would happen if that was used as a primitive instead of treated as a problem? What if we designed a governance system where you can directly purchase the outcome you want, as an acknowledgement that that's what's going to happen anyway under any other mechanism?
Just to be clear, I'm not saying it's desirable that money is all-pervasive like this, I'm just saying that it seems like that's how it is. It's kinda like gravity: no matter how tight a seal you put on your door, it still manages to get through. Accepting gravity doesn't mean that you can't create flying machines or go to space; it just informs your design. In fact, accepting gravity as a constraint is among the first things you must do if you want to create a flying machine. The same may be true of bribery.

The subsequent question then must be: how do you embrace bribery without devolving into plutocracy (rule by the rich and powerful)? It’s something I’ll write about sometime.