
On Anti-Pre-Revelation Games | Ethereum Foundation Blog

A growing number of applications proposed on top of Ethereum rely on some form of incentivized, multi-party data provision — whether voting, random number collection, or any other use case where getting information from multiple parties to increase decentralization is highly desirable — and these carry a high risk of collusion. RANDAO can certainly provide random numbers with much higher cryptoeconomic security than simple block hashes or deterministic algorithms seeded with publicly known values, but it is not infinitely collusion-proof: if 100% of the participants in a RANDAO round collude with each other, they can set the output to whatever they want. A much more controversial example is prediction markets and oracles: decentralized event reporting, as in Augur, is an advanced version of the Schelling scheme, where everyone votes on the outcome and everyone in the majority receives a reward. The theory is that if you expect everyone else to be honest, your incentive is to be honest as well so that you end up in the majority, and so honesty is a stable equilibrium. The problem, once again, is that if more than 50% of the participants collude, the system breaks.
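As a rough illustration of that majority-reward mechanism (a toy sketch, not Augur's actual implementation; the voter names and reward sizes below are invented):

```python
from collections import Counter

def schelling_payouts(votes, reward=10, penalty=10):
    """Toy majority-vote payout: `votes` maps voter -> reported outcome.
    Voters in the majority gain `reward`; everyone else loses `penalty`."""
    tally = Counter(votes.values())
    majority_outcome, _ = tally.most_common(1)[0]
    return {voter: (reward if v == majority_outcome else -penalty)
            for voter, v in votes.items()}

print(schelling_payouts({"a": "A", "b": "A", "c": "B"}))
# {'a': 10, 'b': 10, 'c': -10}
```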

The fact that Augur has an independent token provides a partial defense against this problem: if the voters collude, the value of the Augur token can be expected to fall to nearly zero as the system comes to be seen as useless and untrustworthy, so the colluders lose a great deal of value. But it is certainly not a complete defense. Paul Sztorc’s Truthcoin (and Augur) include an additional defense that is quite economically clever. The core mechanism is simple: rather than simply giving everyone in the majority a fixed reward, the payout depends on how contentious the final vote is — the more disagreement there is, the more the majority voters receive, and an equally large amount is taken out of the minority voters’ deposits.
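To make that concrete, here is a minimal sketch assuming the simplest possible version of contentiousness-scaled payouts; the real Truthcoin/Augur formulas are more involved, and `base_reward` is an invented parameter:

```python
from collections import Counter

def contentious_payouts(votes, base_reward=10):
    """Toy sketch: the more contentious the vote, the more majority voters
    gain and the more is taken out of minority voters' deposits."""
    tally = Counter(votes.values())
    majority_outcome, majority_count = tally.most_common(1)[0]
    minority_count = len(votes) - majority_count
    if minority_count == 0:
        return {voter: 0 for voter in votes}  # unanimous vote: nothing moves
    pot = base_reward * minority_count        # taken from the minority's deposits
    return {voter: (pot / majority_count if v == majority_outcome else -base_reward)
            for voter, v in votes.items()}

print(contentious_payouts({"a": "A", "b": "A", "c": "A", "d": "B"}))
# {'a': 3.33..., 'b': 3.33..., 'c': 3.33..., 'd': -10}
```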


The intention is simple: if you receive a message from someone saying “hey, we’re starting a collusion; the real answer is A, but let’s all vote B”, in a naive scheme you might well just go along with it. Under Sztorc’s scheme, however, you might instead reason: “actually, I am going to vote A, and try to convince a few more percent of people to vote B, so that I can take some of their money.” Mutual distrust thus makes collusion harder to get off the ground. The problem, however, is that blockchains are such excellent devices for cryptographically secure agreement and coordination that making collusion impossible may be very difficult.

To see how, consider the simplest possible scheme for how vote reporting in Augur might work: there is a period during which everyone can send a transaction supplying their vote, and at the end an algorithm calculates the result. This approach has a fatal flaw: it creates an incentive for people to wait as long as possible, so they can see everyone else’s answers before giving their own. Taken to its natural equilibrium, everyone would vote in the last possible block, so the miner of the last block would essentially control everything. Schemes with a randomized ending time (e.g. ending at the first block that passes 100 times the normal difficulty threshold) mitigate this somewhat, but still leave a large amount of power in the hands of individual miners.

The standard cryptographer’s response to this problem is the hash-commit-reveal scheme: every player P(i) determines their response R(i), and there is a period during which everyone must submit h(R(i)), where h is a pre-specified hash function (e.g. SHA3). After that, everyone must submit their R(i), and each value is checked against the previously supplied hash. For two-player rock-paper-scissors, or other games that are purely zero-sum, this works very well. In Augur’s case, however, the opportunity for credible collusion is still wide open: users can voluntarily reveal their R(i) before the reveal phase, and others can check that it actually matches the hash they supplied on-chain. Allowing users to change their hash before the end of the hash-submission period does not help either: a user can always lock up a large amount of money in a specially built contract that releases the funds only if no one supplies the contract with a Merkle proof, anchored in a recent block hash, showing that the vote was changed — thereby credibly committing not to change their vote.
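A minimal sketch of that commit-reveal pattern (using Python’s SHA3-256 and a random 32-byte nonce; the nonce is the detail revisited in the technical note further down):

```python
import hashlib, os

def commit(response: bytes) -> tuple[bytes, bytes]:
    """Commit phase: hash the response together with a large random nonce,
    so the commitment cannot be brute-forced when only a few responses
    are possible."""
    nonce = os.urandom(32)
    return hashlib.sha3_256(response + nonce).digest(), nonce

def verify_reveal(commitment: bytes, response: bytes, nonce: bytes) -> bool:
    """Reveal phase: check the revealed (response, nonce) against the commitment."""
    return hashlib.sha3_256(response + nonce).digest() == commitment

c, n = commit(b"A")
assert verify_reveal(c, b"A", n)
assert not verify_reveal(c, b"B", n)
```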

New solution?

However, there is another way to solve this problem, one that has not yet been adequately explored. The idea is this: instead of trying to prevent pre-revelation within the base game itself, we make pre-revelation expensive by introducing a parallel game (mandatory, and backed by the oracle participants’ deposits) in which anyone who reveals information about their vote to others in advance opens themselves up to being (probabilistically) betrayed — with no way to prove that it was the counterparty who betrayed them.

In its most basic form, the game works as follows. Suppose there is a decentralized random number generation scheme in which every user is supposed to flip a coin and supply either 0 or 1 as input, and suppose we want to discourage collusion. What we do is simple: we allow anyone to register a bet against any player in the system (note the use of “anyone” and “any player” — non-players can participate as long as they put up a deposit), essentially saying “I am confident that this person will vote X with probability greater than 1/2”, where X can be 0 or 1. The rule of the bet is simply that if the target supplies X as input, N coins move from the target to the bettor, and if the target supplies the other value, N coins move from the bettor to the target. Bets can be registered at any point between commitment and revelation.
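A sketch of that settlement rule, with the transfer expressed from the bettor’s point of view (function and parameter names are invented for illustration):

```python
def settle_bet(target_vote: int, predicted: int, n_coins: float) -> float:
    """Settle a bet in the 0/1 coin-flip game. Positive result: the bettor
    collects n_coins from the target's deposit; negative: the bettor pays
    n_coins to the target."""
    return n_coins if target_vote == predicted else -n_coins

print(settle_bet(target_vote=1, predicted=1, n_coins=5))   # 5: bettor wins
print(settle_bet(target_vote=0, predicted=1, n_coins=5))   # -5: target wins
```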

In probabilistic terms, providing any information about your vote to any other party now carries a potentially prohibitive cost. Even if you only convince someone that you will vote 1 with 51% probability, they can still probabilistically take coins from you, and if such a scheme is repeated they will win in the long run. Note that the counterparty can bet anonymously, and so can always pretend that the bet was placed by a passing gambler and not by them. To strengthen the scheme further, we could require every bet to target N different players at the same time, pseudo-randomly selected from a seed; if you want to target a specific player, you can do so by trying different seeds until your desired target appears among the group, but some degree of plausible deniability always remains. Another possible improvement, albeit a costly one, is to require bets to be registered only between commitment and revelation, but to reveal and execute the bets only after several further rounds of the game have taken place (we assume that players must wait a long period before withdrawing their deposits for this to work).
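To make the 51% example concrete, a quick back-of-the-envelope calculation (not from the original post):

```python
p_leak = 0.51   # the counterparty's belief, after the leak, that the target votes 1
n = 1.0         # bet size in coins
expected_profit_per_bet = p_leak * n - (1 - p_leak) * n
print(round(expected_profit_per_bet, 4))   # 0.02 coins per bet, in the bettor's favour
```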

Now, how do we translate this to the oracle scenario? Consider once again the simple binary case: users report either A or B, and by the end of the process some unknown fraction P will have reported A and the remaining 1-P will have reported B. Here we change the scheme somewhat: a bet now says “I am confident that this person will vote X with probability greater than P”. Note that the language of the bet should not be construed to imply knowledge of P; rather, it expresses the opinion that, whatever the probability of a random user voting X turns out to be, the particular user that the bettor is targeting will vote X with a higher probability than that. The rule of the bet, processed after the voting phase, is that if the target votes X, then N * (1-P) coins move from the target to the bettor, and otherwise N * P coins move from the bettor to the target — so that betting against a voter who behaves like the population average has zero expected value.
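Here is a sketch of that settlement rule under the zero-expected-value payoff just described (names are invented; `p` stands for the realized fraction of voters who chose X):

```python
def settle_oracle_bet(target_voted_x: bool, p: float, n_coins: float) -> float:
    """Transfer from the bettor's point of view, where p is the fraction of
    all voters who ended up voting X. Positive: bettor collects from the
    target's deposit; negative: bettor pays the target."""
    return n_coins * (1 - p) if target_voted_x else -n_coins * p

# Against a voter who votes X with exactly the population probability p,
# the bet is worth nothing on average...
p = 0.9
ev_random = p * settle_oracle_bet(True, p, 1) + (1 - p) * settle_oracle_bet(False, p, 1)
print(round(ev_random, 6))                              # 0.0

# ...but against someone known to vote X for sure, it pays off almost every time.
print(round(settle_oracle_bet(True, p, 1), 6))          # 0.1
```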

Note that in the general case, the profit here is even more assured than in the binary RANDAO example above: if A is true, then the great majority of users will vote A anyway, and so short of using a complex zero-knowledge-proof protocol there is no way to convince anyone of even probabilistic confidence that you will vote for a particular value without giving them an opportunity for very low-risk profit at your expense.

An additional technical note: if there are only two possible values, couldn’t you recover R(i) from h(R(i)) simply by trying both options? The answer is that users actually submit h(R(i), n) and later reveal (R(i), n) for some large randomly chosen nonce n, which is discarded afterwards; the space of possible nonces is far too large to enumerate.

Another thing to note is that this scheme is a superset of Paul Sztorc’s counter-coordination scheme described above: if someone dishonestly convinces others to vote B when the real answer is A, they can secretly use that information to bet against them. In particular, profiting from others’ bad behaviour becomes a private good rather than a public one: whoever tricks others into a false collusion and then attacks them keeps 100% of the profit, which makes would-be colluders even more suspicious of joining — and the betrayal cannot be cryptographically proven.

Now, how does this work in the linear case? Suppose that users are voting on the BTC/USD price, so rather than choosing between A and B they need to supply a scalar value. The lazy solution is simply to apply the binary approach in parallel to every binary digit of the price. An alternative solution, however, is range betting: users can register bets of the form “I am confident that this person will vote between X and Y with a higher probability than the average voter”. Revealing to others even roughly what value you are going to vote for would then likely become expensive.
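One way such a range bet could settle, reusing the same zero-expected-value shape as the binary case; the post does not spell out an exact payoff formula, so treat this as an assumption:

```python
def settle_range_bet(target_value: float, lo: float, hi: float,
                     p_range: float, n_coins: float) -> float:
    """Transfer from the bettor's point of view. `p_range` is the fraction of
    all voters whose reported value fell inside [lo, hi]; the payoff mirrors
    the binary case, so betting against an average voter has zero expected value."""
    in_range = lo <= target_value <= hi
    return n_coins * (1 - p_range) if in_range else -n_coins * p_range

# Example: a bettor who learned roughly what the target will report bets on a
# narrow range that only 20% of voters end up inside.
print(settle_range_bet(231.0, 225.0, 235.0, p_range=0.2, n_coins=1.0))   # 0.8
```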

Problems

What are the weaknesses of this scheme? Perhaps the largest one is that it opens up an opportunity to inflict secondary “griefing” on other players: although you cannot force another player to lose money in expectation, placing bets against them certainly does expose them to risk. This opens a potential avenue for blackmail: “do what I want, or I will force you to gamble with me.” Note, though, that such attacks come at the cost of exposing the attacker to risk as well.

The simplest way to mitigate this is to limit the amount that can be bet against any one player, and perhaps even to scale that limit with the strength of the claim being made: that is, if P = 0.1, allow bets of up to $1 for claims only slightly above P, bets of up to $2 for somewhat stronger claims, and so on (mathematically inclined readers will note that devices such as the logarithmic market scoring rule are a good way of implementing this efficiently). In that case, the amount of money that can be extracted from someone is roughly quadratic in the level of private information the attacker holds about them, and griefing in large quantities guarantees the attacker not just risk but expected losses in the long run.
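A toy version of the size-limiting idea. Two assumptions here go beyond the text: the cap grows linearly with the strength of the claim, and the payoff uses the claimed threshold rather than the realized vote share. Under those assumptions the extractable profit is indeed quadratic in the attacker’s information edge, and over-claiming against a random voter loses money in expectation:

```python
def max_bet(claimed_p: float, baseline_p: float, scale: float = 10.0) -> float:
    """Cap on bet size: grows linearly with how much stronger the claim is
    than the population baseline."""
    return max(0.0, claimed_p - baseline_p) * scale

def expected_profit(claimed_p: float, true_p: float, baseline_p: float) -> float:
    """Expected profit of a maximal bet at threshold `claimed_p` against a
    target who actually votes X with probability `true_p`, if the payoff is
    N*(1 - claimed_p) on a correct prediction and -N*claimed_p otherwise."""
    n = max_bet(claimed_p, baseline_p)
    return n * (true_p * (1 - claimed_p) - (1 - true_p) * claimed_p)

baseline, true_p = 0.1, 0.5
best = max(expected_profit(t / 100, true_p, baseline) for t in range(10, 51))
# Maximised near claimed_p = (0.1 + 0.5) / 2 = 0.3,
# giving roughly scale * ((true_p - baseline) / 2) ** 2 = 0.4.
print(round(best, 3))   # 0.4
```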

The second weakness is that users who are known to rely on particular sources of information can be exploited — particularly in the case of binary events, but also for more subjective questions such as “vote on the price of token A / token B”. For example, if you know that some users have a history of relying on Bitstamp and others on Bitfinex for their voting information, then as soon as you obtain the latest feeds from both exchanges you can extract some amount of money from participants probabilistically, depending on your estimate of which exchange each one is following. Exactly how users would respond in such a case remains a research question.

Note that such situations are a hard problem in any case: failure modes such as everyone centralizing on one particular exchange are quite likely even in a plain Sztorcian scheme without this kind of probabilistic griefing. Perhaps a multi-tiered scheme with a second layer of voting acting as a “court of appeals”, invoked so infrequently that the centralizing effect never takes hold, could alleviate the problem; but this too remains a largely empirical question.
