Thoughts on smart contract security
Over the last few days, with the help of the community, we have crowdsourced a list of all of the major bugs associated with smart contracts on Ethereum so far, including the DAO as well as various smaller 100-10000 ETH thefts and losses in gaming and token contracts.
The list (original source here) is as follows, sorted by category of bug:
- Variable/function naming mix-ups: FirePonzi, Rubixi
- Public data that should not have been public: the casinos with a public RNG seed, a cheatable rock-paper-scissors game
- Re-entrancy (A calling A): the DAO, Maker's ETH-backed token
- Sends failing because of the 2300 gas limit: King of the Ether
- Arrays/loops and gas limits: Governmental
- Much more subtle game-theoretic weaknesses, where people are still debating whether or not they even count as bugs: the DAO
Many solutions to smart contract security have been proposed, ranging from better development environments to better programming languages to formal verification and symbolic execution, and researchers have started developing such tools. My personal opinion on the topic is that an important primary conclusion is the following: progress in smart contract safety is necessarily going to be layered, incremental, and dependent on defense-in-depth. There will be further bugs, and we will learn further lessons; there will not be a single magic technology that solves everything.
The reason for this fundamental conclusion is as follows. Every instance of a smart contract being stolen from or lost - indeed, the very definition of theft or loss - is fundamentally about a difference between implementation and intent. If, in a given case, implementation and intent are the same, then any instance of "theft" is in fact a donation, and any instance of "loss" is voluntary money burning, economically equivalent to a proportional donation to the ETH token-holder community via deflation. This leads to the next challenge: intent is fundamentally complex.
The philosophy behind this fact has been best formalized by the friendly-AI research community, where it carries the names "complexity of value" and "fragility of value". The thesis is simple: we as human beings have very many values, and very complex values - so complex that we ourselves cannot fully express them, and any attempt to do so will inevitably leave some corner case uncovered. The concept matters to AI research because a superintelligent AI would in fact search through every corner, including corners so counterintuitive to us that we would not even think of them, in order to maximize its objective. Tell a superintelligent AI to cure cancer, and it will get 99.99% of the way there through some moderately complicated tweaks in molecular biology, but it will soon realize that it can bump that up to 100% by triggering human extinction through a nuclear war and/or a biological pandemic. Tell it to cure cancer without killing humans, and it will simply force all humans to freeze themselves, reasoning that it is not technically killing them because it could wake them up if it wanted to - it just won't.
In smart contract land, the situation is similar. We value things like "fairness", but it is hard to define what fairness even means. It may be tempting to say something like "it shouldn't be possible for someone to just steal 10000 ETH from The DAO", but what if, for a given withdrawal transaction, The DAO actually approved the transfer because the recipient had provided a valuable service? Then again, if the transfer was approved, how do we know that the mechanism for deciding this was not itself fooled through a game-theoretic vulnerability? What is a game-theoretic vulnerability? What about "splitting"? In the case of a blockchain-based market, what about front-running? If a given contract specifies an "owner" who can collect fees, what if the ability for anyone to become the owner was actually part of the rules, to add to the fun?
All of this is not a strike against experts in formal verification, type theory, unusual programming languages and the like; the smart ones already know and appreciate these issues. However, it does show that there is a fundamental barrier to what can be accomplished, and "fairness" is not something that can be mathematically proven in a theorem - in some cases, the set of fairness claims is so long and complex that one has to wonder whether the claim set itself might contain a bug.
Towards a mitigation path
That said, there are plenty of areas where the gap between intent and implementation can be substantially reduced. One category is to take common patterns and hard-code them: for example, the Rubixi bug could have been avoided by making owner a keyword that can only be initialized to equal msg.sender in the constructor and possibly transferred via a transferOwnership function (a sketch of this pattern follows below). Another category is to create as many standardized mid-level components as possible; for example, rather than every casino creating its own random number generator, we may want to direct people to RANDAO (or something like my RANDAO++ proposal, once implemented).
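To make the first category concrete, here is a minimal sketch of that owner pattern in Solidity of roughly that era (the contract and function names are illustrative, not a prescribed standard). Rubixi's bug was that the contract was renamed from DynamicPyramid but its constructor was not, so the "constructor" became an ordinary public function that anyone could call to claim ownership; a hard-coded or standardized pattern like this removes that failure mode.

```solidity
pragma solidity ^0.4.0;

contract Owned {
    address public owner;

    // The owner can only ever be initialized to msg.sender, in the constructor.
    // (In pre-0.4.22 Solidity the constructor is the function sharing the
    // contract's name, which is exactly what the Rubixi rename broke.)
    function Owned() {
        owner = msg.sender;
    }

    modifier onlyOwner() {
        if (msg.sender != owner) throw;
        _;
    }

    // Ownership can only be handed over by the current owner.
    function transferOwnership(address newOwner) onlyOwner {
        owner = newOwner;
    }
}
```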
A more important category of solutions, however, involves mitigating the specific and counterintuitive quirks of the EVM execution environment: the gas limit (responsible for the Governmental loss, as well as losses caused by recipients consuming too much gas when accepting a transfer), re-entrancy (responsible for the DAO and the Maker ETH contract), and the call stack limit. The call stack limit, for example, can be relaxed by this EIP, which essentially removes it from consideration by achieving its purpose through a change in gas mechanics instead. Re-entrancy could be banned outright (i.e., only one execution instance of each contract allowed at a time), but this would likely introduce new forms of counterintuitiveness, so a better solution may be needed.
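To show why re-entrancy is so counterintuitive, here is a toy sketch of the vulnerable pattern (an illustration only, not the actual DAO or Maker code): the external call hands control to the recipient before the balance is zeroed, so a malicious recipient's fallback function can re-enter withdraw() and drain the contract.

```solidity
pragma solidity ^0.4.0;

contract VulnerableBank {
    mapping(address => uint) public balances;

    function deposit() payable {
        balances[msg.sender] += msg.value;
    }

    // BROKEN: call.value() forwards all remaining gas and runs the recipient's
    // fallback code *before* the balance is zeroed, so the recipient can
    // re-enter withdraw() and be paid repeatedly.
    function withdraw() {
        uint amount = balances[msg.sender];
        if (!msg.sender.call.value(amount)()) throw;
        balances[msg.sender] = 0;
    }

    // Safer ordering: zero the balance first, then interact; send() also
    // forwards only 2300 gas, which is not enough to re-enter.
    function safeWithdraw() {
        uint amount = balances[msg.sender];
        balances[msg.sender] = 0;
        if (!msg.sender.send(amount)) throw;
    }
}
```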
The gas limit, however, is not going away, so the remaining solutions there are likely to live inside the development environment itself. The compiler should issue a warning unless a contract can be proven to consume less than 2300 gas when called with no data, and should also warn if a function cannot be shown to terminate within a safe amount of gas. Variable names might be colored (e.g., an RGB value based on the first three bytes of the hash of the name), and a heuristic warning could be shown if two variable names are too similar to each other.
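As an illustration of what the 2300-gas check above would be looking for, compare two hypothetical recipient contracts: one whose fallback function only logs an event and stays well within the 2300 gas forwarded by a plain send, and one that writes to storage and therefore cannot.

```solidity
pragma solidity ^0.4.0;

contract FrugalReceiver {
    event Received(address from, uint value);

    // Emitting a small event typically costs well under 2300 gas, so plain
    // sends to this contract succeed; no warning needed.
    function() payable {
        Received(msg.sender, msg.value);
    }
}

contract HungryReceiver {
    uint public totalReceived;

    // A storage write alone costs more than 2300 gas, so plain sends to this
    // contract will always fail; this is the case the compiler should flag.
    function() payable {
        totalReceived += msg.value;
    }
}
```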
Additionally, some coding patterns are more dangerous than others; they should not be banned, but they should be clearly highlighted, requiring the developer to justify their use. A particularly pertinent example is the following. There are two kinds of call operations that are clearly safe. The first is a send that carries 2300 gas (provided we accept the convention that, for empty data, it is the recipient's responsibility not to consume more than 2300 gas). The second is a call to a contract that you trust and that has itself already been determined to be safe (note that this definition precludes re-entrancy, since A would have to be determined safe before A can be determined safe).
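In code, the two "definitely safe" call patterns might look like the following sketch; TrustedRegistry stands in for any contract that has already been audited and judged safe, and all names here are hypothetical.

```solidity
pragma solidity ^0.4.0;

// Stand-in for a contract that was itself already determined to be safe.
contract TrustedRegistry {
    function register(address who) {}
}

contract SafeCaller {
    TrustedRegistry trusted;

    function SafeCaller(address registry) {
        trusted = TrustedRegistry(registry);
    }

    // Safe pattern 1: send() forwards only 2300 gas, so the recipient cannot
    // re-enter this contract or consume unbounded gas.
    function pay(address recipient, uint amount) {
        if (!recipient.send(amount)) throw;
    }

    // Safe pattern 2: a call into a contract that was proven safe before this
    // one was, which is what rules out circular (re-entrant) safety arguments.
    function registerSelf() {
        trusted.register(this);
    }
}
```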
As it turns out, a very large number of contracts can be covered by this definition, but not all of them. An exception is the idea of a "generalized decentralized exchange" contract, where anyone can place an order offering to trade a given amount of asset A for a given amount of asset B, where A and B are arbitrary ERC20-compatible tokens. One could write a special-purpose contract for just a few assets and thereby qualify for the "trusted callee" exemption, but having a generic contract seems like a very valuable idea. In that case, however, the exchange would need to call transfer and transferFrom on unknown contracts and, yes, give them enough gas to run and possibly make a re-entrant call back into the exchange to try to exploit it. Here the compiler would legitimately give an unavoidable warning, unless a "mutex lock" is used that prevents the contract from being accessed again during those calls.
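A minimal sketch of such a mutex lock, assuming a withdraw-style function that must call out to an unknown token or recipient contract (the names and layout here are one possible shape, not a prescribed standard):

```solidity
pragma solidity ^0.4.0;

contract ExchangeWithMutex {
    bool private locked;
    mapping(address => uint) public balances;

    // While the mutex is held, any re-entrant call into a noReentry function
    // throws, even though the external call below forwards plenty of gas.
    modifier noReentry() {
        if (locked) throw;
        locked = true;
        _;
        locked = false;
    }

    function withdraw(uint amount) noReentry {
        if (balances[msg.sender] < amount) throw;
        balances[msg.sender] -= amount;
        // Unknown recipient: it must be given enough gas to run arbitrary
        // code, so the mutex, not the 2300-gas stipend, is the protection.
        if (!msg.sender.call.value(amount)()) throw;
    }
}
```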
The third category of solutions is defense in depth. One example is to recommend that every contract that is not meant to be permanent have an expiration date (sketched below), after which the owner can take arbitrary actions on the contract's behalf, in order to prevent losses (though not thefts). This way, a loss can happen only if (i) the contract breaks and, at the same time, (ii) the owner is missing or dishonest. Trusted multisig "owners" may emerge to mitigate (ii). Thefts can be mitigated by adding waiting periods; the fallout of the DAO issue was greatly reduced because the child DAO was locked down for 28 days. A proposed feature in MakerDAO is to create a delay before any governance change becomes active, giving token holders who are unhappy with the change time to sell their tokens; this is also a good approach.
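Returning to the expiration-date idea, here is a hedged sketch of what such an escape hatch could look like (all names are hypothetical, and the owner would ideally be a trusted multisig):

```solidity
pragma solidity ^0.4.0;

contract WithEscapeHatch {
    address public owner;   // ideally a trusted multisig
    uint public expiry;

    function WithEscapeHatch(uint lifetimeInDays) {
        owner = msg.sender;
        expiry = now + lifetimeInDays * 1 days;
    }

    // ... the contract's normal logic lives here ...

    // After the expiry date the owner can recover whatever is left, so funds
    // are lost only if the contract breaks AND the owner is missing or
    // dishonest at the same time.
    function escape() {
        if (now < expiry) throw;
        if (msg.sender != owner) throw;
        if (!owner.send(this.balance)) throw;
    }
}
```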
Formal verification can be layered on top of all of this. One simple use case is proving termination, which greatly mitigates gas-related issues. Another is proving specific properties: for example, "if all of the participants collude, they can get their money out in every case", or "if you send token A to this contract, you are guaranteed to either get the amount of token B you asked for or be able to get a full refund", or "this contract fits into a restricted subset of Solidity that makes re-entrancy, gas issues and call stack issues impossible".
Lastly, while all of the concerns so far have been about accidental bugs, malicious bugs are an additional concern. How confident can we really be that the MakerDAO decentralized exchange has no loophole that lets its developers drain all of the funds? Some of us in the community may know the MakerDAO team and consider them to be nice people, but the entire point of the smart contract security model is to provide guarantees strong enough to survive even if they are not - so that services which are not well-connected and established enough for people to trust them automatically, and which lack the resources to establish trustworthiness through a multi-million-dollar licensing process, are free to innovate, and so that consumers can use those services with confidence in their safety. Hence, any checks or highlights should exist not just at the level of the development environment, but also at the level of block explorers and other tools where independent observers can verify the source code.
Specific action steps that the community can take include:
- A project to create a good development environment, as well as a good block/source code explorer, that includes some of these features
- Standardization of as many components as possible
- Projects that experiment with different smart contract programming languages, as well as formal verification and symbolic execution tools
- Discussion of coding standards, EIPs, changes to Solidity, etc. that can mitigate the risk of accidental or intentional bugs
- If you’re developing a multi-million dollar smart contract application, consider reaching out to security researchers to use your project as a test case for various verification tools.
As noted in a previous blog post, DEVGrants and other grants are available for most of the above.