
More thoughts on scripting and future compatibility

The previous post introducing Ethereum Script 2.0 received many responses, some very supportive, others suggesting a switch to their preferred stack-based/assembly-based/functional paradigm, and others offering a variety of specific criticisms that we are looking hard at. Perhaps the strongest critique this time came from Sergio Damian Lerner, a Bitcoin security researcher, the developer of QixCoin, and someone to whom we are grateful for his analysis of Dagger. Sergio particularly criticizes two aspects of the change: the fee system, which was changed from a simple one-variable design in which everything is a fixed multiple of the BASEFEE, and the removal of the crypto opcodes.

The crypto opcodes are the more important part of Sergio's argument, so let's address that issue first. In Ethereum Script 1.0, the opcode set included a collection of opcodes specialized for particular cryptographic functions. For example, there was an opcode SHA3 which popped a length and a starting memory index off the stack and then pushed the SHA3 hash of the string taken from the desired number of blocks in memory, starting from the starting index. There were similar opcodes for SHA256 and RIPEMD160, as well as crypto opcodes oriented around secp256k1 elliptic curve operations. In ES2 those opcodes are gone. Instead, they are replaced by a fluid system in which people will need to write SHA256 in ES manually (in fact, there is a bounty on offer for doing so), and later on smart interpreters can seamlessly swap the SHA256 ES script for a plain old machine-code (or even hardware) version of SHA256, of the sort used whenever SHA256 is called from C++. From the outside, ES SHA256 and machine-code SHA256 are indistinguishable: both compute the same function and therefore perform the same transformation on the stack. The only difference is that the latter is hundreds of times faster, giving us the same efficiency as if SHA256 were an opcode. A flexible fee system could then be implemented to make SHA256 cheaper, accommodating its reduced computation time and ideally making it as cheap as an opcode is now.
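
To make the ES1 behavior concrete, here is a minimal sketch, in Python, of the stack semantics just described. This is hypothetical illustration code, not the actual ES1 interpreter: the names op_sha3, stack, and memory are made up, memory is assumed to be a list of 32-byte blocks, and the standard library's NIST SHA3-256 stands in for Ethereum's "SHA3", which is actually Keccak-256.

    import hashlib

    # Hypothetical sketch of the ES1 SHA3 opcode's stack semantics.
    # Assumes `memory` is a list of 32-byte blocks. (Ethereum's "SHA3"
    # is Keccak-256; hashlib's sha3_256 stands in for it here.)
    def op_sha3(stack, memory):
        length = stack.pop()   # number of memory blocks to hash
        start = stack.pop()    # starting memory index
        data = b"".join(memory[start:start + length])
        stack.append(hashlib.sha3_256(data).digest())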

Sergio, however, prefers a different approach: shipping with a large set of crypto opcodes out of the box, and using hard-forking protocol changes to add new ones when they become necessary later on. He writes:

First, after observing Bitcoin closely for three years, I have come to understand the following: a cryptocurrency is not a protocol, nor a contract, nor a computer network. A cryptocurrency is a community. Apart from a very small set of constants, such as the money supply function or the global balance, anything can be changed in the future, as long as the change is announced in advance. The Bitcoin protocol has worked well so far, but we know that it will face scalability issues in the long term and will need to change accordingly. Short-term advantages, such as the simplicity of the protocol and the code base, helped Bitcoin gain worldwide acceptance and network effect. Is the reference code of Bitcoin version 0.8 as simple as that of version 0.3? Not at all. Now there are caches and optimizations everywhere to achieve maximum performance and higher DoS security, but nobody cares about that. Cryptocurrencies are bootstrapped by starting with a simple value proposition that works in the short/medium term.

This is an argument often heard with regard to Bitcoin. However, the more I see of what is actually going on in Bitcoin development, the firmer my position becomes: except in the case of truly nascent crypto-protocols with very little real-world usage, the claim is entirely false. Bitcoin currently has many flaws that could be changed if only the collective will existed. Some examples include:

  1. The 1 MB block size limit. Currently, there is a hard restriction that a Bitcoin block cannot contain more than 1 MB of transactions, which limits the network to roughly seven transactions per second. We are already starting to brush against this limit, with blocks of around 250 KB each, and it is already putting pressure on transaction fees. For most of Bitcoin's history, fees were around $0.01, and whenever the price rose, the default BTC-denominated fee accepted by miners was adjusted downward. Now, however, the fee is stuck at $0.08, and the developers are unwilling to lower it, arguably because adjusting the fee back down to $0.01 would push the number of transactions over the 1 MB limit. Removing this limit, or at least setting it to a more reasonable value such as 32 MB, would be a trivial change: it is a single number in the source code (see the first sketch after this list), and it would clearly go a long way toward ensuring that Bitcoin continues to be used in the medium term. And yet, Bitcoin developers have completely failed to do it.
  2. The OP_CHECKMULTISIG bug. There is a well-known bug in the OP_CHECKMULTISIG operator, used to implement multisig transactions in Bitcoin, where it requires an extra dummy zero as an argument; the dummy is simply popped off the stack and never used. This is highly counterintuitive and confusing. When I was personally working on a multisig implementation for pybitcointools, I was stuck for days trying to figure out whether the dummy zero should go in front or take the place of the missing public key in a 2-of-3 multisig, and whether a 1-of-3 multisig should have two dummy zeroes (see the second sketch after this list). I eventually figured it out, but I would have done so much sooner had the operation of the OP_CHECKMULTISIG operator been more intuitive. And yet, the bug has still not been fixed.
  3. The bitcoind client. The bitcoind client is well known for being a very unwieldy and non-modular contraption; in fact, the problem is so severe that everyone trying to build a more scalable and enterprise-friendly alternative to bitcoind is starting from scratch rather than using it. This is not a core protocol issue, and in theory changing the bitcoind client would not require any hard-forking changes at all, but the needed reforms are still not being made.
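
To illustrate how small the first change is, here is a minimal Python sketch of the rule; in the C++ reference client, the cap really is the single constant MAX_BLOCK_SIZE = 1000000, and the validity check (names here are made up for illustration) boils down to one comparison.

    # The entire 1 MB rule comes down to one constant; this sketch
    # mirrors the reference client's MAX_BLOCK_SIZE = 1000000 check.
    MAX_BLOCK_SIZE = 1000000  # bytes

    def block_size_ok(serialized_block):
        # A block whose serialized size exceeds the cap is invalid.
        return len(serialized_block) <= MAX_BLOCK_SIZE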
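
And to make the second issue concrete, here is a hedged sketch of assembling the scriptSig that spends an m-of-n multisig output. The function name and representation are hypothetical; the point is that exactly one dummy element always goes in front of the signatures, regardless of m and n, and does not stand in for any missing key or signature.

    # Hypothetical sketch of building a multisig scriptSig.
    # OP_CHECKMULTISIG pops one element more than it uses, so a single
    # dummy (OP_0) always goes first; it does NOT replace the missing
    # signature(s), and a 1-of-3 spend still needs exactly one dummy.
    OP_0 = b"\x00"

    def multisig_scriptsig(signatures):
        # signatures: exactly m DER-encoded signatures, in key order
        return [OP_0] + list(signatures)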

None of these problems persist because the Bitcoin developers are incompetent. They are not; in fact, they are highly skilled programmers with deep knowledge of cryptography and of the database and networking issues inherent in cryptocurrency client design. The problems persist because the Bitcoin developers understand very well that Bitcoin is a $10 billion train hurtling along at 400 kilometers per hour: if they try to replace the engine midway through and even the smallest bolt comes loose, the whole thing could grind to a halt. A change as simple as swapping the database in March 2011 almost did exactly that. This is why I consider it irresponsible to leave a poorly designed, non-future-proof protocol in place and simply say that the protocol can be updated in due time. On the contrary, a protocol should be designed from the start with an appropriate level of flexibility, so that changes can be made automatically by consensus without the need for software updates.

Now to address Sergio’s second issue: his main complaint about modifiable fees. If fees can go up and down, it becomes very difficult for contracts to set their own fees, and if fees go up unexpectedly, this can open up vulnerabilities through: An attacker can also force a contract to be broken. Thanks to Sergio for pointing this out. This is something that has not been fully considered yet, so I think we need to think about it carefully when designing. But his solution, manual protocol updates, is arguably no better. Protocol updates that change the fee structure may expose new economic vulnerabilities in the contract, and are much more difficult to compensate for because there are no restrictions whatsoever on the content that can be included in a manual protocol update.

So what can we do? First of all, there are many intermediate solutions between Sergio's approach (a limited, fixed set of opcodes that can only be extended through hard-forking protocol changes) and the idea I gave in the ES2 blog post of having miners vote on fluidly changing fees for every script. One approach is to make the voting system more discrete, so that there is a hard line between scripts that must pay 100% fees and scripts that are "promoted" to opcode status and only have to pay 20x the CRYPTOFEE. Promotion could happen through some combination of usage counting, miner voting, ether-holder voting, or other mechanisms. This is essentially a built-in mechanism for performing a hard fork that technically does not require source-code updates to take effect, making it much more fluid and non-disruptive than the manual hard-forking approach. Second, it is important to point out once again that the ability to perform strong cryptography efficiently is not gone even from the genesis block. When we launch Ethereum, we will create a SHA256 contract, a SHA3 contract, and so on, and "premine" them into pseudo-opcode status from the start. So Ethereum will come with batteries included; the difference is that the batteries are included in a way that seamlessly allows more batteries to be added in the future.
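
Here is a minimal sketch of how such a discrete promotion scheme might charge fees. The constants, the voting threshold, and all the bookkeeping names are made up for illustration: an unpromoted script pays per execution step, while a script voted into pseudo-opcode status pays a flat 20x CRYPTOFEE.

    # Hypothetical sketch of the discrete "promotion" fee rule; every
    # constant and threshold here is assumed, not specified.
    CRYPTOFEE = 1              # assumed base crypto fee unit
    PROMOTED_MULTIPLE = 20     # promoted scripts pay 20x CRYPTOFEE

    promoted = set()           # hashes of scripts voted into opcode status

    def promote(script_hash, votes, threshold):
        # Usage counting, miner voting, or ether-holder voting could all
        # feed into `votes`; past the threshold, the script is treated
        # as if it were a native opcode.
        if votes >= threshold:
            promoted.add(script_hash)

    def execution_fee(script_hash, step_count, basefee):
        if script_hash in promoted:
            return PROMOTED_MULTIPLE * CRYPTOFEE
        return step_count * basefee   # full per-step fees otherwise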

However, it is important to note that we consider the ability to add efficient, optimized crypto operations in the future to be essential. In theory, it is possible to have a "Zerocoin" contract inside Ethereum, or a contract using SCIP (cryptographic proofs of computation) and fully homomorphic encryption, so that Ethereum could actually be used as a "decentralized Amazon EC2 instance" for cloud computing, something many people now incorrectly believe it already is. Once quantum computing arrives, we may need to switch to contracts that rely on NTRU; once SHA4 or SHA5 comes out, we may need to switch to contracts that rely on them. Once obfuscation technology matures, contracts will want to rely on it to store private data. But for all of this to be possible with fees of anything less than $30 per transaction, the underlying cryptography would need to be implemented in C++ or machine code, and there would need to be a fee structure that reduces the fee for those operations appropriately once the optimizations are made. This is a challenge with no easy answers in sight, and comments and suggestions are very much welcome.
