
eth2 quick update no. 8

They just keep coming.

tldr;


Runtime Verification audit and verification of the deposit contract

Runtime Verification recently completed their audit and formal verification of the eth2 deposit contract bytecode. This is an important milestone that brings us closer to the eth2 Phase 0 mainnet. With this work done, we’re asking for the community’s review and feedback. If you find gaps or errors in the formal specification, please open an issue on the eth2 specs repository.

The formal semantics, specified in the K Framework, define the precise behavior the EVM bytecode should exhibit and prove that this behavior holds. This includes input validation, iterative Merkle tree updates, logs, and more. Take a look here for a (semi-)high-level discussion of what is specified, and here for the full formal K specification.
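For intuition only, here is a minimal Python sketch of the incremental Merkle tree update the deposit contract performs on each deposit. The audited artifact is the EVM bytecode itself and the authoritative definition is the K specification; the names below are illustrative and drawn from neither.

```python
# Illustrative sketch (not the audited bytecode): the incremental Merkle
# tree scheme used by the deposit contract, which folds each new deposit
# into a 32-level tree while storing only one branch of intermediate hashes.
from hashlib import sha256

TREE_DEPTH = 32

def h(left: bytes, right: bytes) -> bytes:
    return sha256(left + right).digest()

# Roots of empty subtrees at every height.
ZERO_HASHES = [b"\x00" * 32]
for _ in range(TREE_DEPTH - 1):
    ZERO_HASHES.append(h(ZERO_HASHES[-1], ZERO_HASHES[-1]))

class DepositTree:
    def __init__(self) -> None:
        self.branch = [b"\x00" * 32] * TREE_DEPTH  # stored left siblings
        self.count = 0

    def add_leaf(self, leaf: bytes) -> None:
        """Fold one 32-byte deposit leaf into the tree in O(log n) hashes."""
        self.count += 1
        size, node = self.count, leaf
        for height in range(TREE_DEPTH):
            if size % 2 == 1:
                self.branch[height] = node  # becomes a left sibling for later leaves
                return
            node = h(self.branch[height], node)
            size //= 2

    def root(self) -> bytes:
        """Recompute the current root from the stored branch and zero hashes."""
        node = b"\x00" * 32
        size = self.count
        for height in range(TREE_DEPTH):
            if size % 2 == 1:
                node = h(self.branch[height], node)
            else:
                node = h(node, ZERO_HASHES[height])
            size //= 2
        # The deposit contract also mixes the deposit count into the final root.
        return h(node, self.count.to_bytes(8, "little") + b"\x00" * 24)
```

Because only the branch array and a counter are stored, each deposit costs a logarithmic number of hashes rather than a full tree rebuild, which is why pinning down the iterative update formally matters so much.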

I would like to thank Daejun Park (Runtime Verification) for leading this effort, and Martin Lundfall and Carl Beekhuizen for their extensive feedback and review along the way.

Again, if this is your kind of thing, now is the time to take a look and provide input and feedback on the formal verification.

The word of the month is ‘optimization’

The past month has been all about optimization.

A 10x optimization here and a 100x optimization there may not sound all that exciting to the Ethereum community today, but this phase of development is just as important as any other in getting us to the finish line.

Beacon chain optimization is important

(Wait, why does the beacon chain need to run so efficiently on our machines?)

The beacon chain, the core of eth2, is a requisite component for the rest of the sharded system. To sync any shard, whether one or many, a client must also sync the beacon chain. Therefore, to be able to run the beacon chain plus a handful of shards on a consumer machine, it is paramount that the beacon chain’s resource consumption remains relatively low even when validator participation is high (~300,000+ validators).

To this end, much of the eth2 client teams’ effort over the past month has been dedicated to optimizations that reduce the resource requirements of Phase 0, the beacon chain.

I am pleased to report that fantastic progress is being made. What follows is not comprehensive; it is just a taste of the work underway.

Lighthouse runs 100,000 validators like a breeze.

Lighthouse tore down their testnet of roughly 16,000 validators a few weeks ago after an attestation gossip relay loop caused the nodes to essentially DoS themselves. Sigma Prime quickly patched the bug and looked toward bigger and better things, namely a 100,000 validator testnet! The past two weeks have been dedicated to the optimizations needed to make a testnet of this scale a reality.

The goal for each successive Lighthouse testnet is to allow many thousands of validators to run easily on a small VPS with 2 CPUs and 8GB of RAM. Initial tests with 100,000 validators showed the client using a consistent 8GB of RAM, but after a few days of optimization Paul was able to get this down to a steady 2.5GB, with some ideas for reducing it further. Lighthouse also achieved a 70% gain in state hashing, which, along with BLS signature verification, has proven to be a major computational bottleneck for eth2 clients.
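To give a flavor of why state hashing responds so well to engineering effort, the sketch below shows a generic tree-hash caching approach: only the chunks that changed since the last computation are re-hashed, and cached subtree roots are reused. This is a made-up illustration of the general idea, not Lighthouse’s implementation or the source of the 70% figure.

```python
# Generic illustration of tree-hash caching (not Lighthouse's code):
# re-hash only the leaves that changed and reuse cached subtree roots.
from hashlib import sha256
from typing import Dict, List, Set, Tuple

def h(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def cached_merkle_root(chunks: List[bytes],
                       cache: Dict[Tuple[int, int], bytes],
                       dirty: Set[int]) -> bytes:
    """Binary Merkle root over a power-of-two list of 32-byte chunks.

    `cache` maps (level, index) -> previously computed node; `dirty` holds
    the leaf indices whose value changed since the cache was last filled.
    """
    n = len(chunks)
    assert n > 0 and n & (n - 1) == 0, "chunk count must be a power of two"
    for i in range(n):                      # level 0: leaves
        if i in dirty or (0, i) not in cache:
            cache[(0, i)] = chunks[i]
    level, width, changed = 0, n, set(dirty)
    while width > 1:                        # recompute a node only if a child changed
        next_changed = set()
        for i in range(0, width, 2):
            parent = i // 2
            if (level + 1, parent) not in cache or i in changed or i + 1 in changed:
                cache[(level + 1, parent)] = h(cache[(level, i)], cache[(level, i + 1)])
                next_changed.add(parent)
        level, width, changed = level + 1, width // 2, next_changed
    return cache[(level, 0)]
```

Because a beacon state with hundreds of thousands of validators changes in only a few places from slot to slot, skipping unchanged subtrees avoids the vast majority of the hashing work.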

The launch of the new Lighthouse testnet is imminent. Pop into their discord to follow the progress.

The Prysmatic testnet keeps chugging along with greatly improved sync.

A few weeks ago, the Prysm testnet celebrated its 100,000th slot with more than 28,000 validators attesting. Today, the testnet has passed 180,000 slots and has over 35,000 active validators. Maintaining a public testnet while also shipping updates, optimizations, stability patches, etc. is no small feat.

There is a lot of tangible progress being made on Prysm. I’ve spoken with several validators over the past few months and, from their perspective, the client continues to improve markedly. One particularly exciting item is the improved sync speed. The Prysmatic team optimized client sync from ~0.3 blocks per second to more than 20 blocks per second. This greatly improves validator UX, allowing nodes to connect and contribute to the network much more quickly.

Another exciting addition to the Prysm testnet is Alethio’s new eth2 node monitor, eth2stats.io. This is an opt-in service that lets nodes aggregate statistics in a single place, which will help us better understand the state of the testnet and ultimately the eth2 mainnet.

Don’t trust me! Pull it down and try it out for yourself.

Everyone loves proto_array

The core eth2 specification often (intentionally) specifies expected behavior non-optimally. The spec code is optimized for readability of intent rather than for performance.

A specification describes the correct behavior of a system, while an algorithm is a procedure for carrying out that behavior. Many different algorithms can faithfully implement the same specification. The eth2 specification therefore allows for a wide variety of implementations of each component, as client teams weigh different trade-offs (e.g. computational complexity, memory usage, implementation complexity, etc.).

One such example is the fork choice, the specification used to find the head of the chain. The eth2 spec defines the expected behavior with a naive algorithm so that the moving parts and edge cases are clear (e.g. how to update weights when a new attestation arrives, what to do when a new block is finalized, etc.). A direct implementation of the spec algorithm would never meet the production needs of eth2. Instead, client teams must think more deeply about computational trade-offs in the context of their client’s operation and implement a more sophisticated algorithm to meet those needs.
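To make the distinction concrete, here is a rough Python sketch in the spirit of the spec’s naive approach: recompute every block’s attesting weight from scratch and walk from the justified block to the heaviest descendant. The names and data structures are simplified stand-ins, not the literal spec code.

```python
# Simplified, spec-style (naive) LMD-GHOST fork choice: on every call,
# recompute each candidate's weight by summing the balances of validators
# whose latest attestation points at that block or one of its descendants.
# Readable, but far too slow for production use.
from typing import Dict, List, Optional

class Block:
    def __init__(self, root: str, parent: Optional[str]):
        self.root = root
        self.parent = parent

def is_ancestor(blocks: Dict[str, Block], ancestor: str, descendant: str) -> bool:
    node: Optional[str] = descendant
    while node is not None:
        if node == ancestor:
            return True
        node = blocks[node].parent
    return False

def weight(blocks: Dict[str, Block],
           latest_votes: Dict[int, str],   # validator index -> root of its latest attestation
           balances: Dict[int, int],       # validator index -> effective balance
           root: str) -> int:
    return sum(balances[v] for v, vote in latest_votes.items()
               if is_ancestor(blocks, root, vote))

def get_head(blocks: Dict[str, Block],
             latest_votes: Dict[int, str],
             balances: Dict[int, int],
             justified_root: str) -> str:
    head = justified_root
    while True:
        children: List[str] = [b.root for b in blocks.values() if b.parent == head]
        if not children:
            return head
        # Heaviest child wins; ties broken by root, echoing the spec's tie-break.
        head = max(children, key=lambda c: (weight(blocks, latest_votes, balances, c), c))
```

Every call re-walks the whole block tree and every vote, which is exactly the kind of cost a production client has to engineer away.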

Fortunately for client teams, about 12 months ago Protolambda implemented a variety of fork choice algorithms, documenting the benefits and trade-offs of each. Recently, Paul from Sigma Prime observed a major bottleneck in Lighthouse’s fork choice algorithm and went shopping for a new one. He dug up proto_array in proto’s old list.

It took some work to bring proto_array in line with the latest spec, but once integrated, proto_array proved to “run in significantly less time and perform far fewer database reads.” After its initial integration into Lighthouse, it was quickly picked up by Prysmatic and is available in their latest release. Given this algorithm’s clear advantages over the alternatives, proto_array is fast becoming a favorite, and we expect other teams to adopt it soon!
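As a rough, hedged sketch of the idea behind proto_array (simplified, and not the actual Lighthouse or Prysm code): blocks live in a flat list in insertion order, so parents always precede their children; attestation changes arrive as per-node score deltas; and a single backwards pass propagates weights to parents while caching each node’s best descendant, making head lookup a trivial read. The production versions also handle pruning, negative deltas, and viability filtering, which are omitted here.

```python
# Simplified sketch of the proto_array approach (not the real implementation):
# a flat list of nodes in insertion order, per-node weight deltas, and one
# O(n) backwards pass that bubbles weight to parents and records best children.
from typing import List, Optional

class ProtoNode:
    def __init__(self, parent: Optional[int]):
        self.parent = parent                  # index of the parent node, None for the anchor
        self.weight = 0                       # accumulated attesting balance
        self.best_child: Optional[int] = None
        self.best_descendant: Optional[int] = None

def apply_score_changes(nodes: List[ProtoNode], deltas: List[int]) -> None:
    """Apply vote deltas and recompute best_child/best_descendant in one backwards pass.

    Weights accumulate across calls; the best-child pointers are rebuilt
    from scratch on every pass.
    """
    for node in nodes:                        # reset cached pointers for this pass
        node.best_child = None
        node.best_descendant = None
    for i in range(len(nodes) - 1, -1, -1):   # children always come after their parent
        node = nodes[i]
        node.weight += deltas[i]
        if node.best_descendant is None:      # no child chose us yet: we are our own best
            node.best_descendant = i
        if node.parent is None:
            continue
        deltas[node.parent] += deltas[i]      # parent inherits the child's weight change
        parent = nodes[node.parent]
        best = parent.best_child
        if best is None or node.weight > nodes[best].weight:
            parent.best_child = i
            parent.best_descendant = node.best_descendant

def find_head(nodes: List[ProtoNode], anchor: int) -> int:
    """The head is simply the cached best descendant of the justified (anchor) node."""
    best = nodes[anchor].best_descendant
    return best if best is not None else anchor
```

The key property is that all the expensive work happens in one linear pass over a flat array, after which finding the head requires no tree traversal at all.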

Ongoing Phase 2 R&D: Quilt, eWASM, and now TXRX

Phase 2 of eth2 is the addition of state and execution to the sharded eth2 world. Although some core principles are relatively well defined (e.g. communication between shards via crosslinks and Merkle proofs), the Phase 2 design space is still relatively open. Quilt (a ConsenSys research team) and eWASM (an EF research team) have put a lot of effort over the past year into researching and better defining this large open design space, alongside the ongoing work to specify and build out Phases 0 and 1.
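For a flavor of the “Merkle proofs” piece of cross-shard communication, the sketch below is a generic Merkle branch check in the spirit of the spec’s is_valid_merkle_branch helper: it proves that a claimed leaf (say, a receipt in another shard’s crosslinked data) really sits under a known root. It is a generic illustration, not a Phase 2 design.

```python
# Generic Merkle branch verification, in the spirit of the beacon chain
# spec's is_valid_merkle_branch helper: check that `leaf` sits at position
# `index` at the given `depth` beneath `root` (e.g. a crosslinked shard root).
from hashlib import sha256
from typing import Sequence

def is_valid_merkle_branch(leaf: bytes, branch: Sequence[bytes],
                           depth: int, index: int, root: bytes) -> bool:
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:                 # we are the right child at this level
            value = sha256(branch[i] + value).digest()
        else:                                # we are the left child at this level
            value = sha256(value + branch[i]).digest()
    return value == root
```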

To that end, there has been an uptick in public calls, discussions, and ethresear.ch posts recently. There are some great resources to help get a lay of the land. Here is a small sample:


In addition to the Quilt and eWASM teams, the newly formed TXRX (a ConsenSys research team) is also dedicating a portion of its effort to Phase 2 research, initially focusing on better understanding cross-shard transaction complexity as well as researching and prototyping possible paths for integrating eth1 into eth2.

Phase 2 R&D is all relatively green. There is a tremendous opportunity here to dig deep and make an impact. Expect more concrete specs and developer playgrounds to sink your teeth into throughout the year.

Whiteblock releases libp2p gossipsub test results

This week, Whiteblock released their libp2p gossipsub test results as the culmination of a grant co-funded by ConsenSys and the Ethereum Foundation. The goal of this work is to validate the gossipsub algorithm for eth2’s use case and to provide insight into performance boundaries to aid follow-up testing and algorithmic improvements.

Importantly, although the results of these tests look solid, additional testing is needed to better observe how message propagation scales with network size. Check out the full report for a detailed account of the methodology, topology, experiments, and results!

A packed spring!

This spring is packed with exciting conferences, hackathons, eth2 bounties, and more! There will be a crew of eth2 researchers and engineers at each of these events. Come chat! We’d love to talk about engineering progress, validating on the testnets, what to expect this year, and anything else on your mind.

Now is a great time to get involved! Many clients are in the testnet phase, so there are all sorts of tools to build, experiments to run, and fun to be had.

The following is an outline of many of the upcoming events that will have a strong eth2 presence.


🚀
