Proposal: Fast path to the Endgame Decentralised Computer

The blockchain community is developing an increasingly clear vision of the future decentralized computer. There seems to be a technical consensus that

  1. L1 will be mostly used for data storage and L2 is best suited for computation
  2. Block production will likely become centralized, but block validation will have to remain highly decentralized.

This realization manifests itself in the Ethereum community's commitment to L2 scaling and in the development of Flashbots. Blog posts like Vitalik's introduced the term “Endgame (of Decentralized Computers)”.

Given this understanding, the race between the different L1 solutions will be a race to implement this vision. This post describes a very pragmatic roadmap for how the Gnosis Chain could become the first chain to reach the endgame setup, in five steps:

  1. Start to build an ecosystem in one dominant L2 rollup shard. This introduces a separation between data and computation from the start.

    • The fastest way forward is to start with an optimistic rollup implementation. This already provides roughly 30 times more transaction throughput.
    • If all activity is bundled into one dominant L2 rollup shard, the optimistic challenge period is not a disadvantage, since users rarely need to exit to L1.
  2. Quickly introduce data sharding with 64 shards to make the L2 shard more powerful and keep gas costs low.

    • This will improve scalability by another factor of 64.
  3. Split the L2 rollup block into 64 sub-blocks. Each shard-validator group should then verify one sub-block, in order to avoid the fisherman's dilemma of optimistic rollups.

    • If a shard's validator group fails to challenge an invalid L2 sub-block within x blocks while the chain continues building on the fraudulent state, its validators can be slashed on-chain.
    • This will require additional computational resources compared to the usual eth2.0 setup, but this is a compromise that the Gnosis chain is willing to make in order to onboard more users with low gas fees.
  4. Adapt the optimistic rollup implementation so that it becomes more SNARK/STARK-friendly.

    • This can be done through simple updates to the optimistic rollup's base contract.
    • For example, the base contract could switch to more SNARK-efficient hash functions.
  5. Introduce zk-SNARK or zk-STARK proofs of correct execution for the L2 sub-blocks. This allows validators to run on leaner hardware and restores decentralization to a level similar to Ethereum's.
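The sub-block verification and slashing rule of step 3 can be sketched as follows. This is a minimal illustrative model, not a Gnosis Chain specification: all names, the stubbed execution function, and the concrete challenge-window value are assumptions.

```python
# Illustrative sketch of step 3: each shard-validator group checks one
# sub-block of the L2 rollup block. Names and parameters are assumptions.
from dataclasses import dataclass

NUM_SHARDS = 64
CHALLENGE_WINDOW = 100  # the "x blocks" from the text; placeholder value


@dataclass
class SubBlock:
    index: int
    claimed_state_root: str
    transactions: list


def execute(sub_block: SubBlock) -> str:
    """Re-execute the sub-block's transactions and return the resulting
    state root. Stubbed here: a real node would run the rollup VM."""
    return "root-" + str(sub_block.index)  # stand-in for real execution


def verify_assigned_sub_block(group_id: int, block: list) -> str:
    """Each validator group verifies only the sub-block matching its id,
    so the whole rollup block is covered without every validator
    re-executing everything (avoiding the fisherman's dilemma)."""
    sub = block[group_id]
    if execute(sub) != sub.claimed_state_root:
        return "challenge"  # submit fraud proof within CHALLENGE_WINDOW
    return "attest"


def slashable(invalid: bool, challenged: bool, blocks_elapsed: int) -> bool:
    """A group is slashable if its sub-block was invalid, it never
    challenged, and the challenge window has passed."""
    return invalid and not challenged and blocks_elapsed > CHALLENGE_WINDOW
```

The key design point is that verification work is split 64 ways: no single validator needs to re-execute the full rollup block, yet every sub-block is covered by some group with slashing at stake.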

This approach has the following advantages:

The separation between data and execution layer from the start

Since all activity happens in the big L2 shard, this approach separates the data layer and the computation layer from the start. This early separation allows real decentralization to be introduced later, once zk-technology is ready, while enabling immediate scaling.

Early user onboarding

While Ethereum stays decentralized at the opportunity cost of not onboarding new users in the short term, the Gnosis Chain can onboard new users from day one with cheap gas prices. At the same time, the Gnosis Chain has a clear roadmap for decentralization at a later point in time, once zk-technology is ready.

Double security, until zk-technology has fully matured

It may take years until zk-technology is fully mature and trusted enough for people to put billions of dollars into these systems. Until then, the Gnosis Chain L2 shard will be secured by the two mechanisms introduced in step 3 and step 5. As described in step 3, validator groups will continue to check, sub-block by sub-block, that execution in the L2 shard was correct. Additionally, they will verify the zk-proofs introduced in step 5. If either check detects incorrect execution, the system will revert it. This means that investors holding funds in the L2 shard are secured by both mechanisms.
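The double-security finalization rule described above can be sketched as a simple predicate: a sub-block finalizes only if the validity proof verifies and no fraud proof arrived during the challenge window. The function names and the stubbed verifier are illustrative assumptions, not an actual protocol interface.

```python
# Illustrative sketch of "double security": a sub-block is considered
# final only if the zk validity proof checks out AND no fraud proof
# arrived during the challenge window. All names are assumptions.


def zk_proof_valid(proof: bytes, state_root: str) -> bool:
    """Stub for a zk-SNARK/STARK verifier (step 5). A real verifier
    would check the proof against the claimed state root; state_root
    is unused in this stand-in."""
    return proof == b"valid"


def finalize_sub_block(proof: bytes, state_root: str,
                       fraud_proof_received: bool) -> str:
    # Mechanism from step 5: the validity proof must verify.
    if not zk_proof_valid(proof, state_root):
        return "revert"
    # Mechanism from step 3: no validator group raised a fraud proof.
    if fraud_proof_received:
        return "revert"
    return "finalize"
```

Either mechanism alone suffices to block an invalid sub-block, which is why running both gives defense in depth while zk-technology matures.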

Main challenge

Changing a running system is very hard. Changing the optimistic rollup implementation to make it more zk-friendly (step 4) will be difficult. However, there is hope that the changes will be minimal, or even unnecessary, if the rollup disallows Keccak hashing and some other operations from the start. Projects such as Hermez, ConsenSys Research, and Scroll are already working on such implementations.
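The "disallow Keccak from the start" idea can be sketched as a deployment-time check. The KECCAK256 opcode name follows the EVM; the whitelist policy itself and the function name are illustrative assumptions, since the post does not specify which operations would be restricted.

```python
# Hedged sketch: if the rollup rejects SNARK-unfriendly operations such
# as Keccak hashing at deployment time, a later zk-migration (step 4/5)
# needs fewer changes. The exact disallowed set is an assumption.

SNARK_UNFRIENDLY_OPCODES = {"KECCAK256"}  # could include further opcodes


def contract_is_zk_friendly(opcodes: list) -> bool:
    """Reject contract deployments whose bytecode uses any opcode that
    is expensive to prove inside a SNARK/STARK circuit."""
    return not any(op in SNARK_UNFRIENDLY_OPCODES for op in opcodes)
```

In practice such a filter would run over disassembled contract bytecode at deployment; the sketch only shows the policy decision itself.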

I am very curious about your thoughts.