In a previous thread we outlined a path to decentralized solution submission for Gnosis Protocol v2, based on a competition between permissionless parties (so-called solvers).
One of the key questions for this competition is what metric solvers should aim to maximize. This thread is meant to spark discussion around that topic.
GPv2’s main value proposition is to allow traders to capture their trade utility/surplus (which is the difference between their personal valuation of tokens as expressed in the limit price and the fair market price of these tokens) instead of giving it away as MEV to miners or arbitrageurs.
As such, maximizing the sum of all individual utility is an intuitive candidate for an objective criterion. However, as we have seen in GPv1, simply maximizing the sum of all surplus is prone to manipulation: bad actors can “claim” a very high utility by wash trading fake tokens only they control, in order to outbid the real utility of benign traders.
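To make the surplus notion above concrete, here is a minimal sketch of the utility of a single order, measured in buy-token units. All names (`Order`, `surplus`) are illustrative assumptions, not GPv2's actual implementation:

```python
# Hypothetical sketch: surplus of one order, in units of its buy token.
from dataclasses import dataclass

@dataclass
class Order:
    sell_amount: float  # amount of sell token offered
    limit_buy: float    # minimum acceptable buy amount (encodes the limit price)

def surplus(order: Order, executed_buy: float) -> float:
    """Difference between what the trader actually received and the
    worst amount they were willing to accept (their limit)."""
    return executed_buy - order.limit_buy

# A trader willing to accept 100 units receives 105 at the clearing price:
# surplus(Order(sell_amount=50, limit_buy=100), executed_buy=105) -> 5
```

Summing this quantity over all orders gives the naive objective that, as noted above, a wash trader can inflate arbitrarily with fake tokens.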
Apart from maximizing utility, envy-freeness is a desirable criterion for solutions. In an envy-free settlement, given the prices reported in the solution, each trader is “happy” with the outcome of the auction (in the context of an exchange economy, envy-free settlements lead to a Walrasian equilibrium).
Concretely, for every order in the system: if the reported exchange rate between the affected tokens is better (from the trader’s perspective) than their limit price, the order is fully executed. If the reported price is worse than their limit price, the order is not executed at all. If the limit price is equal to the token price, the trader is indifferent about trading, so any partial fill is acceptable. In a two-dimensional orderbook (like the one used in Gnosis Auction) this is commonly referred to as “matching supply and demand”. Gnosis Protocol works on multi-dimensional order books, so finding equilibrium prices is significantly more difficult.
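The three-way rule above can be sketched as a small classifier. The rate convention (buy-per-sell from the trader's view, higher is better) is an assumption made for the example:

```python
# Illustrative check of the envy-freeness fill rule described above.
def required_fill(limit_rate: float, clearing_rate: float) -> str:
    """Given a trader's limit exchange rate and the reported clearing
    rate (both buy-per-sell from the trader's perspective), return the
    fill an envy-free settlement must give this order."""
    if clearing_rate > limit_rate:
        return "full"  # strictly better than the limit: must trade fully
    if clearing_rate < limit_rate:
        return "none"  # worse than the limit: must not trade at all
    return "any"       # indifferent: any partial fill is acceptable
```

An envy-free settlement is then one where every order's actual execution is consistent with this classification under the solution's price vector.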
In GPv1, we referred to the envy as “disregarded utility”, which was non-zero if a trader would have been happy to trade more given the reported prices, but their order was not fully executed. We discounted the sum of disregarded utility of all “touched” (traded) orders in a solution - since these are the only orders the smart contract saw - from the objective value. However, this ignored disregarded utility from orders that were not traded at all (potentially ignored on purpose by a malicious solver). For more background on GPv1’s objective criterion, you can watch our talk at EthCC3.
With GPv2 we are moving more logic off-chain. Off-chain we can compute disregarded utility over all orders in the current orderbook and could enforce envy-freeness on the solution. While this could not be ensured on chain, the protocol can ask DAO members to verify that solutions fulfill whatever criterion we agree on off-chain and vote to penalize solvers that misbehave (cf. phase 2 of road to decentralization).
Enforcing disregarded utility to be 0 has another challenge: because block space is limited, so is the maximum number of trades that can be matched in a single settlement.
Even though GPv2 will be able to handle significantly more trades per batch than GPv1 (on the order of 100 vs. 30), it is still possible to construct pathological batches in which the “perfect” solution would require more than 100 trades.
We see two potential options to address this:

1. Before solving, split batches into small enough sub-batches (according to some sorting rule) so that each sub-batch contains at most 100 trades. Then require disregarded utility to be 0 and maximize total utility.
2. Instead of enforcing disregarded utility to be 0, optimize for an approximate equilibrium: the winning solution is the one whose price vector would have to be relaxed by the smallest ε in order to make each individual trader envy-free. For a large enough ε, there will always be some solution in which fewer than 100 orders are traded.

In option 2, the competition could either be to minimize ε or to use some form of weighting between a small ε and a large total utility.
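One way to make option 2 concrete is to compute, for a given settlement, the smallest ε that removes all envy. A multiplicative relaxation of the price vector is assumed here; whether the relaxation should be additive or multiplicative, per-token or global, is exactly the kind of open question this thread is for:

```python
# Sketch: smallest uniform relaxation ε making a settlement envy-free.
def min_epsilon(orders) -> float:
    """orders: list of (limit_rate, clearing_rate, fully_executed),
    rates as buy-per-sell from the trader's view. Returns the smallest
    eps >= 0 such that scaling each trader's perceived clearing rate
    down to clearing_rate * (1 - eps) leaves no unfilled order that
    would still strictly want to trade (i.e. no remaining envy)."""
    eps = 0.0
    for limit_rate, clearing_rate, fully_executed in orders:
        if not fully_executed and clearing_rate > limit_rate:
            # this trader is envious; eps must shrink the clearing rate
            # at least down to their limit
            eps = max(eps, 1.0 - limit_rate / clearing_rate)
    return eps
```

A weighted competition could then score solutions by something like `total_utility - lam * min_epsilon(orders)` for some agreed-upon weight `lam`.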
Another thing to consider is the execution cost of a settlement (in terms of gas). While a slightly better price may be achieved by integrating e.g. many different AMMs into the solution, doing so might incur an increase in transaction cost that is much larger than the surplus gained for the user. Should total surplus therefore be discounted by the total execution cost of the settlement?
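A toy illustration of such a discount (all figures, and the idea of pricing gas in ETH against an ETH-denominated surplus, are made up for the example):

```python
# Gas-discounted objective: surplus minus settlement execution cost.
def net_objective(total_surplus_eth: float,
                  gas_used: int,
                  gas_price_gwei: float) -> float:
    """Total surplus minus the settlement's gas cost, both in ETH."""
    gas_cost_eth = gas_used * gas_price_gwei * 1e-9  # gwei -> ETH
    return total_surplus_eth - gas_cost_eth

# Routing through many AMMs for a marginally better price can lose:
# 0.05 ETH of extra surplus at 800k gas and 100 gwei costs 0.08 ETH,
# so net_objective(0.05, 800_000, 100) is roughly -0.03 (net negative).
```

Under such an objective, a solver only adds another AMM hop when the marginal surplus exceeds the marginal gas cost.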
Lastly, the utility of each individual trade is computed in units of either the trade’s sell token or buy token, leading to a vector of utilities (one dimension per token). In order to derive a single value which solvers can compare against one another, the utility vector has to be normalized into a common base currency (e.g. the native token).
Since each solution might assign fundamentally different prices to the traded tokens, normalization should not be based on the “new” price vector, but instead on some “last known price”. Initially we suggest using the most liquid base liquidity (as defined by the DAO and implemented by the orderbook’s price estimation API - for now this is the Balancer/Uniswap price at the block at which the competition is kicked off).
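The normalization step can be sketched as a dot product between the utility vector and an external reference price vector (names, tokens, and prices below are hypothetical):

```python
# Normalize a per-token utility vector into a common base currency
# using a "last known" reference price feed, NOT the solution's own
# clearing prices - otherwise a solver could inflate its score by
# reporting favorable prices.
def normalized_utility(utility_by_token: dict,
                       reference_price: dict) -> float:
    """utility_by_token: token -> utility in that token's units.
    reference_price: token -> last known price in the base currency
    (e.g. the native token), fixed when the batch is kicked off."""
    return sum(amount * reference_price[token]
               for token, amount in utility_by_token.items())

# normalized_utility({"DAI": 10.0, "USDC": 5.0},
#                    {"DAI": 0.0005, "USDC": 0.0005})  # ~0.0075 ETH
```

Fixing the reference prices at the start of the competition also guarantees that all competing solutions are scored against the same price vector.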