GPv2 Objective Criterion

In a previous thread we outlined a path to decentralized solution submission for Gnosis Protocol v2, based on a competition between permissionless parties (so-called solvers).

One of the key questions for this competition is what metric solvers should aim to maximize. This thread is meant to spark discussion around this topic.

GPv2’s main value proposition is to allow traders to capture their trade utility/surplus (which is the difference between their personal valuation of tokens as expressed in the limit price and the fair market price of these tokens) instead of giving it away as MEV to miners or arbitrageurs.
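To make the notion concrete, here is a minimal sketch (the function name and representation are ours, not the protocol's) of the surplus of a single fully executed sell order, measured in units of the buy token:

```python
# Illustrative only: surplus of a fully executed sell order, in buy-token units.
def surplus(limit_buy_amount, executed_buy_amount):
    """What the trader received beyond the minimum acceptable
    at their limit price."""
    return executed_buy_amount - limit_buy_amount

# A trader selling 1 ETH with a 3300 DAI limit who receives 3400 DAI
# captures 100 DAI of surplus instead of leaving it as MEV.
print(surplus(3300, 3400))  # 100
```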

As such, simply maximizing the sum of all individual utility is an intuitive candidate for an objective criterion. However, as we have seen in GPv1, naively maximizing the sum of all surplus is prone to manipulation: bad actors can “claim” a very high utility by wash trading fake tokens only they control, in order to outbid the real utility of other benign traders.

Apart from maximizing utility, envy-freeness is a desirable criterion for solutions. In an envy-free settlement, given the prices reported in the solution, each trader is “happy” with the outcome of the auction (in the context of an exchange economy, envy-free settlements lead to a Walrasian equilibrium).

Concretely, for every order in the system: if the reported exchange rate between the affected tokens is better (from the trader’s perspective) than their limit price, the order is fully executed. If the reported price is worse than their limit price, the order is not executed at all. If the limit price is equal to the token price, the trader is indifferent about trading, so any partial fill is acceptable. In a two-dimensional orderbook (like the one used in Gnosis Auction) this is commonly referred to as “matching supply and demand”. Gnosis Protocol works on multi-dimensional order books; as such, finding equilibrium prices is significantly more difficult.
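The three cases above can be sketched as a small check (a simplification of ours, not GPv2's actual order model):

```python
# Illustrative sketch of the envy-freeness condition for one order.
def is_envy_free(sell_token, buy_token, limit_rate, prices, fill_fraction):
    """limit_rate: minimum buy-per-sell exchange rate the trader accepts.
    prices: token -> value in a common numeraire. fill_fraction in [0, 1]."""
    clearing_rate = prices[sell_token] / prices[buy_token]
    if clearing_rate > limit_rate:   # strictly better than the limit: full fill required
        return fill_fraction == 1.0
    if clearing_rate < limit_rate:   # worse than the limit: no fill allowed
        return fill_fraction == 0.0
    return True                      # exactly at the limit: any partial fill is fine

prices = {"ETH": 3400, "DAI": 1}
# Selling ETH with a 3300 DAI/ETH limit while the clearing rate is 3400:
# only a full fill leaves this trader without envy.
print(is_envy_free("ETH", "DAI", 3300, prices, 0.5))  # False
```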

In GPv1, we referred to the envy as “disregarded utility”, which was non-zero if a trader would have been happy to trade more at the reported prices but their order was not traded fully. We discounted from the objective value the sum of disregarded utility of all “touched” (traded) orders in a solution, since these are the only orders the smart contract saw. However, this ignored disregarded utility from orders that were not traded at all (potentially ignored on purpose by a malicious solver). For more background on GPv1’s objective criterion, you can watch our talk at EthCC3.

With GPv2 we are moving more logic off-chain. Off-chain, we can compute disregarded utility over all orders in the current orderbook and could enforce envy-freeness on the solution. While this cannot be ensured on-chain, the protocol can ask DAO members to verify off-chain that solutions fulfill whatever criterion we agree on, and vote to penalize solvers that misbehave (cf. phase 2 of the road to decentralization).

Enforcing disregarded utility to be 0 poses another challenge: because block space is limited, so is the maximum number of trades that can be matched in a single settlement.

Even though GPv2 will be able to handle significantly more trades per batch than GPv1 (on the order of 100 vs. 30), it is still possible to create pathological batches in which the “perfect” solution would require more than 100 trades.

We see potentially two options to address this:

  1. Before solving, split batches into small enough sub-batches (according to some sorting rule) so that each sub-batch contains at most 100 trades. Then require disregarded utility to be 0 and maximize total utility.
  2. Instead of enforcing disregarded utility to be 0, we optimize for an approximate equilibrium: the winning solution is the one whose price vector would have to be relaxed by the smallest ε in order to make each individual trader envy-free. For a large enough ε, there will always be some solution in which fewer than 100 orders are traded.

In 2) the competition could either be to minimize ε, or to use some form of weighting between a small ε and a large total utility.
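One possible formalization of the per-order ε (ours, for illustration): the relative amount by which the clearing rate would have to be relaxed toward the limit price so that a trader with a given fill has no envy; a solution's ε is then the worst case over all orders.

```python
def order_epsilon(limit_rate, clearing_rate, fill_fraction):
    # No envy if the price is no better than the limit, or the order is filled.
    if clearing_rate <= limit_rate or fill_fraction == 1.0:
        return 0.0
    # Price strictly beats the limit but the order was not fully filled:
    # relaxing the price down to the limit would remove the envy.
    return (clearing_rate - limit_rate) / limit_rate

def solution_epsilon(orders):
    """orders: iterable of (limit_rate, clearing_rate, fill_fraction)."""
    return max(order_epsilon(*o) for o in orders)

# One order filled 50% despite a favorable price, one correctly untouched:
print(solution_epsilon([(3300, 3400, 0.5), (3300, 3200, 0.0)]))
```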

Another thing to consider is the execution cost of a settlement (in terms of gas). While a slightly better price may be achieved by integrating e.g. many different AMMs into the solution, doing so might incur a difference in transaction cost that is much larger than the surplus gained for the user. We are therefore wondering whether total surplus should be discounted by the total execution cost of the settlement.

Lastly, the utility of each individual trade is computed either in units of the trade’s sell token or its buy token, leading to a vector of utilities (one dimension per token). In order to derive a single value which solvers can compare against one another, the utility vector has to be normalized into a common base currency (e.g. the native token).
Since each solution might assign fundamentally different prices to the traded tokens, normalization should not be based on the “new” price vector, but instead on some “last known price”. Initially we suggest using the most liquid base liquidity source (as defined by the DAO and implemented by the orderbook’s price estimation API; for now, this is the Balancer/Uniswap price at the block in which the competition is kicked off).
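A sketch of that normalization (our own naming; the prices are illustrative): the per-token utility vector is collapsed into a single number using fixed “last known” reference prices, never the solution's own price vector.

```python
def normalized_objective(utility_per_token, reference_prices_in_eth):
    """Sum the utility vector after converting each token amount into the
    native token using a pre-competition reference price."""
    return sum(amount * reference_prices_in_eth[token]
               for token, amount in utility_per_token.items())

ref = {"DAI": 1 / 3400, "USDC": 1 / 3400}   # "last known" ETH value per unit
total = normalized_objective({"DAI": 100, "USDC": 70}, ref)  # 170/3400 ETH
```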


Hi guys, finally having time to join the conversation.

Bringing my background from finance, I can share these thoughts:

Enforcing 0 disregarded utility is key. If you can do that, you can compare solver solutions with a lot more objectivity.

So when it comes to the two options proposed, I’m strongly in the camp of splitting batches. While this is not a perfect solution, I think Gnosis as a whole can never fully run on the main chain, and all of these issues are temporary until you can move the entire DEX into a rollup, be it optimistic or (eventually) zk. That should end the restriction on the number of trades and, in my opinion, should be a long-term focus.

Anyway, since those things are still in the future (hopefully not too far in the future), we have to focus on what’s at hand, so I vote for batch splitting.

When it comes to gas, and computing prices according to gas, I think it absolutely has to be included in the effective price. If a trader uses Uniswap to trade 0.1 ETH for DAI (imagine the nominal price of ETH is 2500 DAI) and he gets 200 DAI after swap fees, chain fees, etc., his effective sell price was 2000. I don’t care what the nominal price says. He had 0.1 ETH and now he has 200 DAI. That’s it. That’s the price; that’s what matters. Whether the money went to LPs on the DEX, arbitrageurs, main-chain fees or whatever does not matter. That’s what needs to be maximized: how much of one asset he gets for the other after all is said and done. Saying that he sold at 2500 but only got 200 because of fees, while technically correct, misses the entire point of the trade.

So yes, we absolutely need to take chain fees into account when considering trades. We need to understand them not as some external factor to be ignored, but as a price-changing event, just like slippage.

The narrative that GP must sell is not “better prices than Uniswap” but “more bang for your buck”. It sounds like the same thing but isn’t, because the first ignores a bunch of fees, and we should not.


Do you have suggestions for how batches could be split? If we just order by “first seen timestamp”, there might be ordering consensus issues in a decentralized environment, and fixed batch sizes would allow adversaries to spam orders in order to isolate “juicy” orders into a single batch.

That makes a lot of sense. Maybe we need to rethink our separation of “sellAmount” and “feeAmount” (the fee the solver can take from an order is capped by the latter).

Thanks @CryptoOrca for the suggestions!

I absolutely agree that this would make a lot of sense! One obstacle we are facing is the fact that orders can be specified as fill-or-kill, meaning they must either be fully matched or remain untouched. Now, imagine there are two such orders:

  • sell 2 ETH for DAI for at least 3300 DAI per ETH
  • buy 1 ETH for DAI for at most 3500 DAI per ETH

If partial matching were allowed, both orders could trade 1 ETH at 3300 DAI, thereby matching 50% of the first order and generating no disregarded utility. However, when each order is only allowed to be fully matched, no price admits a trade, so every price incurs disregarded utility for at least one of the traders.
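The infeasibility can be checked mechanically: since full matching of both all-or-nothing orders is impossible (2 ETH vs 1 ETH), the only candidate settlement is “no trade”, and a quick scan over candidate prices (numbers from the example above) shows every price leaves at least one trader envious.

```python
def envy_with_no_trade(price):
    seller_envy = price > 3300   # price strictly beats the seller's limit, yet unfilled
    buyer_envy = price < 3500    # price strictly beats the buyer's limit, yet unfilled
    return seller_envy or buyer_envy

# Every candidate DAI/ETH price in a wide range incurs envy for someone.
print(all(envy_with_no_trade(p) for p in range(3000, 4001)))  # True
```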

In our current approach, we treat this issue by only enforcing zero disregarded utility for selected orders, but we would be very happy to hear other thoughts about it!

Hi Tom, I was thinking about the problems regarding @fleupold’s points (batch splitting), but your question makes me wonder: why do we have fill-or-kill orders at all? They indeed screw over 0 disregarded utility, or at least their own utility has to be treated differently, since they explicitly ask to miss out on better prices under certain conditions. They also have the problem that if, midway through a settlement, an order is canceled (or Uniswap moves prices), fill-or-kill orders might make the solution completely invalid instead of only partially fillable. This is a huge problem when batch splitting, since once you get one batch through you can never revert it.

Unless you have pretty good reasons to allow fill-or-kill as an option, I would get rid of it. But if you do have good reasons, its utility has to always be 0 (as does its impact on other users’ utility) unless there are enough trades on its side for it to take part. In practice this would be like treating the order’s price as completely out of range (buying for 0 / selling for infinity) until enough volume accrues on its side to fill the order, at which point its utility can be computed normally. In your example, there is no trade because, for the amounts available for trading, he is not selling at 3300, he is selling at infinity. It’s a little bit like his order only shows up in the calculation once the volume reaches his threshold. I think that’s actually the best way of thinking about it: fill-or-kill orders are not orders that exist now in the order book, they are orders that are triggered, kind of like stops. But instead of prices, their trigger is the available volume within a certain price range (in your example, 2 ETH above 3300 DAI).

But then we still have the issue of forcing them to be batched first (joining Felix’s point about multiple batches here), since they can never be left half-filled.

Now, regarding batch ordering and disregarded utility: the more I think about it, the more I’m concerned about order cancelling. What prevents me today from running a bot that signs offline orders on CowSwap and, when I see a transaction in the mempool calling the settlement contract, moves the tokens out of my account using higher gas, causing the solution to no longer be executable (the order I signed becomes invalid because the tokens are not there anymore) and forcing the contract to revert to the previous state, therefore never finishing an auction? Because all the attack vectors I see on batch splitting are some variation of that.

Ok, building on our conversations on Discord: if we assume that order cancelling is not a big deal (it happens infrequently mid-batch and mostly by accident), then batch ordering is not a super big deal. If all batches execute, the order doesn’t matter.

To reduce disregarded utility when a batch fails, I think we should ask solvers to order batches by utility, but ultimately let them create the batches at will (other considerations like gas fees are in play when deciding on a particular order matching). Then the contract executes the batches ordered by utility, so in a scenario where a batch fails, the overall solution loses as little utility as possible.

For example, imagine each batch contains at most one trade and the solved price is 1 ETH = 3500 DAI.

The orders for this solution are:

Trader T1: Buys 1 ETH for 4000
Trader T2: Buys 0.5 ETH for 3600
Trader T3 : Buys 0.5 ETH for 3500

Trader T1’: Sells 1.5 ETH for 3500
Trader T2’: Sells 0.5 ETH for 3000

One possible batching created by the solver is:

T1 matches T1’ for 1 ETH (batch 1)

T3 matches T1’ for 0.5 ETH (batch 2)

T2 matches T2’ for 0.5 ETH (batch 3)

In this case, the batches would be ordered as

batch 1 -> batch 3 -> batch 2
because batch 3 has more utility than batch 2.
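The ordering can be reproduced by summing buyer and seller surplus per batch at the clearing price of 3500 DAI/ETH (the helper names are ours):

```python
PRICE = 3500  # solved clearing price in DAI per ETH

def pair_utility(buy_limit, sell_limit, eth_amount):
    # Buyer surplus plus seller surplus, both measured in DAI.
    return ((buy_limit - PRICE) + (PRICE - sell_limit)) * eth_amount

batches = {
    "batch 1": pair_utility(4000, 3500, 1.0),   # T1 matched with T1'
    "batch 2": pair_utility(3500, 3500, 0.5),   # T3 matched with T1'
    "batch 3": pair_utility(3600, 3000, 0.5),   # T2 matched with T2'
}
print(sorted(batches, key=batches.get, reverse=True))
# ['batch 1', 'batch 3', 'batch 2']
```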

But the Solver could have proposed this set of batches instead:

T1 matches T2’ for 0.5 ETH
T1 matches T1’ for 0.5 ETH
T2 matches T1’ for 0.5 ETH
T3 matches T1’ for 0.5 ETH

This batching uses 4 batches instead of 3, and the batches would be ordered batch 1 -> batch 2 -> batch 3 -> batch 4.

I think it should be up to the solver to choose how the batching is done (once again, it impacts the gas costs of the solution, as above).

Then, back to the goal of the forum post: the objective criterion (assuming we are enforcing 0 disregarded utility in the solution) is first volume traded (no need to explain), then number of batches (the smaller the number, the higher the chance we are actually enforcing 0 disregarded utility), then finally distance to the last price (when multiple prices are acceptable, the one that changes prices the least wins).

I need to do some extra thinking on the attack vectors of batch splitting beyond the obvious ones we already discussed (since batch count here takes precedence over price distance to last, it might open a new can of worms).

In order to achieve comparability between different solutions, we need to split orders into batches before we know their matching (so that a constraint like “no disregarded utility” can be required of all solvers). If solvers can split orders into batches as they like, they could isolate orders they don’t like (e.g. the ones which would incur disregarded utility in their solution) and not solve them.

I was thinking that when enforcing 0 disregarded utility, we assume all batches will be able to execute, as if they were a single batch.

This way we can enforce that the solution has 0 disregarded utility and, with the information at hand, should be executable; but we can’t enforce that the solution will actually execute properly in practice (we can never enforce that, because of the order cancellation issue).

That’s what I was thinking, so we can compare solutions under the assumption that they will execute.

But yes, this opens an attack vector where a solver matches an order against an account they control and cancels the match before the particular batch that contains that pair executes. That’s the targeted attack, I agree. That’s why order cancelling is the issue.

The problem is that you could create a pathological batch that only achieves 0 disregarded utility if every order is matched with the next one (imagine an extremely large ring). Such a solution cannot later be split into multiple transactions (each transaction would leave some imbalance) and thus might not be settleable within the block gas limit.

This is why we thought about splitting the orders into batches beforehand so that a solution to each batch will for sure fit into one block.

Joining the party a bit late. This might seem a little silly, but why can’t you be even more aggressively optimistic on the batch settlement? If solvers were required to always stake more than the total order amount A, then you could just settle the end result of the trade on-chain along with a Merkle root of the individual trades. As a bonus feature, you could require some data availability layer on a cheap chain (e.g. xDAI). Users (and watchtowers) could then be given a dispute period of N blocks to challenge the order execution. If a solver cheated, they would get slashed. The only downside is that the total deposit per solver would need to be >N*A.


The issue is choosing between competing batches, no? Sure, one can use something equivalent to optimistic rollups to dispute malicious behavior, but if you are going to have an objective criterion to choose between solutions anyway, you can enforce that the solution is legit at the same time.

Or do you just mean in the context of batch splitting?

It’s a good suggestion. Using a merkle root + snark proofs for state transitions was actually part of our very initial research design (cf. @bh2smith’s 2019 EthCC talk, ~3 minutes starting at https://youtu.be/VUhxMwQ-UXY?t=914).

From GPv1 we learned that deposits into and withdrawals from a DEX are a UX hurdle, and we didn’t yet get to tackle scalability issues (with the rise of L2s and their deposit/bridge contracts, maybe that sentiment will change).


While I agree with the “UX hurdle”, it could be greatly reduced by:
a) making deposit and trade an atomic action for the user
b) allowing users to use their ZKP-proven balances for transfers (and thus reducing the need to withdraw)

We had initial ideas to do this with aztec.network and their “defi-connector”.
A challenge here will be the additional “load” on the solver to generate the proof in near real time, as those settlements have race conditions (the moving on-chain liquidity that is partially settled against).


Hi guys, I have been thinking about this topic for some time, but have finally freed up time to order my thoughts and put them in writing.

Here are some of my ideas about solvers and how to rank / reward / manage them. I think the right incentives / mechanisms make or break crypto systems in the long term. Some might be a bit off, some relevant though…

  • Solvers
    • Change the ranking criteria
      • Goal: prevent wash trading by solvers
        • Rank solvers by a “normalized” batch utility surplus
          • Remove outliers from the calculation: compute some kind of “average” and drop trades that bring extreme positive trade surplus
        • Maintain a list of high-cap tokens; other tokens’ weight in the utility calculation is only x percent (e.g. 20%)
    • Permissioned network of solvers
      • Hypothesis: in an adversarial environment, solvers == MEV searchers. Since they can add any trades to the batch at no cost to themselves, they will in the long term be effective at gaming any solver ranking criterion. That creates an arms race between criteria updates and solver bot updates, which the solvers will tend to win in the long run because they have more incentive than governance, basically moving MEV capture from ETH block generation to batch generation.
        • If correct: making network participation somewhat permissioned could introduce other incentives besides short-term gains from batches. The idea is for the solvers to be members of the CowSwap community, avoiding mercenaries.
          • Possible mechanisms:
            • a council managing a whitelist of solvers, with solvers known or pseudonymous
            • solvers required to have COW tokens staked, not just any collateral
            • (?) a time delay in the distribution of rewards
            • fixed solver rewards as opposed to utility-based rewards
      • Second hypothesis: in the medium/long term, the difference in batch utility between the best and second-best batch will be negligible.
        • If this holds, the first n batches could be awarded rewards
          • Reason: to ensure some redundancy, so that high-quality solvers are always available.
          • Requirement: a whitelist of solvers is needed; this is not Sybil resistant.

Thank you @marko for your thoughts. The idea of weighting tokens by importance (or maybe 24h trading volume or the like) sounds good.

Our argument against this hypothesis is that in traditional Ethereum, the right to propose a block comes from some unrelated “game” (finding a hash in PoW, being a staker in PoS). In GPv2, a solver is only allowed to settle a batch if they have the “best” solution. I agree that solvers may try to find ways of gaming our definition of “best”, but I’m optimistic that this won’t be a cat-and-mouse game if the criterion is well-chosen and incentive compatible. Until we have that confidence, I agree that some form of permissioning (knowing who our solvers are) may be necessary.

E.g. I don’t think we will allow solvers to add trades to the batch and count them towards the objective value (private “orders” should be excluded from surplus).

Hope you are right here; it may be possible. Have you guys tried to prove this under some reasonable constraints?

I mean a solver can add trades from different addresses that would pose as regular trades. Would they have to pay anything more than a one-time approval fee? If not, this should be quite cheap for them. Not sure how easy / economical it would be, though…

My general idea was to incentivize some kind of curated, cooperative ecosystem, hence rewards / staking in the platform token, rewards for more than one batch, etc. Ideas in this direction.

The problem that I see here is that it is really hard to say when this threshold is reached. At the beginning, there will be people running solvers who are community driven, and there is much less money to be made from running a solver. Also, I do not see any objective metric for how or where to draw the line and say: “Nobody was gaming the system till now, so it’s not gamable.” Governance could always change it, I guess, but that could be hard once the network is big.

But I would definitely go for a fixed reward per batch rather than a success fee based on saved utility. I think there is no need for that if there are enough solvers.

Do you guys have the batch rewards more formalized somewhere? (formulas / code)

Hi, sorry for the late reply.

Not yet, we were hoping to be able to collaborate with academia on these fundamental research questions.

No, however in order to use their order to boost the overall objective value it has to be included in the public orderbook and so all solvers would be able to use it.

What do you think would be a reasonable reward? Should there be a reward for runners-up as well, or is this flawed because a solver could just pretend to be two solvers and make one of their solutions marginally worse? Also, how would a fixed reward work with fluctuations in gas prices?

Not yet. We are finishing up some work to differentiate between different solvers by address on chain and then would like to start the payout program (also to spark more interest for external solvers). For that it would be great to define the payout structure (e.g. the fixed reward you proposed)

Here, isn’t it possible for solvers to simply sign the txs but not publish them, and only publish them as they execute their batch? This could basically win them the batch if they can get the tx approvals in all at once, bundled together and submitted to a friendly miner. It may seem a bit far-fetched at first (they would probably need to cooperate with miners on this, or perhaps they could use Flashbots bundles?).

Not really sure what a reasonable reward would be. Here I would perhaps approach the early solvers (hopefully they are still aligned with the community) and ask what their thoughts are. If Gnosis is still running its own solvers, I think it could just be tried out to see what amount is good enough. Not sure if there are any not-easily-gamable metrics that could be used as inputs to a function calculating rewards for the bundles.

At the least, what could be done to prevent gas wars for juicy bundles is to spread out the rewards over time, to make the system more stable and revenues more predictable for the solvers.
e.g. reward = multiplier * simpleAverage(last20bundles(trade utility)). Also, the multiplier could be progressively lowered as time goes on, to reflect higher overall bundle rewards. (I realize this is not really fixed :wink: )
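As a sketch, that smoothed reward could look like the following (all names are illustrative, not an agreed design):

```python
def solver_reward(recent_batch_utilities, multiplier):
    """Multiplier times the simple average utility of the last 20 batches."""
    last20 = recent_batch_utilities[-20:]
    return multiplier * sum(last20) / len(last20)

print(solver_reward([100, 200, 300], 0.5))  # 100.0
```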

It depends on whether the system is permissioned in some way or not. Rewarding runners-up feels gamable though… I know I suggested it at first, but I’m not sure it would be sustainable long term; I would probably avoid it.

I think EIP-1559 solves this: since it makes gas price data available on chain, the gas price could be included as an input to the function calculating the solver’s reward.

In this example the reward would not be truly fixed; as you say, it needs to at the very least reflect current gas prices, and it would also need to reflect changing market dynamics.

Also, I do not have good intuitions about what it takes, cost-wise, to run a solver. I assume the costs of running a solver are fairly fixed. My rationale here is that if costs are fixed, rewards should not rise linearly with the number of transactions CowSwap handles; this could create an uber-profitable landscape with an exponential rise in rewards for solvers with no additional work on their part, which could push solvers to try to game each other and the system. Not sure how this would play out, but huge, exponentially rising gains could lead to competition between the solvers that is ultimately detrimental to the users (thinking of MEV on Ethereum here).

Happy to look into it / help with it. I would need more insight into how the solver dynamics work now and how the winning bundles are chosen. Could you point me to some resources about this, or to an ongoing discussion if you have one?

Also, I was thinking before about rewarding solvers with COW tokens; having thought more about it, it seems more detrimental in the long term. It brings a new attack vector where solvers are big COW holders and can basically decide their own rewards… It might also discourage new solvers from entering the system once there are a couple of solvers who effectively already control governance.

Do you see any ways how to prevent the whole governance system to devolve into this? It’s still a possible attack vector even if solvers do not receive COW tokens.


Our current vision would be to run a network of clients (or use a cheap side chain like xDAI) to get consensus on data availability (which order was publicly visible at the time the batch closed). If your “hidden” order is not in that view, it doesn’t count towards the objective value.

Regarding permissioning of solvers: the protocol will likely still have some form of allow-list that the DAO will govern (likely conditioned on putting up a significant bond and potentially explaining the rough strategic approach of the solver being used). This could allow us to reward solvers with sufficiently different strategies (still a very early thought though).

Yes, the most expensive part is likely the R&D work going into it. Afterwards there are some costs for running AWS instances (potentially linear in problem size, i.e. number of orders), but I agree that it’s likely not incentive-aligned to give linearly higher rewards in busy batches.

This is the ongoing discussion :wink: The code ranking solutions at the moment is here. It basically normalizes all surplus into ETH (using pre-batch price estimates) and then chooses the solution with the highest sum(surplus) + sum(fees) - gas_cost (discounting gas costs so as not to give a solution with $1 more surplus priority if it costs $100 extra to settle).
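In pseudocode terms, the ranking rule amounts to something like the following (a sketch with our own names; amounts are assumed to be already normalized into a common unit):

```python
def objective_value(surplus, fees, gas_cost):
    # Total surplus plus collected fees, discounted by settlement cost.
    return sum(surplus) + sum(fees) - gas_cost

# e.g. amounts in some smallest unit: 20 + 10 surplus, 5 in fees, 15 gas cost
print(objective_value([20, 10], [5], 15))  # 20
```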

This approach does have a bunch of problems. At the moment we are struggling with the fact that this value can be negative (in times of high gas prices), which then leads to e.g. picking a solution from a simple solver that can only match a single order1 (e.g. objective value -10) rather than taking the more sophisticated solver which merged order1 (objective value -5) and order2 (objective value -10) for a total of -15. We could potentially decide not to settle solutions with negative objective values (as they are not economically viable), but that would create problems for the UX (traders want their transactions to go through).
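The pathology in miniature (values from the example above): with negative objective values, the single-order solution outranks the merged one even though the merged solution settles both orders.

```python
def objective(order_values):
    return sum(order_values)

solutions = {
    "simple (order1 only)": [-10],
    "sophisticated (order1 + order2)": [-5, -10],
}
winner = max(solutions, key=lambda name: objective(solutions[name]))
print(winner)  # simple (order1 only) wins, since -10 beats -15
```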

We are thinking of reserving only a fixed percentage of CoW tokens for solvers, a minority in comparison to other stakeholders. Using a different token is a reasonable idea (we already collect fees in other tokens), but we think that using a project-specific token is still a good way to align incentives between solvers and the community (similar to how ETH miners are rewarded in ETH).

Seems like quite a neat idea! How would this work in practice? Would solvers give attestations to the txs they see?

I think this kind of allow-list would be necessary. I also like the idea of sufficiently different strategies. From what you have said, I gather it might make sense for one party to have different solvers for different market / gas environments. Do you think this is a short-term problem, or will there, in the long term, still be solvers that dominate under different market dynamics?

Also, thinking about this, have you thought of solver strategies as analogous to Yearn strategies?

Separate strategy creation / adding strategy / running strategy
Maybe this could be a good model for CowSwap. My idea is that perhaps the different solver roles could be separated:
Solver infrastructure and solver logic could be two roles: solverProvider + strategyMaker.
I think it might make sense to separate these roles, both from a practicality perspective and for game-theoretic reasons.

The solverProvider would just get rewarded for running the infrastructure with low latency and data availability, and for correctly executing the best strategy for the current batch, for a flat fee (gas-price adjusted).

The strategyMaker would only get rewarded if their strategy is implemented. There could be a process for checking / testing strategies controlled by governance (similar to adding/removing strategies for Yearn pools); I would draw inspiration from what worked for them with adding / changing strategies. The strategyMaker would get some % of the surplus their strategy generates.

The positive effect of this, besides not forcing people to wear two hats and allowing them to specialize, is that the “naive” traderSurplus function could actually be sufficient to calculate the added value generated by a certain strategy, since a strategy would have to pass a review process in which it would be clearly visible whether it brings any real added value or the strategyMaker is just trying to game the traderSurplus function.

There could be a council for adding / removing strategies, voted in and paid for by token holders, that is responsible for reviewing strategies. There could be a whole code review and testing process before adding a strategy, an evaluation process, etc.

Analogous to Yearn vault strategies, it probably makes sense to have just a couple of the best, differentiated strategies active that are audited and reliable. Or is there any major reason for having a long tail of competing strategies?

This could also minimize the chance of some bug in the system letting a rogue strategy exploit the system and steal funds, and it would remove the pressure on the traderSurplus function to be “perfect” and “not gamable”.

Another thing you mentioned is the problem that solvers are currently not able to execute strategies that make a loss in times of high gas prices. Since in this model the system would rely on governance for vetting / testing strategies, the condition for selecting the right strategy for the current batch could be relaxed to just max traderSurplus, even if negative. Since the strategy and the solverProvider are separated, there is no way for solverProviders to misuse this to their advantage. Also, solverProviders could execute even the least profitable batches and be automatically made whole from the treasury. This would not be possible if solverProviders were paid based on the traderSurplus function as it is now.

Also, from a mechanism perspective, I think it would be good to separate strategy creation / adding a strategy / running a strategy, as in this example.

What are your thoughts?