Secondary decision markets (half-baked)

Should GnosisDAO research how to create efficient sets of decision markets?

Summary

Currently, the governance process of GnosisDAO requires at stage 3 that a pair of prediction markets be created which enable the GNO/USD price to be estimated conditional on which decision is taken. I’ll call these the “primary markets” - primary markets address the “bottom line” question: does this decision benefit Gnosis? Under some conditions, it might be beneficial to create a set of markets that support the primary market.

Motivating example

Consider the following unrealistic scenario: Both Alice and Bob are pretty sure that trade volume is the only consequence of Gnosis Protocol v2 (GPv2) implementation that is likely to impact the price of GNO/USD. The market pair GNOyes/USDyes and GNOno/USDno computes an outcome map O: GPv2 y/n → GNO/USD price. Because of Alice and Bob’s assumption that volume is the only consequence that matters, O can be factorised as the composition of a volume map V: GPv2 y/n → Volume and a price map P: Volume → GNO/USD price, with O = P ∘ V (I’m ignoring the fact that these should be probabilistic maps to keep it simple).
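
To make the factorisation concrete, here is a minimal sketch in Python, with made-up point estimates standing in for the probabilistic maps (the functions and numbers are purely illustrative, not anything the markets actually produce):

```python
# Hypothetical sketch of the factorisation O = P ∘ V, with point estimates
# standing in for the probabilistic maps.

def V(gpv2_built: bool) -> float:
    """Alice's domain: decision -> expected daily trade volume in USD."""
    return 20e6 if gpv2_built else 5e6  # illustrative numbers only

def P(volume: float) -> float:
    """Bob's domain: expected volume -> GNO/USD price."""
    return 80.0 + 2.0 * (volume / 1e6)  # illustrative linear relationship

def O(gpv2_built: bool) -> float:
    """What the primary market estimates: O = P ∘ V."""
    return P(V(gpv2_built))

print(O(True), O(False))  # 120.0 90.0
```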

Alice has deep knowledge of the trade volume Gnosis Protocol v2 is likely to have if it is built, and so has strong views on the likely form of V_Alice, but is completely ignorant about whether a certain level of trade volume will lead to a higher, lower or unchanged price of GNO, and so has no views on the likely form of P. Bob has strong views on P_Bob, but no idea about V.

Given a conditional GNO/USD market with an arbitrary relative price of GNO and USD, and hence an arbitrary outcome map O, Alice, due to her ignorance of P, cannot offer an alternative price on the basis of her knowledge of V_Alice; according to her, any level of trading volume is equally consistent with any GNO/USD price. Similarly, Bob’s ignorance of V prevents him from offering an alternative price based on his knowledge of P_Bob; according to Bob, any decision is equally consistent with any level of trading volume. Even though each has some knowledge relevant to the original question, neither can contribute this knowledge by making a trade on the market. Note that if they were not perfectly ignorant of one quantity they could offer a trade, but I expect this problem persists to some extent even if Alice is merely more ignorant of P than of V, rather than maximally ignorant.
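
Here is a toy numerical illustration of the point above (all numbers invented): if Alice’s beliefs about P are symmetric over which volume level gets which price, her expected GNO/USD price comes out identical under both decisions, so V_Alice gives her no basis for a conditional trade.

```python
# Toy illustration: with a symmetric (uninformative) belief about P, Alice's
# expected GNO/USD price is the same under both decisions, whatever V_Alice is.
import itertools

volumes = ["low", "high"]
V_alice = {"yes_GPv2": {"low": 0.2, "high": 0.8},
           "no_GPv2":  {"low": 0.8, "high": 0.2}}

# Alice's ignorance of P: each assignment of candidate prices to volume levels
# is considered equally likely.
candidate_prices = [90.0, 120.0]
P_hypotheses = [dict(zip(volumes, perm))
                for perm in itertools.permutations(candidate_prices)]

for decision, pv in V_alice.items():
    expected = sum(sum(pv[v] * P_hyp[v] for v in volumes)
                   for P_hyp in P_hypotheses) / len(P_hypotheses)
    print(decision, expected)  # both come out to 105.0
```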

If they could privately communicate their knowledge to each other, this problem could be solved - Alice could use Bob’s estimate P_Bob and vice versa. However, this may not happen often in practice. Alternatively, suppose instead that there were three market pairs:

  1. GNOyesGPv2/USDyesGPv2 and GNOnoGPv2/USDnoGPv2 (estimate O, the “primary market”)
  2. GNOhighvolume/USDhighvolume and GNOlowvolume/USDlowvolume (estimate P)
  3. HVyesGPv2/LVyesGPv2 and HVnoGPv2/LVnoGPv2 (estimate V)

where HV and LV are synthetic assets that pay out with high or low trade volume respectively (these could be tokens in another prediction market). In this case, Alice can contribute her estimate V_Alice to market 3 and also use market 2’s estimate of P to contribute to market 1. Bob can similarly contribute to market 2 based on his estimate P_Bob and use market 3 to estimate V and thereby contribute to market 1. The key difference is that the secondary markets allow Alice and Bob to contribute their knowledge to market 1, so the estimate from market 1 should improve.
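
To illustrate how the extra markets help, here is a rough sketch with hypothetical prices and probabilities: Alice combines her own V_Alice with market 2’s estimate of P, Bob combines his own P_Bob with market 3’s estimate of V, and both end up with a price they are willing to trade on market 1.

```python
# Hypothetical sketch: forming primary-market prices from the secondary markets.

# Market 2's estimate of P: GNO/USD price conditional on the volume regime.
P_market2 = {"high_volume": 120.0, "low_volume": 90.0}
# Market 3's estimate of V: probability of high volume given the decision.
V_market3 = {"yes_GPv2": 0.7, "no_GPv2": 0.3}

# Alice: her own view of V, market 2's view of P.
V_alice = {"yes_GPv2": 0.8, "no_GPv2": 0.2}
alice_primary = {d: p * P_market2["high_volume"] + (1 - p) * P_market2["low_volume"]
                 for d, p in V_alice.items()}

# Bob: his own view of P, market 3's view of V.
P_bob = {"high_volume": 130.0, "low_volume": 85.0}
bob_primary = {d: p * P_bob["high_volume"] + (1 - p) * P_bob["low_volume"]
               for d, p in V_market3.items()}

print(alice_primary)  # Alice's prices for the GNOyesGPv2 / GNOnoGPv2 markets
print(bob_primary)    # Bob's prices for the same markets
```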

Another way of looking at this: Alice and Bob in isolation agree that there should be no change to the GNO/USD price in expectation based on either decision, but if they were able to share their models then they would believe something different about the GNO/USD price. Furthermore, the secondary markets facilitate sharing of models.

Discussion & further work

The general question here is, given a fixed amount to pay for some set of markets that inform a decision, what is the optimal set of markets to create? I think that the answer might not always be “only the primary market”.

While the assumption of perfect ignorance on the part of Alice and Bob is unrealistic, the price of GNO really does depend on many things, so it seems plausible that many players might have knowledge of some relationships while being relatively ignorant of others. I think it’s also plausible that players are unlikely to privately communicate to coordinate their knowledge. It is not obvious to me whether the gains from secondary markets would typically be large or small.

Do you think this idea is worth pursuing? What are the most important questions? Some possibilities:

  • Develop a rigorous theory of gains from multiple markets and write it up
  • What is technically required to implement multiple markets (e.g. synthetic assets, resolving the “Change the funding structure of Gnosis Impact” markets)?
  • How could GnosisDAO experiment with single vs multiple markets?

I’m a PhD student working on foundations of causal inference, so the first option is the one that stands out to me - I would like a stronger reason to believe the secondary market effect is large before pursuing the idea further.

4 Likes

Fully agree with this post.

GnosisDAO could try to find a few secondary goals with the property that, ideally, a lot of people would agree that achieving them would be good for GNO - or, put differently, conditions that would make them buy GNO.

e.g.

Gnosis Protocol
  • Will the trading volume on Gnosis Protocol be >$10M every day in June 2021?
  • Will GP have at least 1000 users (addresses) every day in June 2021?
  • Will GP have at least a 5% market share of DEXs in June 2021?

Safe
  • Will there be at least 1000 Safes with >3 tx and >$10,000 worth of assets?

Those markets should ideally enable buyers to purchase GNO with less risk. However, they should be denominated in a token with low capital costs (e.g. ETH over DAI) because money will be held in those markets for a longer time.
Now, once those markets are established and there is a clear market signal that achieving such a goal indeed increases GNO demand, it might be sufficient to demonstrate that a proposal increases the chance of accomplishing a secondary goal. A proposal creator might target one of these goals, and the markets would trade the conditional likelihood of achieving it under doing/not doing the proposal.
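
As a small sketch of what trading that conditional likelihood would look like (the prices below are hypothetical), the decision-conditional probabilities can be read off the outcome token prices of the two conditional markets and compared directly:

```python
# Hypothetical sketch: reading decision-conditional goal probabilities off a
# pair of conditional markets on a secondary goal.

# Price of the "goal achieved" outcome token in each conditional market,
# expressed as a fraction of the collateral token (so it reads as a probability).
p_goal_if_accept = 0.62   # market conditional on the proposal passing
p_goal_if_reject = 0.45   # market conditional on the proposal failing

uplift = p_goal_if_accept - p_goal_if_reject
print(f"Estimated lift in P(secondary goal) from the proposal: {uplift:.0%}")
```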

3 Likes

I’ve had a similarly half-baked idea on how we might make these markets more meaningful; I’ll write it up as a new thread at some point. The TL;DR is that rather than GNO being the base token, we’d use an index (like tokenset’s DPI) that includes GNO along with a handful of synthetic tokens that track other metrics that the DAO cares about (could be things like usage metrics, user numbers, carbon footprint, etc.).

@davidoj, using your example, perhaps one of the synthetic tokens tracks trade volume on GPv2.

In this case, Alice should be able to contribute knowledge, since a metric she has knowledge of will directly impact the price of the base (index) token.
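
A rough sketch of how that could work (the weights, constituent tokens and prices below are purely hypothetical, not a concrete index design): if a synthetic GPv2-volume token is one of the index constituents, Alice’s view on volume translates directly into a view on the index.

```python
# Hypothetical index: a weighted basket of GNO plus synthetic metric tokens.
weights = {"GNO": 0.6, "GPv2_volume": 0.25, "active_users": 0.1, "carbon": 0.05}

def index_price(prices: dict) -> float:
    """Value of the basket at the given constituent prices."""
    return sum(weights[k] * prices[k] for k in weights)

base = {"GNO": 100.0, "GPv2_volume": 12.0, "active_users": 9.0, "carbon": 3.0}
# Alice expects GPv2 to raise the volume token's value; she can price the
# index conditional on the decision without any view on GNO itself.
alice_view = dict(base, GPv2_volume=20.0)

print(index_price(base), index_price(alice_view))
```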

1 Like

@auryn_macmillan, as I understand your suggestion, we create an “index” for two reasons:

  1. There might be things GnosisDAO wants to achieve that are not reflected by the GNO/USD exchange rate
  2. It’s possible to construct an index so that it is easier to figure out how some particular thing (e.g. transaction volume) impacts it than how transaction volume impacts the GNO/USD exchange rate

I think 1 is an excellent reason to substitute a composite index for the GNO/USD exchange rate, but I’m not so sure about 2. My intuition here is that you probably lose something by swapping out an endpoint that represents “what you really want” but is hard to calculate for one that maybe doesn’t represent what you really want but at least is easy to calculate. Maybe an index constructed to satisfy 1 would serendipitously satisfy 2, but my feeling is that for the primary outcome we should give “what we really want” much more weight than “easy to calculate”.

Another thought I had: my opening post argued that there can be additional benefits from exposing people’s models publicly, rather than just their outputs, and that creating additional markets can achieve this. However, sometimes models might be very high dimensional and so it’s not feasible to create enough markets to fully expose them - for example, a good model for predicting the price of GNO might take many different inputs and combine them non-linearly, and it’s not a good idea to create hundreds of markets to capture all of the interactions.

For high dimensional models, it might be better to run a competition where we award prizes for the best prediction of the GNO/USD price (or chosen index function) given a set of indicators chosen by GnosisDAO, and run secondary markets in these indicators (ideally there would be a process for adding to the indicator set as well). Also, to be eligible for prizes, models must be posted publicly. This way we could get some good public models of the high dimensional problem of mapping Secondary Indicators → Price without having to create hundreds of markets. Then anyone wanting to forecast the impact of a decision on price can forecast the impact on the secondary indicators and run it through the model.
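
For concreteness, here is a minimal sketch of what such a publicly posted model might look like (the indicator choices, data and the simple least-squares fit are all hypothetical; a real competition entry would presumably be more sophisticated):

```python
# Hypothetical sketch of a public model mapping GnosisDAO-chosen secondary
# indicators to the GNO/USD price, fit by ordinary least squares.
import numpy as np

# Columns: [daily GP volume ($M), daily active addresses, qualifying Safes]
X_history = np.array([
    [5.0,  400.0,  600.0],
    [8.0,  700.0,  800.0],
    [12.0, 900.0, 1100.0],
    [20.0, 1500.0, 1600.0],
])
y_price = np.array([70.0, 85.0, 100.0, 140.0])  # observed GNO/USD (made up)

# Add an intercept column and fit by least squares.
A = np.column_stack([np.ones(len(X_history)), X_history])
coef, *_ = np.linalg.lstsq(A, y_price, rcond=None)

# Anyone forecasting a decision's impact first forecasts the indicators
# (e.g. via the secondary markets), then runs them through the public model.
forecast = np.array([1.0, 15.0, 1200.0, 1300.0])  # [intercept, indicators...]
print(forecast @ coef)
```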

The relationship between such models and the primary market would be similar to the relationship between poll aggregators and election betting markets. Betting markets don’t strictly follow poll aggregators, but poll aggregators still give everyone betting a good synthesis of what polling information says about the election outcome. If there were no public aggregators, bettors on political markets would either have to bet while being relatively ignorant of what the polls suggest about the outcome and its uncertainty, or they would have to expend substantial effort creating their own poll aggregator. This effort would be duplicated across many bettors.

Does your original proposal better resolve this dilemma?

This would make an interesting GIP. It sounds very similar to numer.ai, I wonder if we could leverage that somehow?

Suppose you have an index and you think that raising the index probably improves whatever you really want. Then you set up some markets which say decision A probably raises the index and decision B probably lowers the index.

Now you have an explicit map {decision A/B} → {index}, and an implicit assumption that your true objective goes up and down with your index, and from these two things you could conclude that decision A is good and decision B is bad. However, whether decision A is actually good depends on whether the implicit assumption about the {index} → {true objective} map is sound! I think in most situations you can do at least as well by explicitly estimating the {index} → {true objective} map instead of leaving it implicit, whether you do the estimation with markets or some other technology.
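
A toy illustration of the point (numbers invented purely for the example): if the implicit {index} → {true objective} assumption is wrong, the decision the index favours can be the opposite of the one the true objective favours, and making the second map explicit is what surfaces this.

```python
# Toy example: the conditional markets say decision A raises the index, but an
# explicit estimate of {index} -> {true objective} reverses the conclusion.

index_given_decision = {"A": 1.10, "B": 0.95}  # the markets' estimates

# Implicit assumption: the objective moves with the index  =>  pick A.

# Explicit (hypothetical) estimate of the {index} -> {true objective} map; here
# what the index measures turns out to be anti-correlated with what we want.
def true_objective(index_value: float) -> float:
    return 2.0 - index_value

print({d: true_objective(v) for d, v in index_given_decision.items()})
# {'A': 0.9, 'B': 1.05}  ->  with the explicit map, B looks better.
```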

Ultimately, you want a better estimate of the {decision A/B} → {true objective} relationship. This might be a hard problem, and it may be that no solution is great (our suggestions might help, but the answer might still be unclear). Even so, there’s no way to get around the fact that this is the problem that needs to be solved.

1 Like