Compositional Mechanism Design

JULES HEDGES, PHILIPP ZAHN

Institutional Problem or Research Question

Describe what the open institutional problem or research question you’ve identified is, what features make it challenging, and how people deal with it currently.

Effective institutions require careful incentive design. One of the most widely used analytical frameworks for crafting incentives is mechanism design, a subfield of game theory. While important and useful, this approach has limitations in the context of large-scale institutions. Typically, mechanism design seeks to break organisational questions down into single, monolithic allocation problems. However, real-world institutions are not monolithic; they contain multiple interacting subsystems.
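To ground the contrast, a standard mechanism-design object is a single allocation rule analysed in isolation. The sketch below is an illustrative example of such a monolithic mechanism (names and code are ours, not from the proposal): a sealed-bid second-price (Vickrey) auction, a textbook design in which truthful bidding is a dominant strategy.

```python
def second_price_auction(bids):
    """Allocate one item to the highest bidder, charging the
    second-highest bid. `bids` maps bidder name -> bid amount."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

winner, price = second_price_auction({"a": 10, "b": 7, "c": 3})
# winner == "a", price == 7: the winner pays the runner-up's bid,
# so shading one's bid below one's true value cannot help.
```

The entire analysis of such a mechanism lives inside this one function; nothing in it represents interaction with any other subsystem, which is exactly the limitation described above.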

One could attempt a divide-and-conquer strategy by considering each subproblem, applying solutions to these smaller, well-defined components, and then aggregating those solutions. This approach, however, could easily devolve into ‘whack-a-mole’, since solving one problem can cause spillovers elsewhere.

Actual improvement will require an analytical framework that can represent and reason about the interactions between subsystems and system components. Such compositional modelling should not be a passive exercise, splitting up the components according to predefined rules. Instead, the definition of components should itself be a target of optimisation: how can spillovers be mitigated? Which components are independent and which need to exchange information?

Possible Solution

Describe what your proposed solution is and how it makes use of AI. If there’s a hypothesis you’re testing, what is it? What makes this approach particularly tractable? How would you implement your solution?

Our approach is based on category theory, specifically the applied work we have started towards ‘categorical cybernetics’. The most immediate next step involves extending the existing compositional game theory framework – a categorical version of traditional game theory – towards a framework expressive enough to capture complex institutional architectures.
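As a toy illustration of what interacting subsystems mean operationally (a deliberate simplification under assumed names, not the categorical definition of an open game), the sketch below wires one agent's choice into another agent's payoffs and searches the composite for pure Nash equilibria by exhaustive deviation checking.

```python
from itertools import product

def equilibria(strategy_sets, payoff):
    """Find the pure Nash equilibria of a finite game by brute force.
    `strategy_sets` is a list of per-player strategy lists; `payoff(profile)`
    returns a tuple of payoffs, one per player."""
    eqs = []
    for p in product(*strategy_sets):
        stable = True
        for i, s_i in enumerate(strategy_sets):
            # Best payoff player i could get by deviating unilaterally.
            best = max(payoff(p[:i] + (d,) + p[i + 1:])[i] for d in s_i)
            if payoff(p)[i] < best:
                stable = False
                break
        if stable:
            eqs.append(p)
    return eqs

# Two one-player "components" with a minimal spillover: the first player's
# choice sets a state that reshapes the second player's payoffs.
def joint_payoff(profile):
    a, b = profile
    u_a = 1 if a == "hi" else 0
    u_b = (2 if b == "match" else 0) if a == "hi" else (1 if b == "solo" else 0)
    return (u_a, u_b)

print(equilibria([["hi", "lo"], ["match", "solo"]], joint_payoff))
# -> [('hi', 'match')]
```

Even this tiny composite shows why analysing components in isolation misleads: player b's optimal strategy cannot be determined without knowing the upstream component's equilibrium behaviour.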

AI is key for this endeavour on three grounds: (1) there is no hope of running analyses of the kind mentioned above without AI; (2) in order to specify models, describe goals and preferences, etc., we need AI-supported user interfaces to make the tooling widely accessible; and (3) in the near future many agents in the very institutions we aim to describe will likely be artificial.

AI is therefore needed not only to make analyses scalable but also as an ingredient of sufficiently rich models: for the latter, we must be able to model the strategic interaction of artificial agents in a coherent framework.

What started as a translation of standard game theory into the language of category theory now shows deep connections to central aspects of learning. In addition to offering conceptual insights, the categorical framework we are working towards can practically accommodate blends of learning and non-learning agents.

Lastly, as with compositional game theory, our approach comes equipped with a blueprint for implementation, allowing the analytical framework to scale.

Method of Evaluation

Describe how you will know if your solution works, ideally at both a small and large scale. What resources and stakeholders would you require to implement and test your solution?

There are at least two dimensions along which our solution should perform: one concerns accessibility of the tooling and the other concerns the scale at which it can guide institutional design.

Regarding the latter, our approach should be widely applicable to incentive design across settings that vary in scale. A natural starting point is subproblems within particular institutional contexts (e.g. incentive schemes for specific actors in a given system). An advantage of beginning with smaller problems is that they often allow for experimental testing due to the comparative ease of defining reasonable metrics. It is worth noting, however, that evaluating when an institutional design works is itself a delicate task.

By focusing on small applications and relatively isolated problems, we can essentially replicate the standard mechanism design approach as a benchmark. (This opens another set of metrics for evaluating our approach: ease and speed relative to the traditional approach.)

Ultimately, to make actual improvements, we will need to tackle larger-scale institutional design questions. Obvious stepping stones include blockchain-based systems and other virtual institutions (e.g. institutions in gaming), since these have sufficient complexity but limited impact on the rest of the world.

Regarding accessibility, we need to develop a tool relevant to people with diverse backgrounds, including those who lack mechanism design knowledge. The first step in this direction is to enable stakeholders to devise improved institutional designs.

We hope to make our framework as widely accessible as possible. This goal is important in its own right but also because it will help reach large-scale institutions, since such changes require buy-in from many groups – they cannot happen simply by deferring to a small group of experts. We see our framework as a bridge between experts and non-expert stakeholders.

Further progress will require programmers and software engineers who can connect our framework with modern ML architectures and improve efficiency. In addition, we are always on the lookout for stakeholders who want to experiment with our framework while tackling concrete problems.

Risks and Additional Context

What are the biggest risks associated with this project? If someone is strongly opposed to your solution or if it is tried and fails, why do you think that was? Is there any additional context worth bearing in mind?

The main risks we see lie in scaling and accessibility issues.

If our approach has theoretical appeal but is too computationally demanding to scale, it will fail. We are keen to establish a research institute outside of academia in order to attract skilled and experienced engineers who can mitigate this risk.

We hope that enabling many people to participate in the design process will increase the chance of changes being accepted. This can easily fail if we do not design good user interfaces.

Next Steps

Outline the next steps of the project and a roadmap for future work. What are your biggest areas of uncertainty?

The next three major steps are to: (1) extend the current implementation with new theoretically motivated foundations towards learning; (2) improve engine performance; and (3) develop small-scale strategic use cases. This will constitute a working proof of concept, after which we will need to collaborate with institution designers to put our approach into practice. An organisation such as the Collective Intelligence Project would be very valuable for connecting us with potential collaborators.

Going forward, we hope that this project will become part of our broader research effort – we are developing the Institute for Categorical Cybernetics, a nonprofit organisation to host this work. Projects will focus on topics such as incentives and AI alignment, use of AI for computational economics, and development of software tools for modelling complex systems in general.

References

1. Seth Frey, Jules Hedges, Joshua Tan and Philipp Zahn. Composing games into complex institutions. In PLoS ONE, 2023. [https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0283361]
