AI For Institutions: Website Launch & Update

The new AI For Institutions website is here: ai4institutions.com!

In recent years, AI has become an increasingly powerful tool for information processing. It has reshaped society, whether through social media newsfeeds, resource allocation algorithms in government, or virtual assistants changing how we interact at work. Indeed, AI and other technologies have progressed to the point where our social institutions need to be updated to keep pace.

The question we seek to address is: can we use AI itself to help us design and build better institutions, which enhance human capabilities for collaborating, governing, and living together? (We might even hope that using AI to improve our institutions could enable them to better govern our AI, in a virtuous loop.) We refer to this idea as “AI for Institutions”.

To help answer this question, we teamed up with the Cooperative AI Foundation (CAIF) to convene roughly 25 participants from around the world, from institutions like Google DeepMind, Demos, Harvard, MIT, NESTA, OpenAI, Protocol Labs and UCL, in a beautiful Oxford venue for two days of participant-driven presentations and discussion. Participants submitted ideas in advance, which the organisers clustered into themes (Composition, Information, Representation, Simulation, Delegation, and Support). Theme by theme, everyone gave a lightning talk on their idea before breaking into small groups to work on specific proposals, which we “peer reviewed” with each other at the end.

Some of these ideas have since been recorded as “Project Cards” on the new AI for Institutions website, building on the existing work in this domain:

  • Online insight aggregation and deliberation platforms (such as Remesh, Pol.is, and Stanford’s moderation chatbot) could help surface topics of societal importance and enable productive discourse between people with differing beliefs and values, fostering greater social cohesion and, further down the line, better institutions.

  • AI-enabled tools (such as the AI Economist and DeepMind’s Democratic AI) could be used in the process of designing institutions, for example by searching for novel redistribution policies, or for new corporate structures that help solve constrained optimisation problems more complicated than maximising profit.

  • Better agent-based models, behavioural cloning from humans, and/or multi-agent learning algorithms (such as DeepABM, and ABMs for economics more generally) could help us simulate how people respond to such policies (a toy sketch of this idea follows this list).

  • Finally, as AI becomes more embedded in our existing institutions, it will be critical to ensure that those AIs are more cooperatively intelligent, better-aligned with human values, and built using processes that can capture a wide range of such values.
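
To make the simulation idea more concrete, here is a minimal, hypothetical Python sketch of the kind of toy agent-based model one might start from (it is not based on DeepABM or the AI Economist): agents with heterogeneous productivity choose how much effort to supply under a flat tax whose revenue is rebated equally, and we compare total output and post-tax inequality across tax rates. The behavioural rule, parameters, and function names are illustrative assumptions only.

```python
import random

def gini(values):
    """Gini coefficient of a list of positive incomes."""
    sorted_vals = sorted(values)
    n = len(sorted_vals)
    cumulative = sum((i + 1) * v for i, v in enumerate(sorted_vals))
    return (2 * cumulative) / (n * sum(sorted_vals)) - (n + 1) / n

def simulate(tax_rate, n_agents=1000, elasticity=0.5, seed=0):
    """Toy ABM: agents choose effort under a flat tax with an equal per-person rebate."""
    rng = random.Random(seed)
    # Heterogeneous productivities (how much each agent earns per unit of effort).
    productivity = [rng.lognormvariate(0, 0.5) for _ in range(n_agents)]
    # Assumed behavioural response: effort falls as the tax rate rises
    # (constant elasticity with respect to the net-of-tax share).
    effort = (1 - tax_rate) ** elasticity
    pre_tax = [p * effort for p in productivity]
    rebate = tax_rate * sum(pre_tax) / n_agents   # revenue redistributed equally
    post_tax = [(1 - tax_rate) * y + rebate for y in pre_tax]
    return sum(pre_tax), gini(post_tax)

for rate in (0.0, 0.2, 0.4, 0.6):
    output, inequality = simulate(rate)
    print(f"tax rate {rate:.1f}: total output {output:8.1f}, post-tax Gini {inequality:.3f}")
```

Richer versions would replace the hand-coded behavioural rule with agents learned from human data (e.g. behavioural cloning) or trained with multi-agent learning, which is exactly where the tools mentioned above come in.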

There are clearly high stakes here, regardless of where or whether such AI systems are deployed. Can we test such systems in lower-stakes settings? Where are the best opportunities for deploying them? What bottlenecks exist when attempting to do so, and how can we unblock them?

Answering such questions requires input from many different disciplines and perspectives. This includes multi-agent learning, game theory and mechanism design, economics, social choice, political science, philosophy, complex systems, and work from practitioners who can bring these ideas to life.

If you’re interested in working on AI for Institutions, please get in touch! This could include submitting a new project card, helping organise future workshops, conducting research, or developing new tools. CIP and CAIF are actively looking to support people interested in working in this area, including the potential for funding, mentorship, or collaboration.
