We should all get to decide what to do about AI
It is uncontroversial to state that technological progress should be directed towards the collective good. It is, however, difficult to determine what is actually good for the collective.
There is a default path to answering this question: leave building technology to technologists, deciding what gets built to investors, and steering technology away from risks to regulators, and then assume that something like collective good will result. Ideally, the public benefits from technological advances while being assured of checks on safety, and of repercussions from democratically accountable policymakers if development causes harm.
But this approach breaks down when dealing with the society-scale consequences that are increasingly likely to result from new technologies, which means it won’t be enough to deal with AI. AI is a general-purpose technology, widely accessible for an almost unbounded range of applications, from music generation to chess-playing to coding cybersecurity exploits. Much good for humanity could result. But the unknown risks of these new technologies mean that we shouldn’t roll the dice by deploying widely and then litigating harms over the next several decades.
The speed, scale, and impact of AI don’t bode well for sticking with the status quo and hoping it ends up in a place that incorporates the public good by default. Market incentives among only a few firms are not enough to set a public vision for what we do and don’t want from AI. Policy moves slowly, and the tools of policy can be coarse, making it difficult to adjudicate risk and reward tradeoffs. We need to do something different: incorporate collective input at the ground level, developing new ways to determine what is good and how to achieve it within both the models themselves and the control structures that govern them.
Our organization, the Collective Intelligence Project, is starting to experiment with exactly that, by piloting ‘alignment assemblies’. Alignment, so that we can bring technology into alignment with collective values. And assemblies, because they assemble regular people, online and across the country or the world, for a participant-guided conversation about their needs, preferences, hopes, and fears regarding emerging AI.
We are running several such processes over the next few months to directly influence the trajectory of generative AI. This requires connecting the public to power. There is a limited set of entities with decision-making power over the direction of AI development: to a first approximation, AI labs; the companies that provide those labs with necessary inputs, such as hardware and cloud compute; the organizations that invest in them; and policymakers. Our goal is to begin surfacing collective input to AI labs and policymakers directly, and we are starting by partnering with OpenAI and Anthropic. The typical process of figuring out how to aim for the collective good can take decades (we’re still litigating the basics of social media). We want to work on timelines closer to months than years.
Our pilots are scheduled for June and September. The first will focus on collective input into AI risks and evaluations: what questions do people have about the impact of these technologies, and how do we evaluate models against those questions in particular? The second will consider how models should behave in circumstances where values trade off against each other.
We expect these initial processes to be imperfect. But we hope to demonstrate that we can develop new ways to build technology for the collective good, partly by involving the public in determining what that good can and should look like. We would like others to run alignment assemblies too, and are excited to support what could be a Cambrian explosion of experiments in incorporating collective intelligence into technological development: from federated citizens’ assemblies, to retroactive funding processes for writing better model evaluations, to public red-teaming. Beyond alignment assemblies, we’re working on other pathways towards institutionalizing better mechanisms to decide and execute on collective priorities for AI.
Reach out to us if you are interested in supporting this work. We will share updates on alignment assemblies at cip.org/alignmentassemblies and @collect_intel.
Alignment assemblies team: Divya Siddarth and Saffron Huang (co-leads), Kinney Zalesne, Tantum Collins, Audrey Tang
Help set the alignment assemblies agenda:
If you have reached this point and have thoughts on what topics we should run alignment assemblies on, let us know using the Pol.is tool below!