AI Risk Prioritization: OpenAI Alignment Assembly Report
This report details a public input process on AI development conducted by The Collective Intelligence Project (CIP) with OpenAI as a “committed audience”. Our goal was to understand public values and perspectives on the most salient risks and harms from AI in order to inform the governance of large language models (LLMs), hence the name “Participatory Risk Prioritization.” This work is part of the broader CIP Alignment Assemblies agenda, through which we are conducting a series of processes that connect public input to AI development and deployment decisions, with the goal of building an AI future directed toward people’s benefit.
For this specific process, OpenAI agreed to be a committed audience: to participate in our roundtable, and to consider and respond to the outcomes of this report. Over two weeks in June 2023, 1,000 demographically representative Americans participated through the AllOurIdeas wiki-survey platform. Participants ranked and submitted statements completing the sentence "When it comes to making AI safe for the public, I want to make sure..."
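For readers unfamiliar with wiki surveys: platforms such as AllOurIdeas typically present respondents with pairs of statements and ask which they prefer, and a statement’s overall rank is derived from how often it wins those pairwise matchups. The sketch below is a simplified, hypothetical illustration of that kind of aggregation (a plain win-rate score), not the platform’s actual scoring algorithm, and the example statements and data are invented.

```python
from collections import defaultdict

def score_statements(pairwise_votes):
    """Score statements by the share of pairwise matchups they won.

    `pairwise_votes` is a list of (winner, loser) tuples, one per vote,
    in the shape a wiki-survey export might plausibly take (assumption).
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in pairwise_votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    # Win rate: fraction of the matchups each statement appeared in that it won.
    return {s: wins[s] / appearances[s] for s in appearances}

# Hypothetical votes, for illustration only.
votes = [
    ("avoid overreliance", "limit regulation"),
    ("enforce use policies", "limit regulation"),
    ("avoid overreliance", "enforce use policies"),
]
ranking = sorted(score_statements(votes).items(), key=lambda kv: kv[1], reverse=True)
for statement, score in ranking:
    print(f"{score:.2f}  {statement}")
```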
Our findings were as follows:
People want regulation. They categorically reject the “Wild West” model of AI governance.
People are more concerned about good governance than specific risks.
People want to avoid overreliance on technology we do not understand.
People are worried about misuse of large language models.
The categories of oversight, understanding, and governance ranked highest, while concerns about overbearing regulation ranked lowest. The top-ranked statement was avoiding overreliance on AI systems that neither the public nor researchers fully understand. Misuse was the top-ranked category of risk, including the spread of misinformation, hate speech, and the enabling of violence, although good governance was still a higher priority than managing any single risk.
Six participants attended a follow-up roundtable with OpenAI to discuss these concerns. They worried that overreliance could, among other effects, degrade critical thinking and foster over-trust in unreliable systems, and they wanted more accessible information about how AI systems work so they could make informed decisions about using them.
Our recommendations, based on the findings, are:
1) Monitor post-deployment effects carefully.
2) Create evaluations for overreliance.
3) Show that acceptable use policies are being enforced.
4) Share data on real-world use cases.
5) Invest in literacy, accessibility, and communication.
6) Create and empower forums for public input into AI.
This process demonstrated the value of gathering broad public input to guide responsible AI innovation. We provide pathways for companies and governance bodies to implement these recommendations through transparency, better evaluations, and inclusive decision-making.
In this report, we first describe our Methodology, then detail our Key Findings (a summary of and the evidence for each), and then present our Key Recommendations based on those findings.