Building the Field of Democratic AI: Our Roadmap Launch and Webinar

A Roadmap Towards Democratic AI

You’ve heard the phrase: ideas are cheap, execution is hard. Building a better future is possible, and there’s no shortage of ideas for how technology will help us get there — but making that a reality will take a dedicated and collective effort from many of us.  

Many existing efforts are working toward democratic AI and collective intelligence, but they tend to focus either on the long-term future or on addressing immediate harms. Both are necessary, yet there is a clear gap between them. If we're going to build this future, we'll need to bridge that gap through near-term, tangible projects that pull the horizon of that future a bit closer.

As a key effort to construct this bridge, our Roadmap to Democratic AI offers a set of priorities and specific experiments the ecosystem can run this year to start actively pursuing a shared goal of democratic AI. We agree we need fundamental change, and that starts with concrete action. 

Trilemma

In the field of artificial intelligence (and more broadly, transformative technology), institutions and actors tend to fall into three camps: safety, progress, or participation. Regulators often focus on safety, developers on progress, and civil society on participation. Implicitly, these are viewed in terms of tradeoffs, and the assumption is that it's nearly impossible to have all three.

Without safety, we expose society to adverse risks; without progress, we leave opportunities to improve the future on the table; without participation, we risk the consolidation of power and allow a small number of people to set the direction of the future. 

This is the trilemma of artificial intelligence, and to set us on the path toward a more democratic AI future, we need to be able to build and optimize for all three. 

Building an Ecosystem

This roadmap endeavors not only to list experiments but also to build the field of democratic AI, similar to past field-building efforts around AI ethics and AI safety. Building a successful R&D ecosystem requires not just shared values but shared projects and priors.

The past few years have seen a glut of calls for democratizing AI, for better work in the public interest, and for more input. What is needed is a way to act on those calls, and a set of experiments and collaborations that make direct progress. Rather than entrenching in the camps of safety, progress, or participation, where discussions can often devolve into unproductive misrepresentation, our goal is to start with a specific set of things we can all do to push toward democratic AI.

Our goal is for the roadmap to catalyze a broader movement committed to building democratic AI. The range of opinions and perspectives from our panelists is a healthy sign of an emerging field, one that we're committed to stewarding. Below are some standout quotes from each of them; you can also watch the full video of the webinar.

Manu Chopra, CEO, Karya 

“Our hope is to truly put our communities at the center of AI. AI represents a 100X opportunity. And much like the internet, I'm afraid that it'll only deepen the gaps that already exist in our society.”


“I am completely confident that AI will change your life and my life. I'm not so confident it'll change the life of my communities for the better.”


“The fact that AI models perform so poorly in most languages from the global South is a failure of the market. The world needs more nonprofits to tackle market failures, to push the market wherever necessary, and to ensure that our progress doesn't come at the cost of exclusion of other people.”


Amba Kak, Executive Director, AI Now Institute

“The evidence from the last 10, if not 15 years, suggests that while the benefits accrue to a handful of corporate actors, current AI often perpetuates a long cycle of both racialized and geographically very unequal disenfranchisement of groups that reap few of the benefits, but bear most of the harm.”


“What we're still missing in this debate is a public interest vision. So AI is currently being pushed as a solution in search of a problem or in search of its killer app across education, health care, government's benefits allocation, often assuming that investments in or procurement of AI should take precedence over other approaches.”


Bruce Schneier, internationally renowned security technologist

“There are two kinds of trust, interpersonal trust and social trust, and we regularly confuse them. That confusion will increase with AI. We're going to make a fundamental category error. Interpersonal trust is intimate, but it doesn't scale. Social trust scales, but is largely dependent on governments and technology, and we regularly confuse them.”


“If we want to use AI for personal things, especially for democracy, we need trustworthy AI, and the market will not provide this on its own. So we need public AI models, systems built by academia, nonprofit groups, government, not based on a profit motive.”


“We're never going to make AI into our friends, but we can make them into trustworthy services, make them into agents and not double agents. This will only happen if the government mandates it. And it is well within the government's power to do this.”


Teddy Lee, Product, Collective Alignment at OpenAI

“My team, collective alignment, is a team that is working to answer an … important question of how do we represent and incorporate human values into the development of AI? In some ways, the super alignment team is trying to figure out how we align super intelligence and us on the collective alignment team are trying to figure out what do we align that super intelligence to?”


“A lot of people love the idea of democratizing AI, but it also means different things to different people. Democratizing AI can include the use of AI, it can include the development of AI, it can include the distribution of the profits from those AI technologies, and it can, of course, also mean AI governance.”


“In many ways, if we don't figure out AI governance properly, none of the other things are really that sticky. Things can always change down the line unless you have a really strong governance structure in place.”


Yoshua Bengio, Professor, Université de Montréal, Turing Award Winner and founder of Mila

“We need to think about AI not just as it is now, but what it will be in two years, five years, 10 years from now, because the capabilities are on the rise and the increases in capabilities is going to give a lot of power to whoever controls these systems.”


“Constitutional power in private hands can be in tension with what democracy is about, because democracy is about sharing power. So how do we salvage democracy as we go towards systems that eventually are as smart as us or eventually smarter than us? These are big, difficult questions.”


The Road Ahead

Because we’re focused on what can get done today, we’re thinking beyond the differences that mark the various parties in the AI world. We know that we can set good governance frameworks and build the technical and social foundations for democratic AI.

We’re currently working on several of these experiments, and invite you to build the field of democratic AI with us. Please reach out to us at hi@cip.org if you have any ideas that you would like to test out with us.
