The Case for Collective Intelligence @ iWORD 2022

Divya Siddarth

This is a lightly edited transcript of a talk given at Bruce Schneier’s wonderful iWORD 2022 conference on Reimagining Democracy. Saffron’s talk from the same conference is here.

I’ll lay out a vision for collective intelligence as both a frame and a set of tools and institutions for governing transformative technology, and in particular, AI. But first, I want to register some points of agreement with many of the speakers here today. 

  • With Tim O’Reilly: I agree, AI is already very much a form of collective intelligence. And solving AI governance is the same as solving governance in general; it raises all the same problems of power, scale, and efficacy.

  • With Rob Reich: I agree, there is a core to democracy that is beyond preference aggregation. Democracy is about adjudicating conflict, and values, and fundamentally engaging in politics. That’s why I see our work as actually expanding the scope and possibilities of democracy.

  • With Ted Chiang: I agree, many of our fears about technology are fears about capitalism. And more broadly, about the social systems that we build technology within. I think some of the best things we can do for AI governance are entirely unrelated to AI. Let’s expand the social safety net and consider UBI, let’s build systems of social provision and ownership, and let’s expand public goods. 

So how do we put this all together, and what do we do about it? I confess to being young and optimistic, so I’m going to try and aim at some solutions. None of these are perfect, or satisfyingly structural, or perfectly post-capitalist. But I do think these are important problems to solve. And AI is likely to be powerful—it already is, and might be much more so—so it’s worth thinking about what shifts are possible.

I also want to say something else, which is possibly controversial given our day so far: I know that tech can’t solve all our problems. I came of political age at Stanford, in the cauldron of techno-solutionism, then swung all the way back the other way—I moved across the world, worked on social movements in India, and advocated against centralized data stores and the expansion of national tech platforms. What I’ve taken from all of that is this: technology can’t solve all our problems, but it can be part of the solution. And it must be, especially when the problem is a technological one.

So with that: our goal at CIP is to govern transformative technologies for collective benefit, using collective intelligence. This requires change up and down the collective intelligence stack, at different layers of structural centrality. I do think that each of these layers requires power-sharing, as Danielle put it. It’s not enough to build collective approaches just at the institutional layer; we also need them at the technological layer.

Layer One: Technological Architecture

First, the forms and architectures we choose to emphasize as the future of technology matter. And current approaches to AI—what my co-authors and I have in the past called ‘actually existing AI’—fundamentally misconstrue the nature of intelligence, which is not autonomous but social and relational.

In humans—the benchmark that so many AI systems use—our own ‘general intelligence’ is not so much atomistic and individual as social and cultural. Systems that aim to achieve something like emergent intelligence will depend on their capacity for interdependence, sociality, and collaboration.

So one thing we can do is try to build systems that optimize for collaboration, not competition. If AI systems are actually the product of human data, expertise, and knowledge—hundreds of thousands of books and Reddit posts, millions of lines of open-source code—how do we make sure their benefits are shared in a way commensurate with this?

And if the best way to correct errors in AI is through reinforcement learning from human feedback—and by the way, there are a lot of errors; AI can do a ton of amazing things, but only 80% of the time—then can we create blended ownership structures for models, building on work done in the commons-ownership and governance literature?

I think we can. This might look like building data coalitions that use privacy-preserving machine learning and new institutional structures to represent people’s preferences over their data. This will require delegation—perhaps liquid democracy structures—and it will require interoperability between data sources. But there are organizations already experimenting with this, like pooldata.
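
To make the delegation idea concrete, here is a minimal sketch of how a data coalition might tally member preferences over a proposed data use under liquid democracy. All names and structures here are hypothetical; a real system would need secure identity, revocable delegation, and privacy protections on top of this.

```python
# Minimal liquid-democracy sketch for a data coalition (illustrative only).
# Members either vote directly on a proposed data use ("allow"/"deny") or
# delegate their voting weight to another member; weight flows transitively.

from collections import Counter

def resolve_votes(direct_votes, delegations):
    """Tally votes, following delegation chains to a direct voter.

    direct_votes: dict mapping member -> "allow" or "deny"
    delegations:  dict mapping member -> member they delegate to
    Members whose chain hits a cycle or a dead end count as abstaining.
    """
    tally = Counter()
    for member in set(direct_votes) | set(delegations):
        current, seen = member, set()
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # cycle or dangling delegation: abstain
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            tally[direct_votes[current]] += 1
    return tally

# Bo delegates to Ana, and Cy delegates to Bo, so Ana carries three votes:
print(resolve_votes({"ana": "allow", "dee": "deny"}, {"bo": "ana", "cy": "bo"}))
# Counter({'allow': 3, 'deny': 1})
```

The key design choice is that delegation is per-decision and revocable, so members can borrow expertise without permanently ceding control over their data.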

Or it might look like ways of understanding the impacts of these models on the commons, and requiring remuneration for that—which can itself feed into public goods mechanisms, or support workers whose jobs have been disrupted. Small-scale pilots of this are being attempted, for example with Shutterstock.


Layer Two: Governance

The second layer is governance. There should be democratic participation in the governance of AI systems, which affect all of us. But what does that look like? David Robinson has a great book, Voices in the Code, where he talks about democratic input into kidney-matching algorithms. These are literally life-or-death decisions, and they use a collaborative multistakeholder process to get input on the algorithm: independent experts, third-party audits, deliberative panels, many, many forms of input. It’s an incredible case study of a really high-impact use case, and I highly recommend learning about it. This is the kind of work that is sometimes required and possible.

And also, it took ten years! Gillian’s point earlier reminded me of an Oscar Wilde quote I think about constantly: “The trouble with socialism is that it takes up too many evenings.” This process might work for a kidney-matching algorithm, and that gives me hope that such processes can work for really high-stakes use cases with really driven stakeholders. But not for everything. We need new ways to deal with, as Danielle said, the scale and complexity of what we’re trying to govern, and all the micro and macro decisions that entails.

And this is where technological advances can themselves be useful. Maybe those advances can feed into deliberative democracy platforms that both utilize and feed back into LLMs—for example, by using language capabilities to summarize views, to translate (not just across languages, but to reach groups with different priors), and to build consensus statements. Existing geographic boundaries of traditional democratic structures don’t match up well with AI platforms—in particular, national regulation may end up letting the United States, or perhaps Europe, fully determine directions. Broader input from supranational, cross-border entities is required. We’re working on a deliberative democracy platform pilot, and I know many folks here are thinking along these lines.
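
As a sketch of what one step of such a platform could look like: the pipeline below summarizes participant views and drafts a candidate consensus statement for the group to amend or vote on. The `complete` function is a placeholder for whatever LLM completion API a platform actually uses; nothing here is a production design.

```python
# Sketch of an LLM-assisted deliberation step: summarize participant views,
# then draft a candidate consensus statement for the group to amend or vote on.

def complete(prompt):
    """Placeholder for an LLM completion call; plug in a real provider here."""
    raise NotImplementedError

def summarize_views(views):
    numbered = "\n".join(f"{i + 1}. {v}" for i, v in enumerate(views))
    return complete(
        "Summarize the following participant views neutrally, preserving "
        "minority positions and points of disagreement:\n" + numbered
    )

def draft_consensus(summary):
    return complete(
        "Given this summary of a deliberation, draft one consensus statement "
        "most participants could endorse, flagging clauses likely to be "
        "contested:\n" + summary
    )

views = ["Model training data should be opt-in.",
         "Opt-in is impractical; compensate contributors instead.",
         "Either way, impacts should be audited by a third party."]
# draft = draft_consensus(summarize_views(views))  # goes back to the group
```

The model proposes language; the group retains decision authority, which is the point.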

This would also probably require third-party auditing or standards-setting structures that collect the data needed to determine the impacts of models—as Saffron mentioned, we’re working on that with foundation models, but there’s much more to do. We also need to change the corporate governance of AI companies: introducing capped returns or public-goods funding structures, enabling stakeholder input, or making them accountable for the risks introduced by their platforms, which we haven’t yet been able to do. And we need to build ways to expand labor’s share of income, which has been dropping—through augmentation or ownership.

Layer Three: Social Structures and Power

Okay, let’s move up the stack. Third, what about broader social institutions and processes? 

In the end, it is our social institutions that determine what we build and what impact it has. Those systems determine whether we build AI optimized for large-scale, top-down surveillance and labor automation, ushering in a world of precarious work and inequality, or AI that enables shared ownership of powerful models and shared adjudication of the risks those models involve. Most fundamentally, they determine whether this powerful technology will be targeted at solving human needs, or at any number of other goals, from maximizing profit to enabling authoritarianism.

So how do we build the institutions, and the institutional capacity, that ensure AI will not just affect, but benefit, all of humanity?

What this requires is what might be called scalable oversight of AI systems. I’ve always found the alignment framing from AI safety researchers interesting on this—the premise being that if we don’t align superintelligence to human values before we create it, the results could be catastrophic. But this is really hard, because who knows what ‘human values’ are.

And my response is often—well, yes, this is hard, but this is the point of structures like legal codes and contracts, of voting and elections, even of markets; we’ve built so many processes to engage in real conflict and consensus-building over what values are. These structures are pluralistic, not unitary; of course we’ll never determine the one utility function for humanity. But certainly across domains and tasks, we can and must elicit values, allow for productive conflict, and execute on the decisions that result from these structures in different ways.

And I think that pluralism is at the core, as Glen and others have emphasized—we need multiple models for each piece of governance.

I’ll just go through one example, which is funding. Right now, the major funder of AI is venture capital, to the tune of many billions of dollars. This incentivizes short-term gains and race dynamics, and it ends up building on hype cycles rather than exploring infrastructural solutions. We are exploring designs for mixed funding models: for example, democratic matching-fund mechanisms could be made available to for-profit as well as non-profit entities, building on the pioneering work of groups like Gitcoin and the Optimism Collective. Matched funding could be accompanied by governance rights, breaking open the decision-making processes that currently determine what gets built, and for whom. Scientific institutional innovations could be a model here too, like the focused-research-organization model, which spins up partly philanthropically funded, 5-10 year basic-science moonshot projects.
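
To make the matching mechanism concrete: under quadratic funding, the design Gitcoin popularized, a project’s ideal match is the square of the sum of the square roots of its individual contributions, minus what was contributed, so many small donors attract far more matching than one large donor. A minimal sketch, with matches scaled to fit a fixed pool:

```python
# Quadratic funding sketch: raw match = (sum of sqrt(contribution))^2 minus
# the total contributed; raw matches are then scaled to sum to the pool.

from math import sqrt

def quadratic_match(contributions, pool):
    """contributions: dict mapping project -> list of donation amounts."""
    raw = {
        project: sum(sqrt(c) for c in donations) ** 2 - sum(donations)
        for project, donations in contributions.items()
    }
    total = sum(raw.values())
    if total == 0:
        return {project: 0.0 for project in raw}
    return {project: pool * r / total for project, r in raw.items()}

# A hundred $1 donors out-match a single $100 donor:
donations = {"many_small": [1.0] * 100, "one_big": [100.0]}
print(quadratic_match(donations, pool=1000))
# {'many_small': 1000.0, 'one_big': 0.0}
```

Breadth of support, not the size of any one check, drives the match, which is what makes the mechanism democratic.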

Look at the Exit to Community movement, where we work on supporting tech startups (among other organizations) in the transition to cooperative, community ownership. Or benefit corporations and purpose trusts. There are so many ways to experiment with solutions that include technological, governance, and social changes to govern these technologies.

I want to end with one thing: these are questions of power. To build collectively beneficial technologies and collectively intelligent institutions, what is required is to shift power towards those outcomes, and away from where it currently lies.

That means that we need to do two things. 

The first is experimentation with systems, technologies, mechanisms, and institutions that can enable collective decisions for the shared good.

And then the second is building the capacity to institute those systems, which comes back to questions of countervailing power: coalition-building to stand against the concentrating power of capital and monopoly that is ever-present.

One question that Saffron and I talk about a lot for CIP is: what would it look like to truly accelerate technological progress in the collective interest? To shift the behemoth of technological experimentation, resourcing, funding, growth in the direction of collective flourishing? 

And at a very simple level, our answer is often: first, enabling systems to allow individuals and communities to define what ‘good’ means, in a plurality of contexts. And second, building systems to try to make those ‘good’ things happen, as much as possible. 

Of course, it’s complicated. But it’s also simple, and there are clear paths forward. We look forward to exploring them with all of you.
