Shared Code: Democratizing AI Companies



Introduction: AI for Everyone

The next decade will likely determine whether artificial intelligence becomes a technology that primarily consolidates power and wealth, or one that broadly benefits humanity. This outcome depends not just on technical advances, but on how AI companies themselves are structured and governed. The success of AI, more than that of past technologies, requires social trust and acceptance. While companies race to improve the technology and expand its adoption, many are discovering that conventional corporate models are ill-equipped to handle the unique challenges and responsibilities that come with developing potentially transformative AI systems.

In this note we offer AI companies, specifically those that train and deploy frontier models, a set of viable and achievable options for democratizing the governance structures of their organizations. We provide recommendations that companies could begin to undertake immediately, many of which have been previously outlined in both the Collective Intelligence Project’s Roadmap to a Democratic AI and the Media Economies Design Lab’s Exit to Community (E2C) framework.

In our discussions with leaders at various AI companies and civil society organizations, we’ve noticed growing concern and discomfort over the power that artificial intelligence vests in a small number of individuals. Companies have historically proven ill-equipped to deal with the negative externalities and risks of their technologies; letting technology companies self-regulate increases the risk of significant market failures and leaves many potential positive uses on the table.

How these labs and organizations are governed is highly relevant to the public, and their governance should be designed with the public interest in mind. But designing a governance structure that maintains its duty to the public faces several barriers.

First, it has always been hard to know what exactly is in the public interest; AI experts actively debate whether rapid innovation or a cautious approach serves it better, and the answers aren’t clear to many of us. Second, even if we did know the interests of the public, it’s not obvious how to translate those interests into functional constraints and decision-making processes. Third, it is a time-tested habit for corporations to drift from a social mission to financial self-interest; what mechanisms are robust enough to prevent that from happening yet again?

Anthropic’s Long-Term Benefit Trust is a promising early step, as it indicates a willingness to have a clear structure of corporate oversight that prioritizes the benefit to humanity over shareholder returns. But that is only the beginning of what is possible, and how well it works to lock in those values over the long term, particularly in the face of shareholder pressure, is still unproven.

Continuing to iterate on accountable governance structures could facilitate greater public trust and adoption of AI technologies, especially at a time when the tech industry has faced growing skepticism and scrutiny. Democratization goes beyond just open-sourcing the use of AI technology; it involves a deeper consideration of market incentives, infrastructure deployment, and redistribution. It means establishing structures to ensure that the benefits of AI are broadly shared. In the same way that joint-stock corporations unlocked latent social and economic potential amidst technological and social changes, other forms of stakeholder governance present possibilities for unlocking the social and economic potential of AI. 

Drawing on our work with earlier legacies of shared ownership and governance and our involvement in the AI industry, we believe there are significant opportunities to develop dynamic, accountable ownership designs for AI companies. Our recommendations fall along a few major points.

  • Build with people: AI developers are highly engaged and mission-driven, and their users are often passionate about the tools; companies should experiment with models that vest decision-making authority in stakeholders at all levels of the process.

  • Build with purpose: Use company structure and external entities to lock in the values of the organization.

  • Build with the commons: Develop multiple levels of governance and accountability across a composable stack of technical systems and legal entities.

What do we mean by democratizing AI?

While the calls for democratic AI have gained support and momentum in the past year, we’ve found that the term democratization has often been left undefined, and that lack of clarity has been an impediment to meaningful progress. As we’ve written before, rather than a singular definition of democratization, we can frame it as four types of democratization: (1) use, (2) development, (3) benefits, and (4) governance.

  1. Democratization of Use:

    Expanding access to and use of AI technologies through open-source models and free access.

  2. Democratization of Development:

    Leveraging public input and co-design of these systems so that they better reflect the preferences and values of different communities.

  3. Democratization of Benefits:

    New institutional and investment forms to experiment with better ways of allocating risk and reward.

  4. Democratization of Governance:

    Co-ownership over models to ensure expanded public input and decision-making capacity over trade-offs between safety and innovation.

With these in mind, we draw on organizational democratization efforts that have worked in the past to propose recommendations for AI companies in particular. We have also sought to identify recommendations that build on the existing structures and strengths of AI companies.

Build with People: Integrating Diverse Stakeholders

Employee Empowerment

Employees of frontier AI labs are usually high-agency, high-conviction, and driven by a mission to build technology that can transform the world. Currently, the most common organizational structure for tech companies involves granting employees equity that vests over time as part of their compensation packages. These grants are usually offered as incentives for qualified talent to join high-risk, high-reward companies, with employees receiving significant financial windfalls in the event of a startup exit.

However, these plans typically offer no formal pathway for employees to give direct input into company decisions. There are rich histories of companies using different forms of worker ownership and governance that give employees more power over the function and management of organizations.

Two of the most prominent AI companies, OpenAI and Anthropic, currently have unique structures that could allow intermediary worker-run committees to have more say over decisions usually left to management. This kind of input exposes both AI systems and their governance to a fuller range of insight from outside the C-suite. Microsoft has already celebrated the outcomes of its collaborations with German corporate clients that employ works councils, which shows the promise of involving organized workers in efficient, ethical AI alignment. The German model of co-determination, in which workers have a literal seat at the table, supports shared prosperity and inclusive growth without sacrificing progress and innovation. It is also a bulwark against short-termism and quarterly capitalism: research shows that it leads to increased investment in capital assets, ensuring a focus on business investment, long-term growth, and innovation, rather than stock buybacks and shareholder returns. To best reflect the breadth of experience and expertise in organizations, a system for worker voice should include as broad an array of workers as possible, from elite engineers to QA and support staff.

This kind of practice can be built into the ownership structure. For instance, when the biotech firm Ginkgo Bioworks went public in 2021, it established a class of stock for employees with ten times the voting rights of common stock. Like frontier AI companies, Ginkgo Bioworks is involved in risky R&D. The dual-class stock structure means that, over time, employees will likely gain majority control over the company’s board. Additionally, the company invested in fostering a culture of responsibility among employees to ensure they are up to the task of using their power wisely.
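
To make the mechanics concrete, the sketch below works through the arithmetic of a dual-class structure using hypothetical share counts, not Ginkgo’s actual capitalization: with ten votes per employee share, employees holding just over 9% of shares outstanding already control a majority of the votes.

```python
# A minimal sketch of how dual-class stock shifts voting control.
# The share counts below are hypothetical, not Ginkgo's actual figures.

def employee_voting_share(employee_shares: float, common_shares: float,
                          votes_per_employee_share: int = 10) -> float:
    """Fraction of total votes held by employees under a dual-class structure."""
    employee_votes = employee_shares * votes_per_employee_share
    total_votes = employee_votes + common_shares  # common stock: 1 vote per share
    return employee_votes / total_votes

# Employees holding 10% of shares control ~53% of votes; at 15%, ~64%.
print(f"{employee_voting_share(10, 90):.0%}")  # 53%
print(f"{employee_voting_share(15, 85):.0%}")  # 64%
```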

There are significant risks in over-indexing on employee empowerment, as employees do not always have the information necessary to support sound decision-making and cannot necessarily be relied on to act in the public interest, particularly when there are significant financial rewards at stake for a relatively small number of elite employees. But AI workers do have a unique perspective and can play a critical role in ensuring that AI development remains responsible.

Public Participation

While employees do represent broader interests and values than those of executive leadership and major shareholders, organizations working on technology of society-scale importance will run into decisions not well suited to just a small group of employees or domain experts. Bodies known as “assemblies,” “councils,” and “juries” are becoming increasingly widespread in governance systems, especially as people grow dissatisfied with representative legislatures alone. Examples such as Belgium’s permanent citizens’ council demonstrate that such assemblies can be both effective and popular.

When adjudicating and deliberating possible routes, executives and boards should solicit more diverse information to reach better-informed decisions, and tighten the feedback loop between social researchers, policymakers, and AI labs. Aside from building public trust and legitimacy, this provides a means for transparent and independent oversight as well as broader democratic input on whether and how to pursue AI development in general. What the market values often diverges from what is valuable to society, and a formalized, legitimate form of public participation can be an active countervailing presence, ensuring a more robust deliberation between social and market returns.

A critical hurdle to the legitimacy of the public-benefit mission of companies such as OpenAI and Anthropic is how best to represent the public in consequential decisions. Fortunately, a set of best practices is emerging in large-scale participatory processes.

A stakeholder council might consist of a rotating cast of experts and members of the public. Its decisions may or may not be binding. This could look like a more democratic successor to the Meta Oversight Board, whose members currently consist of appointed experts. Companies might defer to these councils the most contentious, polarizing issues that otherwise risk distracting from the companies’ core missions. Councils could themselves utilize a range of AI-enabled collective intelligence mechanisms that gather input from people around the world or a clearly defined stakeholder population, and deliberate to generate proposals for the board.

There are several ways such a stakeholder council might interact with a given AI company. The council could hold a company board seat, giving its findings a voice in the highest-level decisions at the company. This would be an extension of the co-determination models, with board seats for worker representatives, that have found great success in Germany and other countries. Alternatively, like Meta’s Oversight Board, the council could have a more contractual relationship with the company and its operations. In any case, a council should have meaningful independence from company leadership.

Either in conjunction with the stakeholder council or separately, AI labs could also establish evaluation juries as part of internal governance. For contested social issues around AI, the line between acceptable and unacceptable impacts can be difficult even for experts to draw. Evaluation juries would be tasked with adjudicating when they think a line has been crossed, and would be composed of domain experts in relevant areas who are asked to provide a severity score for particular evaluation results.
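
As a minimal sketch of how such a jury’s scores might be aggregated into an escalation decision (the 1–5 scale, median rule, and threshold here are all hypothetical choices, not an established protocol):

```python
# A minimal sketch of aggregating jury severity scores; the 1-5 scale,
# median rule, and escalation threshold are hypothetical choices.
from statistics import median

def jury_verdict(scores: list[int], threshold: float = 3.5) -> dict:
    """Aggregate per-juror severity scores (1 = benign, 5 = severe).

    The median, unlike the mean, keeps a single outlier juror from
    dominating the verdict in either direction.
    """
    if not all(1 <= s <= 5 for s in scores):
        raise ValueError("scores must be on the 1-5 severity scale")
    m = median(scores)
    return {"median_severity": m, "escalate": m >= threshold}

# A jury of five domain experts reviews one evaluation result.
print(jury_verdict([2, 4, 4, 5, 3]))
# {'median_severity': 4, 'escalate': True}
```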

Recommendations:

  • Develop pathways for transitioning current forms of governance toward significant voice for workers and users.

  • Cultivate a culture of responsibility and care among workers.

  • Incorporate participation through diverse mechanisms such as juries and councils.

Build with Purpose: Locking in Values

A major risk factor in the landscape of AI companies is their institutional “default container”: the venture-capital-funded startup. This model works well for asset-light, high-growth entities, but carries risk when the default outcome is a traditional exit, whether an IPO or a corporate acquisition. Organizations tend to do what they are designed to do, and in the case of corporations and startups, the pull of profit-maximizing incentives will often force them to drift away from initial public-benefit missions. Conventional startup models lack community and social oversight and often produce a monocultural landscape that creates a sector-wide vulnerability to collapse due to common failure modes. To counteract these tendencies, AI companies can take steps to lock values into their corporate governance through intentional ownership models.

Already, AI companies have begun to do this. OpenAI has a non-profit parent organization, and Anthropic is a public-benefit corporation (PBC) with a long-term purpose trust that selects part of the company’s board. These arrangements build on a long legacy of purpose-centered companies, especially in Europe, such as Novo Nordisk and Bosch, which have encompassed their successful technology businesses within non-profit organizations. But recent tribulations at OpenAI suggest that there is still learning to do. And the experience of Etsy, which relinquished its PBC trajectory under investor pressure, suggests that the PBC may not be a sufficient mechanism for locking in values in the long term.

Other mechanisms are available for making a mission lock stronger and also clearer, so as not to introduce unnecessary confusion into governance. A perpetual purpose trust (PPT) could serve as a parent entity for a company, and it seems to be a more powerful tool for locking in values than a non-profit corporation; this is, for instance, how Patagonia recently transitioned into a company built around the bottom line of environmental sustainability. Alternatively, a values-centered external organization can hold a golden share in a company: a share that carries the right to pull the emergency brake in the case of specific kinds of mission violations.

When a trust holds power like this, company ownership moves from humans to a perpetual entity that may be insulated from short-term incentive structures. Under trust structures, the mission matters more than the interests of any particular stakeholder group. But a trust can nevertheless instill a “sense of ownership” both through governance (e.g., employee seats on the board) and economic flows (e.g., dedicated employee profit-sharing built into the purpose statement).

There are some challenges and potential tradeoffs to adopting PPTs in the context of AI companies. First, taking on debt or equity in a PPT structure usually requires positive cash flow, and frontier AI labs generally have yet to break even, given the massive investments required for training and access to compute. Second, if a trust isn’t instituted at an early enough stage, companies may be locked into existing governance structures and unable to change course without upsetting powerful stakeholders.

It will be critical for any company to figure out the right balance of independence, accountability, and oversight in the relationship between the trust and the operating company. This complexity may also create barriers to accessing needed capital, as investors are still generally wary of investing in this type of structure. For an established company like Anthropic this doesn’t pose as much of a problem, but for an upstart company it could prove burdensome.

Recommendations:

  • Identify a clear set of non-negotiable values and establish a clear, robust structure for protecting them.

  • Alongside the more dynamic stakeholder-led governance described above, identify an appropriate structure that can ensure hard-to-change mission locks that don’t otherwise interfere with efficient governance.

Build with the Commons: Composable Infrastructure

AI is poised to become a foundational digital infrastructure underpinning nearly all sectors of society. On its surface, AI exhibits many characteristics that economists associate with natural monopolies well-suited for public utility models: immense upfront costs, non-rivalrous access, and vast potential to create downstream value. However, emerging research reveals that such core infrastructure creates societal value that transcends classical economic framings, and as such requires novel organizational forms to best capture and redistribute its benefits.

To the extent that it becomes enmeshed in our lives, AI could be seen as digital public infrastructure (DPI) in its own right, in which case its governance is vital to the health of our core civic institutions and to human self-determination. Treating AI as merely another enterprise technology ignores the scale of its impacts. If AI is a transformative technology, we need robust ways to steward its sociotechnical integration. Failing to do so risks enshrining an anti-democratic AI regime misaligned with the principles of pluralism.

A key risk in the AI industry is power consolidation, which skews both incentives and market structure, in turn creating a more fragile ecosystem. AI companies could take a proactive approach to establishing new standards and protocols across the AI stack that allow for better alignment of principles across the ecosystem and guarantee multistakeholder governance. An open and diverse ecosystem is more robust to market collapse.

For instance, the Visa payment network was originally set up as a member-owned cooperative for financial transactions among regional banks. Founder Dee Hock recognized that the credit card system would not work if owned by a single institution, and he persuaded Bank of America to spin its BankAmericard product out as a separate company, owned by the smaller banks that used it. Although the company later transitioned to investor ownership, Visa’s early life as a cooperative enabled the credit-card industry to spread through the trust engendered by shared ownership. Similarly, starting in the mid-19th century, the Associated Press formed as a cooperative among local newspapers. Thanks to the trust across ideological lines that the cooperative structure enables, the AP remains a widely trusted source for news and even electoral results.

The development of open-source technologies has shown the power of organizational stacks that are composable and dynamic. The Linux operating system that much of the internet runs on depends on diverse organizational structures that interoperate: a “benevolent dictator for life” oversees the Linux kernel, the constitutional democracy of Debian provides critical maintenance work, and startup-style companies like Canonical, the maker of Ubuntu, adapt those products for end users. More recently, the emerging social media platform Bluesky has developed an ingenious strategy of “composable moderation” designed to prevent moderation decisions from being monopolized at any one bottleneck.
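
Bluesky’s actual labeling protocol is more involved, but the sketch below illustrates the general shape of composable moderation with hypothetical types and rules: independent labelers publish judgments, and each user composes them according to their own subscriptions and preferences, so no single service becomes a bottleneck.

```python
# A minimal sketch of composable moderation, with hypothetical types and
# policies; Bluesky's actual labeling protocol (AT Protocol) is more involved.
from dataclasses import dataclass

@dataclass
class Label:
    labeler: str   # independent service issuing the judgment
    post_id: str
    value: str     # e.g., "spam", "graphic", "misleading"

def moderate(post_id: str, labels: list[Label],
             subscriptions: set[str], policy: dict[str, str]) -> str:
    """Compose labels from subscribed labelers into a per-user decision.

    Each user chooses which labelers to trust (subscriptions) and what to
    do with each label value (policy: "hide", "warn", or "show"). Adding a
    new labeler to the ecosystem never requires anyone else's consent.
    """
    actions = {policy.get(l.value, "show")
               for l in labels
               if l.post_id == post_id and l.labeler in subscriptions}
    for action in ("hide", "warn"):   # most restrictive action wins
        if action in actions:
            return action
    return "show"

labels = [Label("labeler.one", "post1", "spam"),
          Label("labeler.two", "post1", "graphic")]
# Two users, same labels, different outcomes: composition is per subscriber.
print(moderate("post1", labels, {"labeler.one"}, {"spam": "hide"}))     # hide
print(moderate("post1", labels, {"labeler.two"}, {"graphic": "warn"}))  # warn
```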

The AI industry can add another chapter to this story by exploring composability and distributed ownership across its own technical and organizational stack.

Recommendations:

  • Explore ways of dividing the AI stack into composable parts and distributing ownership and control across them in diverse ways.

  • Identify opportunities for cooperative models to build shared infrastructures.

Conclusion

AI companies have compelling reasons to adopt alternative governance structures. These include the need for long-term strategic planning, which traditional profit-driven models may hinder; the realization that the success of AI requires social trust and acceptance; and the recognition that AI’s unprecedented potential power and unpredictability demand more robust oversight. By exploring innovative organizational designs, such as purpose trusts or stakeholder councils, AI companies can better align their operations with long-term visions and societal interests, while establishing safeguards commensurate with AI’s transformative potential.

The venture capital-funded startup model, while effective for many tech companies, may not be the ideal container for organizations developing such transformative and high-stakes technologies. We've seen how the default incentives of these structures can sometimes pull companies away from their initial public-benefit missions.

The examples we've explored—such as Gingko Bioworks' employee voting stock and Visa's cooperative origins—demonstrate that it's possible to create dynamic, successful companies that prioritize long-term value and stakeholder—rather than shareholder—interests. 

Moving forward, we encourage AI companies to view these recommendations not as prescriptive solutions, but as starting points for experimentation and collaboration. The path to more democratic, accountable AI development will likely involve a combination of approaches, tailored to each organization's unique context and goals.

The AI industry has already shown remarkable creativity in its technical innovations, as well as an openness to comparable innovations in ownership and governance. Now is the time to develop organizational designs that are as sophisticated and forward-thinking as the technology itself.


Nathan Schneider is an assistant professor of media studies at the University of Colorado Boulder, where he leads the Media Economies Design Lab. His most recent book is Governable Spaces: Democratic Design for Online Life.

Divya Siddarth is the Co-founder and Executive Director of the Collective Intelligence Project.

Joal Stein is the Communications and Operations Director of the Collective Intelligence Project.

Special thanks to Eleanor Shearer and Peter Koehler for their feedback on this piece.
