Share your views on Generative AI

On the occasion of the second-ever Summit4Democracy, the Collective Intelligence Project and Audrey Tang invite you to share and discuss your thoughts on the impact of generative AI.

Your input and insights are very important: we will use the outcomes of this conversation to map the debate and shape upcoming, broad-based discussions on AI, so that collective views can feed into the concrete decisions of AI labs and policymakers. We will also trial the use of language models to better understand the outputs of this conversation.

Please add your views as statements and vote on others’ statements below.

Tip: If you almost agree with someone’s statement but not quite, please rewrite it so that you agree with it and submit that.

To see the group’s beliefs, check out the dynamically-updating report of this conversation.

  • Generative AI refers to AI tools that generate new outputs such as text, images, code, and audio; examples include ChatGPT, Midjourney, and Copilot. ChatGPT, released by the AI company OpenAI, is the fastest-growing consumer application in history. Sectors like education, programming, marketing, and writing have already been changed by these technologies. Given this trajectory, generative AI has the potential to significantly impact society and democracy in many ways; you can find more resources on this in the “Background context” section below.

  • In our society, many things are gated on the production of language (e.g., application forms, comments, homework essays). How might this change?

    How might generative AI impact what we think of as knowledge or truth?

    How could we steer uses of generative AI in collectively beneficial directions?

    How might generative AI change how we relate to each other?

    How can generative AI improve democracy and our institutions? 

  • We are using Pol.is, an open-source, real-time system for gathering, analyzing, and understanding what large groups of people think in their own words. It works by applying statistics and machine learning to how people vote on one another’s statements (a simplified illustration of this kind of analysis appears after this list).

    Your votes are gathered entirely anonymously, but when you return to this page we will remember your randomly generated visitor ID, so you won’t be asked to vote again on statements you’ve already seen.

    When you submit a statement, it will show up in others’ interfaces for them to vote on.

    We are using this to create a preliminary map of the opinion space (you can see an initial report here). This is not intended to be, and cannot be, fully representative.
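
    For the technically curious, here is a simplified sketch (in Python) of the kind of analysis such a system can perform: participants’ agree/disagree/pass votes form a matrix, which can be reduced to a low-dimensional map and grouped into opinion clusters. The data, the parameter choices, and the use of PCA and k-means below are illustrative assumptions, not Pol.is’s actual implementation.

        # Illustrative sketch only; not Pol.is's actual code.
        # Rows are participants, columns are statements:
        # 1 = agree, -1 = disagree, 0 = pass / not yet seen (made-up data).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        votes = np.array([
            [ 1,  1, -1,  0],
            [ 1,  1, -1, -1],
            [-1, -1,  1,  1],
            [-1,  0,  1,  1],
            [ 1, -1,  0,  1],
        ])

        # Project participants into two dimensions so similar voters sit close together.
        coords = PCA(n_components=2).fit_transform(votes)

        # Group participants into two opinion clusters (cluster labels are arbitrary).
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(coords)
        print(labels)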

Thank you for helping to steer a vital conversation about AI!

Background context

Try out generative AI:

To use a generative model to create text, try ChatGPT or Poe. To create images, try Stable Diffusion or Midjourney.

For more context on generative AI, see the resources below:

We used language models to generate a number of the summaries below!

On the Opportunities and Risks of Foundation Models - Stanford Center for Research on Foundation Models (CRFM) 

Summary: AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. 

Generative AI and Democracy - Collective Intelligence Project 

ChatGPT summary: The article discusses the growing use of generative AI, which are machine learning algorithms capable of learning from content like text, audio, video, images, and code to output new and often original content. While this technology has many useful applications, it also raises concerns about the quality and veracity of the generated content, as well as the impact on the digital and knowledge commons. The article argues that the concentration of power around AI could be undemocratic and that it could potentially degrade self-determination and democracy by undermining good epistemic norms, education, and the quality of the knowledge commons. The author suggests that we need to find ways to use this powerful technology to enhance critical thinking, democracy, and the commons while also injecting pluralism into the way it's designed.

We Don’t Need to Reinvent our Democracy to Save it from AI - Bruce Schneier

Claude summary: The article warns that AI systems capable of generating synthetic yet plausible text, images, and other media could be misused to manipulate public opinion and policymaking. While regulation and detection of AI-generated content are important, the article argues they are insufficient and potentially counterproductive responses given the pace of AI advancement. Instead, the greater focus should be on reducing the influence of powerful interests, increasing transparency, and boosting civic engagement. Stronger democratic governance and inclusion, not just constraints on technology, are needed to address the threats posed by AI 'hacking' and ensure that human voices are heard.

Generative AI: A Creative New World - Sequoia Capital US/Europe 

ChatGPT summary: The article discusses the potential of generative AI, a category of artificial intelligence that is focused on creating something new rather than analyzing something that already exists. It has the potential to revolutionize various industries that require humans to create original work, including social media, gaming, advertising, architecture, coding, graphic design, product design, law, marketing, and sales. Generative AI can make workers at least 10% more efficient and/or creative, thereby generating trillions of dollars of economic value. The article outlines the history of AI and explains how better models, more data, and more computing power have enabled generative AI to become a reality. As the platform layer solidifies, models continue to get better, faster, and cheaper, and model access trends towards free and open source, the application layer is ripe for an explosion of creativity. The article predicts that just as the inflection point of mobile created a market opening for a handful of killer apps a decade ago, killer apps will emerge for generative AI, and the race is on.

ChatGPT and large language model bias - CBS News

Claude summary: The article discusses ChatGPT, an AI chatbot trained on internet data that can produce human-like text. While ChatGPT's capabilities impress, experts warn its outputs may reflect biases in its training data and lack diversity. The massive datasets ChatGPT uses could exclude marginalized groups and encode prejudices. Although tools can detect and remove harmful outputs, experts argue AI systems need oversight and broader reform to address risks. The debate around bias includes claims that ChatGPT exhibits ideological bias, which its maker aims to reduce through guidelines and customization.

Planning for AGI and beyond - OpenAI

ChatGPT summary: The article discusses the mission of OpenAI, which is to ensure that artificial general intelligence (AGI) benefits all of humanity. AGI has the potential to elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge. However, AGI also comes with serious risks of misuse, accidents, and societal disruption. The article highlights the importance of a gradual transition to a world with AGI, with society gaining experience operating such systems in the real world. Society and AI need to co-evolve, with people collectively figuring out what they want while the stakes are relatively low; the optimal decisions will depend on the path the technology takes. As AI systems get closer to AGI, OpenAI is becoming increasingly cautious in the creation and deployment of its models. The company believes that the risks of AGI are existential and require much more caution than society usually applies to new technologies.