Global Dialogues Challenge

We asked the world what kind of future they want from AI.

In 2025, we launched a challenge to see what people around the world would build with our data.

The Collective Intelligence Project maintains Global Dialogues, a platform that brings the world's voices into the development and governance of AI.

We’ve built a global infrastructure to collect, process, and amplify diverse public input, and our dataset will always be open-source for anyone to build on.

Global Dialogues

We are collecting responses from thousands of people in over 70 countries, each with their own distinct languages, religions, and cultural backgrounds.

CIP initiated Global Dialogues to understand what the world thinks about AI, and uses the resulting data to inform the AI ecosystem across development, evaluation, benchmarking, and policy-making.

The Challenge

We opened up our global dataset and invited people from around the world to join the Global Dialogues Challenge.

We had more than 500 participants from across 30 countries; they used the data to create stories, research papers, games, films, poems, and new benchmarks to help guide the future of AI.

The judges scored each submission on storytelling, creativity, and applicability, and selected an overall winner as well as a winner in each category.

The Winners

OVERALL WINNER

  • Teaching Our Kids to Build AI That Actually Works for Everyone

    Picture this: your 12-year-old comes home from school excited about AI. What did they learn? How to code a chatbot? How neural networks function?

    Here's what they probably didn't learn: that a kid in rural India sees AI medical diagnosis as a lifeline when the nearest doctor is hours away. That their classmate whose grandparents survived authoritarian surveillance has very different feelings about facial recognition. That the future they're building together depends on understanding these differences.

    We're teaching our children to build the future of AI, but we're only giving them half the tools. They're learning the "how" brilliantly: coding, algorithms, technical skills. But we're forgetting the "why" and "for whom."

    My project, the AI Cultural Intelligence Agency, transforms global AI dialogue data into a detective game where young minds discover these differences not as conflicts to resolve, but as wisdom to integrate. Children don't just learn what people think about AI—they learn why cultures dream and nightmare so differently about our technological future.

    We're at a crossroads. We can continue teaching AI through technical tutorials that create brilliant engineers who build systems for people just like themselves. Or we can nurture cultural detectives who understand that the same technology lands differently depending on the cultural soil it grows in.

    The children playing detective today become the leaders building inclusive AI tomorrow. This isn't about making technology education fun; it's about preventing a future where brilliant minds create systems that inadvertently harm the communities they never learned to understand.

    Today's kids will decide whether AI helps or hurts real people. They'll design the systems, write the policies, make the choices that affect billions of lives. Let's make sure they're ready to make it work for everyone, not just for people who look and think like them.

    The future is too important to leave to brilliant minds with narrow vision. Let's give our kids both the technical skills and the human wisdom they'll need.

STORYTELLING WINNER

  • A global, cross-cultural exploration of how people want to interact with Artificial Intelligence: not just through screens, but through presence, care, rhythm, and respect.

CREATIVITY WINNER

  • If Words Were Enough is a poetic research tool that explores how AI handles language shaped by memory, emotion, and cultural context. Built on data indicating people's need to preserve cultural nuance, it invites users to co-create limericks with AI using untranslatable words and phrases often misunderstood by models. Each five-line poem becomes a trace of what matters -- part benchmark, part archive, part miniature policy -- revealing what AI systems hear, and what they still miss. The project asks not just how AI understands us, but what it erases in the process.

APPLICABILITY WINNER

  • The AI Social Contract transforms fragmented global conversations about AI into actionable governance insights through the Atlas of AI Sentiment—an interactive tool that maps how 40+ countries perceive artificial intelligence. Using a novel methodology to analyze thousands of survey responses, the project reveals three key patterns: developed nations focus on AI ethics while developing nations prioritize economic concerns, technologically advanced countries have more critical AI debates, and low institutional trust amplifies AI skepticism regardless of technical merit.

HONORABLE MENTIONS

  • Aryan Goenka and Alice Benoit

    AI Threat “Meta-Perception” Benchmark

  • Amy Tang, Evelyn Tsoi, Rishi Gupta, Leila Yokoyama, Maddy Chang

    Sync

  • Meghana Kotcherlakota

    Global AI Attitudes and Moral Foundations Theory

  • Mark Schutera

    Mira

  • Subho Majumdar, Aditya Karan, Leif Hancox-Li

    Simulating Diverse User Preferences on AI Interactions

  • Ben Windeler

    AI Imagination Quiz

  • Atharva Joshi & Antaraa Vasudev (CIVIS)

    Global Dialogues on AI Trust Dashboard

  • Sabiha Choudhary

    Encoding Consent: Rethinking AI Development through the Lens of Gender, Language, and Power in the Global South

Our Judges