
AI and its impact on public involvement

Strategies and perspectives for charting a new course

Engaging the Public in AI Decisions and Applications

In the rapidly evolving landscape of Artificial Intelligence (AI), ensuring that public opinions truly count in policymaking has become a pressing concern. A recent rapid evidence review, titled 'What do the public think about AI?', highlighted the need for meaningful public participation in decisions on AI regulation, design, and deployment[1].

The UK Global Summit on AI Safety, held at the end of 2023, aimed to address these concerns by focusing on advanced AI models and their potential implications[2]. However, a key criticism was the lack of representation of citizens and affected groups in the summit's official communiqué, with Professor Noortje Marres pointing out that AI is 'profoundly undemocratic'[3].

To bridge this gap, the combination of AI-driven analysis with human-centered, culturally aware facilitation and education emerges as a promising solution. This approach, which leverages advanced AI tools—especially large language models (LLMs)—to analyse rich, open-ended public input at scale, can make broad public participation manageable and meaningful[1][3]. Human moderators, crucial in maintaining trust and fostering a deliberative environment, contextualise AI findings and ensure inclusivity across regions[1].
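To make the idea of analysing open-ended public input at scale concrete, the sketch below uses plain keyword counting as a crude stand-in for the kind of thematic analysis an LLM-based pipeline would perform. The responses, stopword list, and `extract_themes` function are all invented for illustration; a real deployment would use far richer models and human review of the resulting themes.

```python
from collections import Counter
import re

# Toy consultation responses (invented for illustration).
responses = [
    "I worry about bias in automated hiring decisions.",
    "AI in healthcare could improve diagnosis but needs oversight.",
    "Please regulate facial recognition and protect privacy.",
    "Bias and privacy are my main concerns with AI systems.",
]

# Minimal stopword list for this toy corpus.
STOPWORDS = {"i", "in", "and", "but", "the", "with", "my", "are", "could",
             "needs", "please", "about", "is", "a", "to", "of", "ai"}

def extract_themes(texts, top_n=5):
    """Count non-stopword terms across responses, as a crude proxy
    for the thematic clustering an LLM pipeline would perform."""
    words = []
    for text in texts:
        words.extend(w for w in re.findall(r"[a-z]+", text.lower())
                     if w not in STOPWORDS)
    return Counter(words).most_common(top_n)

print(extract_themes(responses))
```

Even this toy version surfaces recurring concerns ('bias', 'privacy') that a human moderator could then contextualise and feed back to participants, which is the division of labour the paragraph above describes.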

Building trust requires ensuring transparency about AI’s role in policymaking, its reliability, and ethical use. Given diverse cultural contexts, public education on AI—especially AI literacy tailored to local contexts—is essential for meaningful participation and acceptance[4]. Adapting methods to sociopolitical contexts is also crucial, as different regions have unique political, social, and technological environments affecting public mobilization and trust in AI[2].

Emerging research shows AI can predict policy preferences, helping to represent diverse societal values accurately. However, this must be done carefully to avoid biases and ensure rights and needs are respected[5]. The next instalment of the blog series will share research on and experiences of public participation (bottom-up as well as top-down) in data and AI policymaking from different countries and at different scales, from local to national to regional.
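As a minimal illustration of preference prediction, the sketch below uses an invented toy survey and a majority-vote baseline per demographic group; it is not the method used in the cited research, only the simplest possible instance of inferring group preferences from data. It also hints at the bias risk the paragraph above flags: a majority rule silently erases minority preferences within each group.

```python
from collections import Counter, defaultdict

# Toy survey data (invented): (region, preferred regulatory approach).
survey = [
    ("north", "strict"), ("north", "strict"), ("north", "light-touch"),
    ("south", "light-touch"), ("south", "light-touch"), ("south", "strict"),
]

def majority_preference(data):
    """Predict each group's policy preference as the most common
    answer within that group -- a baseline, not a real model."""
    groups = defaultdict(list)
    for region, pref in data:
        groups[region].append(pref)
    return {region: Counter(prefs).most_common(1)[0][0]
            for region, prefs in groups.items()}

model = majority_preference(survey)
print(model)  # e.g. {'north': 'strict', 'south': 'light-touch'}
```

Note that a third of respondents in each toy region hold the opposite view and vanish from the prediction, which is precisely why the paragraph above stresses that such models must be used carefully to respect rights and needs.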

Notable voices at the summit and parallel fringe events emphasised the need for the inclusion of diverse voices from the public[6]. Marietje Schaake, a member of the United Nations AI Advisory Body and former Member of the European Parliament, suggested improving openness and participation by involving a random selection of citizens in any advisory body on AI[7]. An open letter, signed by international organisations and coordinated by Connected by Data, the Trades Union Congress, and Open Rights Group, called for a wider range of voices and perspectives in AI policy conversations, particularly from regions outside the Global North[8].

The People's Panel on AI, a randomly selected group of people broadly reflective of the diversity of the population in England, wrote a set of recommendations for policymakers and the private sector, including the need for a system of governance for AI in the UK that places citizens at the heart of decision-making[9]. The evidence review draws on examples of public participation embedded in governance structures, such as the Organisation for Economic Co-operation and Development's (OECD's) 'Institutionalising Public Deliberation' framework, Belgium's Ostbelgien model, Paris's model for a permanent citizens' assembly, and Bogota's itinerant assembly[10].

In-depth involvement of the public is particularly important when what is at stake are not narrow technical questions but complex policy areas that permeate all aspects of people's lives, as is the case with the many different uses of AI in society. Complex or contested topics, those that can threaten civil and human rights in particular, require in-depth public participation. AI uses related to accessing government services or requiring the use of health and biometric data require a serious and long-lasting engagement with the public[11].

As AI affects and transcends countries and regions across its supply chain, it is even more important that we consider examples of public engagement from different places and contexts. AI is part of global interlocking systems of oppression, and its harms can therefore only be addressed 'from the bottom up, from the perspectives of those who stand the most risk of being harmed'[12].

References:

1. Rapid Evidence Review: What do the public think about AI?
2. UK Global Summit on AI Safety
3. Marres, Noortje. 2023. 'AI is profoundly undemocratic', The Guardian
4. AI Literacy and Transparency
5. Responsible Use of Predictive AI to Reflect Public Preferences
6. Summit Speakers Emphasise Need for Public Inclusion
7. Schaake, Marietje. 2023. 'Improve openness and participation in AI policymaking', The Economist
8. Open Letter Calls for Wider Voices in AI Policy Conversations
9. People's Panel on AI Recommendations
10. Examples of Public Participation in AI Governance
11. In-depth Public Involvement in AI Policymaking
12. Addressing AI Harms 'From the Bottom Up'

