A striking feature of current AI governance discourse is how consistent the diagnosis is across groups that rarely agree on anything else. Researchers, unions, civil society organisations, children’s rights advocates, and international institutions have converged on a common set of concerns. This post maps what each group is saying and identifies the patterns those voices collectively reveal.
Public Opinion
The most prominent quantitative signal in recent public discourse comes from two US surveys published in spring 2025. A Gallup-Bentley University poll found that 77% of Americans trust neither businesses nor government to use AI responsibly. A simultaneous Pew Research survey found that 55% of the general public rate AI bias as a high concern, a figure matched exactly by the AI experts asked the same question (also 55%). That convergence of lay and specialist concern is unusual; it suggests the worry is grounded in observable behaviour rather than abstract anxiety. Both surveys found majority support for government regulation across most AI application domains. See research: Gallup-Bentley and Pew Research — Public Distrust of AI Reaches 77%.
Civil Society
At the 2025 Paris AI Action Summit, a coalition of 44 civil society organisations surveyed by The Future Society collectively pushed for legally binding prohibitions on high-risk AI and for mandatory independent third-party audits. The organisations explicitly rejected voluntary frameworks. Their specific demands — AI workforce observatories, enforceable prohibitions, and structured public participation in governance decisions — amount to a comprehensive alternative to the self-regulatory model that has dominated AI governance so far. See research: The Future Society — 44 Civil Society Organisations Demand Legally Binding AI Rules.
UNICEF and the US Technology Policy Committee (USTPC) added a child-protection dimension. Both raised concerns about generative AI systems mediating children’s social, educational, and political information environments without adequate transparency or safeguards. The USTPC explicitly recommended mandatory AI literacy as a protective measure alongside technical safeguards, treating it as a rights-based obligation rather than an optional curriculum addition. See research: UNICEF and USTPC — AI Risks to Children Require Literacy as a Protective Measure.
Labour and HR
Workers and their advocates are the group with the most immediate and specific grievances. The Society for Human Resource Management identified AI governance in employment as one of its top five workplace issues for 2026. The documented harms include automated hiring decisions that workers cannot contest, algorithmic performance scores they cannot inspect, and electronic surveillance they cannot opt out of. State-level protections exist in Colorado, Illinois, and New York; the majority of US workers have none. See research: SHRM and Labour Advocates — Algorithmic Management and AI Workplace Surveillance as Top 2026 Issue.
Academic and Expert Analysis
The University of Oxford published expert commentary in March 2026 on the Pentagon-Anthropic dispute, arguing that the dispute is a symptom of governance failures that predate the current administration. Oxford’s framing is precise: AI governance is being “improvised in real time rather than designed through deliberate policy.” Significant legal gaps remain around domestic surveillance and autonomous weapons, areas that current law leaves open to contested interpretation. See research: University of Oxford — Pentagon-Anthropic Dispute Exposes Structural AI Governance Failures.
Stanford’s 2025 AI Index quantified one dimension of the cost: AI-related data privacy risks increased 56% year-on-year. The rate of measurable harm is rising faster than the rate of regulation across every domain studied. The highest exposure is in health, education, and employment — the domains affecting children, workers, and patients. See research: Stanford AI Index 2025 — AI-Related Data Privacy Risks Increase 56% Year-on-Year.
The World Economic Forum’s February 2026 analysis debunked eight governance myths, including the beliefs that voluntary principles are sufficient and that AI risk can be managed by technical teams alone. Its headline data point measures the deployment-readiness gap across the whole private sector: 94% of global companies are using or piloting AI, while only 44% say their security architecture is equipped to support it. The WEF’s framing is economic rather than ethical: organisations that bypass responsible governance measurement face material credibility and market consequences. See research: World Economic Forum — Eight Myths Sabotaging Modern AI Governance.
International Institutions
UNESCO’s response to the civil servant literacy gap is itself a data point. In June 2025, it delivered AI literacy training through a global train-the-trainer programme; by October 2025, over 70 countries had engaged. UNESCO is running emergency training because governments deploying AI governance frameworks had not trained the officials responsible for applying them. The programme exists because the default — voluntary preparation — produced inadequate readiness. See research: UNESCO — AI Literacy Training for Civil Servants Across 70+ Countries.
The Alliance for AI & Humanity’s year-end 2025 analysis characterised the overall state as framework-rich and enforcement-poor: principles are announced, audits are rare, and accountability mechanisms are absent when harms occur. The three pressures it identified as dominating 2026 — governing automated systems, managing workforce disruption, and confronting AI’s infrastructural limits — describe problems that voluntary frameworks have not addressed and show no signs of addressing. See research: Alliance for AI & Humanity — “Beautiful Principles, Messy Implementation”.
Government: Moving in the Wrong Direction
Against this backdrop, the most significant US government action in recent months has been the December 2025 executive order directing the Attorney General to challenge state AI laws, with the FTC instructed to classify state-mandated bias mitigation as a deceptive trade practice. Nearly two dozen state attorneys general filed opposition letters. Governors from both parties, including Ron DeSantis and Gavin Newsom, objected. Legal experts noted that the EO’s logic — that requiring AI to correct bias forces developers to produce “less truthful” outputs — would functionally prohibit states from mandating that AI systems treat people fairly. The FTC’s deadline for a preemption policy statement was March 11, 2026. See research: Trump Administration — Federal Push to Preempt State AI Laws Draws Bipartisan Opposition.
What the Pattern Shows
Across all of these groups — the general public, civil society, labour advocates, academic researchers, international institutions, and state governments of both parties — the diagnosis is consistent: AI is being deployed broadly, voluntary frameworks are not producing accountability, and the people most directly affected (workers, children, civil servants, the general public) are the least protected. The one actor moving against this consensus is the US federal government, which is working to dismantle the only functioning governance that exists, at the state level, rather than building toward a federal alternative. The discourse is not waiting for the policy; the policy is retreating from the discourse.
See editorial: We Need to Be Educated on AI — Starting Now
Last updated: 2026-03-16