Research Update

Workplace AI Training: What Employers Are Not Doing and Why It Matters

March 16, 2026

AI deployment in workplaces has accelerated sharply over the past two years. The training, literacy, and accountability structures that employees and employers need to navigate that deployment have not. This post draws on recent research and expert discourse to map the current state of workplace AI training — what exists, what is missing, and what that gap is costing.

What the pattern shows

Across employer practice, international policy, expert analysis, and public opinion, the same structure recurs:

  • AI is deployed broadly, governance frameworks exist on paper, and the training required to make those frameworks meaningful is absent.
  • Employers are not training workers to understand the AI systems evaluating their performance.
  • Governments have not trained the civil servants responsible for AI oversight.
  • The strongest AI law in force contains a literacy requirement that sets no standard.
  • Public distrust is at 77%, while the policy response in the US is moving toward preempting state protections rather than establishing federal ones.
  • The gap between deployment and understanding is not a lag that time will close automatically; it requires mandatory training obligations with defined standards and enforcement mechanisms.

Policy

The most comprehensive AI law currently in force, the EU AI Act, reaches its primary compliance deadline in August 2026. Employment systems, including hiring algorithms, performance monitoring tools, and automated management software, are classified as high-risk and subject to the Act’s full obligations. Article 4 of the Act requires providers and deployers to ensure their staff have a “sufficient level of AI literacy.” The provision exists in name only: it sets no minimum standard, specifies no curriculum, and carries no enforcement mechanism. An employer can satisfy Article 4 by doing nothing visible, because there is no defined threshold against which a regulator can test compliance. The most advanced AI regulation in the world has a training obligation with no substance.

Workplace Discourse

The Society for Human Resource Management listed AI governance in employment as one of its five top workplace issues for 2026, a designation that signals the concern has moved from activist fringe to mainstream HR compliance territory. The specific harms SHRM and labour advocates are documenting include automated hiring decisions workers cannot contest, algorithmic performance scoring workers cannot inspect, and electronic surveillance workers cannot opt out of. Most US workers remain outside the coverage of the state-level legislation — in Colorado, Illinois, and New York — that has begun to address some of these practices. See research: SHRM and Labour Advocates — Algorithmic Management.

UNESCO’s response to the parallel gap in government workplaces is instructive. In June 2025, UNESCO delivered AI literacy training for civil servants through a global train-the-trainer programme; as of October 2025, over 70 countries had engaged with the programme. The programme exists because governments deploying AI governance frameworks had not trained the officials responsible for applying them. UNESCO is filling an emergency capacity gap that governments should have addressed through policy. See research: UNESCO — AI Literacy Training for Civil Servants.

Expert and Public Discourse

The World Economic Forum’s February 2026 analysis of AI governance myths provides the clearest quantitative illustration of the deployment-readiness gap. Ninety-four percent of global companies report using or piloting AI in their IT operations; only 44% say their security architecture is equipped to support AI securely. The WEF identifies the belief that voluntary principles are sufficient — and that AI risk can be managed by technical teams alone — as primary contributors to this mismatch. The report argues that organisations that bypassed responsible AI governance will face material credibility and market consequences. See research: World Economic Forum — Eight Myths Sabotaging Modern AI Governance.

That framing is reinforced by the Alliance for AI & Humanity’s year-end 2025 analysis, which characterises the current state as framework-rich and enforcement-poor. Principles are announced. Audits are rare. When harms occur, accountability mechanisms are absent. The Alliance identified governing increasingly automated systems and managing workforce disruption as two of the three pressures most likely to define 2026. See research: Alliance for AI & Humanity — “Beautiful Principles, Messy Implementation”.

Public opinion surveys provide the demand-side data. A 2025 Gallup-Bentley University survey found that 77% of Americans distrust both businesses and government to use AI responsibly. A contemporaneous Pew Research survey found that 55% of the general public and 55% of AI experts report high concern about AI bias, an unusual convergence of lay and specialist opinion. Majorities in the same survey supported government regulation of AI across most application domains. See research: Gallup-Bentley and Pew Research — Public Distrust of AI Reaches 77%.

See editorial: We Need to Be Educated on AI — Starting Now

Last updated: 2026-03-16