Editorial · Opinion

Train People First: The Policy Choice Behind Every AI Layoff

March 16, 2026 · Audience: Companies / Governments

Most AI-related job cuts are anticipatory rather than necessary; the real risk is not AI replacing workers but AI-proficient workers replacing AI-unaware ones, and the policy response should make training a requirement, not a discretionary spend.

The problem

In 2025, companies in the United States directly cited AI when announcing 55,000 job cuts. I have spent time tracking what is actually behind those numbers, and the picture that emerges is more troubling in a specific way: the majority of those decisions were driven by what executives hope AI will do, not by what it has done.

Research published by Harvard Business Review in January 2026 surveyed more than 600 executives and found that layoffs were being driven by anticipation of AI’s future capabilities: 60% of organisations had cut headcount expecting AI to perform work it does not yet reliably perform, while only 2% had made significant cuts tied to actual AI implementation. The same research found that 55% of employers who made AI-related cuts regretted doing so.

Workday eliminated 1,750 jobs, 8.5% of its workforce, to “reallocate resources toward AI.” Amazon cut 14,000 corporate roles, citing leaner structures enabled by AI. In most of these cases, the AI systems cited as justification were either not deployed at scale or not yet capable of replacing the roles eliminated. Workers were cut on a bet. Most of those bets are not paying off.

The story here is about how organisations are choosing to respond to AI, and choosing poorly, with no policy framework requiring them to choose differently.

Why it matters

The World Economic Forum’s Future of Jobs Report 2025 projects that AI and automation will displace 92 million jobs by 2030 while creating 170 million new ones, a net gain of 78 million roles. That net gain is conditional. It depends on workers being trained to operate in an AI-augmented environment. The WEF estimates that 120 million workers are at medium-term risk of redundancy if they do not receive that training. Eleven million of those are unlikely to receive it under current conditions.

The framing of AI as a job replacement technology misses the more immediate dynamic. The primary competitive threat most workers face is not AI performing their role. It is a colleague who uses AI to perform it faster, better, and at greater scale. The analyst who uses AI to run ten scenarios in the time it takes to run one is not the same hire as the analyst who does not. The writer who uses AI to research, draft, and refine produces more and better output. The project manager who can build automations for routine tasks frees capacity for higher-value work. These are not edge cases in technology companies. They are already the normal conditions of work across sectors.

This is the knowledge divide in practice. And the organisations cutting headcount on the premise that AI will cover the work are making the same mistake twice: first by eliminating the experienced people who could have learned to use AI well, and then by leaving the remaining workforce without the training to do so. Meanwhile, 77% of employers say they plan to upskill workers in response to AI disruption. The intent is there. What is absent is any obligation to follow through, any standard defining adequate training, and any consequence for companies that cut first and train never.

What should happen

I am arguing for three things.

First: governments should require companies to invest in AI training for all employees before they are permitted to cite AI as justification for large-scale redundancies. If a company is cutting 10% of its workforce because it expects AI to cover the work, regulators should require that company to demonstrate, with evidence rather than press releases, that AI has actually replaced the function, and that workers in remaining roles have been trained to work alongside the systems now doing part of their former colleagues’ jobs. A company that cannot show this has made a cost-cutting decision with an AI label attached.

Second: AI reskilling should be treated as a public infrastructure investment, with binding employer contribution requirements. The 120 million workers at medium-term risk of redundancy cannot wait for companies to voluntarily decide to train them. Reskilling is not an HR initiative. It is the mechanism by which the productivity gains from AI become broadly distributed rather than concentrated among the already-skilled. A company benefiting from AI productivity gains should be required to contribute to the training of workers affected by those gains, including workers affected at other companies in the same sector.

Third: the goal of AI in the workplace should be capability expansion for everyone, not headcount reduction for some. Workers who understand how AI systems work, who can build their own automations, who can identify where AI adds genuine value and where it produces confident errors, are more productive and more valuable than workers who are handed tools and expected to figure them out. The organisations that will do best with AI are those where the widest possible range of employees can use it fluently. That does not happen without deliberate, funded, mandatory training. Leaving it to individual initiative produces exactly the knowledge divide that creates inequality within teams, within industries, and across the workforce as a whole.

What already exists

  • The WEF’s deployment-governance gap: 94% of companies using AI, only 44% with adequate governance infrastructure. See WEF governance myths.
  • The EU AI Act’s Article 4 literacy obligation: a training requirement with no defined standard and no enforcement mechanism. See EU AI Act.
  • SHRM’s identification of algorithmic management and AI workplace surveillance as the top HR issue of 2026. See SHRM workplace AI.
  • The UNESCO civil servant training programme: emergency upskilling in 70+ countries because governments had not trained the people responsible for AI oversight. See UNESCO civil servant literacy.

What you can do

If you lead or work in a company: The question to ask is not whether AI will replace your team. It is whether your team is being trained to use AI well enough to remain competitive with teams that are. Ask your leadership what the training plan is for everyone, not just engineers. Ask whether people are being shown how to build automations, evaluate AI output critically, and integrate AI into their specific workflows. The productivity advantage of AI is only accessible to people who know how to use it.

If you work in policy or government: Look at your jurisdiction’s employment law and ask where the training obligation is. When a company cites AI to justify large-scale redundancies, what must it demonstrate? In most jurisdictions, the answer is nothing. The EU AI Act’s Article 4 is a start; it needs minimum standards, a measurement mechanism, and an enforcement consequence. Tax frameworks and state aid conditions could require demonstrated investment in worker AI training as a precondition for AI-related incentives.

If you are a worker: Start building your AI proficiency now, across the tools relevant to your role, and make that visible. The 55% employer regret rate on AI layoffs carries weight; most companies that cut jobs citing AI have since concluded they made the wrong call. Workers who have already developed AI capability are harder to cut and easier to promote. That is the practical case for not waiting for your employer to provide training.