Editorial · Opinion

Pro-Worker AI Already Exists. The Question Is Who Gets to Ask for It.

March 26, 2026 · Audience: Companies / Governments

The debate about AI and job displacement is framed around the wrong question. The MIT taxonomy of technological change shows that pro-worker AI is technically real and already operating in the field, but market incentives will not produce it at scale without workers who understand what they are demanding.

This editorial was inspired by coverage in the AI Daily Brief. Their consistent work making AI research and news coverage legible to a general audience is a genuine contribution, and I am grateful for it.


The problem

The conversation about AI and employment has collapsed into two equally unhelpful positions: denial, the claim that AI creates more jobs than it destroys and always will, and doom, the claim that mass unemployment is structurally inevitable and nothing will stop it. Neither is exactly wrong. Both miss the more important question: what kind of AI is being built, by whom, and for whose benefit?

A paper by MIT economists Daron Acemoglu, David Autor, and Simon Johnson cuts through the noise.1 They identify five categories of technological change: labor-augmenting, capital-augmenting, automation, new task-creating, and expertise-leveling. They evaluate each against three dimensions (labor productivity, the value of human expertise, and labor’s share of national income) and find that only new task-creating technologies are unambiguously pro-worker. Their diagnosis is precise and uncomfortable: the market is currently overwhelmingly focused on task automation and AGI development, while the categories of technological change that expand what humans can do, rather than replace what humans do, are systematically starved of investment.

This is not a natural law. It is a design and investment choice.

Why it matters

The choice is not being made democratically. It is being made by the companies with the capital to invest in AI development and the talent to build it. The workers whose jobs are being re-engineered are, in most cases, not at the table. The structural reasons are interconnected: they lack the vocabulary to participate, they lack the technical literacy to evaluate what is being proposed, and they lack the legal frameworks that would give them standing to ask.

The Atlassian layoffs are a useful illustration.2 Nine hundred of the 1,600 cuts fell on software R&D workers, announced in the same breath as 25% revenue growth. The efficiency gains are real. Whether those gains flow to workers, to shareholders, or to customers is not a technical question. It is a power question, and power follows comprehension.

This is where the AI knowledge divide connects directly to the employment question. The Harvard Business School finding that workers enrolled in AI training programs report 34% higher workplace anxiety than colleagues not in such programs does not indict training as such.3 It indicts training without agency. You can teach someone what AI does without giving them any say in what AI does to them. Those are not the same thing.

When 90% of global enterprises report critical IT and AI skills shortages, and IDC projects $5.5 trillion in losses from that gap by 2026, the instinct is to read it as a supply problem: not enough courses, not enough instructors.4 The more accurate reading is that it is a design problem: organizations are failing to produce AI literacy that connects to power, that gives workers a vocabulary for the decisions being made around them.

What should happen

Governments and employers need to separate two questions that are currently conflated. The first is whether AI displaces jobs in aggregate. The second is what kind of AI gets built in any specific workplace, industry, or investment cycle. The second question is far more tractable.

In my view, employer incentives need to be attached to outcomes, not inputs. The AI Workforce Training Act introduced by Congressman Gottheimer, which proposes a 30% tax credit for up to $2,500 per employee per year in AI training costs, is a step in the right direction.5 But a tax credit for purchasing a course is not the same as a credit conditional on workers in AI-affected roles gaining capability, retaining their roles, and having genuine say in how AI is used in their work. The distinction matters more than the dollar amount.

Raimondo’s proposal in the New York Times, which pairs employer tax credits tied to on-the-job training with state pilot programs that reward worker retention and incentivize reinvesting AI-driven savings into job creation, begins to address this.6 The ECB study of 5,000 Eurozone firms provides the empirical baseline that makes this viable: companies making significant use of AI are approximately 4% more likely to hire additional staff, not less.7 The relationship between AI adoption and employment is not fixed. That is the premise any serious policy intervention rests on.

The training mandate matters too. The U.S. Department of Labor’s framework across all 50 state workforce agencies and American Job Centers is a real foundation.8 It is also, as I have argued before, a foundation without a building: no minimum hours, no ethics content requirement, no funding for displaced workers, no enforcement mechanism. Each of those gaps is a political choice as much as the design choices happening inside AI labs.

What already exists

The MIT paper gives a concrete example of pro-worker AI already operating in practice: an electrician’s assistant that draws on prior case data and uploaded photos to help workers troubleshoot complex problems, keeping judgment in the worker’s hands rather than removing it. That tool exists. The taxonomy Acemoglu, Autor, and Johnson provide gives policymakers and employers a practical framework for evaluating any proposed AI tool before deploying it: which of the five categories does this tool fall into, and what does that mean for the workers using it?

Relevant background in the research archive: the U.S. Department of Labor mandate, the Harvard Business School findings on training anxiety, IDC’s skills gap data, the Gottheimer Act, and SHRM’s documentation of algorithmic management without worker rights.

What you can do

If you work in HR, operations, or learning and development: the MIT taxonomy is worth reading in full. The five categories give you a vocabulary for a conversation most organizations are not having. The question is not “are we training workers to use AI?” It is “are we deploying AI that increases what our workers can do, or AI that substitutes for what they do?” Those require different answers, different tools, and different investments.

If you are a policymaker: attach incentives to outcomes. Training credits conditioned on retention, capability demonstration, and role security are materially different from credits conditioned on enrollment. Write the difference into the legislation.

If you are a worker: the MIT paper is readable and not long. Knowing the difference between an automation-first deployment and an augmentation deployment, and being able to name that difference in a conversation with your manager or your union representative, is a form of power that costs nothing to acquire and is harder to ignore than a general complaint about job security.

The kind of AI that gets built is a political choice. The people best positioned to push for the pro-worker version are the ones who know it exists.


Footnotes

  1. Acemoglu, D., Autor, D., & Johnson, S. (2025). Building Pro-Worker Artificial Intelligence. National Bureau of Economic Research.

  2. Atlassian. (2026, March). An important update on our team. Atlassian Blog.

  3. Harvard Business School. (2026). AI Training, Job Anxiety, and Satisfaction Decline. Research summary on file.

  4. IDC. (May 2024). Enterprise Resilience: IT Skilling Strategies, 2024 (Doc #US52080524). International Data Corporation. Synthesized by Workera at The $5.5 Trillion Skills Gap. Note: the $5.5T figure covers the broader IT skills shortage; AI is identified as the top in-demand skill category within it.

  5. Gottheimer, J. (2026, March). Bipartisan AI Workforce Training Act. U.S. House of Representatives.

  6. Raimondo, G. (2026, March 6). America Cannot Withstand the Economic Shock That’s Coming. The New York Times.

  7. European Central Bank. (2026, March 4). Artificial Intelligence: friend or foe for hiring in Europe today? ECB Blog.

  8. U.S. Department of Labor. (2026, February 13). Training and Employment Notice No. 07-25. Employment and Training Administration.