Editorial · Opinion

We Need to Be Educated on AI, Starting Now

March 16, 2026

The gap between people who understand AI and those who do not is already an economic divide; governments, schools, and companies need to treat AI literacy as a universal entitlement, not a technical privilege.

About this site

This platform exists for one argument: that knowing how to use AI, understanding what it does, and thinking critically about it are no longer optional. They are the conditions for full participation in work, education, and civic life. The gap between people who have that knowledge and those who do not is not a future risk. It is a present inequality, and it is widening.

A person who knows how to use AI to draft, research, analyse, present, and automate does not occupy the same position in a workplace as someone who does not. A student taught to evaluate AI outputs critically is not in the same position as one who accepts them uncritically. The divide is not between humans and machines. It is between people with access to AI knowledge and people without it.

The research on this site tracks what governments and organisations are doing about that divide. The editorials argue for specific changes. The goal is a world where AI literacy, including its ethical dimensions, is a right, not an advantage.


The problem

AI is being deployed in schools, workplaces, courts, hospitals, and hiring processes at a pace that no education system has kept up with. Students are using AI tools they do not understand. Workers are being evaluated by AI systems they cannot see. Citizens are subject to automated decisions they have no framework to question.

I have spent the past weeks tracking what governments and official bodies are doing about this. The picture is worse than I expected.

In the United States, 28 states have issued guidance on AI in schools. Only two states, Ohio and Tennessee, have passed laws. Ohio’s law, which takes effect July 1, 2026, requires every school district to have an AI policy. It does not say what that policy must contain. A district can write one sentence and be compliant. That is paperwork dressed as governance.

The EU AI Act, the most comprehensive AI regulation in the world, contains a single provision on AI literacy: Article 4, which requires providers and deployers to ensure their staff have “sufficient AI literacy.” It sets no standard for what sufficient means. No curriculum. No verification. No enforcement. The most advanced AI law on the planet treats education as a footnote.

India has done more than anyone. It has mandated an AI curriculum in all schools from Grade 3 upward, funded teacher training nationally, and allocated roughly $60 million to a Centre of Excellence in AI for Education. Yet even India’s mandate stops at tools and skills. The curriculum does not ask students to examine who controls AI systems, whose interests they serve, or how to participate in decisions about their governance.

The OECD and the European Commission published a framework in May 2025 that defines what AI literacy should look like: using AI, understanding AI, creating with AI, and critically engaging with AI. It is thorough and well-grounded. It is entirely voluntary. No country is required to implement it.


Why it matters

The immediate consequence of the AI knowledge gap is economic. The person who can prompt an AI system effectively, build a simple automation, generate a presentation, or evaluate an AI output critically is already more productive than the person who cannot. That productivity gap translates directly into hiring decisions, performance reviews, salary negotiations, and promotion. It is not speculative. It is happening now, in every sector, at every level.

The workers most at risk are not, primarily, people whose jobs will be automated. They are people who will be outperformed by colleagues who use AI well. A junior analyst who builds AI-assisted workflows is a different proposition to a hiring manager than one who does not. A teacher who understands how AI grading tools work is better placed to advocate for their students than one who does not. A civil servant who knows how to interrogate an AI recommendation is more valuable to a functioning government than one who accepts it uncritically. The knowledge gap is a career gap, and without deliberate intervention it will compound.

The ethics dimension is not separate from the productivity argument. Using AI without understanding what it does is a liability, not just a personal risk. A worker who does not understand that an AI hiring tool trained on historical data will reproduce historical biases cannot identify when their company is exposed to discrimination claims. A student who does not know how an AI system evaluates their work cannot challenge an unfair assessment. An employee who cannot distinguish reliable AI output from confident nonsense will produce unreliable work. Ethics and critical thinking are the difference between AI as a tool and AI as a source of unchecked error.


What should happen

I am arguing for three things, and I want to be specific.

First: governments should mandate AI literacy as a core curriculum subject in schools, with a defined ethics component. A requirement, with funding attached and measurable outcomes. The purpose is not to produce AI engineers. It is to produce people who can use AI effectively, evaluate it critically, and participate in decisions about it. Singapore’s four-part framework (learn about AI, learn to use AI, learn with AI, learn beyond AI) is a workable model. The OECD/EC framework published in 2025 provides the technical standard. What remains is the political will to make either of them binding.

Second: the EU and other governments should require companies to train all employees on AI use and AI ethics. All employees, not only those who write code, because the knowledge divide does not respect job categories. A marketing coordinator who understands how to use AI tools will produce more and better work. An HR manager who understands algorithmic hiring will make better decisions and avoid legal exposure. An accountant who can evaluate AI-generated analysis is less likely to sign off on something wrong. The EU AI Act’s Article 4 is a start; it needs minimum standards, verified training, and enforcement behind it.

Third: AI ethics education should be enacted as a legal obligation. The international consensus, from UNESCO, the OECD, and the Council of Europe, is that AI literacy, including its ethical dimensions, is a fundamental requirement for informed citizenship in the current era. That consensus has produced zero binding mandates. A legal obligation changes the incentive structure: companies train employees because they must, not because they choose to. Schools teach AI literacy because it is required, not because a district decided to. That is the only mechanism that closes the gap at the necessary scale.


What already exists

The evidence base for all three proposals is documented in the research sections of this site.

The pattern across every jurisdiction surveyed is consistent. Deployment is accelerating. The knowledge divide is widening. The policy response is lagging far behind both.


What you can do

If you work in a school or university: Ask what students are being taught about how AI systems work, who governs them, and what it means to use them critically. If the answer covers only how to use the tools, the school is preparing students to be consumers of AI, not participants in it. That is the gap this site is documenting.

If you work in a company: Ask whether your AI training covers all employees or only technical teams. Ask whether it teaches people to use AI effectively and to evaluate it critically, or whether it is a compliance checkbox. The employees who understand AI will produce better work. The ones who do not will be outperformed by colleagues who do.

If you work in government or policy: Look at your jurisdiction’s AI regulation and find where the education mandate is. If there is guidance without a requirement, ask why. The OECD framework, India’s national curriculum, and Singapore’s strategy all demonstrate that binding AI literacy policy is feasible. The question is whether there is political will to pursue it before the knowledge divide becomes a structural one.

If you are a citizen: Start building your own AI literacy now, and push your children’s school, your employer, and your elected representatives to make it a requirement. The gap between those who understand AI and those who do not is already an economic gap. It does not close on its own.