The problem
The EU AI Act contains the most comprehensive AI literacy obligation on the planet. Article 4 requires providers and deployers of AI systems, which in practice means most employers in the European Union, to ensure that the people operating or using those systems on their behalf have a sufficient level of AI literacy. That obligation has been legally binding since February 2, 2025. The enforcement architecture that gives it teeth, national authorities and penalties included, becomes applicable on August 2, 2026, now less than four months away. The penalty framework is real: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations of the Act, with a tiered structure reaching €15 million or 3% for breaches of high-risk system obligations and €7.5 million or 1% for failures such as supplying incorrect compliance information.
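To make the tier arithmetic concrete, here is a minimal sketch (Python, purely illustrative; the turnover figure and the function name are mine, not the Act's) of how the whichever-is-higher rule in Article 99 plays out for an undertaking:

```python
# Illustrative only: how the AI Act's tiered fine caps combine for an
# undertaking. Tier amounts follow Article 99; the turnover is hypothetical.

def max_fine_eur(fixed_eur: float, pct: float, turnover_eur: float) -> float:
    """Return the applicable cap: the higher of the fixed amount or the
    percentage of worldwide annual turnover (the rule for undertakings)."""
    return max(fixed_eur, pct * turnover_eur)

# The three tiers described above.
TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "high-risk obligations": (15_000_000, 0.03),
    "incorrect information": (7_500_000, 0.01),
}

# Hypothetical employer with EUR 2 billion in worldwide annual turnover:
# the 7% arm (EUR 140M) exceeds the EUR 35M fixed amount, so it governs.
for violation, (fixed, pct) in TIERS.items():
    print(f"{violation}: cap EUR {max_fine_eur(fixed, pct, 2_000_000_000):,.0f}")
```

The point of the arithmetic: for any large employer, the percentage arm dominates, so the cap scales with turnover rather than sitting at the headline figure.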
As of March 2026, eight of 27 member states have designated a single contact point for AI Act governance. Eight. The regulation requires each member state to establish at least one AI regulatory sandbox by the same deadline. The European Commission’s own support instruments for compliance are not expected until the second quarter of 2026, leaving businesses with weeks rather than months to prepare once guidance arrives. And the Commission’s Digital Omnibus proposal, introduced in late 2025, has floated the idea of postponing certain high-risk obligations to December 2027, injecting further uncertainty into a timeline that was already being treated as notional.
I wrote last week about the gap between what the Digital Europe Programme funds and what Article 4 requires. That piece focused on the Commission’s spending choices. This one is about something more fundamental: the gap between writing a law and standing behind it. Because the EU is not alone in this. The enforcement gap is global, and this week’s research reveals its shape with unusual clarity.
Why it matters
Start with what enforcement looks like when it exists.
Beijing now has compulsory AI education in all 1,400-plus public primary and secondary schools, with a minimum of eight class hours per semester covering basic concepts, applications, implementation, and ethics. AI course results are included in students’ comprehensive quality assessments, meaning schools cannot ignore the requirement without it showing up in evaluation outcomes. Zhejiang province has followed a similar path: Hangzhou has made AI classes compulsory, and Wenzhou is investing in 1,000 AI experimental schools. These are not aspirational targets. They are operational deployments in schools that are open now, serving students who are being assessed now.
In Tokyo, the metropolitan board of education has deployed Toritsu-AI, a generative AI platform built on Azure OpenAI, across all 256 metropolitan schools. Approximately 140,000 students and staff can use it. The system includes safety filtering, tenant separation to prevent student data being used for model training, and an explicit pedagogical policy requiring students to understand that AI responses can be wrong and that fact-checking is non-negotiable. Tokyo did not mandate AI literacy and then leave it to schools to work out what that means. It built the tool, deployed it, and defined the terms of use.
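For readers who build these systems, the pattern Tokyo describes, per-tenant isolation plus a use policy delivered at the system level, looks roughly like the sketch below. This is an assumption-laden illustration, not Toritsu-AI's actual code: the endpoint, deployment name, and policy wording are invented, and real tenant separation is enforced at the Azure resource and contract level, not in application code alone.

```python
# A minimal sketch, NOT Toritsu-AI's implementation. Assumes the openai
# Python package (v1.x) and a hypothetical per-school Azure OpenAI resource;
# tenant separation in practice comes from Azure resource boundaries and
# data-handling terms, which keep prompts out of model training.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://school-tenant-example.openai.azure.com",  # hypothetical
    api_key="<per-tenant-key>",
    api_version="2024-02-01",
)

# The pedagogical policy expressed as a system message: answers are fallible
# and students must fact-check them.
PEDAGOGICAL_POLICY = (
    "You are a study assistant for secondary-school students. "
    "Your answers can be wrong. Remind students to verify claims "
    "against textbooks or other trusted sources before relying on them."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": PEDAGOGICAL_POLICY},
        {"role": "user", "content": "What caused the Meiji Restoration?"},
    ],
)
print(response.choices[0].message.content)
```

The design choice worth noting is that the fact-checking rule lives in the platform configuration, not in a handout: every student interaction carries it by default.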
Now consider what a mandate without infrastructure looks like.
Pakistan’s Khyber Pakhtunkhwa province has made AI education mandatory for grades 6 through 12 from March 2026. Delivering that mandate requires 5,525 IT labs that do not exist and 7,555 teachers who have not yet been hired. It sits alongside Punjab’s parallel curriculum rollout, where 100 schools are delivering AI instruction while teacher training materials are still being prepared. Both provinces have mandated something they cannot yet physically deliver. The intention is real. The infrastructure is not.
India’s CBSE curriculum mandate is more credible on the implementation side, with IIT Madras leading curriculum development and CBSE’s 28,000-school network providing the delivery mechanism, but the timescale is telling: board exams will not include AI until 2029. The mandate exists; the accountability moment is three years away.
In the United States, the pattern is different but the gap is the same. Boston became the first major district to introduce AI literacy in public high schools, funded partly by a $1 million private donation. Massachusetts launched a three-phase K-12 strategy with a 30-district pilot reaching 1,600 students. Both are good-practice examples. Neither is backed by state or federal law, neither compels other districts or states to follow, and neither changes the fact that the United States has no national AI literacy obligation for schools, workers, or employers. The federal government has issued guidance. Guidance is not enforcement.
Ghana presents perhaps the most instructive case this week. The country has approved a $250 million AI Centre and will formally launch its National AI Strategy on April 24. Its civil servant training programme, delivered through a training-of-trainers model, is already operational. The ambition is substantial and the speed is impressive. But the programme runs on a presidential directive, not a statute. No minimum competency standard is defined in law. No enforcement mechanism exists for agencies that do not participate. Ghana is moving faster than most countries in Africa, and doing so without the legal scaffolding that would make its commitments durable beyond the current administration.
What should happen
The enforcement gap is not an accident. It is a policy design failure that recurs because governments treat AI literacy mandates as messaging exercises rather than operational commitments. Writing a mandate is popular. Funding it, building the infrastructure to deliver it, defining what compliance looks like, and penalising organisations that fail to meet it are expensive and politically uncomfortable. So the mandate arrives and the enforcement does not.
Three things need to change.
First, the EU needs to decide whether August 2, 2026 is a real deadline or a symbolic one. If 19 member states have not designated a contact point four months before the deadline, the Commission needs to escalate now, not after the date passes. The infringement procedure exists for exactly this situation. Using it sends a signal that the obligation is real. Not using it sends a different signal, one that every employer paying attention will read correctly.
Second, any government mandating AI literacy in schools needs to publish, alongside the mandate, a funded implementation plan that accounts for teacher training, physical infrastructure, and assessment standards. Pakistan’s KP mandate is a case study in what happens when the sequence is reversed: the obligation precedes the infrastructure, and the result is a law that schools cannot comply with. India’s approach, sequencing the mandate three years ahead of the accountability moment, is more honest about the gap. Beijing’s approach, deploying first and mandating second, is more effective still.
Third, the countries that are relying on voluntary initiatives and good-practice examples need to reckon with the limits of that approach. Boston and Massachusetts are doing useful work. They are also serving a fraction of a fraction of the student population. Without a federal mandate or state legislation with enforcement mechanisms, the US approach will continue to produce islands of competence in a sea of inaction. The same applies to any country where AI literacy is a guideline rather than a requirement.
What already exists
The EU enforcement readiness finding is at EU AI Act Member State Enforcement Readiness. The Article 4 obligation itself is at EU AI Act Article 4 Training. The Digital Europe funding gap analysis is the subject of a previous editorial.
Education findings this week: Beijing compulsory AI education, Zhejiang/Hangzhou compulsory AI classes, Tokyo Toritsu-AI platform, Pakistan KP mandatory curriculum, India CBSE board exam mandate, Pakistan Punjab curriculum rollout, Boston AI literacy, Massachusetts K-12 pilot.
Policy findings: Ghana $250M AI Centre and strategy launch, Ghana civil servant training.
What you can do
If you are in EU policy: the 19 missing contact points are a matter of public record. Ask your national government whether it has designated one, and whether it will have a functioning regulatory sandbox by August. If the answer is no, the follow-up question is what the Commission intends to do about it. The infringement procedure is not a threat to be avoided; it is the mechanism that makes the regulation credible.
If you are an employer anywhere: a mandate without enforcement is still a mandate. The absence of a penalty today does not guarantee the absence of a penalty tomorrow, and the reputational and operational costs of having an untrained workforce interact with AI systems are immediate regardless of whether a regulator is watching. The cheapest time to invest in AI literacy is before the enforcement architecture catches up.
If you work in education, in any country: the contrast between Beijing and KP is the most useful data point this week. One deployed infrastructure before mandating curriculum. The other mandated curriculum before building infrastructure. The outcomes are not equivalent, and the lesson applies everywhere: the order of operations matters. Start with what you can deliver, not with what you can announce.