
Governments & Policy

The gap between people who understand AI and those who do not is an economic divide, and it is widening. Voluntary frameworks have had time to work; they have not worked. Binding policy is the mechanism that operates at the necessary scale, and the technical standards for what that policy should require already exist.

What to audit in your jurisdiction

  • Does your AI regulation contain an education or literacy mandate? If so, does it set a minimum standard with defined consequences for non-compliance, or is it aspirational language without enforcement?
  • Do your schools have binding AI literacy requirements, with funding attached and measurable outcomes? Real policy defines a standard, attaches a budget, and specifies accountability for delivery.
  • Are companies operating in your jurisdiction required to train all employees on AI use and AI ethics, or only technical staff? The knowledge gap runs through all roles.
  • Are civil servants who work with or oversee AI systems trained to understand what those systems do, where they fail, and how to evaluate their outputs critically?
  • When a company in your jurisdiction cites AI to justify large-scale redundancies, what must it demonstrate? In most jurisdictions, the answer is nothing.

What you can do this week

  • Pull your jurisdiction's score from the Global AI Gap Index and identify the single lowest-scoring dimension. A specific dimension score, paired with a direct question, reframes a committee session, budget review, or legislative debate far more effectively than a general reference to "the AI challenge".
  • Check whether your AI regulation contains a defined minimum standard for literacy or training, with funding and an enforcement mechanism attached. A functioning mandate defines a standard, names a responsible body, and specifies consequences for non-compliance. Note which element is missing; that is the specific gap to address.
  • Identify which body currently has authority to enforce AI literacy obligations in your jurisdiction and what the consequence is for non-compliance. In most jurisdictions, no such body exists specifically for training obligations, even where AI regulation is in force. Naming that gap explicitly is the first step to filling it.
  • Table a written question or motion at the next relevant parliamentary, congressional, or legislative session asking what AI literacy standard schools in your jurisdiction are required to meet, whether it is funded, and who is accountable for delivery.

What good policy looks like

  • Mandatory AI literacy in schools, with a defined standard, funded teacher training, and measurable outcomes. India's national mandate shows scale is achievable. The OECD/EC framework provides the technical standard. What is needed is political will to make it binding.
  • An integrated ethics component in all AI literacy mandates. Understanding how AI systems can fail, reproduce bias, and serve particular interests is part of using them responsibly; it belongs in the same mandate, not a separate one.
  • Mandatory AI training for all employees in companies using AI in consequential decisions, with employer contribution requirements tied to demonstrated delivery. The EU AI Act's Article 4 points in this direction; it needs minimum standards and enforcement behind it.
  • Pre-redundancy training requirements. A company citing AI to justify large-scale layoffs should be required to demonstrate that AI has actually replaced the relevant functions and that remaining workers have been trained. Without this, AI-labelled cost-cutting faces no scrutiny.
  • Implementation of the Council of Europe Convention as a domestic legal standard, with education and training obligations derived from its human rights commitments and given the same weight as its technical provisions.

The gap the research shows

Across every jurisdiction surveyed in this site's research, the pattern is consistent: the most advanced AI regulation in any given region leaves literacy and training to discretion. The EU AI Act's Article 4 requires a "sufficient level of AI literacy" but defines no standard. The Council of Europe Convention mandates human rights compliance but contains no education dimension. The US has no federal AI law and is actively preempting state-level worker protections.

The technical standards already exist. The OECD/EC framework, UNESCO competency standards, and Singapore's four-part model all define what AI literacy should require. The remaining gap is political: the decision to make those standards binding rather than aspirational. Voluntary frameworks have had time to work. They have not worked.

Go deeper

Open resources

The research archive behind this site is public and freely shareable. Every policy finding traces to a primary source. If you know of a regulation, mandate, or enforcement mechanism not yet covered, you can submit it here.