How we score
The index scores on this site are an attempt to make a diffuse problem visible. Governments publish strategies. Frameworks get endorsed. Commitments get made. Most of it does not change what a student learns, what a worker knows, or what a citizen can do when an AI system makes a decision about them. The scoring model tries to separate signal from noise by asking a simple question for each dimension: is there anything put in motion here, or just a statement of intent?
This is not a scientific index. It is one person's honest attempt to track a real gap using publicly available evidence. The model has limitations, the research is incomplete, and some scores will be wrong. Where I have made a mistake, I want to know. The methodology is documented here so that anyone can see where each score comes from. Feedback welcome.
The scale
Every dimension scores 0, 1, or 2:
- 0: nothing in motion. No policy, programme, or commitment on this dimension.
- 1: voluntary. Guidance, a strategy, or a non-binding framework exists, but nothing compels anyone to act on it.
- 2: binding. A law or enforceable mandate is in place.
The distinction between 1 and 2 is the central argument of this site. Voluntary guidance and binding law are not points on the same spectrum; they produce different outcomes. A school district can ignore guidance. It cannot ignore a law. A company can deprioritise a recommendation. It cannot deprioritise a penalty.
The four groups
Dimensions are organised into four groups, each covering a different domain where the AI knowledge gap plays out. The maximum score per group is set by how many dimensions it contains, each worth up to 2 points.
Education
Whether the next generation is being prepared to understand, use, and evaluate AI critically, not just as a tool, but as a system with social consequences.
- AI literacy curriculum exists at national or regional level.
- Curriculum is binding — a law or mandate, not a voluntary framework.
- Ethics component included — the curriculum requires students to think critically about AI, not only learn to use it.
- Teacher training mandated and funded before rollout begins. A curriculum without trained teachers is an aspiration.
- Dedicated funding attached to the curriculum requirement. An unfunded mandate is not a mandate.
Work
Whether all workers, not only technical staff, have rights and protections as AI enters their workplaces.
- Employers legally required to train all employees on AI, not only those whose job titles suggest they need it.
- Training must cover critical evaluation and ethics, not only how to operate specific tools.
- Pre-redundancy obligation — if a company cites AI as a reason for layoffs, it must demonstrate that training was offered first.
- Workers have legal rights when AI is used in employment decisions — hiring, performance, promotion, dismissal.
Government
Whether governments have the institutional capacity to oversee AI, require training for those deploying it in public roles, and enforce obligations around AI literacy.
- Named AI governance institution with a real mandate — authority to set rules, conduct oversight, or enforce compliance. Advisory panels and task forces score 1. Full regulators score 2.
- Civil servant AI training — government staff who work with AI systems are required to receive training, with a defined minimum standard.
- Training enforcement — a mechanism exists specifically for AI literacy or training obligations, with consequences for non-compliance. Enforcement of AI system regulation alone does not qualify for this dimension.
Public access
Whether people who are not in school or formal employment can access AI literacy — the general public, including those most likely to be excluded.
- Free or publicly funded AI literacy resources available to the general population, not only students or employees.
- Digital infrastructure support for schools and public institutions — connectivity and equipment are prerequisites for AI literacy, and they are not equally distributed.
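The arithmetic behind an index score follows directly from the scale: each dimension scores 0, 1, or 2, and a group's maximum is twice its dimension count. A minimal sketch of that computation — the dimension keys below are illustrative assumptions, not the site's actual identifiers:

```python
# Illustrative sketch of the scoring arithmetic described above.
# Group and dimension names are assumptions; only the structure
# (0/1/2 per dimension, group max = 2 x dimension count) comes
# from the methodology.
GROUPS = {
    "education": ["curriculum", "binding", "ethics", "teacher_training", "funding"],
    "work": ["employer_training", "critical_content", "pre_redundancy", "decision_rights"],
    "government": ["institution", "civil_servant_training", "enforcement"],
    "public": ["free_resources", "infrastructure"],
}

def score_region(dimension_scores: dict) -> dict:
    """Return {group: (score, max_possible)} for a region.

    Missing dimensions default to 0, matching the index's
    'floor, not a ceiling' reading.
    """
    result = {}
    for group, dims in GROUPS.items():
        for d in dims:
            if dimension_scores.get(d, 0) not in (0, 1, 2):
                raise ValueError(f"{d}: every dimension scores 0, 1, or 2")
        score = sum(dimension_scores.get(d, 0) for d in dims)
        result[group] = (score, 2 * len(dims))
    return result

# Example: a jurisdiction with a binding, funded curriculum but only
# voluntary workplace guidance.
example = {"curriculum": 2, "binding": 2, "ethics": 1, "teacher_training": 1,
           "funding": 2, "employer_training": 1}
scores = score_region(example)
# education scores 8 of a possible 10; work 1 of 8; the rest 0.
```

The unscored groups defaulting to zero is deliberate: it mirrors the rule that absence of evidence keeps a score low rather than triggering inference.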
What a high index score means — and what it does not
A high index score means a jurisdiction has enacted binding policy across multiple dimensions. It does not mean those policies are working, that implementation quality is high, or that the AI knowledge gap is actually narrowing. A law on paper and a trained workforce are not the same thing. The model measures the former because it is observable; the latter requires data that does not yet exist at scale.
A low index score does not mean a jurisdiction is doing nothing. It may mean it is investing in ways that are voluntary, experimental, or too recent to have been enacted into law. It may also mean the research has not captured what exists. The index scores should be read as a floor, not a ceiling.
Where the model falls short
Several things matter to the AI knowledge gap that this model does not yet capture well.
Implementation quality. A mandate with no budget, no inspection regime, and no consequence for non-compliance scores 2 on the binding dimension but may produce nothing in practice. The model cannot distinguish a well-implemented mandate from a poorly implemented one. That requires a different kind of research.
Sub-national variation. In federal systems such as the United States, India, or Canada, the real action is often at the state or province level. A national index score can mask enormous variation. Sub-national scoring is available for some states or subregions. The aim is to expand this.
Private sector activity. Some companies are training their employees well, funding AI literacy programmes, and acting ahead of regulation. This does not show up in the index, which measures public policy obligations. The absence of a legal mandate does not mean a workforce is unprepared; it means preparation is left to discretion, which is unequal.
Coverage. The regions tracked here were chosen randomly as an initial sample. There are significant gaps. The absence of an index score for a country does not mean it has no relevant policy. It means it has not been researched yet.
Speed of change. AI policy is moving faster than this site can track. Index scores are updated when new research is published, which means some index scores will lag behind current reality. Each region page shows the date of the most recent update and links to the underlying research files.
How index scores are updated
Research is filed to the archive on a best-effort basis, organised by type: policies, education initiatives, and public discourse. Each research file covers a single item and includes a score_impact field listing the dimensions it affects. When a research item changes the evidence for a dimension, the region index score is updated and a dated entry is added to the region's history. Previous index scores are never overwritten; the full history is preserved.
Index score changes are conservative: a score only changes if a research file provides direct evidence of a change. Inference and extrapolation are not used. This means index scores may understate reality if relevant evidence has not yet been filed.
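The update rule described above can be sketched as append-only bookkeeping. Apart from score_impact, which the methodology names, the field names here are hypothetical; the point is the conservative evidence check and the preserved history:

```python
from datetime import date

# Hypothetical research-file shape. Only score_impact is named by the
# methodology; the other field names are illustrative assumptions.
research_item = {
    "type": "policies",
    "region": "example-region",
    "score_impact": {"teacher_training": 2},  # dimensions this item affects
    "direct_evidence": True,                  # inference is never used
}

def apply_research(region: dict, item: dict) -> dict:
    """Apply a research item conservatively: append a dated history
    entry; never overwrite previous scores."""
    if not item.get("direct_evidence"):
        return region  # no direct evidence, so the score does not change
    latest = region["history"][-1]["scores"]
    new_scores = {**latest, **item["score_impact"]}
    region["history"].append(
        {"date": date.today().isoformat(), "scores": new_scores}
    )
    return region

region = {"history": [{"date": "2024-01-01", "scores": {"teacher_training": 0}}]}
apply_research(region, research_item)
# The 2024-01-01 entry is preserved unchanged; a new dated entry
# records the revised score.
```

Appending rather than mutating is what makes the region history auditable: every past index score remains visible alongside the research that changed it.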
If you are aware of a policy, law, or initiative that should change an index score, submit it via the form on this site. Include the region, the source URL, and which dimension you think it affects. Every submission is read.