Build your own AI literacy now
Meaningful AI proficiency is accessible to anyone, regardless of technical background. The most valuable skills involve knowing what to ask AI to do, how to evaluate what it produces, and where it will confidently produce wrong answers.
- Start with the tools you already use. Most productivity software (writing tools, spreadsheets, email, and search) now has AI features. Learning to use them well in your specific work is more immediately useful than general AI literacy courses.
- Learn to evaluate AI output critically. AI systems produce fluent, confident text that is sometimes wrong. Understanding how to check claims, identify gaps, and recognise when a result cannot be trusted is the difference between AI as a reliable tool and AI as a source of unchecked error.
- Understand the basic questions to ask about any AI system affecting you: who built it, what data it was trained on, what it optimises for, and who is accountable when it fails? You do not need a computer science degree to ask these questions.
- Make your AI proficiency visible. In a job market where AI use is increasingly a differentiator, demonstrating what you can do with AI matters more than simply knowing it exists. The 55% employer regret rate on AI-related layoffs shows that companies which cut skilled workers underestimated the cost. Workers who have built proficiency are harder to cut and easier to promote.
What you can do this week
- Spend 30 minutes with one AI tool you have not used before, applying it to a task you actually do rather than a generic test. The question is whether it changes something specific in your work. That specificity is where genuine proficiency starts.
- Ask your employer what AI training is available to all staff, not just engineers or technical teams. If training exists only for one function, the knowledge gap is being reproduced inside your organisation. The answer, whatever it is, tells you something concrete about the company's position on this.
- Ask your child's school which AI literacy standard their curriculum follows (OECD/EC, UNESCO, or a national equivalent) and whether teachers received training before rollout. If no standard is named, that is itself informative.
- Write to your elected representative with one question: where is the binding AI literacy mandate in the national education plan? The OECD/EC framework defines what schools should be required to teach. Asking why it has not been made binding is a reasonable and specific question for any constituency.
Know your rights
AI is used in decisions about you — in hiring, credit, benefits, education, healthcare, and public services. Your rights depend on where you live, but they are expanding.
- In the EU: The AI Act classifies many AI systems as high-risk and requires transparency, human oversight, and the ability to contest automated decisions. These rights become enforceable in August 2026.
- In Colorado, Illinois, New York (US): Protections against AI-driven discrimination in employment decisions, with rights to notification and appeal. If you believe an AI system affected a hiring or employment decision unfairly, you have grounds to request a human review.
- In most jurisdictions: Existing anti-discrimination law applies to AI systems. An AI system that produces discriminatory outcomes is not exempt from discrimination law because the decision was automated.
If an AI system is used in a decision about you, you can ask what system was used, how it was trained, and whether there is a right to human review. In an increasing number of places, you are entitled to an answer.
Push for change
- Ask your children's school what students are being taught about how AI works, where it fails, and how to evaluate it, beyond how to use it. A school preparing students to use AI uncritically is preparing them to fall behind students who can evaluate and question it.
- Support organisations working on binding AI governance: The Future Society, AlgorithmWatch, Access Now, and AI Now Institute are among those pushing for legally binding standards that voluntary frameworks have not produced.
- Submit findings to this site: news, corrections, or research that should be covered. The submission form is always open. Public awareness of the knowledge gap is part of how it gets closed.
Go deeper
- EU AI Act plain-language guide: your rights as an EU resident under the world's most comprehensive AI law. See research file.
- Pew Research: public views on AI risk and regulation: 77% of Americans distrust businesses and government to use AI responsibly. See research file.
- UNICEF and USTPC on AI risks to children: AI literacy as a protection for young people navigating AI systems they cannot yet evaluate. See research file.
- Council of Europe Framework Convention on AI and Human Rights: the international treaty framing AI governance as a human rights issue. See research file.
- AlgorithmWatch: civil society research and advocacy on algorithmic accountability and AI governance.
- AI Now Institute: independent research on the social implications of AI, with a focus on power, accountability, and rights.
Open resources
Everything on this site (research, analysis, and index scores) is public and freely shareable. If you have found an error, know of a policy not yet covered, or have research worth looking at, submit it here. All submissions are read.