
EU AI Act — High-Risk AI Systems Compliance Deadline (August 2026)

Region: European Union
Date: March 16, 2026
Status: In force (phased implementation)
Source: https://artificialintelligenceact.eu/
Tags: enforcement · ethics · education · workplace · policy-gap

The EU AI Act entered into force in August 2024 and reaches its most significant compliance milestone on August 2, 2026. By that date, providers of high-risk AI systems — covering biometrics, critical infrastructure, education, employment, law enforcement, migration, justice, and democratic processes — must have quality management systems, risk assessments, technical documentation, conformity assessments, and EU database registrations in place. Transparency requirements under Article 50 become enforceable simultaneously: AI chatbots must disclose their artificial nature, deepfake content must carry machine-readable watermarks, and emotion recognition systems require user notification. Penalties for non-compliance reach €35 million or 7% of worldwide annual turnover for prohibited practices.
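The obligations and penalty ceiling above can be expressed concretely. The following Python sketch is purely illustrative (not a legal compliance tool): the domain names and obligation list are paraphrased from the high-risk categories and requirements named in this article, and the penalty function encodes the Act's stated ceiling of €35 million or 7% of worldwide annual turnover, whichever is higher.

```python
# Illustrative sketch of the August 2026 high-risk obligations described above.
# Domain names are paraphrased labels, not the Act's legal definitions.
HIGH_RISK_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "law_enforcement", "migration", "justice", "democratic_processes",
}

PROVIDER_OBLIGATIONS = [
    "quality management system",
    "risk assessment",
    "technical documentation",
    "conformity assessment",
    "EU database registration",
]

def obligations_for(domain: str) -> list[str]:
    """Return the provider obligations if the application domain is high-risk,
    otherwise an empty list."""
    return list(PROVIDER_OBLIGATIONS) if domain in HIGH_RISK_DOMAINS else []

def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling of fines for prohibited practices: EUR 35 million or 7% of
    worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)
```

For example, a firm with €1 billion in worldwide annual turnover faces a ceiling of `max_penalty_eur(1_000_000_000)` = €70 million, since 7% of turnover exceeds the €35 million floor.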

Who it affects: Any company placing AI systems on the EU market, including non-EU companies. Education and employment AI systems are explicitly classified as high-risk, meaning schools, universities, and employers using AI in consequential decisions face direct obligations.

What is notably missing: The Act says little about mandatory AI literacy or ethics training for employees or students. The AI literacy obligation in Article 4 requires providers and deployers to ensure their staff have a "sufficient level of AI literacy" but sets no minimum standard, no curriculum, and no enforcement mechanism. Educational institutions that use high-risk AI are regulated as deployers but are not required to teach students about AI or its governance.