
South Korea Framework Act on AI Development and Establishment of Trust Enters Effect

Region: South Korea
Date: January 22, 2026
Status: Active
Source: https://cset.georgetown.edu/publication/south-korea-ai-law-2025/
Tags: governance, ethics, literacy, education, enforcement

South Korea’s Framework Act on the Development of Artificial Intelligence and Establishment of Trust (the AI Basic Act) took effect on January 22, 2026. Passed by the National Assembly in December 2024 and signed into law in January 2025, it makes South Korea the second jurisdiction in the world, after the European Union, to enact comprehensive AI legislation. The Act consolidates 19 separate AI bills into a unified framework, establishing a National AI Committee chaired by the president, an AI Policy Center for strategic development, and an AI Safety Research Institute for risk evaluation and standards. It imposes transparency and safety obligations on developers and deployers of high-impact AI and generative AI, requires businesses to conduct AI risk assessments, and grants transparency rights to individuals affected by AI systems in consequential decisions. The Ministry of Science and ICT (MSIT) is the primary supervisory authority; an Enforcement Decree issued by MSIT in September 2025 details implementation. The government has also announced a one-year grace period before administrative fines are imposed on businesses.

Who it affects: Businesses that develop or deploy high-impact AI systems and generative AI in South Korea — including foreign businesses meeting certain revenue or user thresholds — face transparency, safety, and risk assessment obligations. Individuals affected by AI-based decisions have transparency rights under the Act. Government agencies and civil servants working with AI are subject to oversight by the National AI Committee.

What is notably missing: The AI Basic Act does not create a binding obligation on employers to train all employees in AI literacy or ethics; its workforce training provisions are aspirational rather than mandatory. There is no penalty structure for failing to provide AI literacy training. The Act focuses on governing AI systems rather than on ensuring that the people using those systems adequately understand their risks, biases, and limitations. Training requirements for civil servants are not defined to any minimum standard.