In 2025, both UNICEF and the US Technology Policy Committee (USTPC) raised concerns about AI systems, particularly generative AI chatbots, that mediate social, educational, and political information for minors without adequate safeguards. UNICEF emphasised that AI systems affecting children must be designed with explicit transparency and accountability mechanisms to prevent exploitation. The USTPC warned specifically about the manipulative potential of chatbots interacting with young users and recommended mandatory AI literacy, alongside technical safeguards, as a protective measure.
Published by: UNICEF (United Nations Children’s Fund, international institution) and the US Technology Policy Committee (expert advisory body).
Key finding: Both organisations frame AI literacy for children as a protective measure, not merely an educational one. Children are exposed to AI systems that shape their social, educational, and political information environments before they have the capacity to evaluate or question what those systems are doing.
Context: Treating AI literacy as child protection rather than curriculum development shifts the policy framing significantly. UNICEF and the USTPC are not making an argument about workforce preparation; they are arguing that children without AI literacy have no defences against manipulation. This framing places AI literacy in education alongside other established child-safety standards.