
Stanford AI Index 2025 — AI-Related Data Privacy Risks Increase 56% Year-on-Year

enforcement · ethics

Stanford University’s 2025 AI Index reported a 56% year-on-year increase in AI-related data privacy risks. The report documents harms arising from training-data misuse, inference attacks, and the growing deployment of AI systems in sensitive domains — health, education, and employment — without adequate privacy frameworks. Across all domains studied, measurable harm is increasing faster than regulation.

Published by: Stanford University (academic research institution), through the annual AI Index — a widely cited benchmark for tracking AI development and its societal impacts.

Key finding: AI-related data privacy risks grew 56% in a single year. The highest exposure is concentrated in the domains affecting the most vulnerable populations: children in schools, patients in healthcare, and workers in employment. In every domain studied, harm is outpacing regulation.

Context: A 56% annual increase in privacy risks is one of the most concrete quantitative measures available of the cost of delayed governance. Because the Stanford AI Index is published annually and tracks these trends over time, it provides a reliable baseline for assessing whether the gap between harm and regulation is narrowing or widening.