The problem
Most people who use AI today are using a chat box. They type a question, they receive an answer, and they evaluate it more or less as they would a search result. This is not nothing. It is considerably less than what AI can currently do, and the gap between this basic usage and what is already possible for technically capable users is not a gap that closes gradually. It is widening by the week.
I am not describing a future risk. Anthropic released over forty product updates in Q1 2026 alone: custom memory, persistent projects, computer-use agents, extended context windows, operator-level customisation, and a model API capable of coordinating chains of tasks without human input at each step. Most users who interact with Claude daily are not using any of these features. They are writing questions in a single box and reading replies. The features exist; the knowledge of how to use them does not.
Meanwhile, a smaller group is operating at a fundamentally different level. Developers and technically fluent knowledge workers are building personal agents that monitor inboxes, synthesise research, draft outputs, and trigger actions in other applications without direct instruction at every step. They are connecting AI to second-brain tools such as Obsidian, routing outputs through automation platforms such as Zapier, and using Telegram bots as lightweight interfaces for tasks that once required a laptop. They are running models locally on personal hardware, fine-tuning them on their own documents, and routing between different models depending on task type. The productivity difference between this group and the chat-box group is not marginal. It is structural.
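To make the orchestration pattern concrete, here is a minimal sketch of what "routing between different models depending on task type" can look like in practice. It is illustrative only: the task categories, model names, and the call_model helper are assumptions for the sake of the example, not any particular vendor's API.

```python
# Illustrative sketch of task-type routing: send each incoming task to a
# different model (or tool) depending on what kind of work it is.
# The task kinds, model names, and the call_model stub below are
# assumptions for this example, not any vendor's actual API.

from dataclasses import dataclass


@dataclass
class Task:
    kind: str      # e.g. "summarise", "draft", "code"
    payload: str   # the text the task operates on


# Hypothetical routing table: which model handles which kind of task.
ROUTES = {
    "summarise": "small-fast-model",
    "draft": "general-purpose-model",
    "code": "code-specialised-model",
}


def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real API call or a locally hosted model."""
    return f"[{model}] response to: {prompt[:40]}"


def route(task: Task) -> str:
    """Pick a model by task kind, falling back to a general model."""
    model = ROUTES.get(task.kind, "general-purpose-model")
    return call_model(model, task.payload)


if __name__ == "__main__":
    inbox = [
        Task("summarise", "Weekly digest of AI literacy policy updates..."),
        Task("draft", "Reply to the procurement team about tool vetting..."),
    ]
    for task in inbox:
        print(route(task))
```

In a real setup the routing table and the call_model stub would point at actual endpoints or local models, and the inbox would be fed by the integrations described above. The point is that the advanced user has a layer like this sitting between their work and any single chat interface.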
The phrase “AI literacy” as it currently appears in policy documents does not capture this divide. Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure their staff have “a sufficient level of AI literacy.” It does not define sufficient. Even the strongest existing definitions, from Singapore’s four-part framework to the OECD/EC model published in 2025, treat literacy as a ladder that tops out at critical engagement with AI systems. The usage gap I am describing sits above that ceiling. It is not about understanding AI or even using AI. It is about orchestrating AI as an integrated layer of daily professional work.
Why it matters
The productivity consequences are already visible and are accelerating. The worker who uses AI to draft, research, and review does more than a worker who does not. The worker who uses AI as an orchestration layer, routing tasks between specialised tools, maintaining persistent context across days and weeks, and running parallel processes that surface results when needed, operates at a different scale entirely. These are not the same advantage, and confusing them leads to policy responses that address the first problem while the second compounds.
The catch-up problem is real even for people who consider themselves technically proficient. I have spoken with software engineers who are current on language models but have not built a personal agent. I have spoken with researchers who use AI to summarise papers but have not connected their tools together in any persistent way. The reason is not ability. It is time. The pace of feature release across every major AI platform in early 2026 has made it genuinely difficult to track what is available, let alone integrate it. For a non-technical user, the difficulty is not keeping pace; the difficulty is knowing that there is anything to keep pace with.
That asymmetry is the problem. The advanced users are not keeping their methods secret, but their methods are documented in GitHub repositories, Discord servers, and YouTube tutorials aimed at people who already know enough to follow them. The majority of workers who would benefit from these capabilities are not reading those materials, because nothing in their professional environment has told them they should be.
There is a second structural barrier that belongs in this analysis: corporate security and IT governance. In most organisations with more than a few hundred employees, any new AI tool must pass a vetting process before employees are permitted to use it. This process exists for legitimate reasons. It also takes months, produces inconsistent outcomes, and in practice results in most tools being blocked by default while a small list of approved applications receives all the sanctioned use. The worker who wants to connect an AI assistant to their calendar, their document store, and their email to build a basic workflow is not facing a knowledge problem or a motivation problem. They are facing a procurement queue.
The data-training question has made this significantly worse. The concern that AI providers may use enterprise data to train future models is not irrational, and the public record of how several providers have handled data retention is genuinely ambiguous. Corporate legal and governance teams read that ambiguity as risk, and their default response to risk is prohibition. The result is that employees in governed organisations are often restricted to a stripped-down browser interface with data-sharing turned off, while their peers at smaller companies or in personal use contexts are running full integrations. The governance reflex meant to protect the organisation is, in practice, widening the usage gap between sectors and between company sizes.
What should happen
I believe four things need to happen, and none of them are being seriously proposed at the moment.
First: AI literacy standards need a usage tier above critical engagement. The OECD/EC framework’s fourth dimension, “critically engaging with AI,” is the right top of the ladder for citizenship. It is not the right top for professional competence in 2026. A professional-tier definition should include the ability to connect AI tools to external systems, build and run basic automated workflows, evaluate when task delegation to an agent is appropriate, and maintain awareness of capability changes in the platforms being used. This is teachable. It needs to be named before it can be required.
Second: employers need to treat AI capability tracking as an ongoing obligation, not a one-time training event. A training course designed in late 2024 does not describe the tools available in mid-2026. Companies that ran AI onboarding two years ago and have not revisited it have employees working with an outdated mental model of what is possible. The obligation should be continuous, with a defined update cycle tied to platform change rates, not to the employer’s training budget cycle.
Third: AI providers need to make their data governance commitments auditable, not just stated. The reason corporate security teams default to prohibition is that the alternative requires trusting a vendor’s privacy policy. That is not an unreasonable position given how those policies have evolved. Providers who want enterprise adoption at full capability, not just through locked-down interfaces, need to offer independently verifiable commitments: no training on customer data by default, clear contractual data residency terms, and audit rights that a legal team can actually act on. Without that, the security vetting process will continue to function as a capability ceiling for everyone inside a governed organisation.
Fourth: the platforms themselves, meaning Anthropic, OpenAI, Google, and their peers, have a responsibility they are not currently meeting. Feature releases aimed at developers and sophisticated users are accompanied by extensive documentation. Feature releases that would change how ordinary users approach their work, such as persistent memory, project context, and agent capabilities, are announced through blog posts read primarily by people already tracking the space. The duty to inform users of capability changes in plain language, oriented to common professional use cases, is not being fulfilled.
What already exists
The research base for the productivity and skills arguments is documented in this site’s archive. IDC projects $5.5 trillion in losses from IT and AI skills gaps by 2026, a figure from IDC’s 2024 IT skilling report (it covers the broader IT skills shortage; AI is the top in-demand skill category within it). The OECD/EC framework, the most complete current definition of AI literacy, is summarised at OECD AI Literacy Framework. The Harvard Business School finding that AI training without agency produces anxiety rather than capability is documented at Harvard: AI Training, Workplace Anxiety.
What does not yet exist in this archive, and what I intend to document, is a systematic account of the gap between current AI capability ceilings and the average usage level of the professional workforce. That gap is the next research priority for this site.
What you can do
If you are a knowledge worker who uses AI only in a single chat interface: you are using a fraction of what is available to you at no additional cost. The most useful next step is not a course. It is one hour spent finding out what the platform you already use can do, specifically whether it supports persistent memory, project context, file uploads, or automated workflows. Most people who find out are surprised by the answer.
If you work in learning and development or HR: the training programme you designed in 2024 needs revision. The correct revision cycle for AI training content in the current environment is every six months, not every two to three years. The measure of success is not completion. It is whether employees are aware of and using capabilities that were not available when they were last trained.
If you work in IT security or corporate governance: a blanket vetting queue is not a neutral position. Every month a useful tool spends in review is a month in which employees find workarounds, which produces exactly the shadow-IT risk the vetting process is meant to prevent. A faster, tiered review process, distinguishing between tools that handle sensitive data and tools that do not, would serve the organisation better than a single slow queue. Separately: if your organisation’s AI provider cannot give you a clear, contractual answer on data training, that is a legitimate reason to escalate, and a reasonable thing to ask that provider to resolve.
If you build AI products: plain-language explanation of capability changes, written for the non-technical professional user and oriented to common tasks, is a product responsibility. Release notes written for developers are not a substitute. The people who would benefit most from knowing what has changed are precisely the people who are not reading your API documentation.
The usage gap will not close by itself. The capability distance between the basic user and the advanced user increases every time a platform ships a feature that one group knows how to use and the other group does not know exists.