The three-week pattern
Week one is excitement. The CEO sends an all-hands email. Everyone tries ChatGPT. Slack channels pop up. Someone in marketing drafts a blog post in ten minutes. Momentum feels real.
Week two is reality. Someone pastes AI-generated text into a client deliverable without checking it. A draft goes out with hallucinated statistics. Trust cracks. The enthusiasts keep going. Everyone else gets quiet.
Week three is silence. People go back to the old way. Nobody checks. Nobody notices. Six months later, the license renewal comes around and someone in procurement asks, "Are people even using this?"
While 59% of enterprise leaders acknowledge the AI skills gap, only 53% say they're doing anything about it. That six-point gap sounds small, but it represents millions of employees in organizations that see the problem and still haven't acted.
Why this happens (it's not the technology)
Nobody teaches what AI is good at and what it isn't. Without that calibration, every bad output feels like betrayal. People don't think "I gave it bad context." They think "this doesn't work."
AI gets positioned as a separate activity. "Use AI more" is not a workflow instruction. It's a guilt trip. When someone has to open a different tool, figure out what to ask, then figure out what to do with the answer, most people just do it the old way.
There's no accountability loop. Leadership asked for training. Training happened. Box checked. Nobody measured whether anyone worked differently three weeks later.
Nobody ever checked. That's the tell. If leadership doesn't ask "show me what changed," nothing will.
What makes adoption stick
The companies where AI stuck treated it as an operational change. The companies where it fizzled treated it as an IT deployment.
- Embed AI into existing workflows: If someone has to leave their primary tool to use AI, you've lost. It has to feel like part of the work, not a detour from it.
- Designate AI champions by function: Not IT staff. Ops leads, finance managers, marketing directors who drive adoption within their own teams. They know the workflows. They have the credibility.
- Measure outcomes, not activity: How much time did finance save on monthly reporting? "Number of people trained" is an input metric. Time saved on real tasks is the metric that justifies the investment.
The enterprise reality
I've watched this pattern play out across departments, geographies, and seniority levels at a $40B enterprise. The teams that adopt permanently are never the ones with the most training hours. They're the ones where AI becomes part of the workflow.
You can buy training the way you buy a gym membership and get the same results. Habits form from structured practice on real problems, with someone checking whether anything changed.