Three common investments, one shared failure
86% of companies are increasing AI budgets (Deloitte, 2026)
35% have a mature upskilling program (Deloitte, 2026)
5–15% completion rate on self-paced platforms (Training Industry)
Most AI training budgets go to three things: platform licenses (LinkedIn Learning, Coursera), lunch-and-learns, and prompt workshops. While 86% of companies are spending more, only 35% describe their programs as mature. That gap is where the money disappears.
Platform licenses are the default move. L&D signs a deal, sends an email, and waits. Completion rates for self-paced AI courses run between 5% and 15%. For comparison, instructor-led programs see completion rates above 85%.
Six months in, 80% of employees haven't logged in since week one. The license renews anyway. Canceling it would mean admitting the initiative failed.
Lunch-and-learns have a different problem. They're led by whoever's available, not by someone who's built AI into real work. I've talked to L&D directors who run these monthly. Not one has followed up to check whether anyone applied a single thing.
Prompt workshops are the newest thing: half-day sessions on crafting the perfect prompt. But prompting is to AI what typing is to writing. Teaching someone to prompt better without showing them where AI fits in their work is like teaching someone to type faster without teaching them what to write.
What all three have in common: nobody measures whether anything changed. Ask how training went and you hear "people seem more comfortable" or "engagement was positive." That's sentiment. It doesn't tell you whether anyone works differently.
What a training strategy looks like
A strategy starts with a specific pain point, not "AI awareness." Which tasks eat the most time for the least value? The operations team compiling weekly status reports. The legal team reviewing vendor contracts. The finance team writing monthly commentary. Start there.
The measurement changes too. Completion rates tell you who clicked through. Login rates tell you who remembered their password. Neither tells you whether work changed.
The best programs track one metric: can someone now do in 30 minutes what used to take 3 hours? If yes, training worked. If they can't name a task that changed, it didn't.
When AI gets embedded into existing workflows, adoption becomes invisible. Nobody has to remember to "use the AI tool." The AI is the workflow. That's the difference between "we trained on AI" and "AI is how we work now."
The line-item test
Look at where AI training sits in your L&D budget. If it's next to compliance training and the annual leadership retreat, it will deliver the same results: a checkbox.
Compliance training protects the company from liability. Leadership retreats make people feel valued. Neither changes how work gets done. AI training that sits alongside them inherits the same expectation: attend, check the box, move on.
How you frame it matters. Call it an investment and someone asks about returns. Call it a perk and nobody does.
If you can't name three workflows your team does differently since the last training, the training didn't work.
The skill gap looks the same in a classroom and a conference room
I teach 75+ students at the University of Chicago and lead AI at a $40B enterprise. The skill gap is the same on both sides.
A 22-year-old graduate student and a senior VP with two decades of experience make the same mistakes. They hit the same walls and have the same breakthrough moments. The VP just takes a little longer to admit they're stuck.
It's not a knowledge problem. Everyone's heard of AI. Most have tried ChatGPT. The gap is a workflow problem: nobody has shown them where AI fits in their specific work, in the reports they write every week and the follow-up emails they send after every meeting.
Until training gets that specific, it stays theoretical. And theoretical training is what a line item buys.