This week we tackled the question I hear most often from leaders working through the AI Leadership Essentials course — why do well-funded, well-intentioned AI initiatives still fail? If you haven't tried Module One yet, it's free and it's the foundation for everything we cover in this newsletter. No credit card, no obligation. [Start here → Module One]
Now, here's what we explored this week.
Quick Wins This Week
Tuesday opened with a story that should sound familiar: a company that spent two million dollars on an AI system that was delivered on time, worked flawlessly, and still sat at 12% adoption eight months later. The technology wasn't the problem. Nobody planned for the humans. The post explored why the gap between successful implementation and successful adoption is almost always a people problem, and why the business case gets 40 slides while change management gets an announcement email.
Thursday got practical about what "planning for the humans" actually looks like in real conversations. Specifically: why the same message lands completely differently depending on who's in the room. Your executives need to hear about business outcomes and risk mitigation. Your managers need to know how to support their teams. Your frontline employees need specifics — not platitudes — about what changes for them and why it makes them more valuable, not less.
Two different posts. One through-line.
Deep Dive: The Real Reason AI Initiatives Fail
Here's the connection most leaders miss:
The reason your AI pilot stalls isn't usually the technology. It's that your team never felt safe enough to actually use it.
You can have the right tool, the right use case, a validated business case, and a vendor who delivers on time, and still watch adoption flatline. Because nobody told the team what was happening, why it was happening, or what it meant for them individually. The silence filled with fear. Fear produced resistance. Resistance killed momentum.
Research consistently shows that 70% of AI projects fail not because of bad technology or insufficient budget — but because organizations treat AI adoption as a technology problem instead of a people problem. They invest heavily in the tool and almost nothing in the conditions that make people willing to use it.
The leaders who get this right don't just build an AI strategy. They build the psychological safety for their teams to experiment, fail, and try again. And critically — they communicate differently to different audiences rather than sending one generic announcement and hoping it lands.
That's not a soft skill add-on. That's the actual work. And it's the work that most expensive consultants never put in their 200-page deck.
This Week's Action Item
Before your next AI-related conversation with your team, ask yourself: who specifically am I talking to, and what do they actually need to hear?
Then rewrite your message for that specific audience using this simple filter:
Executives → Lead with business outcome and risk mitigation. Skip the tool name.
Managers → Lead with how you'll support them in supporting their teams. Be specific about resources.
Frontline employees → Lead with what specifically changes in their day and why it makes them more valuable. No vague reassurances.
If your current communication is one message sent to all three audiences, that's your starting point this week.
NRM Spotlight
If this week's themes resonated — the adoption gap, the people problem, the communication disconnect — Module One of AI Leadership Essentials is where to start. It covers the foundation most AI initiatives skip entirely: how to cut through vendor hype, what AI can and can't realistically do for a company your size, and how to evaluate proposals before you spend a dollar.
Free. No credit card. No obligation.
Coming Next Week
Next week we're tackling something that frustrates me every time I see it — the Fortune 500 AI case study problem.
You've seen them. A glossy success story from Amazon, Google, or a major financial institution showing 300% productivity gains and transformational ROI. And somewhere in a boardroom, someone is pointing at that slide and asking, "Why can't we do this?"
Next Tuesday I'll break down exactly why those case studies are misleading for companies your size — and more importantly, what to look for instead when you're evaluating what's actually possible for your budget, your team, and your infrastructure.
Then Thursday we're going deeper on the people side with something most AI playbooks completely overlook: how to identify and empower AI champions inside your organization. Spoiler — it's probably not who you think. Peer influence almost always outperforms top-down mandates, and I'll show you exactly how to build a champion network that actually moves the needle.
Two posts, one week, a lot of practical ground to cover.
See you next week!
Nikki