The AI Skeptic's Dilemma: When Continuous Improvement Leaders Can't Improve Together
- Eric Olsen
- Nov 7
- 6 min read
"I can't believe that I haven't been able to do this before."
Marc Braun's confession during the second Lean Into AI webinar caught everyone's attention. Here was a CEO coach with 30 years in continuous improvement, someone who built cultures of coaching and problem-solving for a living, admitting he'd been stuck. Not stuck technically—his team members played with AI individually and saw its potential. They were stuck collaboratively. Despite being continuous improvement experts who knew that team-based learning accelerates capability, they weren't improving together with AI—and that bothered him profoundly.
Watch the full webinar: https://www.fpwork.org/fpw-videos
The breakthrough came from an unexpected source: a 24-year-old who simply sat alongside the team for four hours, creating permission to experiment collectively on a shared platform. No grand implementation strategy. No change management program. Just structured space for mutual discovery. Within three months, the entire leadership team could build custom bots. Within six months, they had a company-wide AI strategy. The technical tools were always available. What was missing was the permission structure for collaborative learning.
This moment captures something essential about AI adoption that technical discussions often miss: the tools aren't the bottleneck. The collective learning infrastructure is.

The 95% Problem Has a Familiar Shape
When Tyson Heaton from the Lean Enterprise Institute noted that 95% of AI projects fail—the same rate often cited for lean implementations—the parallel became impossible to ignore. But Heaton pushed back on taking these "failure" narratives at face value. While not every lean implementation hits consultant-promised KPIs, lean's artifacts, methodologies, and thinking patterns have fundamentally shaped how modern organizations operate. The pervasive influence matters more than perfect execution.
Both transformations stumble on identical challenges: organizations wanting quick wins, leaders pushing top-down mandates, teams lacking safe spaces to experiment, and technology deployed before processes mature. The conversation revealed why lean practitioners might be uniquely positioned to guide AI adoption. They've lived through transformation that works and transformation that fails. They understand that sustainable change requires engaged problem-solvers, not just new tools. They recognize when organizations try to automate dysfunction rather than optimize processes first.
Heaton described it clearly: AI and lean both function as amplification and reflection tools that magnify existing organizational conditions. Strong foundations get stronger. Dysfunction accelerates. Data governance issues that teams tolerated become impossible to ignore when AI tries to operate across silos. The technology becomes a diagnostic tool, revealing exactly where organizational capability needs development—precisely what value stream mapping does for manufacturing processes.
Quality Can't Take a Holiday
One principle emerged with particular force: experimentation cannot excuse declining output quality. Braun emphasized this through his CEO coaching bot story. He built the bot privately, testing and refining until it matched or exceeded his own coaching quality. Only then did he demonstrate it to clients—and the result was transformative. The client asked if he was scared the bot would replace him. Braun's response: "I built the bot."
This approach matters because it preserves trust while building capability. Teams see AI enhancing work, not creating excuses for sloppy output. The learning happens internally. The value appears externally. Recipients experience improved service, not half-baked experiments.
The principle extends to every role. Generate reports using AI tools, but validate accuracy before distribution. Use AI to coach through problems, but maintain responsibility for decisions. Write your own emails first, then let AI suggest improvements—but you still own what gets sent. The technology augments judgment; it doesn't replace accountability.
Start Where You Are: Your Own Role
Both speakers advocated strongly for beginning with personal role optimization before attempting organizational transformation. Identify tasks consuming disproportionate time or energy. Experiment with AI to automate or improve them. Maintain quality standards. Build sophistication through felt experience.
Braun shared how his wife, initially skeptical, used AI to scope wool crafting projects. The value became undeniable. She told him to invest fully in AI capabilities—not because he convinced her through argument, but because the technology demonstrably improved work she cared about. That's how believers develop: through personal experience of time saved and quality improved, not through mandates or vision statements.
The pattern holds across contexts. Heaton described using AI to analyze his sales calls—capturing transcripts, generating summaries, conducting company research, then asking the AI to critique what he missed. Each application built on the previous one, deepening capability progressively rather than attempting comprehensive transformation immediately.
The Fields That Lead Show Us Why
Certain fields progress faster with AI: DevOps teams with lean backgrounds managing software lifecycles, and marketing teams with coherent campaign frameworks and rapid feedback. Their success reveals prerequisites for effective AI adoption—well-defined data structures, clear quality measures, short feedback loops.
These leading fields also demonstrate that AI creates new bottlenecks requiring systemic thinking. DevOps teams can generate software faster, but that speed pushes the bottleneck downstream into deployment and quality assurance. The technology doesn't eliminate the need for systematic improvement; it intensifies the requirement. Organizations without lean thinking capabilities find themselves managing dysfunction at unprecedented speed.
Manufacturing and knowledge work face longer cycles and more nuanced quality assessment, making AI integration harder. But the solution isn't abandoning AI—it's building the structural readiness that successful industries demonstrate: frameworks for assessment, mechanisms for feedback, standards for quality. Lean practitioners do this work naturally.
Moving Left: From Reactive to Proactive
The most compelling example came from contract chemical manufacturing. A CEO created regulatory compliance AI tools for different agencies, then used them to audit operations before clients arrived. The company identified and addressed issues proactively, transforming the client relationship from inspected subordinate to peer colleague.
When the client audit found only three variances from the AI-generated assessment, the company could ask intelligent questions: are these regulatory requirements or company preferences? The conversation shifted from defensive justification to collaborative problem-solving. The technology didn't replace expertise—it democratized access to sophisticated analysis, enabling proactive quality management.
This "moving left" pattern—addressing issues before they become problems—represents AI's most powerful application for continuous improvement. Predict equipment failures before breakdowns. Identify quality issues before production runs. Generate root cause analyses that experienced team members would produce, but available to front-line operators at 2 AM when specialists are home sleeping—augmenting operator expertise rather than replacing it.
The Path Forward Requires Permission
The session's most actionable insight might be the simplest: get your team in a room for four hours with someone comfortable with AI, use shared platforms, and create structured time for collective learning. Not implementation. Not deployment. Just permission to discover together what becomes possible when technology augments collaborative problem-solving.
This isn't about abandoning individual experimentation. It's about recognizing that isolated learning creates reinvented wheels and repeated mistakes. Team-based discovery accelerates capability while building shared language. Weekly demonstrations where colleagues show improvements they've made create momentum. Daily huddles sharing small wins compound over time.
The lean community has spent decades understanding how to build cultures of continuous improvement. Those same principles—respect for people, systematic problem-solving, visual management, rapid experimentation—apply directly to AI adoption. The technology changes. The transformation dynamics remain remarkably consistent.
For practitioners wondering where to start: pick one aspect of your role that consumes too much time or delivers inconsistent quality. Experiment with AI to improve it. Keep what works. Learn from what doesn't. Share discoveries with colleagues. Build capability through practice, not through planning.
AI doesn't eliminate the need for lean thinking. It amplifies the importance of systematic improvement, engaged problem-solvers, and organizational learning. The question isn't whether to adopt AI—it's whether we'll apply the wisdom gained from decades of continuous improvement to guide adoption that serves people and creates value.
Knowledge Map: Connecting to Your Context
Process Keywords: Team-based learning, quality maintenance during experimentation, individual role optimization, collective discovery, prompt engineering progression, custom bot development, proactive problem-finding, systematic capability building, visual management of improvements, rapid feedback loops, peer demonstration forums, platform standardization
Context Keywords: AI skepticism, transformation fatigue, isolated experimentation, permission structures, capability gaps, trust erosion, cultural readiness, governance challenges, generational differences, technical intimidation, output quality concerns, time scarcity
Application Triggers:
• Facing AI skepticism in your team → Team-based 4-hour workshop approach
• Seeing inconsistent AI adoption → Shared platform strategy with daily improvement sharing
• Struggling with AI governance → Lean practitioner participation in AI councils
• Unclear where to start → Personal role optimization before organizational scaling
• Experiencing quality concerns → Internal experimentation with external quality standards
Related Continuous Improvement Themes: Creating psychological safety for experimentation, building problem-solving capability, developing standard work for new technologies, visual management of progress, respect for people through augmentation not replacement
Continue the conversation:
Follow FPW on LinkedIn: https://www.linkedin.com/company/future-people-work/
Join monthly FPW discussions: https://forms.gle/yXPbCXURdfvYtjmn9
Learn more: https://www.fpwork.org/
Watch the webinar: https://www.fpwork.org/fpw-videos
People to Connect With:
@Kelly Reo @Marc D. Braun @Tyson Heaton @Steve Pereira @Jamie Bonini @Bruce Hamilton @Carlos Scholz @Rachel Reuter @Josh Howell
Hashtags:
#FutureOfPeopleAtWork #LeanIntoAI #ContinuousImprovement #AIAdoption #Leadership #OrganizationalLearning #LeanThinking #OperationalExcellence
Attribution:
This post was developed through collaboration between Future of People at Work leadership and synthesized with Claude.AI assistance. It represents ongoing work by the Future of People at Work initiative, a collaboration of Catalysis, Central Coast Lean, GBMP Consulting Group, Imagining Excellence, Lean Enterprise Institute, Shingo Institute, The Ohio State University Center for Operational Excellence, and Toyota Production System Support Center (TSSC).