Agents Unleashed
Agents Unleashed is a podcast for curious change agents building the next generation of adaptive organizations — where people and AI learn, work, and evolve together.
Hosted by Mark Richards, Ali Hajou, Stephan Neck, and Nikolaos Kaintantzis, the show blends stories from the field with experiments in agility, leadership, and technology. We explore how work is changing — from agile teams to agentic ecosystems — through honest conversation, a dash of mischief, and the occasional metaphor that gets away from us.
We’re not selling frameworks or chasing hype. We’re practitioners figuring it out in real time — curious, hopeful, and sometimes hilariously wrong.
Join us as we unpack what it really means to be adaptive in a world where intelligent agents (human and otherwise) are rewriting the rules of change.
Episodes

33 minutes ago
If you put "AI Architect" on your LinkedIn headline tomorrow, what would you actually have to know—or explain—to deserve it? And in a landscape where the ground shifts weekly, how do you make architectural decisions without drowning in technical debt or chasing every buzzword that appears in your YouTube ads?
Mark anchors a conversation with Stephan and Niko exploring what it means to be an architect when the tools, expectations, and pace of change have all shifted under your feet. All three confess their architect credentials are 10-15 years old—but they've spent those years in the trenches coaching architects through agile transformations, cloud migrations, and now AI disruption. This isn't theory. It's practitioners who know what architects are actually struggling with, thinking out loud about what's changed and what endures.
Key Themes:
From Gollum to Collaborator
Niko opens with a vivid metaphor: the pre-agile architect as Gollum—alone, schizophrenic, clutching "my precious" architecture in an ivory tower. Agile transformed the role into something more collaborative. The question now: how does AI continue that evolution? The hosts agree that architects who try to remain gatekeepers will simply "be blown away."
The LinkedIn Headline Test
What would earning "AI Architect" actually require? Stephan wants to see evidence—real AI design work, not just buzzword collection. Niko warns against reducing AI to technology: "It's not about frameworks. It's about solving business problems." Mark adds that good architects have always known when to tap experts on the shoulder—the question is whether you understand enough to know what questions to ask.
Balancing Executive Hype vs. Reality
YouTube promises virtual employees in an hour. Enterprise reality involves governance, security, and regulatory compliance. The hosts explore the translation work architects must do between executive excitement and responsible implementation—work that looks a lot like change management with a technical edge.
Decisions in Flux
Classic architect anxiety—making choices that create lasting technical debt—gets amplified by AI's pace. Stephan returns to fundamentals: ADRs (architectural decision records), high-level designs, IT service management. Niko offers a grounding metaphor: "You can't build a skyscraper with pudding. You have to decide where the pillars are." Document your decisions, accept that you're deciding with incomplete information, and trust that you'll decide right.
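For anyone who hasn't kept one, an ADR is simply a short, dated text file that records a single decision, its context, and its consequences. A minimal sketch of the kind of record Stephan has in mind might look like this (the headings are a common convention, and every detail below is illustrative, not from the episode):

```
ADR-012: Put a thin internal interface in front of our LLM vendor

Status: Accepted (2025-11-20)

Context: Teams want to build on a commercial LLM API. The market is
shifting weekly, so any vendor choice may look wrong in six months.

Decision: All product code calls our own small gateway interface;
only the gateway knows about the vendor SDK.

Consequences: We can swap vendors without touching product code.
Cost: one more layer to maintain. Revisit after two quarters.
```

The template matters less than the habit: a dated record of what you decided, and with what information, lets you revisit the decision later instead of re-litigating it from memory.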
For architects navigating AI disruption, this conversation offers something practical: not a new framework to master, but a reframe of what endures. Document your decisions. Build context for AI to help prioritize your learning. Make friends who are learning different things. And recognize that "adoption rate is lower than innovation rate"—so stay calm. The ground is moving, but the work of bridging business problems and technical solutions hasn't changed. Just the speed.

Friday Nov 14, 2025
Are product managers training for a role AI will do better?
Stephan Neck anchors a conversation that doesn't pull punches: "We've built careers on the idea that product managers have special insight into customer needs—but what if AI just proved that most of our insights were educated guesses?" Joining him are Mark (seeing both empowerment and threat) and Niko (discovering AI hallucinations are getting scarily sophisticated).
This is the first in a series examining how AI disrupts specific roles. The question isn't whether AI affects product management—it's whether there's a version of the role worth keeping.
The Mechanical vs. Meaningful Divide
Mark draws a sharp line: if your PM training focuses on backlog mechanics, writing features, and capturing requirements—you're training people for work AI will dominate. But product discovery? Customer empathy? Strategic judgment? That's different territory. The hosts wrestle with whether most PM training (and most PM roles in enterprises) has been mechanical all along.
When AI Sounds Too Good to Be True
Niko shares a warning from the field: AI hallucinations are evolving. "The last week, I really got AI answers back which really sound profound. And I needed time to realize something is wrong." Ten minutes of dialogue before spotting the fabrication. Imagine that gap in your product architecture or requirements—"you bake this in your product. Ooh, this is going to be fun."
The Discovery Question
Stephan flips the script: "Will AI kill the art of product discovery, or does AI finally expose how bad we are at it?" The conversation reveals uncomfortable truths about product managers who've been "guessing with confidence" rather than genuinely discovering. AI doesn't kill good discovery—it makes bad discovery impossible to hide.
The Translation Layer Trap
When Stephan asks if product management is becoming a "human-AI translation layer," Mark's response is blunt: "If you see product management as capturing requirements and translating them to your tech teams, yes—but that's not real product management." Niko counters with the metaphor of a horse whisperer. Stephan sees an orchestra conductor. The question: are PMs directing AI, or being directed by it?
Mark's closing takeaway captures the tension: "Be excited, be curious and be scared, very scared."
The episode doesn't offer reassurance. Instead, it clarifies what's at stake: if your product management practice has been mechanical masquerading as strategic, AI is about to call your bluff. But if you've been doing the hard work of genuine discovery, empathy, and judgment—AI might be the superpower you've been waiting for.
For product managers wondering if their role survives AI disruption, this conversation offers a mirror: the question isn't what AI can do. It's what you've actually been doing all along.

Saturday Nov 08, 2025
What comes first in your mind when you hear "AI and ethics"?
For Mark, it's a conversation with his teenage son about driverless cars choosing who to hurt in an accident. For Stephan, it's data privacy and the question of whether we really have a choice about what we share. For Niko, it's the haunting question: when AI makes the decision, who's responsible?
Niko anchors a conversation that quickly moves from sci-fi thought experiments to the uncomfortable reality—ethical AI decisions are happening every few minutes in our lives, and we're barely prepared. Joining him are Mark (reflecting on how fast this snuck up on us) and Stephan (bringing systems thinking about data, privacy, and the gap between what organizations should do and what governments are actually doing).
From Philosophy to Practice
Mark's son thought driverless cars would obviously make better decisions than humans—until Mark asked what happens when the car has to choose between two accidents involving different types of people. The conversation spirals quickly: Who decides? What's "wrong"? What if the algorithm's choice eliminates someone on the verge of a breakthrough? The philosophical questions are ancient, but now they're embedded in algorithms making real decisions.
The Consent Illusion
Stephan surfaces the data privacy dimension: someone has to collect data, store it, use it. Niko's follow-up cuts deeper: "Do we really have the choice what we share? Can we just say no, and then what happens?" The question hangs—are we genuinely consenting, or just clicking through terms we don't read because opting out isn't really an option?
Starting Conversations Without Creating Paralysis
Mark warns about a trap he's seen repeatedly—organizations leading with governance frameworks and compliance checklists that overwhelm before anyone explores what's actually possible. His take: "You've got to start having the conversations in a way that does not scare people into not engaging." Organizations need parallel journeys—applying AI meaningfully while evolving their ethical stance—but without drowning people in fear before they've had a chance to experiment.
Who's Actually Accountable?
The hosts land on three levels: individuals empowered to use AI responsibly, organizations accountable for what they build and deploy, and governments (where Stephan is "hesitant"—Switzerland just imposed electronic IDs despite 50% public skepticism). Stephan's question lingers: "How do we make it really successful for human beings on all different levels?"
When Niko asks for one takeaway, Mark channels Mark Twain: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so. My question to you is, what do you know about AI and ethics?"
Stephan reflects: "AI is reflecting the best and the worst of our own humanity, forcing us to decide which version of ourselves we want to encode into the future."
Niko's closing: "Ethics is a socio-political responsibility"—not compliance theater, not corporate governance alone, but something we carry as parents, neighbors, humans.
This episode doesn't provide answers—it surfaces the questions practitioners should be sitting with. Not the distant sci-fi dilemmas, but the ethical decisions happening in your organization right now, every few minutes, while you're too busy to notice.

Monday Oct 27, 2025
“Use AI as a sparring partner, as a colleague, as a peer… ask it to take another perspective, take something you’re weak in, and have a dialog.” — Nikolaos Kaintantzis
In this episode of SPCs Unleashed, the crew tackles a pressing question: how should leaders navigate AI? Stephan Neck frames the challenge well. Leadership has always been about vision, adaptation, and stewardship, but the cockpit has changed. Today’s leaders face an environment of real-time coordination, predictive analytics, and autonomous systems.
Mark Richards, Ali Hajou, and Nikolaos (Niko) Kaintantzis share experiences and practical lessons. Their message is clear: the fundamentals of leadership—vision, empowerment, and clarity—remain constant, but AI raises the stakes. The speed of execution and the responsibility to guide ethical adoption make leadership choices more consequential than ever.
Four Practical Insights for Leaders
1. Provide clarity on AI use
Unclear policies leave teams guessing or hiding their AI usage. Leaders must set explicit expectations. As Niko put it: "One responsibility of a leader is care for this clarity, it's okay to use AI, it's okay to use it this way." Without clarity, trust and consistency suffer.
2. Use AI to free leadership time
AI should not replace judgment; it should reduce waste. Mark reframed it this way: "Learning AI in a fashion that helps you to buy time back in your life… is a wonderful thing." Leaders who experiment with AI themselves discover ways to reduce low-value tasks and invest more time in strategy and people.
3. Double down on the human elements
Certain responsibilities remain out of AI's reach: vision, empathy, and persuasion. Mark reminded us: "I don't think an AI can create a clear vision, put the right people on the bus, or turn them into a high performing team." Ali added that energizing people requires presence and authenticity. Leaders should protect and prioritize these domains.
4. Create space for experimentation
AI adoption spreads through curiosity, not mandates. Niko summarized: "You don't have to seduce them, just create curiosity. If you are a person who is curious, you will end up with AI anyway." Leaders accelerate adoption by opening capacity for experiments, reducing friction, and celebrating small wins.
Highlights from the Episode
Treat AI as a sparring partner to sharpen your leadership thinking.
Provide clarity and boundaries to guide responsible AI use.
Buy back leadership time rather than offloading core duties.
Protect the human strengths that technology cannot replace.
Encourage curiosity and create safe spaces for experimentation.
Conclusion
Navigating AI is less about mastering every tool and more about modeling curiosity, setting direction, and creating conditions for exploration. Leaders who use AI as a sparring partner while protecting the irreplaceable human aspects of leadership will build organizations that move faster, adapt better, and remain deeply human.

Monday Oct 20, 2025
“We’ve been talking about agile at scale forever; now we’re talking about experts at scale.” — Ali Hajou
In this episode of SPCs Unleashed, hosts Ali Hajou, Mark Richards, and Stephan Neck explore how AI is transforming the skills practitioners need to thrive. The conversation moves beyond tools and techniques to examine mindsets, behaviors, and organizational patterns that separate the AI-literate from the AI-illiterate.
The parallels to Agile adoption are striking. Just as some engineers resisted continuous integration or test automation, today some professionals resist experimenting with AI. The difference? This wave is moving much faster, and the gaps between those who adopt and those who lag are widening at unprecedented speed.
Actionable Insights for Practitioners
1. Lead with Curiosity
Mark Richards reminded us that the foundation of AI literacy isn’t prompt engineering—it’s curiosity. “The number one skill is probably not a skill set so much as a mindset: curiosity.” Like learning a new feature in Excel or Word, curiosity drives the discovery of AI-powered tools hidden in plain sight. Practitioners who pause to ask, “What’s new here, and how could it make me more effective?” will naturally outpace those who only stick with familiar patterns.
2. Treat Critical Thinking as Non-Negotiable
Ali Hajou stressed that AI may produce convincing but flawed outputs. Whether drafting legal documents, generating reports, or writing code, practitioners must evaluate and fact-check. As he put it, “The AI haves will outrun the AI have-nots, especially if they develop critical thinking skills.” Debugging, diligence, and discernment become just as valuable as creativity.
3. Redefine Collaboration
Stephan Neck reframed collaboration for the AI age: “Collaboration is nothing new, but now it’s in a different setup—working with intelligent partners who may also hallucinate.” Working effectively means learning to communicate not only with human colleagues but also with AI agents. That includes setting boundaries, clarifying context, and handling mistakes with resilience.
4. Mind the Gap—or Be Left Behind
Skill gaps will widen quickly. Stephan warned of a “fear of missing out” dynamic, where those who delay adoption may never catch up. Recovery becomes harder the longer you wait, and organizations will inevitably favor those who can harness AI to accelerate delivery.
5. Scale Expertise, Not Just Effort
Perhaps the most powerful theme was Ali’s reflection that AI enables experts at scale. Instead of overwhelming high performers with endless requests, organizations can use AI to systematize their thinking. Capturing an expert’s mental models in prompts or digital assistants allows their knowledge to cascade without bottlenecking at the individual.
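To make that concrete, here is a minimal sketch of the pattern in Python, assuming the OpenAI SDK; the checklist content and names are our illustration, not something from the episode. The expert's heuristics become a reusable system prompt that any team can run a draft through:

```python
from openai import OpenAI

# Illustrative distillation of one expert's review heuristics.
# In practice this would be co-written with the expert and refined
# each time the assistant misses something the expert would catch.
EXPERT_REVIEW_PROMPT = """You review solution designs the way our lead
architect does. For any design, check in order:
1. Is the business problem stated before the technology choice?
2. Which decisions are reversible, and which create lock-in?
3. What data does this depend on, and who owns its quality?
Flag anything that fails a check and briefly explain why."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def expert_review(design_doc: str) -> str:
    """Run a design document through the captured expert checklist."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do
        messages=[
            {"role": "system", "content": EXPERT_REVIEW_PROMPT},
            {"role": "user", "content": design_doc},
        ],
    )
    return response.choices[0].message.content or ""
```

The expert still owns the checklist; what scales is access to it, so ten teams can get a first-pass review without queuing for one person's calendar.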
Conclusion
The AI shift is not about flashy tools; it’s about cultivating curiosity, critical thinking, and collaborative fluency. For practitioners, leaders, and change agents, the message is clear: the sooner you experiment, the sooner you build the muscle to thrive. In the AI era, expertise doesn’t just scale—it multiplies.

Monday Oct 13, 2025
“What the heck am I doing here? I’m just automating a shitty process with AI… it should be differently, it should bring me new ideas.” — Nikolaos Kaintantzis
Building AI Into the DNA of the Organization
In this episode of SPCs Unleashed, the hosts contrast the sluggish pace of traditional enterprises with the urgency and adaptability of what they call “extreme AI organizations.” The discussion moves through vivid metaphors of camels and eagles, stories from client work, and reflections on why most enterprise AI initiatives fail. At its core, the episode emphasizes a fundamental choice: will organizations bolt AI onto existing systems, or embed it deeply into the way they operate?
Mark Richards reflects on years of working with banks, insurers, and telcos — enterprises where patience is the coach’s most important skill. He contrasts this with small, AI-driven startups achieving more change in three months than a bank might in two years. Stephan Neck draws on analogies from cycling and Formula One, portraying extreme AI organizations as systems with real-time coordination, predictive analytics, and autonomous responses. Nikolaos Kaintantzis highlights the exponential speed of AI advancement, reminding us that excitement and fear walk together: miss the news for a week, and you risk falling behind.
Actionable Insights for Practitioners
1. Bake AI in, don't bolt it on.
Enterprises often rush to automate existing processes with AI, only to accelerate flawed work. True transformation comes when AI is designed into workflows from the start, creating entirely new ways of working rather than replicating old ones.
2. Treat data as a first-class citizen.
Extreme AI organizations treat data as a living nervous system — continuous, autonomous, and central to decision-making. Clean, structured, and accessible data creates a reinforcing loop where the payoff for stewardship comes quickly.
3. Collapse planning horizons.
Enterprises tied to 18-month or even quarterly cycles are instantly outdated in the world of AI. The pace of change demands lightweight, experiment-driven planning with rapid feedback and adjustment.
4. Build culture before capability.
AI fluency is not just a tooling issue. Extreme AI organizations cultivate a mindset where employees regularly ask, "How could AI have helped me work smarter?" This culture of reflection and experimentation is more important than any single tool.
5. Keep humans in the loop — for judgment, not effort.
The human role shifts from heavy lifting to guiding direction, evaluating options, and applying ethical oversight. Energy is conserved for judgment calls, while AI agents handle more of the execution load.
Conclusion
Enterprises may survive as camels, built for endurance in their chosen deserts, but the organizations that want to soar will need to transform into eagles. Strapping wings on a camel isn’t a strategy — it’s a spectacle. The path forward lies in embedding AI into the very DNA of the organization: data as fuel, culture as the engine, and humans providing the judgment that keeps the flight safe, ethical, and purposeful.

Monday Oct 06, 2025
“Learning AI isn’t just about acquiring a new skill… it’s about unlocking the power to fundamentally reshape how our organizations work.” – Stephan Neck
In this episode of SPCs Unleashed, the hosts — Stephan, Mark, and Niko — share their personal AI learning journeys and reflect on what it means for practitioners and leaders to engage with this fast-evolving space.
They emphasize that learning AI isn’t only about technical skills — it’s a shift in mindset. Curiosity, humility, and experimentation are essential. From late-night “AI holes” to backlog strategies for learning, the discussion highlights both the excitement and overwhelm of navigating an exponential learning curve. The hosts also explore how to structure an AI learning roadmap with projects, fundamentals, and experiments. The episode closes with reflections on non-determinism in AI: its creative spark, its risks, and the reminder that “AI won’t replace you, but someone who masters AI will.”
Practitioner Insights
Anchor AI learning in real problems. Mark emphasized: “Have a problem you’re trying to solve… so that every time you go and learn something, you’re learning it so you can achieve that thing better.”
Treat AI as a sparring partner, not a servant. Niko showed how ChatGPT improved his writing in both German and English — not by doing the work for him, but by challenging him to refine and think differently.
Use a backlog to manage your AI learning journey. The hosts compared learning AI to managing a portfolio — prioritization, focus, and backlog management are key to avoiding overwhelm.
Don’t get stuck on hype or deep math too early. Both Niko and Mark stressed that experimentation and practical application matter more in the early stages than diving into theory or chasing hype cycles.
Practice humility and collaboration. Stephan underlined that acknowledging blind spots and working with peers who bring complementary strengths is critical for sustainable growth.
Conclusion
The AI learning journey is less about chasing the latest tools and more about reshaping how we think, collaborate, and experiment. For practitioners, leaders, and change agents, the real challenge is balancing curiosity with focus, hype with fundamentals, and individual learning with collective growth. As the hosts remind us, mastery doesn’t come from endlessly consuming content — it comes from applying AI thoughtfully, with humility, intent, and a willingness to learn in public.
By treating AI as a partner and structuring your learning with intent, you not only future-proof your skills but also strengthen your impact as a leader in the age of AI.

Monday Sep 29, 2025
“If you're not thinking about an agent being a part of every conversation, something’s wrong with you.” – Mark Richards
Episode Summary
Season 3 of SPCs Unleashed opens with a subtle shift. While the podcast continues to serve the SAFe community, the crew is broadening the conversation to explore how AI is disrupting agile practices. In this kickoff, hosts Mark Richards, Niko Kaintantzis, Ali Hajou, and Stephan Neck take on a provocative question: what happens to user stories in a world of AI-generated prototypes, specs, and conversations?
The debate highlights tension between tradition and transformation. User stories have long anchored agile communication, but the panel asks if they still serve their purpose when AI can generate quality outputs faster than humans. Their conclusion: the form may change, but the intent — empathy, alignment, and feedback — remains essential.
Actionable Insights
AI exposes weaknesses.
Most backlogs already contain poor-quality "stories" that are tasks in disguise. AI could multiply the problem if used lazily, but also raise the bar by forcing clarity.
Feedback speed is the game-changer.
Tools like Replit, Lovable, and GPT-5 enable instant prototyping, turning vague ideas into testable experiments in hours.
From stories to executable briefs.
Stephan notes prompts may become agile's new "H1 tag": precise instructions that orchestrate human–AI swarms (a hypothetical sketch of such a brief follows this list).
Context and craftsmanship still matter.
AI cannot intuit the problem space. Human product thinking — empathy, vision, and long-term orientation — remains vital.
User stories may fade, intent will not.
Mark sees classic stories as obsolete, but clear communication and shared focus endure.
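As a thought experiment, here is one hypothetical shape an executable brief might take; the headings and details are our illustration, not a format proposed in the episode:

```
Goal: Let a returning customer reorder their last purchase in one tap.
User and context: Logged-in mobile users, many on slow connections.
Constraints: No new personal data stored; respect live stock levels.
Acceptance checks:
  - A single tap from the home screen completes the reorder.
  - Out-of-stock items are substituted only with explicit consent.
Out of scope: Subscriptions and scheduled reorders.
```

Unlike a classic story card, a brief like this is written to be handed to an AI agent or a human-AI swarm and verified against its acceptance checks, while the empathy and prioritization behind it remain human work.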
Conclusion
This episode signals a turning point: SPCs Unleashed is no longer just about scaling frameworks — it’s about confronting how AI reshapes agile fundamentals. The verdict? User stories may not survive intact, but the practices of fast feedback, empathy, and shared understanding are more important than ever. Coaches and leaders must now help teams integrate AI as a collaborator, not a crutch.

Thursday Sep 25, 2025
“At some point you’ve got to look at a set of risks and say, how do we feel about our overall stance?” — Mark Richards
In this episode of SPCs Unleashed, hosts Mark Richards, Niko Kaintantzis, and Stephan Neck unpack the complexity of risk management in SAFe. Too often, risk management is reduced to ROAMing during PI Planning. While useful, ROAMing is only a starting point. The discussion centers on the continuum — from identifying risks to shaping an organizational risk posture that balances ownership, experimentation, and resilience.
The hosts explore who owns risk, how unforeseen disruptions like COVID expose organizational resilience, and why AI both enables and complicates risk management. The message is clear: effective risk management requires more than visibility. It demands ownership, accountability, and a proactive stance across all levels of SAFe.
Actionable Insights
Think in terms of risk posture first
Instead of obsessing over individual risks, ask: What is our overall stance? This broader view helps leaders balance tradeoffs and set expectations.
ROAMing is only the beginning
ROAMing surfaces and socializes risks, but it does not ensure ownership, tracking, or mitigation. Treat it as examination, not management.
Shared responsibility, clear accountability
Risk is everyone's job, but accountability sits with roles like business owners, product managers, and RTEs to ensure protocols are in place.
Build resilience for the unforeseen
Events like COVID remind us that Lean-Agile ways of working prepare organizations to adapt faster. Investing in agility is investing in resilience.
AI is both a tool and a risk
Artificial intelligence can enhance prediction and monitoring but also introduces new risks around bias, governance, and misuse.
Conclusion
Risk management in SAFe cannot stop at ROAMing. That practice creates visibility, but true effectiveness comes from moving along the continuum — toward a well-understood and actively managed risk posture. For SPCs, RTEs, and change leaders, the challenge is to foster transparency, ensure accountability, and guide organizations toward resilience.
In your next PI Planning, go beyond simply documenting risks. Ask what risk posture your teams and business owners are really taking — and ensure that stance is owned, shared, and actively managed.

Thursday Sep 25, 2025
“We’re unable to build an entire component within just two weeks… so the question becomes: what can we verify at the end of a sprint? It’s about finding the shortest path to your next learning.” — Ali Hajou
In this episode of SPCs Unleashed, the hosts dive into the newly released SAFe for Hardware course and use it as a springboard to explore agility in hardware more broadly. Ali Hajou, joined by Mark Richards, Stephan Neck, and Niko Kaintantzis, reflects on how Agile principles—originally inspired by hardware product development—are now circling back into engineering contexts. The group unpacks the unique challenges hardware teams face: aging technical workforces, specialized engineering disciplines, and long product lead times. Through personal stories and coaching insights, the hosts surface strategies for fostering collaboration across expertise boundaries, reframing iteration around learning, and adapting SAFe without forcing software recipes onto hardware environments.
Actionable Insights for Practitioners
1. Honor Agile's hardware origins
Scrum was born from studies of hardware companies like Honda and 3M. Coaches can remind teams that agility is not a software-only mindset but a return to hardware's own innovative roots.
2. Reframe what "shippable" means
Hardware teams cannot produce finished machines every two weeks, but they can deliver learning increments through simulations, prototypes, and verifiable designs.
3. Lead with humility
As Niko described, success comes from co-working with engineers rather than posturing as experts. Admitting limits builds trust and invites collaboration.
4. Shift the conversation to risk
Talking about risk reduction resonates more strongly with hardware engineers than software-centric terms like story slicing. It reframes iteration as de-risking the next step.
5. Context matters more than recipes
The SAFe for Hardware training emphasizes co-creation. Rather than copying software playbooks, practitioners should tailor practices to local constraints, supply chains, and compliance realities.
Conclusion
The conversation highlighted that agility in hardware is less about forcing software practices and more about adapting principles—short learning cycles, risk reduction, and humble collaboration—to fit the realities of physical product development. SAFe for Hardware provides a structure for that adaptation, but its real power lies in co-creating ways of working that respect both the heritage and the complexity of hardware environments.
