This week I attended a panel discussion hosted by BrainStation in London, titled Leaders in People and Culture: Shaping the Future of Tech Careers. I went expecting a conversation about skills, talent pipelines, and perhaps the usual debates around AI readiness. What I didn’t expect was how matter-of-fact the discussion about AI would be.
The panel brought together senior talent leaders from large, complex organisations: Sophie Bialaszewski, Head of Experience and Adoption, Modern Workplace at Lloyds Banking Group, and Chidera Dimude, Global Head of Performance and Talent Management at Wise. These are leaders operating at scale, in regulated environments, with real accountability for people outcomes, not futurists speculating from the sidelines.
What stood out most was not what was said about AI, but what wasn’t.
AI Is No Longer the Question
Throughout the conversation, AI was treated as a given. People are using it. Leaders assume it is already embedded in day-to-day work. Tools like Copilot, ChatGPT, Gemini, and Perplexity were mentioned casually, alongside HR platforms such as Workday that are still catching up on AI integration.
There was no dramatic framing. No “should we or shouldn’t we?” debate. No sense that AI adoption is optional.
This signals something important: for many large organisations, we’ve already moved past the phase of AI as an experiment. The real work now sits firmly in people, culture, and capability.
From Managing People to Managing People and Agents
One of the most thought-provoking moments came from Sophie, who reflected on how leadership models are continuing to evolve. We’ve already seen the shift from command-and-control to coaching. The next shift, she suggested, is something newer and less well-defined: leading teams made up of both humans and AI agents.
This raises questions we don’t yet have good language for:
- How do leaders assess performance when output is augmented by AI?
- How do they spot talent when skills are increasingly amplified by technology?
- What does “good judgment” look like when decision-making is partially delegated to agents?
The ability to understand how skills are being augmented, and where human value still sits, may itself become a critical leadership capability.
AI Learning Is Social, Not Top-Down
Chidera highlighted something that aligns closely with what I see across organisations: when it comes to AI, people don’t want long formal training programmes. They want peer-to-peer learning. They want to see how colleagues are using tools in real contexts, solving real problems.
That places a different responsibility on organisations. Rather than controlling AI learning through centralised curricula, leaders need to create enabling environments – spaces where experimentation is encouraged, examples are shared, and learning feels safe rather than surveilled.
At Lloyds, Sophie described a set of clear AI roles – users, leaders, enablers, and builders – alongside a goal of building an AI-fluent workforce over the next few years. The ambition is not just adoption, but literacy: understanding what AI is doing, where it helps, and where it doesn’t.

Intergenerational Tension – and Opportunity
Another theme that emerged in the conversation was intergenerational collaboration. For the first time, we are seeing a workforce where new entrants are often AI-fluent, while many senior leaders are still finding their footing.
Rather than framing this as a deficit, Sophie spoke about the opportunity it creates. Reverse mentoring, where employees teach leaders how they use AI in practice, is one way of bridging that gap. It challenges traditional hierarchies and subtly shifts power dynamics, but it may be one of the most effective ways to build shared understanding.
Coupled with this was a strong emphasis on bite-sized learning – ten-minute interventions embedded into everyday work. Not courses. Not certificates for their own sake. Learning that fits into how people actually operate.
The Silence on AI Policy
Perhaps the most striking omission in the discussion was any explicit mention of AI policy. There was no talk of guardrails, intellectual property protection, or formal boundaries around use. That doesn’t mean these organisations don’t have them – they almost certainly do – but it was notable that policy did not feature in the lived experience being shared.
This raises an uncomfortable question: are we assuming that culture, trust, and common sense will carry more weight than formal governance? And if so, are leaders confident that those norms are sufficiently aligned across their organisations?
As AI becomes invisible, simply part of how work gets done, the tension between enablement and control is only going to grow.

What This Means for Talent Leaders
What I took away from this event is that the future of talent leadership in tech is not about predicting the next tool. It’s about navigating ambiguity, redesigning learning, and rethinking what leadership looks like when intelligence is no longer scarce.
The leaders on this panel weren’t positioning themselves as AI experts. They were positioning themselves as architects of environments. Environments where people can learn quickly, collaborate across generations, and work effectively alongside machines.
That, perhaps, is the quiet but profound shift underway.
