The words we use about AI are already shaping our organisations

Long before AI systems are deployed, integrated, or governed, something else happens first:

We start talking about them.

In leadership meetings, in internal communications, in learning materials, in hallway conversations that never make it into strategy decks. The language arrives early, and once it does, it begins to shape expectations, emotions, and behaviour in ways that are surprisingly durable.

This is often treated as a secondary concern. Semantics. Framing. Something to tidy up once the “real work” is done.

But in my view, language in organisations is part of the real work.

Because organisations don’t just implement AI. They interpret it. And they do so largely through the words they choose.

Language as a design choice

I listened to quite a few Davos 2026 conversations and couldn’t help but notice that when leaders describe AI, they imply very different futures.

AI is framed as automation – or as augmentation.
As something to be rolled out – or something to be enabled.
As a matter of compliance – or of responsibility.
As a tool to be controlled – or a capability to be stewarded.

These distinctions are often dismissed as stylistic preferences. But they are not neutral. Each one encodes assumptions about trust, agency, and power.

When AI is consistently talked about as automation, people quietly begin to wonder what – or who – might be replaced. When it is framed as something to be “rolled out,” it subtly positions employees as recipients rather than participants. When oversight is described primarily in the language of control, it can breed caution without understanding, or compliance without confidence.

I suspect none of this is intentional. But it is consequential.

Words are powerful. Language, in this sense, is not decoration. It is an organisational design choice.

Why this matters more than ever

In complex and regulated environments, the stakes are higher.

Trust is not an abstract value. Trust is a condition for functioning. People are expected to exercise judgment, to navigate ambiguity, and to make decisions that carry real consequences for clients, markets, and institutions.

In these contexts, AI introduces not just new capabilities, but new forms of uncertainty. What can be trusted? What must be verified? Where does accountability sit? What is acceptable experimentation, and what is not?

The way these questions are spoken about shapes how people experience them.

If the language around AI suggests that risk lives only in technology, oversight becomes someone else’s problem. If it suggests that responsibility has been outsourced to models or policies, people disengage their own judgment. And if the dominant narrative oscillates between hype and fear, learning collapses into either resistance or performative adoption.

Words, in other words, set the emotional and ethical climate in which AI is introduced.

The limits of “training”

This is where I see many organisations instinctively respond with training.

Courses are commissioned. Frameworks are shared. Glossaries appear. All of this, I’m sure, has value, but only if it is supported by a deeper shift in how AI is talked about day to day.

Because learning is not just the transfer of knowledge. It is sense-making.

So if the prevailing language positions AI as something done to people, learning might become defensive. If it positions AI as something that requires shared understanding and ongoing judgment, learning can become collective.

The difference is subtle, but I’d argue it can be quite profound.

One treats learning as a remedial activity, designed to close gaps. The other treats it as infrastructure, something that allows the organisation to think, adapt, and act responsibly over time.

In environments where AI will continue to evolve, the latter is not a luxury. It is a necessity.

What leaders signal, often without realising it

Leaders play a disproportionate role here, not because they control every message, but because their language travels far and wide.

The metaphors they use, the questions they ask, the terms they repeat: these ripple outward and are picked up, echoed, and operationalised. Over time, they become part of the organisation’s common sense.

This is why paying attention to language is not about being politically correct or carefully scripted.

Not every word needs to be perfect. But it helps to notice patterns. To ask what kind of organisation is being implied by the way AI is described. To reflect on whether the language being used invites responsibility, curiosity, and judgment, or quietly discourages them.

A quieter place to start

There is a temptation, when it comes to AI, to look first for tools, policies, or roadmaps.

Those matter. But as I always say, the first and most important characteristic of a good communicator is being able to truly l.i.s.t.e.n.

And that requires no technology at all.

Listening to how AI is spoken about across the organisation. Listening for fear, for overconfidence, for detachment. Listening for the stories that are already taking shape.

Because by the time AI systems are embedded, the organisation has already been shaped – by the words it chose early on, often without noticing.

And in the long run, those words may matter as much as any model ever will.

Just a thought, not tied to any particular company, that came to mind while listening to thought leaders at Davos. Felt like I had to share.

Your thoughts?
