You're ready to hire an AI Facilitator. But are you ready to lead one?

March 17, 2026
Dana Vetan

If you are a Chief Transformation Officer, Chief Digital Officer, or a VP or SVP who has been handed an explicit cross-functional mandate to make AI work across the business — not just within your own function — this article is for you.

You have budget authority, board-level pressure, and one to two years of scattered investment that hasn't compounded into anything you can confidently present at an ELT review. You've seen the missing middle up close — the gap between AI strategy and AI execution where use cases go to die, where cross-functional teams can't align, where nobody will make a decision about what to build and what to stop. And you've decided to hire an AI Facilitator to fix it.

Good instinct. But before you post the job description or identify the internal person to train into the role, there's a question worth sitting with: are you ready to lead one?

Because hiring someone and then leaving them to fend for themselves is not leadership. The story plays out the same way across organizations. A leader gets budget, finds the right person — trained, methodical, credible — gives them a title and a vague mandate, and points them toward the organization. Six months later, the person has run a lot of sessions. Some were well received. Others were politely attended and then ignored. Nothing has scaled. The board question — "what do we have to show for this investment?" — still doesn't have a clean answer.

The problem is almost never the facilitator. It's the system they were handed. When the operating environment isn't designed to support this role — when the mandate is vague, the right people are unavailable, and leadership absorbs every uncomfortable conclusion with a collegial "let's revisit this" — the facilitator ends up carrying the weight of organizational dysfunction they were never designed to fix. They work harder, run more sessions, try to compensate with effort for what's missing in design. And eventually, nothing scales.

This is what leaders miss: hiring an AI Facilitator is not a talent decision. It's a system design decision.

The role only performs inside an environment the leader deliberately constructs. There are five elements to that environment. None require a long document or a new process. They require the leader to make specific choices — early, clearly, and publicly — about how this role operates inside the organization.

1. A mandate with teeth, not a title

Most leaders hire an AI Facilitator and give them a job title. That is not a mandate. A title tells the organization the person exists. A mandate tells the organization what the person is empowered to do — and what happens if people don't engage.

The AI Facilitator needs three things to function:

The first is the right to convene. They need to be able to request a session with any cross-functional team in the organization and have that request treated as a priority, not a favour. In practice, this means the leader publicly signals — in a leadership meeting, in a communication to the teams, in the first conversation they have with their peers after the hire — that structured AI discovery sessions are part of how this organization now makes AI decisions. Not optional enrichment. Part of the process.

The second is access to senior decision-makers. Discovery Pods only produce good decisions when the person with authority to act on the outcome is in the room. If the facilitator is consistently working with teams whose Decider is always too busy to attend, the session will produce recommendations that go upstairs and die. The leader needs to make it clear — to their own peers and to the teams below them — that the Decider belongs in the room.

The third is the authority for stop decisions to stick. This is the one most leaders underestimate. When a structured session concludes that an AI use case isn't worth pursuing, someone will push back. Usually someone senior who had personal investment in the idea. If the facilitator cannot point to unambiguous leadership backing for the conclusion, they will be pressured to soften it, revisit it, or find a different answer. The moment that happens once, the system loses its integrity. Every subsequent stop decision becomes a negotiation.

A mandate with teeth doesn't require a long document. It requires one clear signal, delivered publicly, early: this is the process through which we decide what to build with AI. When that process concludes that a use case isn't ready, that conclusion means something.

2. Choose the first problem deliberately

The facilitator's first significant session is the proof of concept for the entire system.

If it succeeds, the organization experiences what structured discovery can do and wants more of it. If it fails — or produces a result that nobody acts on — the facilitator spends the next three months defending the method rather than running it.

The most common approach is well-intentioned and almost always wrong: the leader tells the facilitator to go out, collect ideas bottom-up, find something promising, and run with it. It sounds like empowerment. What it actually does is hand the facilitator a political map they don't know how to read yet. The first session ends up shaped by whoever said yes first — the team that was most available, the stakeholder who was most enthusiastic, the problem that looked attractive from the outside. Not the problem that was strategically right.

The first problem should be chosen jointly, with the leader applying three filters the facilitator doesn't have full visibility into.

The first filter is strategic alignment.

The problem needs to connect visibly to something the organization already cares about at the top — an OKR, a board priority, a cost or revenue goal that has a name and an owner. A structurally interesting AI opportunity that isn't tied to a strategic goal will produce a validated use case that nobody has budget or mandate to act on. The leader knows which problems are genuinely on the agenda and which are organizationally orphaned.

The second filter is stakeholder momentum.

The best first problem is one where multiple people across functions are already paying attention — not because they've decided what the answer should be, but because they share a genuine interest in figuring it out. That shared interest is what fills the room with the right people and creates the conditions for a decision that sticks. A problem with one champion and ten skeptics is the wrong first problem.

The third filter is complexity.

The structured discovery process is built for problems that don't have an obvious answer — what we could call wicked problems: cross-functional, ambiguous, with no single owner and no clean solution. A process like Problem Framing creates the conditions for collective intelligence to surface when individual experts alone can't. If the problem is too simple or too contained, the method is overkill and the room will feel it. If the answer has already been decided and the session is really a validation exercise, the process becomes theater. The first problem needs to be genuinely complex enough that the organization actually needs a structured way to think through it together.

The ideal first problem is aligned with strategy, has genuine stakeholders who want the answer, and is complex enough that nobody has solved it yet. That combination is less rare than it sounds, and selecting it is the leader's job.

There is one more option worth considering: make the leadership team itself the first room.
An AI opportunity mapping session with the C-suite or senior leadership group — a structured half-day where leaders diverge on where AI could create the most value, then converge on what's worth pursuing first — does two things at once. It produces strategic alignment on priorities before any practitioner work begins. And it gives leadership a first-hand experience of what structured collective thinking actually feels like: the productive discomfort of divergence, the clarity that comes from a well-facilitated convergence, the difference between a meeting where the loudest voice wins and a session where the best thinking surfaces.

That experience is worth more than any briefing document. Leaders who have felt the method tend to protect it.

3. When the process produces an uncomfortable answer, back it

The AI Facilitator's job is to run a structured process: workshops that frame AI problems, design solutions, and test prototypes with real users before a single line of production code is written. The process is designed to do two things. Stop weak ideas early, before they become expensive commitments. And advance the ones that genuinely deserve investment — but only after they've been tested against reality: a redesigned workflow, a working AI agent MVP, evidence from real users about whether it actually solves the problem.

A healthy system stops 60% or more of the AI ideas that enter it. That is not a failure rate. It is the point.

But stopping an idea means telling someone — often someone senior, often someone with organizational weight — that their AI initiative isn't ready or isn't worth pursuing. What the leader does in that moment determines whether the system retains its integrity.

If the leader backs the conclusion — "the employees we tested with told us they wouldn't use this the way we designed it, and that's exactly the kind of thing we needed to know before building" — the facilitator learns that honest conclusions are safe. The system produces better decisions over time.

If the leader softens the conclusion — "let's revisit this" or "maybe there's a smaller version we can try" — the facilitator learns that early stops create political problems. Conclusions get softer. Ideas that should be deprioritized get reframed as "needing more exploration." The stop rate drops. The pipeline fills with weak ideas that nobody can officially put down.

This is almost never intentional. It comes from a reasonable instinct to be collegial, to protect relationships. But the cumulative effect is the same: the system that was supposed to create honest AI decision-making becomes another mechanism for avoiding it.

The leader's job is to be visibly comfortable when the process produces an uncomfortable answer — and to make that comfort visible to the organization.

4. Build the access architecture before it's needed

The structured discovery process runs on cross-functional teams. An AI Problem Framing workshop needs a business owner, a domain expert, a technical lead, a legal or compliance voice, and the AI Facilitator in the same room at the same time. An AI Workflow Sprint needs a slightly different configuration of people for a four-day session.

Getting those people in the same room is harder than it sounds. In large organizations, the people who need to be there are the people who are always unavailable. The business owner is traveling. The legal lead has six other priorities. The technical lead reports to a different part of the organization and needs their manager's permission to participate.

The facilitator cannot solve this problem. They don't have the organizational authority to convene senior people from multiple functions. The leader does.

What the leader needs to do — ideally before the facilitator starts their first session — is establish the principle that participation in a structured AI discovery session is a priority, not an optional enrichment activity. That means:

Telling their C-suite peers directly that cross-functional participation in these sessions is expected, not optional.

Identifying, for each significant business area, the two or three people who would need to be in the room for that area's AI decisions to be credible — and flagging those people to the facilitator before any session is designed.

Building a simple norm: when a Discovery Pod session is scheduled, the people invited don't send a delegate. They come, or they reschedule.

None of this requires a policy. It requires the leader to use their authority once, clearly, and early. The facilitator can run a session with whoever shows up. But the outputs will only be as good as the people in the room.

5. Measure what you actually want

What the leader measures determines what the facilitator optimises for. This is not a metaphor. It is a direct causal relationship.

If the leader asks how many workshops ran this month, the facilitator will run more workshops. They will accept any session request, prepare less thoroughly, and expand the scope of what they're willing to take on. Session volume will increase. Decision quality will not.

If the leader asks the right questions, the facilitator will build the right system. The right questions are:

What is the stop rate? Of the AI ideas that entered the structured discovery process this month, what percentage didn't survive evaluation? A number above 60% is healthy — it means the process is doing its job. A number below 30% should be a concern: either ideas are being selected for evaluation only when they're already certain to succeed, or conclusions are being softened to avoid conflict.

How long does it take to get from idea to decision? The time from a team submitting an AI idea to a confident build, iterate, or stop decision is the measure of decision velocity. A structured process should compress this dramatically compared to the informal, political, slow-moving alternative. If it's not compressing it, something in the process is broken.

How many distinct business units have engaged? A system that is working should spread. Teams that have experienced a structured session want another one. Other teams hear about it and want access. If the facilitator is running sessions for the same three teams in month three as they were in month one, the system is not taking root.

What did the early stops save? For each idea that didn't survive structured evaluation, what was the likely cost of a six-month pilot that would have reached the same conclusion? This is the most important number the facilitator can give you, and the one most leaders never ask for. It transforms the conversation from "what did we build?" to "what did we avoid building?" — which is the right conversation for an organization that has already wasted two years on scattered pilots.

These four questions, asked consistently, shape the facilitator's behaviour more powerfully than any goal-setting conversation. Ask them once a month. Make the answers visible to your peers. That's how you signal what this role is actually for.

The bad news

The AI Facilitator cannot build a system for AI decision-making in an organization that isn't ready to make AI decisions differently.

The five conditions above are not nice-to-haves. They are the prerequisites for the role to produce anything that scales. If the mandate is vague, the first problem is poorly chosen, stop decisions get softened, the right people are never in the room, and the leader measures sessions instead of outcomes — the facilitator will work hard, produce some good individual sessions, and leave behind nothing that outlasts their personal energy.

That's not a failure of the facilitator. It's a failure of the conditions.

The leader who hires an AI Facilitator and then gets out of the way has made a common mistake: treating this role like a consultant who delivers a product, rather than like a function that produces a capability. The capability only exists if the organization is structured to support it.

Setting the stage is the leader's job. The facilitator performs on it.