Stop asking "Where can we use AI?" Start asking "How do we operate differently because of AI?"

Why the question you ask about AI matters more than the technology you choose
Most product and innovation leaders are walking around with the same brief on their desk: find places to apply AI inside our product / business unit / function.
It is the obvious question. It is also the one that determines whether the AI investment compounds or runs out of steam.
The leaders we see producing real impact with AI are answering a different question. Not where can we use AI? but how can we operate differently because of AI? It sounds like a small shift. It is not. It changes who is in the strategy room. It changes how workflows get mapped. It changes how success gets defined. And it changes the kind of thing your team ships at the end of the quarter.
This article is for the Director or Head of Product or Innovation who has been handed an AI mandate, owns a quarterly deadline, and is starting to suspect that the brief they were given is not quite the right one.
What's wrong with asking "where can we use AI?"
Nothing, in isolation. The question produces a list of features and use cases, and some of those will be useful. The problem is what the question optimizes for.
Where can we use AI? searches the existing operation for places to insert a new tool. The operation stays the same. The roles stay the same. The structure stays the same. The system stays the same. You just bolt AI somewhere in the middle and hope it performs.
This is the AI version of a pattern that has played out with every major technology in the last thirty years. The early adopters of mobile in the 2000s built mobile-first companies. The late adopters built mobile versions of their existing websites. The early adopters of cloud built cloud-native architectures. The late adopters lifted and shifted their data centers. In each cycle, the gap between the two approaches compounded into the difference between market leaders and market participants.
AI is going to follow the same pattern. The companies that ask where can we use AI will produce a portfolio of incremental features and call it transformation. The companies that ask how do we operate differently because of AI will redesign the work itself — and find that the same AI tools produce dramatically more value inside a redesigned operating model than they ever could bolted onto the old one.
This is not a hypothesis. We are watching it happen in real time.
What changes when you ask the bigger question?
Four things shift the moment a product or innovation leader stops asking where and starts asking how. Each shift is small in isolation. Together they are the difference between AI work that compounds and AI work that fades.
Shift 1: Who is in the strategy room
The where question fits into existing strategy meetings. Product leaders, engineering leaders, maybe a data or AI specialist. The conversation is about the existing roadmap and where AI features could plug in. The room composition is familiar.
The how question does not fit into that room.
If you are redesigning how the work happens, the people who actually do the work need to be in the room. The workflow owner. The frontline employee. Legal and compliance, before the design is locked in. The operations or customer-success lead who sees what happens upstream and downstream. And someone whose job is the design of the employee experience itself, not just the technical architecture.
This is uncomfortable, because none of these people are usually in the strategy room. The product leader who sets up the how conversation is often the first person in their organization to do this. The shape of the meeting changes. The decisions take longer. The output is dramatically better.
A practical signal: if your AI strategy meeting can be held with the same people who hold every other strategy meeting, you are still asking the smaller question.
Shift 2: How work gets mapped
The where question runs on documentation. The team looks at the formal process maps, the standard operating procedures, the system diagrams. AI gets inserted at points that look promising on paper.
The how question runs on reality. The team maps how the work actually happens — not how the documentation says it happens. And the gap between those two is almost always the entire story.
When a cross-functional team maps a real workflow together for the first time, what they find is uncomfortable. Steps that belong to nobody. Handoffs that only function because one person has them memorized. Decisions made on instinct rather than process. Workarounds that exist because the official process is broken in ways nobody has documented.
This is institutional memory — the tacit knowledge that runs most of what large organizations actually do. It is invisible to documentation. It is invisible to consultancies. It is invisible to AI tools designed against the documentation. The leaders who ask the how question surface this knowledge before any AI gets designed against it. The leaders who ask the where question discover the gap in production, when it is too late and too expensive to redesign around.
Shift 3: How success gets defined
The where question produces success metrics about the AI itself. Model accuracy. Latency. Adoption rates. Tokens consumed. License utilization.
The how question produces success metrics about the work.
- How long does the workflow take now versus before?
- How many handoffs has it eliminated?
- How much rework has been removed?
- How much faster does the team make decisions?
- How has the role of the employee actually changed?
- What is the team doing now that they could not do before?
This is the difference between we deployed AI and we changed how the work happens. The first is an activity. The second is an outcome. Boards and executive teams quickly learn to tell the difference — and the leaders who can demonstrate the second are the ones whose AI programs survive the first round of budget review.
Shift 4: What you ship
The where question ships features. AI search inside the product. AI summarization in the dashboard. An AI assistant for a specific user task. Useful, often genuinely valuable, and almost always incremental.
The how question ships redesigned workflows. The output is not a feature. It is a new way the work gets done, with AI embedded at the points where it produces the largest leverage — and with the rest of the workflow restructured around that leverage.
The difference shows up most clearly six to twelve months later. The teams that shipped AI features have a portfolio of features and a steady stream of small wins. The teams that shipped redesigned workflows have a different operating model and a measurably different cost structure. Both are valuable. Only one compounds.
The trap most product leaders fall into
There is one specific failure pattern worth naming because almost every product team encounters it.
Someone on the team — often a senior individual contributor — figures out a genuinely impressive AI workflow. They have the prompts dialed in. They know where the AI is reliable and where it is not. They have built shortcuts. Their personal productivity is dramatically improved. They look like the future of work.
Leadership sees this and asks the natural question: can we roll this out to everyone?
The answer is almost always no, for reasons that are not immediately obvious.
The individual workflow works because it is tuned to one person's context — their way of thinking about the problem, their standards for quality, their instinct for when to trust AI and when to override it. This is tacit knowledge. It lives in the person's head. It does not survive a handoff.
The moment that workflow is given to fifty people, the cracks appear. The data they work with is not as clean. File formats vary. The edge cases the original person silently handled are now landing on people who do not know what to do with them. Some users trust the AI blindly because they cannot recognize when the output is wrong. Others hit one bad output and refuse to use the tool again. The literacy gap across the team is enormous, and the workflow was never designed for that.
What looked like a scalable solution turns out to be a personal system. To actually scale, you need what any operating system needs: shared ownership, agreed standards, a governance model, real training that goes beyond watch me do it. One person's workflow, however good, is not that. It is a starting point. It is not a system.
This is exactly the trap of the where question applied at the team level. Someone shows that AI works somewhere, and the response is to scale that somewhere instead of redesigning the system around what AI actually changes. The compound effect of doing this consistently over a year is a team that has dozens of clever individual workflows and no shared way of working.
The fix is not to stop the individual experimentation — that is where capability builds. The fix is to recognize that the individual workflow is the input to the redesign, not the output of it. The how question turns the individual breakthrough into a starting point for redesigning how the team works.
A practical path you can run this quarter
The reframe sounds large. The first move is small — one workflow, the right people in a room, four days. It does require targeted permission from whoever owns the workflow you want to redesign and the budget for a small team's time. It does not require a multi-year transformation program or organization-wide buy-in. The smaller, more honest ask is what makes this approachable inside a real product organization.
Step 1: Pick one workflow that matters
Not the easiest one. Not the most visible one. The one where, if it changed, the team would notice the difference. A high-frequency workflow with multiple steps, multiple handoffs, and known friction. The kind of workflow where everyone has an opinion but nobody has redesigned it.
Step 2: Get the right people in a room
Six to eight people who together hold everything the redesign needs. The workflow owner. Someone who actually does the work day-to-day. Someone who understands the data and what AI can realistically do with it. Someone who knows the legal and compliance constraints. Someone with design sensibility focused on the employee experience. The person who can approve resources and own the outcome — this last one is non-negotiable. Read more about AI Discovery Pods.
Step 3: Map the workflow as it really is
Not from documentation. From reality. Each person describes the work from their seat. The maps will not match. That is the point. The work of the room is to produce one shared, complete map. This is the moment the institutional memory becomes visible.
Step 4: Redesign the workflow before adding AI
Clean up the process before introducing the technology. Remove the handoffs that should not exist. Eliminate the steps that exist only because of past constraints. Get the workflow into the shape it should be in. Only then ask: where in this redesigned workflow does AI produce the most leverage?
This sequence is the entire point. Most AI projects skip the redesign and add AI to the existing flow. The leaders asking the how question redesign first. The leverage is dramatically higher.
Step 5: Build a small thing and validate it with real users
Not a fully built feature. A working-enough version that one of your real employees can sit down with and tell you whether it actually changes how they work. Five structured conversations are enough to know whether the redesign is on the right track or whether the team needs to iterate.
This is the entire arc. One workflow. The right people. Four days of focused work. A redesigned workflow with AI embedded where it produces real leverage, validated with the people who will live with it.
This is what an AI Workflow Sprint is. If you want to go deeper, read the What is an AI Workflow Sprint article, which covers the full structure.
Where to start
The leaders who get this reframe right are not the ones with the biggest AI budgets. They are the ones who stopped asking the small question and started asking the bigger one, then ran one workflow redesign to prove it works.
If you take one thing from this article, take this: stop asking where you can use AI and start asking how you can operate differently because of AI. Then pick a workflow that matters, get the right people in a room, and run the redesign once. The reframe becomes real the moment you do it.
Everything else — the budget conversations, the executive narrative, the team capability, the strategic positioning — follows from that one practical shift.
Watch the full webinar ↓

