Problem Framing vs. AI Problem Framing: What's the Difference?

August 1, 2025
Dana Vetan

Problem Framing and AI Problem Framing are both one-day workshops designed to align teams before they build — but they operate at different altitudes, involve different people, and produce different decisions. This article explains how each one works and how to tell which one fits your situation.

Read this if you're trying to work out whether your situation calls for Problem Framing, AI Problem Framing, or neither.

Problem Framing aligns senior leaders on which strategic problem is worth solving. AI Problem Framing helps cross-functional AI teams decide which AI use cases are worth building. Both are one-day workshops, but they're built for different rooms — different audiences, different decisions, different altitudes. Most organizations need one or the other, not both.

Why this distinction matters

For the last eight years at Design Sprint Academy, we've been running Problem Framing to help leadership teams escape analysis paralysis. The bottleneck back then was money, time, and executive support. Problem Framing fixed that. It got senior decision-makers on the same page, turned fuzzy mandates into clear problem statements, and gave product teams enough direction to move without guessing.

In the AI era, the bottleneck has flipped.

Resources are unlocked. Projects get approved fast. Executives don't need convincing — they're already sold. All it takes is two letters: AI.

The pressure to "do something with AI" is so high that many leaders skip the thinking, set a vague vision, wrap it in a mandate, and toss it downstream — straight into the hands of product teams, AI engineers, and designers.

"Here's the AI opportunity. Make it happen."

The teams are eager. Solution mode kicks in. Tools open. Prototypes appear. Something starts to move. But strategy, cross-functional thinking, and the space to ask "are we building the right thing?" — that's often missing. The result is predictable: AI solutions that solve half-problems, AI features looking for a problem, copycat implementations that don't move the business.

That gap is why we created AI Problem Framing at the beginning of 2025. It builds on the original Problem Framing method, but it serves a different moment, involves different people, and produces a different decision.

What is Problem Framing?

Problem Framing is a one-day workshop for senior leaders to define the strategic problem before teams, budgets, and technology are committed.

Decision altitude: senior leadership level. This is where a VP, a Director, or a Head of function has to align stakeholders and commit budget to a direction — a new product, a roadmap, a transformation, a strategic shift inside their part of the business. There are no solutions on the table yet. The question is whether a problem deserves serious attention — and what trade-offs come with saying yes.

Use it when: Stakeholders are misaligned, multiple plausible directions are competing for attention, and a meaningful budget or commitment is about to be made. The moment to slow down and ask "are we solving the right problem, and is it worth solving?"

Key characteristics:

  • Grounded in real context and existing evidence (research, customer data, market signals)
  • Brings 6–8 senior leaders from different parts of the business into one room
  • Focuses on strategy, customer insight, and trade-offs
  • Ends with a tight problem statement linked to business impact

The output: Clarity at the top — what matters now, what can wait, and what should be left alone.

For the full method, see What is Problem Framing?.

What is AI Problem Framing?

AI Problem Framing is a structured one-day workshop for cross-functional AI teams to decide which AI use cases are worth building.

Decision altitude: tactical-to-operational, cross-functional pod level. By this point, the organization has already decided AI matters. Broad AI opportunities have been defined by leadership. What's missing is focus — the call between three or four plausible directions that all sound reasonable on paper. This is where the strategic mandate gets converted into something concrete enough to build.

Use it when: Your team has more AI ideas than capacity to pursue them; engineering, product, and business each define the problem differently; and money is being spent on pilots that don't make it to production. The moment to slow down and ask "which of these AI ideas is worth building — and which should we drop?"

Key characteristics:

  • Run by an AI Discovery Pod — a cross-functional team of 6–8 (product, design, engineering, AI/data, domain experts)
  • Requires no upfront research or data — just the right experts in the room
  • Stress-tests AI ideas against business goals, customer pain, feasibility, and data availability
  • Produces a prioritized list of decision-ready AI use cases

The output: 2–3 AI Use Case Cards — each one a standalone decision artifact, specific enough to brief engineering, clear enough to defend to leadership.

For the full method, see AI Problem Framing.

What's the difference between Problem Framing and AI Problem Framing?

Problem Framing decides which strategic problem deserves attention.

AI Problem Framing decides how AI could be applied in a specific, workable way — across two or three concrete use cases.

Dimension | Problem Framing | AI Problem Framing
Decision altitude | Senior leadership (VP, Director, Head of function) | Tactical to operational, cross-functional pod level
Primary question | Are we solving the right problem at all? | How should AI be applied, and is this a good AI use case?
Typical mandate | Clarify direction and align leadership | Create execution-ready clarity for AI teams
Focus | Business strategy, customer insight, organizational alignment | Value, feasibility, risk, and responsible AI application
Output | A clearly defined strategic problem statement | A clear AI Use Case Card
Risk it reduces | Solving the wrong problem | Building the wrong AI solution

When do you need Problem Framing, and when do you need AI Problem Framing?

The two workshops are not options for the same room. They serve different audiences making different kinds of decisions — which is why most organizations end up needing one or the other, not both.

You need Problem Framing when leadership is the bottleneck.

Senior leaders disagree on direction. Multiple plausible strategic priorities are competing. A major investment, transformation, or roadmap decision is on the table and the people with authority over it haven't aligned. Problem Framing is built for that room — and only that room. It's not useful with operational teams, because they're not the ones making the strategic call.

You need AI Problem Framing when the use case is the bottleneck.

Leadership has set the direction — "we're going to use AI in this part of the business" — but the cross-functional team has three or four possible use cases on the wall and no structured way to choose between them. AI Problem Framing is built for that room. It goes deep into the how: how AI gets applied, where it adds value, how to stress-test feasibility.

The rare case where both apply is sequenced across time and people: a leadership team runs Problem Framing to decide a strategic priority. Months later, once that priority has been translated into an AI mandate, a separate cross-functional pod runs AI Problem Framing to convert the mandate into specific use cases. Same organization, different conversations, different rooms, different decisions — not the same group running both workshops back to back.

How does AI Problem Framing fit into the wider AI investment cycle?

For an organization that already has an AI mandate, AI Problem Framing sits inside a wider cycle of decisions:

  1. AI Problem Framing — the cross-functional pod converts an AI mandate into 2–3 specific AI use cases worth pursuing.
  2. Validation sprints — each use case moves into the validation format that fits what it's about:
    • AI Design Sprint for customer-facing use cases (a new AI-powered product feature, a customer experience built around an AI agent, a service redesign).
    • AI Workflow Sprint for employee-facing use cases (an internal AI agent, a workflow redesign, an operational automation).
    Each sprint produces a tested prototype and a build/iterate/kill decision grounded in evidence.

Each moment de-risks the next. Each moment is a chance to kill ideas cheaply before they become expensive mistakes — a six-month pilot, a misaligned engineering investment, a launched product nobody adopts.

Problem Framing sits outside this cycle. It happens earlier, at the strategic layer — typically before the AI mandate exists at all — and the conversation it kicks off may or may not lead toward AI as the answer.

The bottom line

Problem Framing helps senior leaders choose direction. AI Problem Framing helps cross-functional teams choose what AI to build. Two workshops, two rooms, two kinds of decision — and most organizations only need one of them.

If you have an AI mandate but the use cases haven't been chosen yet, AI Problem Framing is the workshop. If senior leaders are still arguing about direction, no AI workshop will resolve that — you need Problem Framing first, before AI is even on the table.

The diagnostic that matters: where is the disagreement? The answer points to the right room.

Not sure which one fits your situation?

If you're weighing one of these workshops for your team and want to talk it through with someone who runs them, book a call. Twenty minutes, no pitch — just an honest read on whether Problem Framing, AI Problem Framing, or neither is the right call for where you are.

Book a call →

FAQs

Is AI Problem Framing just Problem Framing with AI use cases swapped in?

No. The two workshops share a heritage — both are structured one-day decision-making sessions — but they're tuned for different audiences, different inputs, and different outputs. Problem Framing brings senior leaders together to choose a strategic problem; AI Problem Framing brings cross-functional builders together to choose between AI use cases. The exercises, the artifacts, and the room composition are different.

Who should be in the room for each workshop?

Problem Framing is for senior decision-makers — people with real authority over budget, direction, and resources. Six to eight of them, from different parts of the business. AI Problem Framing is for an AI Discovery Pod — a cross-functional team of six to eight people, including a product or business lead, designers, engineers or data scientists, AI specialists, and domain experts. Different rooms, different people, different decisions.

What is the AI Use Case Card?

The AI Use Case Card is a standalone decision artifact produced by AI Problem Framing. A single session typically produces 2–3 cards, each one about a different AI opportunity. A card captures the user and context, the problem to be solved, where AI adds value and where it doesn't, the expected business impact, and the open risks, constraints, and assumptions.

Each card is designed to be reviewed, ranked, tested, or dropped on its own merit — and to feed directly into the validation sprint that fits what it's about.

What comes after AI Problem Framing?

Validation sprints — typically more than one, because a single AI Problem Framing session produces 2–3 cards. The sprint format is chosen per card, based on what the card is about.

Customer-facing cards go into an AI Design Sprint, which tests the solution against real customer needs. Employee-facing cards — internal AI agents, workflow redesigns, operational automations — go into an AI Workflow Sprint, which redesigns the workflow with AI embedded and tests it with the people doing the work.

In both cases, the output is a tested prototype and a clear scale, iterate, or stop decision — grounded in evidence, not opinion.

How is AI Problem Framing different from a Design Sprint?

A Design Sprint validates a solution against customer needs. AI Problem Framing happens earlier: it picks which AI use cases are worth validating in the first place. You wouldn't run a sprint on a use case nobody has decided is worth pursuing. AI Problem Framing produces 2–3 cards; the sprint that follows (AI Design Sprint or AI Workflow Sprint, chosen per card based on whether the use case is customer-facing or employee-facing) validates each one.

Can a team skip Problem Framing and go straight to AI Problem Framing?

The question itself doesn't quite fit, because Problem Framing and AI Problem Framing aren't sequential steps for the same team. They're separate workshops for separate rooms. Problem Framing is for a leadership group resolving a strategic disagreement; AI Problem Framing is for a cross-functional pod choosing between AI use cases.

A pod with a clear AI mandate has no reason to run Problem Framing — the strategic decision has already been made above them, and they're not the right room to revisit it. They go straight into AI Problem Framing because that's the workshop their decision actually calls for.

What's the cost of picking the wrong workshop?

The most common pattern: an organization runs an AI ideation session — or skips framing entirely — picks a use case that sounds exciting, builds a pilot, and only then realizes it doesn't tie to a strategic priority leadership cares about. Months of build time, vendor cost, and team energy get written off.

AI Problem Framing prevents this by forcing the cross-functional pod to stress-test each use case against business value, customer pain, and technical feasibility before anyone touches a prototype. The cost of one day of framing is typically far less than the cost of a six-month pilot that never makes it to production.