Design Sprints — The Google Way

When you think of Design Sprints, you probably think of startups — small teams, fast cycles, big bets tested in a week. That's where the methodology came from. Google Ventures designed it to help founders validate risky ideas without betting the whole roadmap.
But Felix Wang doesn't work at Google Ventures. He works at Google's Design Sprint Master Academy, on the Design Relations team — at the center of how a 180,000-person organization runs, evolves, and teaches sprints internally. What he shared is more complicated, more instructive, and more relevant to anyone running sprints inside large organizations than the startup origin story.
His framing for what Design Sprints are actually doing:
"Smart people always come up with smart reasons for their guesses — but it doesn't mean their guesses aren't guesses."
That quote — from Tom Chi, one of the founders of Google X — is the whole argument for the sprint methodology in one sentence. The job isn't to generate better guesses. It's to stop guessing and start validating. Fast.
The Same Methodology. Four Completely Different Implementations.
Google doesn't run one kind of Design Sprint. Depending on which part of the organization you're in, the methodology looks significantly different.
Google X — the moonshot factory — works with small, nimble teams organized around specific projects. The culture is bottom-up. Teams self-organize quickly, test an idea, and move on. Sprints here look closest to the original GV model: fast, flat, and low-ceremony.
Google Ventures — now effectively running as an internal agency — works with early-stage founders. A single Decider role is central to how decisions get made. The sprint is structured around that authority and the founder's ability to commit.
Ads & Commerce — established, complex, data-driven products with a high cost of mistakes. Engineering perspective is fully integrated. Sprints here are heavily anchored in experimentation and involve more formal governance before a direction is committed to.
Corp Eng — internal software for Googlers. Limited UX resources. Engineering-centric culture. The sprint team is often larger, with more stakeholders. Co-creation and direct user access (other Googlers) shape the format significantly.
Same methodology. Same six phases — Understand, Define, Sketch, Decide, Prototype, Validate. Completely different implementation.
Felix's conclusion was direct: design sprints are not plug-and-play. They must be adapted. The recipe works — that's precisely why it's been adopted by thousands of organizations worldwide. But applying it well in a large, complex organization requires understanding what the recipe is actually doing at each step, so you know which parts are fixed and which parts flex. The teams that struggle with Design Sprints inside large organizations almost always do so not because the method doesn't work, but because they haven't adapted the format to fit the culture, team size, risk tolerance, and decision-making dynamics of their specific context.
The Six Phases — The Google Way
Google's sprint runs six phases. Here's how Felix described each one — not as a recipe to follow, but as a set of moves with choices inside each.

Understand
The team unpacks the problem and creates shared knowledge. The goal is to get everyone on the same page: what do we all know right now? How Might We exercises are a core tool here — turning pain points into opportunities. The dark art, as Felix put it, is finding the right level of specificity: not so broad it generates nothing useful ("how might we make people happy"), not so narrow it closes off ideas ("how might we make this button blue"). Other methods: user interviews, lightning talks, experience mapping, Six Thinking Hats. The phase is about generating shared understanding, not generating solutions.
Define
Once the team understands the space, they align on what they're actually trying to do. User journey mapping, design principles, success metrics, value proposition canvas — these are the tools for scoping the ideal journey and agreeing on what a good outcome looks like. This phase answers: what exactly are we building, and how will we know if it worked?
Sketch
Broad idea generation in a time-boxed format. Crazy 8s is the anchor exercise — eight ideas in eight minutes, quantity over quality, speed over polish. The point is to move past the first obvious idea. Felix also uses comparable problem analysis here: if you're working in finance on savings habits, look at how fitness apps build workout habits. The problems are different; the mechanics might transfer. Solution sketches follow — a more detailed, three-panel treatment of the best idea, built to stand alone without verbal explanation.
Decide
Narrowing from many ideas to one direction. Dot voting and decision matrices help the group evaluate options against the sprint brief and success criteria. The key question in this phase: given everything we know, which idea best answers the challenge we defined? The Decider role matters here — someone with authority to make the call, informed by the group but not overridden by it.
Prototype
Create something real enough to get genuine user feedback. What "real enough" means varies significantly by project type. Physical products: cardboard, clay, form factor mockups. Software: clickable mocks, interactive prototypes. Process-driven challenges: a concept walkthrough, a storyboard of the new process. As Felix put it: "It's up to you as the facilitator to pick and choose what method to use to fit the overall brief and challenge." The prototype is not the answer. It's the question, made tangible.
Validate
Put the prototype in front of users and find out. Generate learning. The team observes, captures insights, and then decides: does this path hold? Do we continue, iterate, or go back to the drawing board? This phase is not about confirming what the team already believes. It's about finding out what's actually true.
Across all phases, Felix emphasized one thing consistently: there is a long list of methods you can swap in and out at each stage. The phases are fixed. The methods inside them are choices. That's where facilitation judgment lives.
What the Six Phases Don't Tell You
One thing Felix was direct about: the six phases are not the sprint. They're the middle of it.
The planning phase — stakeholder interviews, problem framing, participant selection, understanding the history of the challenge — is where most sprint outcomes are determined before Day 1 starts. And the follow-up phase, where you make sure the insights and decisions from the sprint are actually acted on, is where most sprint value is lost when it's skipped.
For experienced facilitators, this isn't a surprise. But it's worth naming because it's where the gap between a well-run sprint and a sprint that actually changes something tends to open up.
Felix was also clear on outcomes: not all sprints end in positive results, and that's not a failure.
"Even if it might not be totally successful — if you take a step back, you realize what mistakes were made and you don't have to repeat those mistakes later. So it kind of is a success."
For a practitioner room, that reframe matters. The sprint produces learning. Sometimes the learning is that the direction was right. Sometimes it's that the direction was wrong. Both answers are worth the week.
What Google Uses to Anchor Every Sprint
Across all four contexts, one preparation practice is consistent: a comprehensive Sprint Brief.
The Brief captures the challenge, goals, user types, platform considerations, and timeline before the sprint starts. It forces the clarity the sprint needs to run well — and it surfaces misalignment between stakeholders before it shows up in the room on Day 1.
For experienced facilitators, this isn't new information. What's worth noting is that even inside Google, even with experienced facilitators and mature sprint culture, the Brief remains non-negotiable. It's not a formality. It's the shared contract the team returns to every time a decision gets contested.
DSA's Design Sprint Brief is available as a standalone tool →
The Principle That Outlasts the Process
The most transferable insight from Google's approach isn't a specific exercise or a phase modification. It's this:
Sprints only work at scale when you make them fit the organization — not the other way around.
This matters for any facilitator working inside a large, complex organization. The recipe works — the structured sequence of exercises, the time-boxing, the decision-making rituals — these are proven. What varies is the context they're applied to. The teams that get the most from Design Sprints are the ones who understand what each step is doing well enough to adapt the form without losing the substance. That judgment — knowing when to hold the structure and when to flex it — is what separates a skilled facilitator from someone following a playbook for the first time.
Felix's Advice to Facilitators
The closing of the webinar was the most useful part for practitioners. Felix's advice was direct:
Start small. Don't wait until you have the full methodology figured out before you run anything. Facilitate a How Might We session. Facilitate the prototyping phase on its own. Break the sprint into individual exercises and run them at different times. The more you do it, the more you develop your own facilitation style — what you're naturally good at, what you need to work on.
"It's really hard to give you a 10-step plan for becoming a good facilitator. Even for me, having done this for 3–4 years, there are still things I find I can improve on. Just go out and do it."
And on the question of how to get teams who resist sketching or creative exercises to participate:
"Think of yourself as the host of the party. You have special abilities to get people to do things. If you face real hesitancy, just pull out your facilitator card and tell people what they need to do."
Design Sprints, Felix noted, are effectively design thinking in action rather than in theory — "design doing rather than design thinking." The methodology packages the same human-centered principles that IDEO pioneered into a structured format that's easier to teach, easier to run, and easier to repeat.
That teachability is precisely what makes it worth upgrading.
Where That Principle Leads in 2026
If you're already running Design Sprints, you've probably noticed that client conversations have shifted. A year ago, clients wanted to validate product concepts.
Now they're asking different questions:
- Where in our organization could AI meaningfully change how work gets done?
- Which tasks should stay human decisions, and which should we redesign around AI?
- How do we evaluate an AI use case before committing engineering resources to it?
- How do we get operations, technology, data, compliance, and business leadership in the same room and actually make a decision?
These are not product questions. They are workflow and organizational design questions. And they don't map cleanly onto the Design Sprint format — which was built to prototype and test product concepts, not to redesign how work happens.
The facilitation challenge is also different. In a product sprint, you're managing creative tension and decision-making around a prototype. In an AI workflow session, you're managing the gap between people who understand the business operations and people who understand what AI can realistically deliver — two groups who rarely share a language, rarely agree on scope, and rarely trust each other's constraints.
The room is harder. The stakes are higher. And the facilitator's methodology needs to be built for it.
The AI Workflow Sprint — The Next Adaptation
The AI Workflow Sprint is a 4-day workshop for redesigning employee workflows around AI — and deciding, with evidence, whether to build them.
It brings together the people who understand the work with the people who understand AI — and gives them a structured process to answer one question: how should we operate differently because of AI?
Read more about the AI Workflow Sprint →
What This Means for Your Practice
If you already facilitate Design Sprints, you have the foundation. You know how to manage a room, hold a process, and move a group toward a decision. That's not a small thing — most AI workshop facilitation fails precisely because the facilitator doesn't have that capability.
What you may not yet have is the AI-specific methodology layer: a structured way to help a team map a workflow, evaluate AI feasibility and risk, prototype a redesigned process, and produce a decision that stakeholders will actually stand behind.
That's the gap the AI Facilitator Training is designed to close.
It's a three-day, in-person training in Berlin for facilitators and consultants who want a clear system for guiding AI decisions with teams — from opportunity discovery through to validated workflow design. It covers two structured methods: AI Problem Framing and the AI Workflow Sprint. You leave with the full playbook, not just the certificate.
It's not designed for beginners. It's designed for practitioners who already know how to run a room — and want the AI-specific methodology to run this new kind of room well.