
4DX Part 1: Wildly Important Goals and Product teams

By Tim Boughton — February 28, 2022

Welcome to Part 1 in our blog series looking at how our Product and Engineering teams have been learning to adopt 4DX and make it effective. 

Thinking of using 4DX in your product or tech business? Read on.

(Not sure what 4DX is? Check out our intro blog here.)

While 4DX is pretty straightforward for teams with obvious lead measures - such as a Sales team making more outbound calls, sending more emails, etc - the same can’t be said for a Product delivery team. As we’ve experienced first-hand, applying lead measures to what we do can feel near-impossible – we’ve certainly found ourselves in a cul-de-sac or two!

How are we applying 4DX in Product and Engineering?

We’ve always had a “lean product team” approach to building software. We have small cross-functional teams of product managers, UX designers, tech leads and engineers who work together on a part of the product roadmap. They are guided by a north star metric and an understanding of customer pains (rather than a pre-defined roadmap of features).

You could say that we’ve already been applying some of the 4 disciplines up to this point, because the lean product process is itself a way to manage regular accountability.

For example, we have a regular cadence of planning cycles, where the team agrees the objectives of their two-week sprints. We have daily standups for accountability and course correction. And we have a continuous improvement process built around regular retrospectives.

But priorities can shift and it can become all too easy to drift off our north star metric or for progress to become unclear.

Here are some of the lessons we’ve learned so far while applying the 4DX approach in our Product teams.

1. Focus on the Wildly Important

Our product-related WIGs focus on delivering new products to market and the commercial upside we expect to see off the back of them.

The problem with our lean product process is that we don’t always know exactly what that product will be, even six months out. We must first define then validate the client pain points so that we can grow more confident we’re building the right thing. We have to trust the process to produce the right product.

So the product WIG in our case might be “Deliver a product that has the potential to drive £xM of product upsell revenue” as opposed to anything more specific. That’s ok. But it takes a mature product team to be happy with this level of uncertainty at the start of the process.

Lesson 1: it requires discipline from the PM and tech leads to help everyone find their feet.
We found we needed to encourage everyone, especially the engineers, to feel comfortable with the early stages of the process. Even if they couldn’t naturally contribute to the validation or client research parts of the goal, they could be working on early spikes or prototypes or even foundational pieces of technology that would be needed along the way.

We tried to turn this into a positive: it encourages everyone to play a part, even in areas they aren’t specialised in, and to get better context on the problem at hand. One of the upsides is that the whole team understands why we’re doing the validation and can take an interest, since it will lead to what they’ll be building later.

This question of: “how can my work help the WIG?” or “what should I be doing now to help?” was tricky to answer early on for some team members.

Lesson 2: the “Whirlwind” can be confusing for product teams.
There’s always a lot to do in a product team: answering questions, writing documentation, fixing defects, maintenance, technical debt, meetings with the team, managers, partners. In 4DX language, this is the “Whirlwind”.

The WIG forces the team to focus in one direction, rather than 10 different ones. In a Product team dedicated to a single WIG, the time spent on it might be as high as 80%. In others it might be significantly less. As long as the WIG comes first, the team is free to work on other important, if secondary, items.

Sometimes things come up that are more important than the WIG: a P1 defect, a security vulnerability, a specific client need. As long as these are surfaced and the priority calls made, they can live alongside the WIG and fill in any gaps around the WIG commitments.

In an ideal world, it’s the discretionary, unimportant things that should drop from the list when 4DX is in place. Discretionary time the team used to spend on many different, unimportant things now gets focussed into the one most important goal.


Lesson 3: it can be tricky to allocate product resources across multiple WIGs
We have several WIGs across the business. There’s no getting around that we want to achieve a lot of different things – and most of them require input from Product.

In theory, each individual belongs to just one WIG team. In practice, more than one WIG team needs product input. 

And so inevitably some people belong to two or three WIGs.

These dependencies and conflicts are a natural part of business. You could say that the good thing about 4DX is that it makes them more apparent and forces the right priority to be selected at each conflict.  Or at least the right conversation to be had!

Currently we feel like it’s the product managers’ job to own those priority calls and to make sure enough work gets done on each WIG. Or to escalate when priorities conflict. It’s something we’re acutely aware of.

2. Act on the Lead Measures

We’ve found that identifying good, predictive lead measures of success is hard in product teams. Harder than elsewhere.

Lesson 4: Break down any product delivery into stages to find appropriate lead measures
To try to help, we’ve broken down each WIG into project stages (e.g. Pain validation, Ideation, Development, Alpha testing, Beta testing and Launching). This has allowed us to uncover different lead measures at each stage. 

For example in the Validation stage, success might be all about having a number of interviews with clients (or getting a number of survey responses for a quantitative validation). In Alpha testing it might be about recruiting enough clients or collecting enough pieces of useful feedback. In Development it might be story points delivered or progress towards specific milestone releases.
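To make that concrete, here’s a rough sketch in Python of how lead measures could sit against each stage and be checked week by week. The stage names, measure names and targets below are invented purely for illustration; they aren’t our actual figures.

```python
# Purely illustrative sketch: lead measures per project stage, with weekly targets.
# The stages, measures and numbers are made-up examples, not our real ones.

STAGE_LEAD_MEASURES = {
    "Pain validation": {"client_interviews": 5},
    "Alpha testing": {"alpha_clients_recruited": 3, "feedback_items_logged": 10},
    "Development": {"story_points_delivered": 20},
}

def weekly_gap(stage: str, actuals: dict) -> dict:
    """How far each lead measure is above (+) or below (-) its weekly target."""
    targets = STAGE_LEAD_MEASURES[stage]
    return {measure: actuals.get(measure, 0) - target
            for measure, target in targets.items()}

# Example: two interviews short of the validation target this week.
print(weekly_gap("Pain validation", {"client_interviews": 3}))
# {'client_interviews': -2}
```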

The biggest problem here is that lead measures in Product tend not to be terribly predictive of achieving the end goal. If you do more validation and find you haven’t got a pain point big enough to solve, you need to change tack to find one that is. More validation doesn’t necessarily guarantee success. Or if later in the project you have a story point burn-down lead measure, how do you make certain that you’re building the right thing, or that your estimates are accurate enough?

Again, however, having these conversations in the open, so everyone in the team knows where they are and how urgent a change of direction is, is a major advantage that 4DX encourages.

Overall it means that progress against the WIGs is more subjective than we’d like. We need to take some of the scoreboards with a pinch of salt. 

We’re considering adding a lead measure of “team confidence in our ability to hit the WIG”, measured by a short survey across the team, to better assess overall progress.
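As a hypothetical sketch of how that could work: each team member scores their confidence from 1 to 5 in a short weekly survey, and the scoreboard shows the average. The roles and scores below are made up.

```python
# Hypothetical "team confidence" lead measure: the average of a 1-5 weekly survey.
# Roles and scores are invented to illustrate the idea.
from statistics import mean

weekly_confidence = {"PM": 4, "UX": 3, "Tech lead": 4, "Engineer A": 2, "Engineer B": 3}

score = mean(weekly_confidence.values())
print(f"Team confidence this week: {score:.1f} / 5")
# Team confidence this week: 3.2 / 5
```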


Lesson 5: It’s ok for the amount of weekly work each individual spends against each WIG to ebb and flow
To begin with, the focus of the WIG may be "lighter" on the engineering team and "heavier" on the PMs/Design team as they figure out what to deliver.

Sadly, that doesn't mean we engineers get to put our feet up. Instead, we continue on other work (e.g. prototyping, investigation spikes, technical research, technical debt, foundational components) while chipping in to help the PMs (e.g. providing feedback or commenting on technical feasibility). Later in the project it might be more intensive: the whole team actively working on designing, building and releasing features.

In other words, it's ok for a team member to spend an hour contributing to a WIG one week, and almost all their time another week. The WIG team must collectively hold themselves accountable for what will make it possible to achieve the WIG. Each member needs to ask themselves “what are the one or two things I can do THIS WEEK to move my team closer to the WIG?”

The team must keep talking to be clear on what each person is and isn’t doing to satisfy competing needs.

Once we get up to speed, this “what should I be doing?” problem should go away, as the product manager will be running validation for future work in parallel with the delivery cycle.

3. Keep a Compelling Scoreboard

People play differently when keeping score. Perhaps you’re one of those people who enjoys playing social sports or board games with friends ‘just for the fun of it’, but you also find yourself changing strategy, playing harder and paying more attention to the details once you start keeping score. It’s the same in business, and it impacts levels of engagement and, ultimately, success.

As it says in the book: “If you’re not keeping score, you’re just practising.”

If you get the lead measures to be things that the team can understand and get behind, then keeping score is no different in a product team than any other team.
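As a tiny, hypothetical illustration of what a compelling scoreboard boils down to: each week, compare the lead measure actual against its target and make it obvious whether the team is winning. All the numbers below are invented.

```python
# Tiny, hypothetical scoreboard: compare each week's actual against its target
# and flag whether the team is "winning". All numbers are invented.

weeks = [
    ("Week 1", 5, 5),  # (label, target, actual)
    ("Week 2", 5, 3),
    ("Week 3", 5, 6),
]

for label, target, actual in weeks:
    status = "winning" if actual >= target else "behind"
    print(f"{label}: {actual}/{target} client interviews -> {status}")
```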

There is, though, perhaps a higher degree of scepticism about the value of keeping score amongst product teams! So it requires more selling up front.

4. Create a Cadence of Accountability

The fourth discipline is, in fact, the most important one: holding each individual accountable for their progress, together as a team.

Accountability is driven by the weekly WIG meetings where each team member accounts for the commitments they made last week and commits to specific tasks this week. The team can check their progress, remind themselves of the importance and course correct since they can see what is happening on the ground.

There’s one big change lurking here: in a typical lean product team using an agile methodology, you think of the team as one, collectively responsible rather than individually responsible. It can feel slightly jarring when individuals start making their own commitments.

We’ve stated that people play differently when keeping score, which is to say, when they’re responsible and accountable for their own score keeping, they play more effectively. It’s about accountability; not to their leaders, but to themselves and to their peers. Committing to goals that are their own is key. So we have to accept this difference and try to maintain it.

Our progress so far

A few weeks into putting the disciplines into practice, we’re doing okay. We’re organised into WIG teams, we have scoreboards and we’re making progress. We’re yet to hit our goals, but we’re more confident than we were under OKRs that we will.

The true test lies ahead: 

  • What happens when we’re mid goal and the lead measures are all off track? How do we double down to change course?

  • Can we each hack this level of accountability? There is nowhere to hide. And it can feel relentless when a team is behind.

  • What happens when the “Whirlwind” becomes a “Tornado”? How do we make allowances when a WIG is knocked off course by other urgent events?

Watch out for Part 2 - we’ll run another article in a few months once we’ve got more answers and learnings from our process.

And if you want to find out what it's really like working with 4DX, we're recruiting.

