A few weeks ago in our Manifesto for the Data Informed, one of the five beliefs presented was Company-wide familiarity with metrics rather than outsourcing to ‘data people.’
Immediately, we were pummeled with questions: Does this really matter? Is this a realistic expectation? How can an organization achieve this? So for our next few posts, we’ll deep dive into how to make this lofty aspiration practical.
As with anything else in life — whether people, languages or customs — there is no shortcut to gaining familiarity; the only way is through direct and frequent exposure.
With data, most teams inherently understand this — that is why dashboards are built and links are passed around and we are all reminded to “please bookmark it and check it often.”
Unfortunately, unless your job title includes the word data, the practice of loading said bookmark does not frequently rise to the top of your to-do list, even if you really truly do think data is important! Thus begins the great death spiral of dashboards — because they go unused, they become unmaintained. Because they are unmaintained, when you finally have a need to look at them, they’re broken and useless.
This is why data-informed teams rely on practices other than sheer will to create data familiarity. The big three are (1) weekly metrics reviews, (2) weekly insight reports, and (3) insights reviews.
In this installment, we’ll tackle one of the single most impactful practices of building a data-informed team: the weekly metrics review.
What is a Weekly Metrics Review?
A weekly metrics review is a synchronous team meeting to review the key metrics for a scaling, post-PMF product with all functional team members present — i.e., PM, engineering, design, operations. This type of review can (and should!) happen at the executive level, with the CEO and C-level executives, and recurse down to individual product teams.
A weekly metrics review should be short and sweet (think 5–15 minutes, typically at the start of a regular team meeting) and led by the data person who walks the group through the key metrics for your collective area of work (e.g. new user growth, revenue, conversion rates, tickets resolved).
The group should examine how key metrics have progressed over the past few weeks, ideally by looking at a series of time-series line charts. The presenter can also prepare a few key segments to review, for example if a certain type of user, platform, or market is strategically important to the team, or if the team has launched something that impacts a particular segment (like a new feature in a test market).
It’s best to keep the meeting lightweight. Preparation should be easy, ideally no more than 30 minutes. Many great metrics reviews simply start with screenshots of dashboards. The data person shouldn’t have to have all the answers at their fingertips (why did active users spike two weeks ago?). It’s fine to circle back with an answer later.
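Much of that 30 minutes of prep can be scripted. As a minimal sketch (with made-up weekly active user counts standing in for a real export from your warehouse or dashboard tool), the week-over-week deltas the group will want to see can be computed like this:

```python
# Hypothetical weekly metric values; in practice these would come from
# your data warehouse or a dashboard export.
weekly_active_users = {
    "2024-W01": 10_400,
    "2024-W02": 10_900,
    "2024-W03": 11_350,
    "2024-W04": 11_200,
}

def week_over_week_changes(series):
    """Return (week, pct_change) pairs comparing each week to the prior one."""
    weeks = sorted(series)
    changes = []
    for prev, curr in zip(weeks, weeks[1:]):
        pct = (series[curr] - series[prev]) / series[prev] * 100
        changes.append((curr, round(pct, 1)))
    return changes

for week, pct in week_over_week_changes(weekly_active_users):
    print(f"{week}: {pct:+.1f}% WoW")
```

A table like this (or the equivalent line chart) is usually all the presenter needs to walk the room through the trend.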
What is a successful outcome for a weekly metrics review?
Metrics reviews are unlike product reviews or decision meetings; the point is a shared understanding of how the team is progressing towards its goals, not making decisions or creating action items.
This is an unusual proposition, especially when ambitious teams are (rightfully!) wary of wasting time in meetings. After reading an article like this and trying out this style of meeting, you may find yourself wondering: Is this meeting actually helping us accomplish anything? It’s common for the weekly metrics review to get cancelled, demoted to an e-mail update, or become the spawning ground for long lists of follow-up questions just so the group feels they did something.
And yet, we argue that familiarity with the impact of the team’s work is in and of itself an important enough goal to create a synchronous cadence around. Because we work in a team environment, having shared context across different functions is critical. The ideal outcome of a metrics review is that each person develops a shared understanding of the following questions:
1. How are we doing against our goals?
Few things focus a group as effectively as seeing a chart of week-over-week progress inching towards a goal target. The ritual of doing this together forces the room to confront questions like Are we likely to hit our goal? and Is our recent work having the impact we expected?, which ultimately ladder to the Holy Grail question: Is our strategy working?
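The "are we likely to hit our goal?" question can even be made mechanical. Here is a naive linear-pacing sketch (not a real forecast — it assumes the recent average weekly gain simply continues, and all the numbers are hypothetical):

```python
def on_track(weekly_values, goal, weeks_remaining):
    """Project the recent average weekly gain forward and compare to the goal.

    Returns (likely_to_hit, projected_final_value). Deliberately naive:
    no seasonality, no confidence interval, just straight-line pacing.
    """
    gains = [b - a for a, b in zip(weekly_values, weekly_values[1:])]
    avg_gain = sum(gains) / len(gains)
    projected = weekly_values[-1] + avg_gain * weeks_remaining
    return projected >= goal, round(projected)

# Four weeks of progress toward a quarterly goal of 200, with 5 weeks left:
print(on_track([100, 110, 125, 135], goal=200, weeks_remaining=5))
```

Even a crude check like this gives the room a concrete starting point for the "is our strategy working?" conversation.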
2. Which levers are most important in reaching our goals?
If it were obvious what a team should do, then success would be guaranteed. Alas, this is not how the world works.
Creative teamwork is an infinite treadmill of trial and error — come up with ideas you hypothesize will lead to a desired outcome, then do the work to see if you’re right. Taking stock of the results is how you hone your skills.
Take an example: let’s say we spent three months shipping Feature X. We might then have a decision to make: should we work on Xv2 next? Simply looking at X’s results in isolation isn’t enough to answer this question; we must consider it relative to other possibilities — what about improving features A, B or C? When we take the time to regularly reflect on the impact of our work through metrics reviews, we build towards a richer knowledge of what really drives our business.
Put another way, a major benefit of implementing weekly metrics reviews is that everyone on the team will improve their product thinking. For example, a designer will be less likely to propose putting an important button at the bottom of the page if he knows there’s roughly a 2x decrease in CTR compared to the top. An engineer will be more vigilant about holding the line on performance as she implements a new feature if she’s aware that snappy loading is a huge driver of usage.
3. What is normal?
Every business has some level of volatility and seasonality. For example, enterprise apps are more used during weekdays, while video games are played more on weekends. Say you expect Januarys to be worse than Decembers for your business, and as predicted, you see a January dip. How big would the dip have to be for you to be worried that something is wrong?
Familiarity with the normal rhythms of your business is essential to quickly identifying when you might need to take action because something is amiss, or when there’s likely to be an error in your measurement. For example, if a team member reports that a recent experiment led to a 20% increase in Key Metric X, but you know X to be extremely difficult to move, you would be suspicious and probe further. Over time, a well-honed understanding of what is normal leads to better forecasting and future planning.
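One lightweight way to encode "what is normal" is to compare this week’s number against the mean and standard deviation of recent weeks. This is a rough sketch under simplified assumptions (hand-picked threshold, no seasonality adjustment), not a substitute for real anomaly detection:

```python
from statistics import mean, stdev

def is_surprising(history, this_week, threshold=2.0):
    """Flag this week's value if it sits more than `threshold` standard
    deviations away from the mean of recent history."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return this_week != mu
    return abs(this_week - mu) / sigma > threshold

# A metric that normally hovers around 1,000, give or take a couple percent:
recent_weeks = [980, 1010, 995, 1005, 990, 1020]
print(is_surprising(recent_weeks, 1005))  # within the normal range
print(is_surprising(recent_weeks, 1200))  # worth probing before celebrating
```

The point isn’t the formula — it’s that a team who reviews these numbers weekly carries an intuitive version of this check in their heads.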
Done well, the practice of weekly metric reviews creates a shared understanding of progress, hones a team’s strategic chops, and enables faster detection of issues.
But beware one of the secret killers of metric reviews — too many follow-up questions.
To be clear, asking questions is an important part of building familiarity, and good questions can lead to important new discoveries. However, not all questions are worth the effort it takes to answer them. It’s easy to rattle off a dozen questions just because you can and you’re a curious person, but you may unwittingly be creating hours of work in chasing follow-ups. Instead, aim to ask questions that are likely to lead to a change in decision.
A good litmus test for whether a data question is worth asking is this: What is my estimate for the most optimistic answer, and what will I do if that’s the real answer? Conversely, What is my estimate for the most pessimistic answer, and what will I do if that’s the real answer? If your actions for both questions are the same, you have a question that’s probably not worth answering.
As an example, consider the question: What is our worst-performing country, and how much worse is it doing compared to an average country?
If your pessimistic answer is Honestly I’m not going to do anything with this information unless it’s one of our Top 10 countries, then having someone spend a bunch of time querying for the result may not be worth it. (A better question might be: Which Top 10 country is performing the worst this quarter?)
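The reframed question is also much cheaper to answer. A minimal sketch (with hypothetical conversion rates and a hypothetical Top 10 list standing in for a real warehouse query) might look like:

```python
# Hypothetical quarterly conversion rates by country; a real version
# would query the data warehouse instead of hard-coding values.
conversion_by_country = {
    "US": 0.042, "IN": 0.031, "BR": 0.027, "DE": 0.045, "GB": 0.044,
    "FR": 0.038, "JP": 0.036, "MX": 0.024, "ID": 0.029, "CA": 0.041,
    "NG": 0.019, "VN": 0.022,  # outside the top 10 by volume
}
top_10 = ["US", "IN", "BR", "DE", "GB", "FR", "JP", "MX", "ID", "CA"]

# Restrict attention to the countries we would actually act on.
worst = min(top_10, key=lambda c: conversion_by_country[c])
print(worst, conversion_by_country[worst])
```

Scoping the question to the actionable set up front saves the analyst from ranking every country only to have most of the result ignored.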
How can I start a Weekly Metrics Review?
Practically speaking, starting the habit of metrics reviews isn’t always easy, because you need to convince other functions that it’s worth their time. If you are the data representative, start by discussing the idea with the person who has overall accountability for the team’s work and would be most interested in tracking progress and aligning towards a common goal. This could be the CEO, a general manager (e.g., head of operations, head of sales), or a product leader (e.g., PM, engineering lead).
If you yourself are the directly accountable individual, then propose the idea of a metrics review to your data partner. If both of you agree, then ask the rest of the team to commit to trying it for at least a quarter. Tacking the practice onto the start of a pre-existing team meeting also reduces friction.
Your metrics review doesn’t have to be fully comprehensive to start. Consider beginning with a 10-minute slot every other week. The data representative should be prepared to direct the agenda and remind everyone of the goals in the early sessions (This is our main goal metric… I am showing you this segment because… I am showing you this related metric because…) Use the first few sessions to get feedback about the right set of metrics and cuts to look at. Remember: it’s better to start small and add over time than to overwhelm the group by flipping through dozens of charts and numbers. You can always follow up with a short e-mail summarizing what got discussed, with links for further exploration.