Is growth an art or science? Some praise the creativity of new growth hacks. Others praise the quantitative maneuvering of analytical growth geniuses.
That question has been debated. But are we asking the right question?
The creative juice brings new ideas around storytelling, great copywriting, unique ad creative, and never-before-tried growth tactics. But the quantitative science helps us measure the largest friction points, validate areas of opportunity, and understand how users behave at scale.
It seems the better question is how do we balance both? Truly brilliant growth teams are ones that figure out how to combine the nature of art and science to accelerate growth.
This is why our third principle as a growth team is….
Math and Metrics Don’t Lie, But They Don’t Tell Us Everything
There are three sources of information that drive insight:
1. Quantitative Data (i.e. metrics)
2. Qualitative Data (i.e. user feedback)
3. Your Own Knowledge/Vision/View Of The World (i.e. intuition)
The balance and quantity of each type of information depends on your business type and stage of growth. Two general rules:
1. B2C (vs. B2B) products can and should lean more on quantitative information than qualitative data and intuition, due to the size of the audience and the quantity of data a large user base generates.
2. The later you are in your growth cycle, the more you should lean on quantitative information. If your numbers are small, the amount of insight you can get out of quantitative data is also small.
Each type of information has its own purpose.
As humans we tend to do a few things:
1. Overestimate probability of success
2. Inflate the impact of a success
3. When successful, assume we know what made it successful
These three things are dangerous. They cover up insight and mislead you. As discussed in building a growth machine, insight drives the machine. At its core, data helps keep us honest and more accurate with our decisions and their outcomes.
Our team uses data to assess a few things:
1. Baseline - where are we at today
2. Ceiling - what’s possible in terms of improving an area
3. Sensitivity - what is the overall impact if we improve that baseline
A simple example: let’s say you are trying to decide what to work on, retention or virality. How do you know which one to choose, assuming limited resources mean you can’t work on both? In a lot of companies you end up with people with different “gut feels” arguing over apples and oranges. Exhausting.
To make that decision easier, use data to understand where you are at today in both areas, make an educated prediction about where you think you can improve each to, and understand how that improvement affects the overall outcome over time.
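As an illustration of that baseline/ceiling/sensitivity framing, here is a toy projection comparing a retention improvement against a virality improvement. Every number, along with the `users_after` function itself, is invented for the sake of the example:

```python
# Toy baseline / ceiling / sensitivity comparison. All numbers invented.

def users_after(months, new_per_month, retention, k_factor):
    """Project active users month over month.

    retention: fraction of last month's actives who stick around.
    k_factor:  new users each active user brings in per month.
    """
    active = 0.0
    for _ in range(months):
        active = active * retention + active * k_factor + new_per_month
    return active

# Baseline: where we are today.
baseline = users_after(12, 1000, retention=0.70, k_factor=0.10)

# Ceiling: an educated guess at how far each lever can be pushed.
better_retention = users_after(12, 1000, retention=0.85, k_factor=0.10)
better_virality  = users_after(12, 1000, retention=0.70, k_factor=0.20)

# Sensitivity: what each improvement does to the 12-month outcome.
print(f"baseline:           {baseline:,.0f} users")
print(f"retention 70%->85%: {better_retention:,.0f} users")
print(f"virality 0.1->0.2:  {better_virality:,.0f} users")
```

Whichever lever moves the twelve-month number more, given a realistic ceiling for each, wins the argument without anyone comparing apples and oranges on gut feel.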
Data doesn’t stop at quantitative metrics. A lot of times to understand why a key metric is not improving, we need to ask the users. When most people think about talking to users, they think user testing sessions. But doing user testing at scale is difficult.
Instead, for products like Sidekick that deal with huge audiences, we collect a continuous stream of one-on-one qualitative data via email and on-site messages. A simple example: let’s say you have identified through quantitative data that a lot of users use your product for a couple of months, but then churn. To get an indication of the problem areas, we pull a list of users who show that behavior in the data and email them one-on-one asking why they decided to stop using the product.
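A sketch of how that list might be pulled. The dataset, field names, and thresholds below are our own invention for illustration, not Sidekick’s actual schema:

```python
# Hypothetical sketch: find users who were active for a couple of
# months and then went quiet. Data and thresholds are invented.
from datetime import date

today = date(2015, 6, 1)

# (email, first_active, last_active) -- stand-ins for real event data.
users = [
    ("a@example.com", date(2015, 1, 5), date(2015, 3, 10)),
    ("b@example.com", date(2015, 4, 1), date(2015, 5, 30)),
    ("c@example.com", date(2015, 1, 15), date(2015, 3, 20)),
]

def churned_after_two_months(users, today, min_tenure_days=60, churn_days=30):
    """Users active for at least min_tenure_days who have not been
    seen for churn_days or more."""
    out = []
    for email, first, last in users:
        tenure = (last - first).days
        idle = (today - last).days
        if tenure >= min_tenure_days and idle >= churn_days:
            out.append(email)
    return out

to_email = churned_after_two_months(users, today)
# Each address on this list gets a one-on-one "why did you stop?" email.
```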
Quantitative and qualitative data helps us understand the what and the why, but it doesn’t give us the solution. Otherwise we could just program robots to build our products for us. We have to use our own intuition to come up with possible solutions. Note the emphasis on possible. We think we might know the solution, but ultimately our ideas are just hypotheses and should be validated with quantitative and qualitative data.
Committing To Data
Every growth team needs to have a commitment to data. There are three common mistakes I see:
1. Not Enough Instrumentation
Most teams don’t spend enough time on instrumentation and visualization of data. To put it into perspective, James Currier of Ooga Labs said in his 500 Startups presentation that early teams should be spending 33%-50% of their time on instrumentation and analytics infrastructure. A feature release should not be deemed complete unless it is instrumented in detail.
Most teams have a hard time prioritizing instrumentation when there is a list of features they want to build. Builders like to build. But building without instrumenting will lead you down a path of making blind gut calls.
2. Quantitative But Not Qualitative
While some teams don’t spend enough time building quantitative data collection, other teams end up ignoring the qualitative portion of data collection. Make sure to build systems that make it easy to do extremely targeted surveying of users (onsite, email, etc). Tools like Intercom can significantly help with this. The best systems will regularly poll key pockets of users based on behavioral data to measure trends over time.
3. Collection Without Analysis
Instrumentation and data collection is only step one. The real work is in how you analyze the data, extract learnings, and form those learnings into ideas. There is no point in collecting all of this data unless you are going to put it to use. A core part of a growth PM’s day should be data analysis. If you are a larger team, a dedicated data analyst per team will help you move the fastest.
If you want your team to use data to make decisions, make it as accessible as you can. The harder the data is to access, the less your team will use it. Guaranteed. Use tools like Mixpanel in the early days. Later, when you need more flexibility and customization, invest in custom data infrastructure.
How We Apply This Principle
Here are some more specific ways that we apply the principle: Math and metrics don’t lie, but they don’t tell us everything.
Building Growth Models
We build quantitative models to help us understand and visualize the effects of changes over the long term. This helps us make decisions at the macro level, e.g. what is the highest-impact area we should be working on for the next 1-3 months? Without these models it is hard to understand the effect of changes beyond the short term.
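A minimal, hypothetical version of such a model: project monthly actives from a signup rate and a cohort retention curve, then compare the current curve against the curve we believe an improvement could produce. All numbers are invented:

```python
# A minimal cohort-style growth model -- all numbers invented.

def project_actives(months, signups_per_month, retention_curve):
    """Monthly active users: sum what is left of every signup cohort.

    retention_curve[i] = fraction of a cohort still active i months
    after signing up (retention_curve[0] == 1.0). Cohorts older than
    the curve are assumed fully churned.
    """
    actives = []
    for m in range(months):
        total = 0.0
        for age in range(m + 1):
            if age < len(retention_curve):
                total += signups_per_month * retention_curve[age]
        actives.append(total)
    return actives

current  = [1.0, 0.55, 0.40, 0.32, 0.28, 0.26, 0.25]  # today's curve
improved = [1.0, 0.65, 0.50, 0.42, 0.38, 0.36, 0.35]  # predicted ceiling

base = project_actives(12, 2000, current)
lift = project_actives(12, 2000, improved)

# The gap widens month after month -- exactly the long-term effect a
# single-month snapshot would miss.
print(f"month 1:  {base[0]:,.0f} vs {lift[0]:,.0f}")
print(f"month 12: {base[-1]:,.0f} vs {lift[-1]:,.0f}")
```

Note that the two curves are identical in month one; only a model that runs the compounding forward shows how much the improvement is actually worth.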
Setting Goals With OKRs
We use OKRs (Objectives and Key Results) as our goal-setting framework. One of the reasons I like this framework is that it combines both qualitative and quantitative measures of success. The objective is the qualitative statement, the “why” behind the goal. The key results are the quantitative measures that indicate we are achieving the objective.
Forming Hypotheses
Every experiment we run requires a hypothesis. That hypothesis includes a prediction of the impact of the experiment. Every hypothesis also has to come with a list of the assumptions we are making around the prediction. Those assumptions can be quantitative and/or qualitative. The point is that we don’t take shots in the dark, and the experiments we run are purposefully chosen.
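One lightweight way to enforce that structure, a sketch of our own invention rather than the team’s actual tooling, is to make the hypothesis a record that cannot be scheduled without a prediction and its assumptions:

```python
# A sketch (our own invention) of requiring every experiment to state
# a prediction and its assumptions before it can be scheduled.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    experiment: str
    prediction: str                       # quantified expected impact
    assumptions: list = field(default_factory=list)  # quant and/or qual

    def ready_to_run(self):
        # No prediction or no assumptions = a shot in the dark.
        return bool(self.prediction) and bool(self.assumptions)

h = Hypothesis(
    experiment="Simplify signup form",
    prediction="Signup completion improves from 40% to 48%",
    assumptions=[
        "Quant: 30% of drop-off happens on the second form step",
        "Qual: user emails cite 'too many fields' as a blocker",
    ],
)
```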
Analyzing Every Experiment
Every experiment, whether a success or a failure, must be analyzed by our team. We want to extract learnings from everything we do. The analyses have both quantitative and qualitative components.
We look at what happened. What was the actual quantitative impact? Which of our quantitative assumptions were correct? Which were wrong? The numbers give us answers to these questions.
More importantly, we ask why. Why are we seeing the quantitative results that we are? Why were we right or wrong? We can understand the why by looking at user behavior in the various experiment groups. The why typically leads to additional experiment ideas or action items that we can take.