In-Depth Insights and Guides
About Customer Acquisition And Growth

If you are looking for "hacks" or "tricks" (that probably don't work), then my content is not for you. Subscribe to my email list if you want detailed, in-depth guides on growth.

One email per week, unsubscribe anytime with one click

Get Out of the ARPU-CAC Danger Zone with Channel Model Fit

Channel Model Fit is simple - channels are determined by your model.  

First, what do I mean by “Model?”  The two most important elements of your model are:

  1. How You Charge - For example, free (monetized with ads), freemium, transactional, free trial, one year up front, etc. 
  2. Average Annual Revenue Per User - The average revenue you make from a customer/user per year.  
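To make these two model elements concrete, here is a minimal sketch in Python of the ARPU-CAC relationship the article's title refers to. The function name, the one-year payback threshold, and the dollar figures are illustrative assumptions, not definitions from the post.

```python
# Illustrative sketch only: the one-year payback threshold and all
# dollar figures are assumptions for demonstration, not rules from
# the article.

def arpu_cac_check(annual_arpu: float, cac: float) -> str:
    """Flag whether a channel's acquisition cost fits the model's ARPU."""
    if annual_arpu <= 0 or cac <= 0:
        raise ValueError("ARPU and CAC must be positive")
    payback_years = cac / annual_arpu
    if payback_years <= 1.0:
        return "ok: CAC paid back within a year of revenue"
    return f"danger zone: ~{payback_years:.1f} years to pay back CAC"

# A $50/year freemium product can't sustain a $200-CAC paid channel:
print(arpu_cac_check(annual_arpu=50, cac=200))    # danger zone
# A $5,000/year product can support higher-touch, pricier channels:
print(arpu_cac_check(annual_arpu=5000, cac=2000)) # ok
```

The point of the sketch matches the article's framing: which channels are viable is determined by how you charge and how much you make per user per year.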

Product Channel Fit Will Make or Break Your Growth Strategy

Earlier, I discussed our common obsession with Product Market Fit that has led to false beliefs such as “Product Market Fit is the only thing that matters.” A byproduct of that false belief is statements such as:

“We are focused on product-market fit right now. Once we have that we’ll test a bunch of different channels.”
There are two major issues with this statement. I’ll break them down separately. 

The Road to a $100M Company Doesn’t Start with Product

While Product Market Fit isn't the only thing that matters, it is important, so it makes sense that there is no shortage of blog posts explaining Product Market Fit and how to get it. 

Instead of echoing the many great Product Market Fit explainer posts out there, I'm going to focus on the 5 elements of Product Market Fit that I believe are most misunderstood and overlooked:

Why Product Market Fit Isn't Enough

I’ve been lucky to have been part of building, advising, or investing in 40+ tech companies in the past 10 years. Some $100M+ wins. Some, complete losses. Most end up in the middle. 

One of my main observations is that there are certain companies where growth seems to come easily, like guiding a boulder downhill. These companies grow despite having organizational chaos, not executing the “best” growth practices, and missing low hanging fruit. I refer to these companies as Smooth Sailers - a little effort for lots of speed.

In other companies, growth feels much harder. It feels like pushing a boulder uphill. Despite executing the best growth practices, picking the low hanging fruit, and having a great team, they struggle to grow. I refer to these companies as Tugboats - a lot of effort for little speed.

Inside the 6 Hypotheses that Doubled Patreon’s Activation Success

I rarely accept guest posts on this blog, but this opportunity was too good to pass up.  A couple months ago Tal Raviv (Growth PM @ Patreon) sent me a short email - "Brian, the results are in.  We doubled (yeah, doubled) new creators in our onboarding." 

My immediate response: holy f*ing YES!!!! My second response: let's write about this. I've spoken and written about the importance of onboarding a lot before, but solid cases/examples out there are few and far between. So Susan Su, Head of Marketing at Reforge, dug in with the Patreon team to produce this amazing piece. It covers:

  • How Patreon decided on and defined a leading indicator activation metric.
  • The challenges they faced with that metric.
  • The six hypotheses they tested, where the hypotheses came from, what worked and what didn't. 

Building a Growth Team from Zero to Fifty

Growth is still an emerging discipline, and not everyone has a structured growth team within their org. But, let’s say you get to start from scratch and build the ideal growth team. What people and roles would you start with?

Andrew Chen and I recently sat down to look at a few configurations to consider as you're scaling up a team around growth. We've broken up the conversation into three videos, with notes below each video. 


1. The Minimum Viable Growth Team -- 1 to 3 people
2. The 10-20 Person Growth Team -- The 1-5-10 Ratio and Growth Specialization
3. 50-Person Growth Team and Beyond -- Where Growth Drives Product

How You Battle the "Data Wheel of Death" in Growth

Data Isn’t Constantly Maintained -> Data Becomes Irrelevant / Flawed -> People Lose Trust -> They Use Data Less

If the above looks familiar, you’re not alone. I estimate that greater than ⅔ of data efforts at companies fail.

This is trouble because data plays a key horizontal role in the growth process and mindset. Without good data, it’s not possible to run a legitimate experimentation cycle.

Today, I’ll take a look at 4 reasons why well-meaning data efforts fail so often, and what you can do about it.

Growth Benchmarks Are (Mostly) Useless

Quick note: I wanted to let you know that we are starting to build out the team at Reforge. We are hiring a Sr Product Marketing Manager, Sr Content Developer, and a Content Marketer. If you, or someone you know, is passionate about professional development/education then please get in touch.

Most growth communities, forums, and email lists will inevitably have that thread that goes: 

“Hey, what are the benchmarks everyone’s seeing for X?” 

I constantly find people seeking out benchmarks or pointing to benchmarks, and we’ve all been there -- who doesn’t want some normalizing data to understand whether we’re on track or not?

The desire is further fueled by many companies releasing “benchmark reports.”  I understand why. It makes for sexy content marketing.

But there’s one small issue:

Growth benchmarks are mostly useless. There is a better way, which I'll outline below.

Now, back to benchmarks... we can plot most benchmark information on the 2x2 matrix below:

On the X-Axis, we have sample size, representing the number of data points included in the benchmark.

On the Y-Axis we have similarity, representing how similar the data-producing sample group is to your own product, business, and channel. High similarity would mean that the sample group has the same target audience, a similar product, and a similar business model. 

There are tons of benchmark reports and intelligence tools out there, and I used to recommend a handful whenever I’d get asked the inevitable question about benchmarks. 

But the more I thought about the question, the more I realized that these posts and tools are not that helpful. To assess when and how benchmark data can aid growth, let’s go through some common sources of benchmark data and where they fall on the matrix.

Benchmark reports

Most benchmark reports live in the lower right hand quadrant of the matrix. 

They usually draw on a larger sample size, but that sample size scores low on similarity to your business. In the best case scenario, the sample might represent an industry category, but will still include lots of data points for audiences or business models that are completely different from yours.  

There are a bunch of problems with your typical benchmark report, so let’s go through them one by one:

1. Aggregated noise

Most benchmark reports present aggregate numbers for their entire sample. The problem is that 80%+ of the sample is typically low-quality applications or companies, which generates an incredible amount of noise in the data. Most of the people reading this article are probably trying to build venture-backed businesses. By definition, to be a venture-backed business you need to be in the top 10%, or be a complete outlier.  

2. Averages are useless

Most benchmark reports show you metrics in the form of averages, medians, and standard deviations. Averages and medians will be skewed toward the numerous low-quality apps in the sample. The result is that the benchmark stat that ends up being presented is well below where you actually need to be in order to be a high-growth company.

Aggregate stats across a category can help you get a general understanding of what to expect in that category, but their utility stops there. If you are hitting "average" within the category, then you are probably not a venture-backable business.

If you are benchmarking, you naturally want to benchmark against best-in-class competitors, not an aggregate average of a category, but your benchmark report or tool may not show that spread.

For category-wide performance data to be useful, you would need a segmented average of apps, sites, or businesses that have a combination of similarities with one another that you also share. That level of granularity and accuracy typically doesn't exist in publicly available or purchasable form.
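A toy calculation (in Python, with invented numbers) shows how badly an aggregate average can mislead when most of the sample is low quality:

```python
# Toy illustration of skewed benchmark averages. The metric values
# are invented for demonstration only.
import statistics

# A hypothetical benchmark sample: 90 low-quality apps, 10 strong ones.
low_quality = [0.05] * 90   # e.g., 5% on some conversion metric
top_decile  = [0.40] * 10   # the outliers you actually compete with

sample = low_quality + top_decile
print(f"reported 'average' benchmark: {statistics.mean(sample):.1%}")      # 8.5%
print(f"top-decile benchmark:         {statistics.mean(top_decile):.1%}")  # 40.0%

# Hitting the 8.5% "average" would leave you far below the bar a
# high-growth outlier actually needs to clear.
```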

3. Same metric, different measurement

CAC is CAC, LTV is LTV, Churn is Churn, right? Nope.

Different businesses measure the same metric completely differently even if they are in the same industry category. I’ve never seen a benchmark report that takes this into account. They usually just ask, “What is your CAC?”  

Different products and business models require different ways to measure customer acquisition cost, and other key metrics that often show up on benchmark reports as uniform. 

Averaging or lumping together CAC can be extremely misleading because it doesn’t take into account your company or product’s specific business model. For example, if you have multiple tiers in your SaaS product, average CAC is a lot less actionable than CAC sliced by your different customer segments (with each segment paying different subscription fees). 

4. Incomplete picture

The fourth issue we face with industry benchmarks is that these reports and tools often aren’t able to provide enough context on the sample set, since they need to keep the data anonymous -- which apps or products were included, what categories were covered, or the reasons behind their performance. 

We end up with a deceptively incomplete picture that shows lots of data but delivers few answers. You might get retention numbers, for example, but have no idea what their acquisition looks like. One piece of the puzzle still leaves the picture incomplete.

What to do instead

Now that I’ve bashed on benchmark reports enough, I should say that they can be OK as a starting point, if you also do the following:

1. Take it with a grain of salt
2. Ignore non-segmented benchmarks
3. Only look at the top 10% or upper outliers, if you can identify them
4. Contextualize as much as possible

And of course, always prioritize your own numbers when you have enough data. 

Forum convos

The next most common benchmark source lives in the lower left hand corner. 

I call these “forum convos”: small group discussions with others whose businesses have low similarity to yours. 

Sadly, forum convos aren’t helpful for 3 reasons:

1. Low similarity

The other reference points have so many differences in relation to your business (product, audience, model) that it’s comparing apples to oranges.

2. The problem of averages again

They are typically giving you average numbers for their business which contains all the flaws of averages.  

Again, what you would want instead are segmented numbers, but you typically won’t get that in conversations like this. 

3. Lack of context

Forum convos tend to be casual exchanges with little to no context on performance history, additional factors such as other channels, brand, and key metrics, not to mention the basic who/what/why behind the performance data being presented.

What to do instead

Forum convos can be fun, but ultimately aren’t that useful. As practitioners we commonly get caught in the trap of the “grass is always greener:” 

“I wish I had the virality of x!  Or I wish I had the retention of Y!  Or the model of Z!”

Out-of-context forum threads and random data snippets only make it worse.   

In reality, our businesses are a unique set of variables that we need to figure out for ourselves. If those variables fit together mathematically to make a really good business, it matters a lot less how each variable benchmarks against standalone data from other businesses that may or may not be relevant. 

If you are going to have these forum convos, DIG to get more information.  

At minimum, ask these two follow-up questions:

  • Who is the target user?
  • What is the business model?

Competitive convos

In the upper left hand corner, we have 1:1 or small group data points with extremely high similarity (i.e., competitive) businesses.  

Unfortunately these tend to be competitor convos due to the high similarity score. The competitive dynamic means low trust, so the chance of getting reliable, accurate, and updated data is slim.

In some industries, tools will emerge to help you get this. For example, during the Facebook Platform days there were a number of data services where you could get metrics like DAU, MAU, DAU/MAU, etc. of almost any app on the platform.  

But there are two issues with this:

1. First, it’s rare. The existence and ongoing availability of data services depends on the platform making the data publicly available in some form. The platform can change how they report data at any time, have outages that they aren’t incentivized to fix, or even discontinue availability entirely. 

2. Second, you will only get the top level metrics. In the Facebook Platform example above, you could find out MAU, DAU, and their derivatives for a competitive app in your same category, but there’s lots that these top level metrics don’t tell you. Unsegmented data leads to unreliable answers at best. 

1 to 1 convos

Now we are getting into the really useful stuff. 

1 to 1 convos work best with businesses that have similarities on one or more of the axes (Target Audience, Business Model, Product), but not all three. The fact that the other business isn’t similar on all three axes means that it’s more likely to be non-competitive and offer mutual benefit to exchanging valuable information in confidence. 

Here are two examples.

  • If I were Ipsy, I might talk to the likes of Dollar Shave Club, Stitch Fix, or Nature Box.  (similar model, subscription, but different target audiences and product)
  • If I were Pinterest, I might talk with LinkedIn, Facebook or another social product about shared challenges like the logged-in/logged-out experiences or other common pieces of the product. (Similar product and model, social/ads, but different target audiences)

The biggest downside to the 1 to 1 convos is the sample size. As deep as the conversation may go, remember that it is only one data point. This is like trying to get a view on a 3D object but only looking at one dimension. 

What to do instead

1 to 1 convos can still be helpful and a valuable source of deeper benchmarks, without requiring you to open up to a direct competitor. 

To make sure it’s valuable, keep the following in mind:

1. Contextualize, contextualize, contextualize
Think hard about the product value prop, target audience, and other elements that might impact the numbers. For example, Dollar Shave Club and Ipsy are both subscription ecommerce products with similar cadences, but there are two big differences:

  1. Value prop - DSC is more of a utility, Ipsy is more discovery
  2. Target audience - male vs female 

Both those things will naturally influence all numbers. 

2. Get the complete picture

Try to get segmented numbers as well as a holistic view, not just one number in isolation.

3. Get the ‘Why’

It’s critical to ask “why” they think something worked or didn’t work. Asking “why” leads to learnings that you can apply back to your product, rather than numbers that are specific but not actionable.

The Sweet Spot

The sweet spot is if you can form a group of non-competitive businesses with similarities on those axes we talked about.

It has all the advantages of the 1:1 convos, but with a slightly larger sample size. That larger sample size allows you to look at the picture from multiple angles and to triangulate your data. This ultimately leads to more useful insights for everyone.  

Build relationships with the best

I used to do this in Boston with "mastermind" groups. In my experience, it is the only way to get reliable, non-noisy benchmarks for where you aspire to be.

It sounds simple (“Just get together a group!”) but getting it right requires the right setup and ground rules. Below are the seven takeaways that I learned from my Boston groups.

1. Start small

It starts with just 2 to 3 people you know in your network. Explain what you’re trying to do and organize a meetup with that initial group. If you don’t have anyone within your existing network for that initial group, you can cold email.  

2. Share your own knowledge first

You should do the initial work to seed conversation by creating a presentation that shares some insight, learning or experiment that you have been running lately. If you cold-email, the value-add of this presentation needs to be even bigger. 

3. Confidentiality + warmup

Set an expectation that everything is confidential. To warm up the conversation, you can start with an easy discussion prompt, like: 

  1. One thing you’ve tried recently that has worked
  2. One thing that hasn’t worked
  3. What was the learning
  4. One question or problem you are facing  

This is something simple that people can spend 20 minutes on and everyone can participate in without revealing too much. It has a low barrier and still kicks off a good conversation where people are both giving and receiving. 

4. Expand the group

Once you’ve had a couple successful meetings with that small group, ask each member if they know one person they’d like to invite in. This shifts the work from you to others. 

5. Rotating ownership

Once there is a rhythm, rotate “ownership” of the meeting between members of the group.  Ownership is figuring out a location, general theme topic, etc. 

6. Regularity / repeat

Rinse/repeat about every one to two months. Repeat exposure is key to building trust and deeper relationships. 

7. Expand the axes
Rinse and repeat steps 1 through 6 across different axes of similarity. For example, you might have one group where the model is the common theme (i.e. subscription) and another where the common theme is a channel (i.e. paid acquisition) or another that is target audience themed (i.e. SMB).  


With all their flaws, industry benchmarks can still be an OK starting point for gauging the health of your product or business. But building up a group of experienced practitioners who share both commonalities and helpful differences with one another is the most important pillar of continual mastery of growth, or any other discipline.


Solving Mobile Growth & Retention with Andy Carvell, ex Growth at SoundCloud

Andy Carvell joined SoundCloud in 2012, when the company was just over 80 employees and 10 million monthly active users.  The service now boasts over 150 million registered users, and monthly actives in the high tens of millions. I recently spoke with Andy as part of a 1 hour interview covering:

  • How he brought a web-first product to mobile
  • Activity notifications, rich push, and other techniques for driving mobile growth and retention
  • Andy’s “Mobile Growth Stack” for 2017

How to Embrace Constant Change in Growth

Quick note: We recently announced the next Growth Series, an 8-week program designed for experienced practitioners in growth (past participants came from Dropbox, Google, LinkedIn, Evernote, Airbnb, Soundcloud, Facebook, and many other companies). It's by application only, and if interested, you can learn more here.

In the early days of building the growth team at HubSpot, we spent a few months optimizing onboarding in our product and produced some meaningful improvements. As the team expanded, I wanted to dedicate a full-time team to onboarding, but I got a few versions of the following questions from other executives: 

“Why do you want to put a full-time team on that? I thought you guys were done optimizing onboarding?”

The mentality of “done” is the exact opposite of the mentality of high performance growth teams. Change is constant. Change is difficult. Not adapting to change is fatal.

The point is a critical one for any company and is the foundation for this growth principle. By nature, humans resist change because adapting can be hard and it’s usually a lot of work. We like to think of things as “done” so we can check them off our to-do list, and move on to the next thing.  

Any great growth team is ready for and responsive to change, nimble, and always, always adapting. They go beyond adapting and truly embrace change, building their team and process around it. Let's talk about why…

Systematic growth is iterative. 

There are four main reasons that this principle is important:

  1. Product Evolution
  2. Audience Shift
  3. Channel Change
  4. Tactic Fatigue

And constant change is the common thread that ties them all together. 

We must be aware of and accept the constancy of change, integrate it into our culture, and empower our teams to always be adapting. If we don’t, our product’s growth will inevitably stall at some point. If instead we evolve our growth efforts in sync with the changes happening across product, audience, channels, and tactics, we will increase our chances of driving sustainable growth and operate by this principle.

Reason One: Product Is Always Evolving

We should be learning constantly. For product, this means constantly getting better at delivering more of the core value by refining existing features and building new ones based on our learnings. If we don’t, we risk stagnation and being beaten by competitors who are constantly evolving to meet the needs of the market. 

As we refine and change our product, the way our users enter, engage, and exit the product will evolve too, which ultimately affects how we approach growth. 

Think about how interaction with Facebook’s core value has transformed over the years. In the beginning, users updated their profiles, scrolled through their list of friends, clicked through to profiles and left messages on friends’ walls. 

But then, when the company added the newsfeed, user behavior changed in a big way, evolving from comments to the Like Button to current day Reactions. What users shared also changed from text only, to photos, to links, to videos, to live videos today. 

All of this points to the fact that with every product change, how we onboard, retain, monetize, and generally mobilize users for growth may change. So we can never forget to evolve our growth strategies in lockstep with our product strategy.  

Reason Two:  Audience Is Always Shifting

As the market shifts and the product develops and grows, the composition of our audience, who they are, what they want, and what they respond to, shifts. Across the board, our strategy and tactics from acquisition to activation, retention, revenue and referral need to evolve in sync. There are many ways to segment an audience and so there are many ways an audience, and each of its segments, can evolve. 

The Innovation Adoption Lifecycle

First, let’s think about it in relation to the Innovation Adoption Lifecycle.

Typically as a company grows, its audience shifts from innovators and early adopters to late majority and laggards, all of which behave very differently and respond to different product experiences and messaging.

Twitter offers a useful example. In the early days, Twitter’s onboarding focused on guiding users to write their first tweet. This was the core action innovators and early adopters responded to. 

But as the company grew beyond innovators and early adopters, the onboarding changed to emphasize following and consuming content first. That was an action the early and late majority were more likely to take in the adoption phase.  

Niche to Niche

Another way to think about these changes is that audiences expand from niche to niche, growing in concentric circles. The growth can be based on different facets of the audience:

1. Demographics

Let’s take a look at Facebook’s growth for a moment. It expanded in concentric circles to increasingly large populations based on education:

Harvard → All Ivy → All College → College + High School → College + High School + Recent Grads → Everyone

How Facebook Grew From Harvard to the World

2. Geography

Uber expanded by location, starting in San Francisco, then launching in other U.S. cities like New York, Chicago, and DC, then expanding internationally from the U.S. to other English-speaking countries, and then to Europe and beyond.

3. Industry

Workday (and lots of other B2B companies) expanded by industry.  It started by serving other technology companies first, and then extended its reach from vertical to vertical - moving from technology companies to service-based companies to the financial, healthcare, and education sectors.

These and other niches live in different channels, respond to different messaging, and have different needs. So as our audience expands from niche to niche, we need to adapt our growth strategies and tactics with every shift in focus. 

Reason Three:  Channels Change Constantly

User acquisition channels are changing constantly, evolving at an accelerating rate on both a macro and a micro level. For this reason, companies that adapt quickly and experiment in the newest channels increase their odds of capturing the biggest growth opportunities. 

Macro Changes

On the macro level, more and more channels are emerging, while their lifecycles are simultaneously accelerating. The graph below shows these two dominant trends - you can see that there are a lot more channels than there were 10 years ago and that they’re rising into and falling out of favor much faster than older channels. 

Check out MySpace and Facebook: both grew at unprecedented rates, shooting up very quickly, only to die (in the case of MySpace) or become closed (in the case of Facebook) even more quickly. As channels come and go, we need to adapt with them.

Zynga experienced stratospheric growth riding on the back of the Facebook platform. But in 2012 Facebook changed the rules of the game and completely shut off Zynga’s only growth channel overnight. Zynga’s entire business model tanked as a result. This story demonstrates how quickly a channel can change, and how vulnerable companies are when they aren’t positioned to see these changes coming, or at least to respond instantly when they’re blindsided.

Micro Changes

When we zoom in to the micro level we can see that each channel is evolving at an ever increasing rate. What’s even more challenging is that the platforms upon which we grow - Facebook, Google, Gmail and many more - write the rules of the game. Not only do we not even get a say, but many change the rules constantly. 

To stay in the game we have to play by the rules of the platform. The only problem is that they don’t actually tell us what the rules are. Google doesn’t tell us exactly how a page ranks and Facebook doesn’t tell us exactly how a CPC is determined. So we run experiments to deconstruct them, and figure out how to adapt and keep growing. This means that once we find a channel that works for us we need to be adapting our approach constantly to keep up with how the rules are changing and the channel is evolving. 

Reason Four: Tactic Fatigue

Increasingly, we are bombarded by more information than we can process and our brains have developed ways to filter out what we don’t need to survive. This survival mechanism is important for us to understand because it forces all of our favorite acquisition tactics to move through the Growth Tactic Lifecycle depicted in the chart below. 

There are 4 Stages of the lifecycle of any growth tactic:

  1. Tactic Discovered - Someone discovers a new tactic, facilitated by the evolution of an existing platform or channel or the advent of new platforms and channels. Initially, this new tactic performs well because it captures the user’s attention in a novel way.
  2. Tactic Optimized - The innovators and early adopters of this tactic experiment like crazy to optimize it to peak effectiveness and maximize its impact on their growth.
  3. Tactic Adopted by the Masses - Other companies, the early and late majority, see that the tactic is working and they copy it.
  4. Tactic Goes into Fatigue - By this point, users have been exposed to this tactic over and over across multiple products.  The novelty of the tactic has worn off and users start to block it out. The effectiveness of the tactic decreases and it no longer drives the same kind of growth it once did.

For example, how we ask for email addresses has undergone rapid change over the past few years. As users, we’ve experienced this change first hand and as user acquisition professionals we’ve had to learn and adapt quickly to continue driving email capture conversion rates.

As an example of how quickly tactic fatigue can happen, let’s walk through the incarnations of this tactic over the past few years:

  1. Form within a webpage
  2. Slide ins
  3. Lightbox popovers
  4. Full page takeovers
  5. SumoMe Welcome Mat

Take this one little thing - how do I collect email addresses on my content - and we can name 5 different iterations of that ecosystem in just a few years.

When I first started capturing email on my own content, I used a lightbox popover, which few people were doing at the time. My conversion rate was initially around 10%. But as more sites implemented lightboxes to collect emails, consumers became desensitized, the tactic went into fatigue, and my conversion rate leveled off at 3-4%. I recently started using exit-intent full page takeovers and my conversion rate has increased to 6%, but I expect that it won’t last long.
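One way to model that trajectory is as an exponential decay toward a saturated floor. A small sketch, where the 10% starting rate and 3% floor echo the lightbox numbers above, and the decay constant is an invented assumption:

```python
# Tactic fatigue as exponential decay toward a floor. The starting rate
# and floor echo the lightbox example; the decay rate is invented.
import math

def fatigued_conversion(months: float, novel_rate: float = 0.10,
                        floor: float = 0.03, decay: float = 0.15) -> float:
    """Conversion rate after `months` of mass adoption of a tactic."""
    return floor + (novel_rate - floor) * math.exp(-decay * months)

print(f"{fatigued_conversion(0):.1%}")   # 10.0% - novel tactic
print(f"{fatigued_conversion(24):.1%}")  # ~3.2% - fatigued tactic
```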

Do These 5 Things To Implement This Principle

1. Build A Culture of Constant Experimentation

People often see constant change as a negative thing and resist the inevitable. Instead, we must welcome change by building a culture of constant experimentation. This allows us to incorporate the constant flow of change into our growth processes and survive and thrive in competitive environments.   

2. Constantly Validate Old Playbooks, Ditch The Ones That Don’t Work Anymore

Eventually, all playbooks will stop working. This is not the real problem though. The real problem is that over time we become habitualized and attached to these playbooks, and we resist throwing them away. Even when all evidence points to the fact that they are no longer relevant, the thought of ditching our old faithfuls and starting anew is intimidating. Do it anyway. 

3. Balance Your Portfolio of Bets

Here I challenge you to think of your set of growth tactics like a financial portfolio to be balanced in terms of risk and reward. When we test new and unproven channels we take higher risk, higher reward bets. When we optimize proven channels we take lower risk, lower reward bets. It’s like a game in which we have a pool of resources and we’re shifting them between different bets, depending on results of various experiments.

Since all channel effectiveness curves flatten eventually (think of my email capture conversion rate leveling off at 3-4%), in a perfect world we aim to hit our stride with a new channel as a proven channel starts to slow down. It may never happen perfectly like this in reality, but we should attempt to layer tactics on top of each other if we want to maintain growth.
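The layering idea can be pictured by modeling each channel's effectiveness as an S-curve and starting a second channel as the first saturates. All curve parameters below are invented for illustration:

```python
# Layering channels: each follows a logistic S-curve that flattens,
# but a second channel ramping as the first saturates keeps total
# growth climbing. All parameters are invented for illustration.
import math

def s_curve(t: float, start: float, ceiling: float) -> float:
    """Logistic effectiveness curve for a channel launched at `start` months."""
    return ceiling / (1 + math.exp(-(t - start - 6) / 2))

for month in range(0, 25, 6):
    ch1 = s_curve(month, start=0, ceiling=100)   # proven channel, flattening
    ch2 = s_curve(month, start=12, ceiling=150)  # new channel, layered on top
    print(f"month {month:2d}: ch1={ch1:5.1f}  ch2={ch2:5.1f}  total={ch1 + ch2:5.1f}")
```

By month 18 the first channel has nearly flattened, yet total output keeps climbing because the second curve is still in its steep phase.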

4. Accelerate Your Growth Team Investment As You Keep Growing

When your growth team first starts out almost all of its time will be spent experimenting.  


Over time you find more and more things that work and the team’s resources will be consumed by running and repeating these successful things.  


The danger is that 100% of your time and resources will get consumed by repeating the successful things and you won’t make time for experimentation.  

If you follow this trajectory, you will eventually get caught with your proverbial pants down.

At scale, it’s all too easy to let experimenting with new ideas fall by the wayside. Instead, we must set aside time and resources specifically dedicated to this effort. This requires us to continue increasing the total pie of resources we dedicate to our growth teams. 

As a rule of thumb, I focus at least 30% of my team’s time on experimenting with new ideas, and 70% of our time repeating and optimizing our old faithfuls.

5. Hire For Grit and Resilience

The thought that growth is never done can be exhausting. This is why I hire for grit and resilience. When an experiment fails multiple times in a row, I want people who get right back up, learn from their misfires, bring new ideas to the table, and attack the problem over and over until they solve it. 

When they do succeed, these folks don’t view the job as done, instead they focus on making it even better. And, they have no problem adapting when something they slaved over in the past is no longer working. They’re willing to throw it out and start all over again because they know that’s what it takes to stay on top.

If I Stepped Into Your Company Today As VP Of Growth...

I would establish a baseline to determine where your team and systems fall in relation to the principle - Embrace Constant Change. Walk through the checklist below, assess your team on the principle of Constant Change, and figure out where you’re killing it and where you’re being killed. 


Product Evolution

  • How has the product evolved over time?  
  • Did we adapt our growth strategies and tactics? If so, how?
  • What major product changes are upcoming?  How do we plan to adapt our growth strategies?


Audience Shift

  • Who is our audience?
  • How has it evolved over time? 
  • Did we adapt our growth strategies and tactics? If so, how?


Channel Change

  • What are our channels?
  • How have they evolved over time?
  • Did we adapt our channel strategies? If so, how?

Tactic Fatigue

  • What are the biggest changes coming down the pipeline that threaten the effectiveness of our channels?

Build A Culture of Constant Experimentation

  • Does our team view change as negative? 
  • Does it resist or welcome it?  

Validate Old Playbooks

  • What playbooks are we running?  
  • What has been their trend of effectiveness over time?
  • Do we ever throw out playbooks that no longer produce results? Or do we hold on?

Balance a Portfolio Of Bets

  • How is experimentation time being apportioned? 
  • What percent goes to low risk, low reward bets?
  • What percent goes to high risk, high reward bets?

Accelerate Growth Team Investment

  • What percentage of overall resources and time is spent on experimentation for new ideas?

Hire For Grit and Resilience

  • When an experiment fails does our team learn, bring new ideas to the table, and keep trying to solve the problem? 
  • When an experiment succeeds do we view it as done or focus on making it even better?