In-Depth Insights and Guides
About Customer Acquisition And Growth

If you are looking for "hacks" or "tricks" (that probably don't work), then
my content is not for you. Subscribe to my email list if you want detailed, in-depth guides on growth.

One email per week, unsubscribe anytime with one click

Why Most Companies Fail At Moving Up or Down Market


This is part six in a series about 4 Frameworks To Grow To $100M+. Subscribe to get the rest of the series.

In the introduction I explained there are two types of companies:

  • Tugboats, where growth feels like you have to put a ton of fuel in to get only a little speed out.
  • Smooth sailors, where growth feels like wind is at your back.

The difference between these two is not captured by the common mantras: build a great product, product market fit is the only thing that matters, or growth hacking.

In part two, I talked about why we should think about Product Market Fit as Market Product Fit, how to lay out your Market and Product hypotheses, and how understanding whether you have Market Product Fit comes down to Qualitative, Quantitative, and Intuitive indicators.

In part three, I covered Product Channel Fit - the concept that products are built to fit with channels, channels are not built to fit with products.  

In part four I covered Channel Model Fit - that channels are determined by your model.  I went through the ARPU ↔ CAC spectrum and how your product and product tiers need to align on this spectrum. 

In part five I covered Model Market Fit - your model influences the target market and vice versa.  

Throughout the series I've tried to highlight three key points:

1. You need all four Fits to grow to $100M+.
2. You can't think about the four Fits in isolation because together they form an ecosystem for growth.
3. You need to constantly revisit the fits because they are continuously changing, or breaking down.

Let's walk through each of them individually in more detail. 

You Need to Find All Four Fits to Grow to $100M+

Do you need all four Fits to build a profitable company? No. But, you do need to have all four to build a $100M+ product on a venture timeline. If you lack one of the fits, you either don't reach the scale of $100M+ or grow so slowly that venture investors aren't interested.

Evidence of the four Fits is most apparent in the SaaS space. We can look at almost any category in SaaS and find multiple $1B+ companies who all essentially have the same core product.  

I look at that and ask: how can that be? If they have essentially the same core product, why can't one company dominate the entire market?

The reason comes back to the four Fits. Let's look at some examples.

In the email marketing space, we have multiple companies valued at $1B+: Marketo, HubSpot, and Mailchimp.  

When you strip away all the outer layers, they all have essentially the same core product — a tool that lets you send and automate emails to your customers and audience.  

Marketo

Their market is the enterprise. As a result they've differentiated their product on the things that enterprise customers care about: customization, security, and scale (that's their Market Product Fit). 

Because of that, they use Outbound Sales to sell (Product Channel Fit). Because they use Outbound Sales, they need high ACVs to support the channel (Channel Model Fit). Thousands of enterprise customers multiplied by those high ACVs adds up to a $100M+ revenue business (Model Market Fit).

HubSpot

Their market is the mid-market. As a result they've differentiated their product on being “All In One,” since that's what mid-market customers care about (Market Product Fit).

Since the product still requires a fair amount of setup and education to work, they use Inbound Sales (Content) and Channel Partnerships (Product Channel Fit). Because those channels carry a medium CAC, they need medium ACVs to support them (Channel Model Fit). Hundreds of thousands of mid-market customers multiplied by those ACVs adds up to a $100M+ revenue business (Model Market Fit).

Mailchimp

Their market is small businesses. As a result they've differentiated their product on being simple and touchless (Market Product Fit).

Because the product is focused on simplicity and is touchless, they can use virality via the free tier as the primary acquisition channel (Product Channel Fit). Because their channel is virality, they use a freemium model with low price points to keep friction low enough (Channel Model Fit). There are millions of small businesses, so even at a low ACV they have a far greater than $100M business (Model Market Fit).
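The back-of-envelope Model Market Fit math behind all three examples can be sketched in a few lines. All figures below are hypothetical assumptions for illustration, not actual company numbers:

```python
# Illustrative Model Market Fit math: reachable customers x ACV.
# Every number here is a made-up assumption, not real company data.
tiers = {
    "enterprise (Marketo-style)":      {"customers": 5_000,     "acv": 40_000},
    "mid-market (HubSpot-style)":      {"customers": 50_000,    "acv": 8_000},
    "small business (Mailchimp-style)": {"customers": 2_000_000, "acv": 200},
}

for name, t in tiers.items():
    revenue = t["customers"] * t["acv"]
    # Does the model support a $100M+ business in this market?
    fits = revenue >= 100_000_000
    print(f"{name}: ${revenue:,} potential -> Model Market Fit: {fits}")
```

The point of the sketch is that the same $100M+ threshold can be cleared at wildly different ACVs, as long as the market size compensates.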

We can play this scenario out in almost any SaaS Category. Let's take a look at a few more examples.

As you can see, no company (that I know of) has ever dominated all three tiers of the market (however, LinkedIn and Slack might be the closest).  

The reason for that, again, is the four Fits. All four Fits are like the pieces of a puzzle. Plenty of enterprise companies have tried to move down market by making small changes to their pricing or product. But, to properly attack any new tier of the market requires locking in all four Fits, which means potentially changing all four elements and building new expertise. 

The reverse also happens. Plenty of startups try to attack all three tiers of the market with the same product/channel/model. But it doesn't work. 

Attacking all three tiers will pull your product in different directions, require you to build expertise in multiple channels at once, communicate to three different types of customers at once, etc. It's better to focus on one tier of the market. 

If that goes well, then over the long term it's possible you'll get the opportunity to attack another. Mailchimp is only now moving up to the mid-market with its automation features, and it has taken 10+ years.

You Can't Think About the Fits in Isolation

Startups are commonly instructed to just focus on product market fit in the early days, which leads teams to ignore the other fits. While I agree that in the earliest phases you focus on proving Market Product Fit, that does not mean you can ignore the others.

Why can't you just focus on Market Product Fit? Because as we know from Product Channel Fit, products are built for channels. That means that as you lay out your Product hypotheses, you need to include your channel hypotheses as well.  

Why can't you just focus on Market Product Fit + Product Channel Fit? Because as we know from Channel Model Fit your model hypotheses influence your channel hypotheses.  

Get the point? All of the fits influence each other. Your hypotheses for each work together to build an ecosystem. You need to have hypotheses for all components at once. Don't think about them in silos. 

This doesn't mean that you try to prove all of the hypotheses at once. There is a difference between formulating a hypothesis, and proving a hypothesis.  

In the world of startups with constrained resources, proving them all at once is impossible. You spend the majority of the earliest phases proving Market Product Fit, but you need to do that in context of having hypotheses for the other fits at the same time. 

You Have to Revisit All of the Fits on an Ongoing Basis

In my post about Product Channel Fit I told a story about Pinterest:

In late 2011 Pinterest hit an inflection point and their growth started to take off. One of the reasons was that they hit Product Channel Fit. The channel was viral sharing to Facebook's feed through their API. But around the end of 2012, Facebook started killing off the APIs that enabled this channel. Many reported on Pinterest's slowing growth.

This is an example of having Product Channel Fit and then watching it break because the channel gets killed off. Many companies were also affected by Facebook killing off this channel. Pinterest was one of the few that transitioned successfully. They ended up moving to a UGC SEO channel, which has driven their growth ever since.

Part of that transition over the long term was that Pinterest also changed their product focus from a social product to more of a personal utility. Once again, you have to mold the product to the channel. You can't think about them in silos.

Pinterest is a perfect example of why you have to constantly revisit the fits on an ongoing basis. The fits are always doing one of two things:

1. Evolving - Your market, product, channel and model are always evolving and changing.

Sometimes this evolution happens because of your own changes — moving into new markets — and sometimes it's because of the changes of others (i.e. channels).

2. Breaking - An element gets killed off. Typically this happens with the channel, as was the case with Pinterest. 

When one evolves, changes, or breaks, you once again need to revisit all of them rather than just trying to fix one. Facebook faced a similar situation to Pinterest when the channel and market started to shift from web to mobile. Zuckerberg recalls this time in the Masters of Scale podcast episode 5:   

“[26:58] We made one really important strategy decision, which was, often when companies need to take two years or so to rewrite their whole app or software for a new platform, they believe that they can't slow down feature development. So they do two things at the same time: they try to design a new product, while rewriting the existing product. I think that that ends up dragging everything out for longer, and increases the chance that you fail and die. So we made what was a pretty hard decision at the time, which was basically, no new features for two years, which is kind of a crazy thing.”

In addition to making a radical product shift to mobile, Facebook also spent billions on acquiring Instagram and WhatsApp. Once again, when one of the fits changes, you need to revisit all of the fits to ensure growth. 

In earlier stage companies you need to constantly revisit for a different reason. You are constantly proving or disproving your hypotheses as you learn. When you disprove a hypothesis, you make a change. But when you make that change you need to revisit the four fits to make sure they all still fit together.  

Get Out of the ARPU-CAC Danger Zone with Channel Model Fit


Channel Model Fit is simple - channels are determined by your model.  

First, what do I mean by “Model?”  The two most important elements of your model are:

  1. How You Charge - For example, free (monetized with ads), freemium, transactional, free trial, one year up front, etc.
  2. Average Annual Revenue Per User - The average amount you make from a customer/user per year.
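As a quick illustration of the second element, annual ARPU is simply revenue over a period divided by the users who generated it. This is a simplified sketch with invented figures; real models usually segment this by plan or cohort:

```python
# Simplified annual ARPU: total revenue for the year / average user count.
# The example figures are made up for illustration.
def annual_arpu(total_annual_revenue: float, avg_users: float) -> float:
    """Average annual revenue per user over a one-year period."""
    return total_annual_revenue / avg_users

# e.g. $1.2M in annual revenue across an average of 4,000 paying users
print(annual_arpu(1_200_000, 4_000))  # -> 300.0
```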

Product Channel Fit Will Make or Break Your Growth Strategy


Earlier, I discussed our common obsession with Product Market Fit that has led to false beliefs such as “Product Market Fit is the only thing that matters.” A byproduct of that false belief is statements such as:

“We are focused on product-market fit right now. Once we have that we’ll test a bunch of different channels.”

There are two major issues with this statement. I’ll break them down separately.

The Road to a $100M Company Doesn’t Start with Product


While Product Market Fit isn't the only thing that matters, it is important, so it makes sense that there are no shortage of blog posts explaining Product Market Fit, and how to get it. 

Instead of echoing the many great Product Market Fit explainer posts out there, I'm going to focus on the 5 elements of Product Market Fit that I believe are most misunderstood and overlooked:

Why Product Market Fit Isn't Enough


I’ve been lucky to have been part of building, advising, or investing in 40+ tech companies in the past 10 years. Some $100M+ wins. Some, complete losses. Most end up in the middle. 

One of my main observations is that there are certain companies where growth seems to come easily, like guiding a boulder downhill. These companies grow despite having organizational chaos, not executing the “best” growth practices, and missing low-hanging fruit. I refer to these companies as Smooth Sailers - a little effort for lots of speed.

In other companies, growth feels much harder. It feels like pushing a boulder uphill. Despite executing the best growth practices, picking the low-hanging fruit, and having a great team, they struggle to grow. I refer to these companies as Tugboats - a lot of effort for little speed.

Inside the 6 Hypotheses that Doubled Patreon’s Activation Success


I rarely accept guest posts on this blog, but this opportunity was too good to pass up. A couple of months ago, Tal Raviv (Growth PM @ Patreon) sent me a short email: "Brian, the results are in. We doubled (yeah, doubled) new creators in our onboarding."

My immediate response: holy f*ing YES!!!! My second response: let's write about this. I've spoken and written about the importance of onboarding a lot before, but solid cases and examples out there are few and far between. So Susan Su, Head of Marketing at Reforge, dug in with the Patreon team to produce this amazing piece. It covers:

  • How Patreon decided and defined a leading indicator activation metric.
  • The challenges they faced with that metric.
  • The six hypotheses they tested, where the hypotheses came from, what worked and what didn't. 

Building a Growth Team from Zero to Fifty


Growth is still an emerging discipline, and not everyone has a structured growth team within their org. But, let’s say you get to start from scratch and build the ideal growth team. What people and roles would you start with?

Andrew Chen and I recently sat down to look at a few configurations to consider as you're scaling up a team around growth. We've broken up the conversation into three videos, with notes below each video. 

Videos

1. The Minimum Viable Growth Team -- 1 to 3 people
2. The 10-20 Person Growth Team -- The 1-5-10 Ratio and Growth Specialization
3. 50-Person Growth Team and Beyond -- Where Growth Drives Product

How You Battle the "Data Wheel of Death" in Growth


Data Isn’t Constantly Maintained -> Data Becomes Irrelevant / Flawed -> People Lose Trust -> They Use Data Less

If the above looks familiar, you’re not alone. I estimate that greater than ⅔ of data efforts at companies fail.

This is trouble because data plays a key horizontal role in the growth process and mindset. Without good data, it’s not possible to run a legitimate experimentation cycle.

Today, I’ll take a look at 4 reasons why well-meaning data efforts fail so often, and what you can do about it.

Growth Benchmarks Are (Mostly) Useless


Quick note: I wanted to let you know that we are starting to build out the team at Reforge. We are hiring a Sr Product Marketing Manager, Sr Content Developer, and a Content Marketer. If you, or someone you know, is passionate about professional development/education then please get in touch.


Most growth communities, forums, and email lists will inevitably have that thread that goes: 

“Hey, what are the benchmarks everyone’s seeing for X?” 

I constantly find people seeking out benchmarks or pointing to benchmarks, and we’ve all been there -- who doesn’t want some normalizing data to understand whether we’re on track or not?

The desire is further fueled by many companies releasing “benchmark reports.”  I understand why. It makes for sexy content marketing.

But there’s one small issue:

Growth benchmarks are mostly useless. There is a better way, which I'll outline below.

Now, back to benchmarks. We can plot most benchmark information on the 2x2 matrix below:

On the X-Axis, we have sample size, representing the number of data points included in the benchmark.

On the Y-Axis we have similarity, representing how similar the data-producing sample group is to your own product, business, and channel. High similarity would mean that the sample group has the same target audience, a similar product, and a similar business model. 

There are tons of benchmark reports and intelligence tools out there, and I used to recommend a handful whenever I'd get asked the inevitable question about benchmarks.

But the more I thought about the question, the more I realized that these posts and tools are not that helpful. To assess when and how benchmark data can aid growth, let's go through some common sources of benchmark data and where they fall on the matrix.

Benchmark reports

Most benchmark reports live in the lower right hand quadrant of the matrix. 

They usually draw on a larger sample size, but that sample size scores low on similarity to your business. In the best case scenario, the sample might represent an industry category, but will still include lots of data points for audiences or business models that are completely different from yours.  

There are a bunch of problems with your typical benchmark report, so let’s go through them one by one:

1. Aggregated noise

Most benchmark reports present aggregate numbers for their entire sample. The problem is that 80%+ of the sample is typically low-quality applications or companies, which generates an incredible amount of noise in the data. Most people reading this article are probably trying to build venture-backed businesses, and by definition a venture-backed business needs to be in the top 10%, or a complete outlier.

2. Averages are useless

Most benchmark reports show you metrics in the form of averages, medians, and standard deviations. Averages and medians will be skewed toward the low-quality apps, since they are far more numerous in the sample. The result is that the benchmark stat presented ends up well below where you actually need to be in order to be a high-growth company.

Aggregate stats across a category can help you get a general understanding of what to expect in that category, but their utility stops there. If you are hitting "average" within the category, you are probably not a venture-backable business.

If you are benchmarking, you naturally want to benchmark against best-in-class competitors, not an aggregate average of a category, but your benchmark report or tool may not show that spread.

For category-wide performance data to be useful, you would need a segmented average of apps, sites, or businesses that share a combination of similarities that you also share. That level of granularity and accuracy typically doesn't exist in publicly available or purchasable form.
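A tiny simulation makes the skew concrete. The numbers are synthetic, assuming a sample dominated by low performers:

```python
# Synthetic demo: why an aggregate average understates where top companies sit.
# Assume 90 low-quality apps retain ~10% of users and 10 outliers retain ~60%.
retention = [0.10] * 90 + [0.60] * 10

mean = sum(retention) / len(retention)
# Value at the 90th percentile boundary of the sorted sample
top_decile = sorted(retention)[int(0.9 * len(retention))]

print(f"benchmark 'average': {mean:.0%}")       # 15%, far below outlier territory
print(f"top-decile value:    {top_decile:.0%}")  # 60%
```

The "benchmark" average of 15% tells you almost nothing about the 60% bar the outliers actually clear.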

3. Same metric, different measurement

CAC is CAC, LTV is LTV, Churn is Churn, right? Nope.

Different businesses measure the same metric completely differently even if they are in the same industry category. I’ve never seen a benchmark report that takes this into account. They usually just ask, “What is your CAC?”  

Different products and business models require different ways to measure customer acquisition cost, and other key metrics that often show up on benchmark reports as uniform. 

Averaging or lumping together CAC can be extremely misleading because it doesn’t take into account your company or product’s specific business model. For example, if you have multiple tiers in your SaaS product, average CAC is a lot less actionable than CAC sliced by your different customer segments (with each segment paying different subscription fees). 
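To see how a blended average hides the action, here's a sketch with hypothetical tiers (all spend and customer counts invented for illustration):

```python
# Hypothetical SaaS tiers: blended CAC vs. CAC by customer segment.
segments = {
    "starter":    {"spend": 50_000,  "customers": 1_000},  # $50 CAC
    "pro":        {"spend": 120_000, "customers": 400},    # $300 CAC
    "enterprise": {"spend": 200_000, "customers": 50},     # $4,000 CAC
}

total_spend = sum(s["spend"] for s in segments.values())
total_customers = sum(s["customers"] for s in segments.values())
blended_cac = total_spend / total_customers
# One blended number, dominated by the cheap, numerous tier
print(f"blended CAC: ${blended_cac:,.0f}")

for name, s in segments.items():
    print(f"{name} CAC: ${s['spend'] / s['customers']:,.0f}")
```

The blended figure lands near the cheap tier, while the enterprise segment's true acquisition cost is an order of magnitude higher; only the per-segment view is actionable.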

4. Incomplete picture

The fourth issue we face with industry benchmarks is that these reports and tools often can't provide enough context on the sample set, since they need to keep the data anonymous: which apps or products were included, what categories were covered, or the reasons behind their performance.

We end up with a deceptively incomplete picture that shows lots of data but delivers few answers. You might get retention numbers, for example, but have no idea what acquisition looks like. One piece of the puzzle still leaves the puzzle incomplete.

What to do instead

Now that I’ve bashed on benchmark reports enough, I should say that they can be OK as a starting point, if you also do the following:

1. Take it with a grain of salt
2. Ignore non-segmented benchmarks
3. Only look at the top 10% or upper outliers, if you can identify them
4. Contextualize as much as possible

And of course, always prioritize your own numbers when you have enough data. 




Forum convos

The next most common benchmark source lives in the lower left hand corner. 

I call these “forum convos.” These are small group discussions with others with low similarities to your business. 

Sadly, forum convos aren't helpful, for three reasons:

1. Low similarity

The other reference points have so many differences in relation to your business (product, audience, model) that it’s comparing apples to oranges.

2. The problem of averages again

They typically give you average numbers for their business, which carry all the flaws of averages.

Again, what you would want instead are segmented numbers, but you typically won’t get that in conversations like this. 

3. Lack of context

Forum convos tend to be casual exchanges with little to no context on performance history, additional factors such as other channels, brand, and key metrics, not to mention the basic who/what/why behind the performance data being presented.

What to do instead

Forum convos can be fun, but ultimately aren’t that useful. As practitioners we commonly get caught in the trap of the “grass is always greener:” 

“I wish I had the virality of x!  Or I wish I had the retention of Y!  Or the model of Z!”

Out-of-context forum threads and random data snippets only make it worse.   

In reality, our businesses are a unique set of variables that we need to figure out for ourselves. If those variables fit together mathematically to make a really good business, it matters a lot less how each variable benchmarks against standalone data from other businesses that may or may not be relevant.

If you are going to have these forum convos, DIG to get more information.  

At minimum, ask these two follow-up questions:

  • Who is the target user?
  • What is the business model?

Competitive convos

In the upper left hand corner, we have 1:1 or small group data points from businesses with extremely high similarity (i.e., competitive businesses).

Unfortunately these tend to be competitor convos due to the high similarity score. The competitive dynamic means low trust, so the chance of getting reliable, accurate, and updated data is slim.
 
In some industries, tools will emerge to help you get this data. For example, during the Facebook Platform days there were a number of data services where you could get metrics like DAU, MAU, and DAU/MAU for almost any app on the platform.

But there are two issues with this:

1. First, it’s rare. The existence and ongoing availability of data services depends on the platform making the data publicly available in some form. The platform can change how they report data at any time, have outages that they aren’t incentivized to fix, or even discontinue availability entirely. 

2. Second, you will only get the top level metrics. In the Facebook Platform example above, you could find out MAU, DAU, and their derivatives for a competitive app in your same category, but there’s lots that these top level metrics don’t tell you. Unsegmented data leads to unreliable answers at best. 

1 to 1 convos

Now we are getting into the really useful stuff. 

1 to 1 convos work best with businesses that have similarities on one or more of the axes (Target Audience, Business Model, Product), but not all three. The fact that the other business isn’t similar on all three axes means that it’s more likely to be non-competitive and offer mutual benefit to exchanging valuable information in confidence. 
 
Here are two examples.

  • If I were Ipsy, I might talk to the likes of Dollar Shave Club, Stitch Fix, or Nature Box.  (similar model, subscription, but different target audiences and product)
  • If I were Pinterest, I might talk with LinkedIn, Facebook or another social product about shared challenges like the logged-in/logged-out experiences or other common pieces of the product. (Similar product and model, social/ads, but different target audiences)

The biggest downside to the 1 to 1 convos is the sample size. As deep as the conversation may go, remember that it is only one data point. This is like trying to get a view on a 3D object but only looking at one dimension. 

What to do instead

1 to 1 convos can still be a helpful and valuable source of a deeper benchmark, and one that doesn't require getting close with a direct competitor.

To make sure it’s valuable, keep the following in mind:

1. Contextualize, contextualize, contextualize
 
Think hard about the product value prop, target audience, and other elements that might impact the numbers. For example, Dollar Shave Club and Ipsy are both subscription ecommerce products with similar cadences, but there are two big differences:

  1. Value prop - DSC is more of a utility, Ipsy is more discovery
  2. Target audience - male vs female 

Both those things will naturally influence all numbers. 

2. Get the complete picture

Try to get segmented numbers as well as a holistic view, not just one number in isolation.

3. Get the ‘Why’

It’s critical to ask “why” they think something worked or didn’t work. Asking “why” leads to learnings that you can apply back to your product, rather than numbers that are specific but not actionable.

The Sweet Spot

The sweet spot is if you can form a group of non-competitive businesses with similarities on those axes we talked about.

It has all the advantages of the 1:1 convos, but with a slightly larger sample size. That larger sample size allows you to look at the picture from multiple angles and triangulate your data. This ultimately leads to more useful insights for everyone.

Build relationships with the best

I used to do this in Boston with "mastermind" groups. In my experience, it is the only way to get reliable, non-noisy, benchmarks for where you aspire to be.

It sounds simple (“Just get together a group!”), but getting it right requires the right setup and ground rules. Below are the seven takeaways I learned from my Boston groups.

1. Start small

It starts with just 2 to 3 people you know in your network. Explain what you’re trying to do and organize a meetup with that initial group. If you don’t have anyone within your existing network for that initial group, you can cold email.  

2. Share your own knowledge first

You should do the initial work to seed conversation by creating a presentation that shares some insight, learning or experiment that you have been running lately. If you cold-email, the value-add of this presentation needs to be even bigger. 

3. Confidentiality + warmup

Set an expectation that everything is confidential. To warm up the conversation, you can start with an easy discussion prompt, like: 

  1. One thing you’ve tried recently that has worked
  2. One thing that hasn’t worked
  3. What was the learning
  4. One question or problem you are facing  

This is something simple that people can spend 20 minutes on and everyone can participate in without revealing too much. It has a low barrier and still kicks off a good conversation where people are both giving and receiving. 

4. Expand the group

Once you’ve had a couple of successful meetings with that small group, ask each member if they know one person they’d like to invite in. This shifts the work from you to the others.

5. Rotating ownership

Once there is a rhythm, rotate “ownership” of the meeting between members of the group.  Ownership is figuring out a location, general theme topic, etc. 

6. Regularity / repeat

Rinse/repeat about every one to two months. Repeat exposure is key to building trust and deeper relationships. 

7. Expand the axes
 
Rinse and repeat steps 1 through 6 across different axes of similarity. For example, you might have one group where the model is the common theme (i.e. subscription) and another where the common theme is a channel (i.e. paid acquisition) or another that is target audience themed (i.e. SMB).  

Conclusion

For all their flaws, industry benchmarks can still be an OK starting point for gauging the health of your product or business. But building a group of experienced practitioners who share both commonalities and helpful differences with one another is the most important pillar of continual mastery of growth, or any other discipline.