In-Depth Insights and Guides
About Customer Acquisition And Growth

If you are looking for "hacks" or "tricks" (that probably don't work), then my content is not for you. Subscribe to my email list if you want detailed and in-depth guides on growth.

One email per week, unsubscribe anytime with one click

Building a Growth Team from Zero to Fifty

Written By

Growth is still an emerging discipline, and not everyone has a structured growth team within their org. But, let’s say you get to start from scratch and build the ideal growth team. What people and roles would you start with?

Andrew Chen and I recently sat down to look at a few configurations to consider as you're scaling up a team around growth. We've broken up the conversation into three videos, with notes below each video. 

Videos

1. The Minimum Viable Growth Team -- 1 to 3 people
2. The 10-20 Person Growth Team -- The 1-5-10 Ratio and Growth Specialization
3. 50-Person Growth Team and Beyond -- Where Growth Drives Product

Each is below, with summary notes.

1. The Minimum Viable Growth Team

Key takeaways for this video

  • Founder as the "growth team," and what skills that requires
  • If not the founder, marketer as the "growth team" armed with tools (Optimizely, Leanplum, Kahuna, etc) or an engineer who can build but is also business-oriented enough to build for growth
  • Even if it's just one person, it's crucial to cover both business thinking and implementation ability
  • 3 people allows your growth team to be independent: a PM with a marketing bent, plus a full-stack engineer and a frontend designer for implementation
  • Because both iOS and Android are important, it's now (more) necessary to cover those bases and either increase engineering resources, or strategically limit scope and work in phases
  • Early growth teams need to prove themselves, so think hard about project impact and likelihood of success.  
  • Quick rule of thumb: Growth = Total User Reach x Expected Impact.
  • Take a set of actions and think about it as the number of those actions divided by monthly active users, or a similar core metric.
  • Then ask: does your impact area represent a large enough percentage of MAU that, if you grew it, the change would be a big, company-wide win?
  • Work on areas that don’t feel like a niche part of the product; instead, focus on the broadest base. 
  • Early teams need to regularly and publicly celebrate victories to increase buy-in and continue to get resources to expand
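The reach-times-impact rule of thumb above can be sketched as a quick scoring function. This is a minimal illustration, assuming a simple multiplicative score; the project names and all numbers are hypothetical:

```python
# Quick rule of thumb from the notes above:
#   project score = (reach / MAU) * expected impact
# All project names and figures are hypothetical, purely for illustration.
MAU = 1_000_000  # assumed monthly active users

projects = [
    # (project, users the change would touch, expected lift for those users)
    ("signup flow rework", 800_000, 0.05),       # broad reach, modest lift
    ("power-user settings page", 30_000, 0.30),  # niche reach, big lift
]

scores = {name: (reach / MAU) * impact for name, reach, impact in projects}
best = max(scores, key=scores.get)
# The broad-reach project wins despite the smaller per-user lift, which is
# the "focus on the broadest base, not a niche" point above.
```

Under this scoring, the niche project loses even though its per-user lift is six times larger, because it touches only 3% of MAU.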

2. The 10-20 Person Growth Team -- The 1-5-10 Ratio and Growth Specialization

When you're ready to scale up to the next stage, you're looking at a 10 to 20 person growth team. How do those numbers relate to the rest of the organization? What does the split look like within that team?

Key takeaways for this video

  • As the company is scaling, use the 1:5:10 Ratio to understand how the growth team maps to the overall org
  • As the overall org grows, analytics and experiment infrastructure becomes a more important investment
  • Growth will be the power users of any analytics / experiment infrastructure output, so that function could live within growth
  • A more mature / resourced growth team should map to the user lifecycle
  • Ask: what are the functions, team members and structure required to take someone from non-user to user, to repeat power user?
  • How should sub-teams on a larger growth team be organized? What areas should they tackle? 
  • The right time for the growth team to take on bigger risks 
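The notes above don't define the 1:5:10 ratio precisely, so the sketch below is one plausible reading (an assumption on my part): roughly 1 growth person for every 5 in product/engineering and every 10 in the overall org.

```python
# One possible reading of the 1:5:10 ratio (an assumption; the notes
# above don't spell it out): for every 1 person on growth there are
# roughly 5 in product/engineering and 10 in the overall org.
def growth_headcount(org_size, org_ratio=10):
    """Rough growth-team size implied by total org headcount."""
    return org_size // org_ratio

# Under this reading, a 10-20 person growth team maps to a 100-200
# person org, which matches the stage described in this section.
sizes = {org: growth_headcount(org) for org in (100, 150, 200)}
```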

3. The 50-Person Growth Team and Beyond -- Where Growth Drives Product

Few companies are at a stage where a 50-person growth team makes sense, but more are getting there. Here's what that looks like, and what projects that growth team would take on.

Key takeaways for this video

  • Sub teams in Growth will map to the user lifecycle and manage that on an ongoing basis
  • Another part of the growth team will take on big bets -- rather than optimizations 
  • Bigger bets can be divided into short-term bets (i.e., a new design for a "People You May Know" feature) and long-term bets (i.e., Facebook's Free Basics / Internet.org)
  • When does the growth team own these projects versus the product team?
  • Growth vs. Product depends on what purpose it serves.
  • Ask: is the project ultimately about driving growth?
  • Ask: is the growth team entrepreneurial enough to tackle a zero-to-one problem?
  • The right time to take on international expansion as part of Growth

How You Battle the "Data Wheel of Death" in Growth

Written By

This is the Data Wheel of Death:

Data Isn’t Constantly Maintained -> Data Becomes Irrelevant / Flawed -> People Lose Trust -> They Use Data Less

If the above looks familiar, you’re not alone. I estimate that greater than ⅔ of data efforts at companies fail.

This is trouble because data plays a key horizontal role in the growth process and mindset. Without good data, it’s not possible to run a legitimate experimentation cycle.

Today, I’ll take a look at 4 reasons why well-meaning data efforts fail so often, and what you can do about it.

Issue #1: Project Mindset vs Process Mindset

Most companies that want to get more serious about data approach it as a project. Something with a definitive start and a definitive end. 
 
When you think about data as a project, you get the Data Wheel of Death.

The project wraps up, but at some point someone finds a piece of data that doesn’t look quite right. As a result, they don’t trust it and then they stop using the data. Because no one is using the data, the data isn’t maintained -- which leads to more distrust.  

In the above scenario, treating data like a project is at fault. In reality, data is an ongoing, never-ending process, similar to building a product.

Build data collection -> Measure the impact of data outputs -> Learn from how data is or isn’t used

Your data needs to be refined and updated via an ongoing process for a few reasons:

1. Your product will change

Features will change on an ongoing basis. As your product evolves feature by feature, your data also needs to keep pace or it will become irrelevant / flawed, people will lose trust, and the Data Wheel of Death will continue.

2. Your understanding of the business will change.  

Data should lead you to insight that might lead you to prioritize some things and de-prioritize others.

Andrew Chen likes to say, “Your data and KPIs should be a reflection of your strategy.” Your strategy will change over time, and what you track and analyze needs to evolve along with it.

3. New answers expose new questions.  

As you gain fresh insight from your data, it opens the door to new questions. As you have new questions, you need to update your instrumentation and analysis. Saying the process is “done” is saying you understand everything there is to know about your users, product, and channels.

4. Shit breaks. 

It’s that simple. 

Data is never done

People spend more time analyzing which tool to use than they do instrumenting and updating their data.

The project mindset is what’s behind this, and is tied to a mindset of “I have to get this right the first time.” The problem: there is no perfect tool, and you end up in analysis paralysis.

If you view data as an ongoing process -- there’s no defined finish -- it’s easier to jump in knowing that you’re bound to iterate as new needs emerge.

What to do instead

Instead of a one-and-done, project-based approach to data, you should assign dedicated resources for both data collection and analysis. 

In the earliest stages, this might be part of a couple of engineers’ or PMs’ time, but no matter how few hours it is, it needs to be seen as a critical part of their role.

In later stages as the company grows, you will most likely need a dedicated team to maintain the data process -- both building and maintaining the data infrastructure, and promoting data usage.

It’s important to keep in mind that getting a data process off the ground isn’t just about instrumentation. Your other job is to build trust with the consumers of your data within the organization. You must spend time verifying data in at least some of the reports and working with the greater team around you to make sure they both understand and trust what they’re seeing. If they don’t trust it, they won’t use it. 




Issue #2: Misalignment of Incentives

Even if you treat data as a process rather than a project, that doesn’t necessarily translate into success. There are companies that spend endless time on infrastructure, tooling, and instrumentation but then miss something extremely important.

Getting an organization to use data requires behavior change among the individuals.

Changing behavior is extremely difficult. There is typically one culprit behind the lack of change: a misalignment of incentives and rewards.

Teams and individuals will ultimately do what they are rewarded for. So if you want to change behavior, you have to make sure that the behavior is part of what they are rewarded for.  There are multiple types of rewards:

  • Financial Rewards (bonuses/raises/equity)
  • Progression Rewards (promotions)
  • Authoritative Recognition (good job from boss/superior, etc)
  • Peer Recognition (“good job” from peers, etc)

When you look around your team, how often are you delivering one of those rewards related to the use of data? 

Here’s the problem:

If the use of data is not rewarded, then the work that it takes to instrument and analyze data will be seen as friction to doing what is rewarded, or what’s perceived as a “good job” within the company.  

If the use of data is rewarded, then the work it takes to incorporate data will be seen as an ally to doing a good job. 

Then ask yourself a second question: How big / often is that reward in relation to other things you reward for?

How often and how much the use of data is rewarded relative to other rewards signals its importance and priority. Managers need to be careful how they weight different factors as part of rewards.

What to do instead

1. Every team needs a KPI

Every team needs a KPI as part of its measure of success. Having a KPI for each team aligns the use of data with their work rather than positioning data as a friction point.

Just setting the KPI isn’t enough, though. These 3 things need to happen:

  • The team needs to have a sense of ownership over the KPI. If they don’t feel like they own it, they will view it as someone else's problem.
  • Every single person on the team needs to understand the KPI and have easy access to viewing it. Oftentimes only the PM or a couple of people on the team truly understand it.
  • The KPI needs to be part of the four types of rewards, but not the only thing the team is rewarded for.

There are other important factors for product teams like shipping velocity, product quality, etc. Like most things, it is a balance.  

2.  Design systems for each type of reward

Go through each of the four reward types and design systems to reward for the use of data. What do I mean by systems?  

Here’s a quick example around a system for Authoritative Recognition Reward: the best managers I know have checklists they use to prep for every 1:1. An item on the checklist might be “Did this person display the use of data in their work?” 

If yes, make sure to give recognition for it. It is too easy to forget these types of things unless you have specific prompts. 

3.  Communicate the rewards clearly

Don’t assume that team members know how they get promotions, bonuses, or praise. You need to put it in writing, make it explicit, and reinforce it. You have to over-communicate in order for it to get through.

When you promote someone, don’t just announce “Congrats to Jane Smith on her promotion to Sr. Product Manager.” Back the announcement up with examples of the type of work and behavior that the person has displayed which led to the promotion.

Issue #3: Data team becomes the bottleneck

If you solve process vs. project and align incentives, and data starts to become valuable in the company, that creates a couple of new problems.

The first is that the data team can end up becoming a bottleneck. This stems from the data team taking an “ownership” mindset to the data, i.e. “We own the data.”

But ownership thinking leaves out one important point:

Every team has a “customer.”

The data team’s customers are the other people using the data internally: data analysts, product managers, engineers, marketers, etc.

In service to these internal customers, the data team needs to act like any other product team:

  • They need to define their customer segments
  • They need to understand customer needs
  • They need to deliver the most compelling solution
  • And they have to iterate!

In other words, their output must enable other teams’ output, rather than the data team being the exclusive owner.

Issue #4: Brilliant Answers, Useless Questions

There’s a second issue that arises once data has become valuable and recognized, and that’s when people start noodling on data for data’s sake. Whether it’s because they find the data process intellectually fascinating or they just want to seem smart and productive, in the end, it’s data masturbation :)

Data for data’s sake is an alluring trap, but only creates “brilliant answers to useless questions,” as Ken Rudin (Head of User Growth & Analytics at Google, former Head of Analytics at Facebook) says.

Rudin reminds us that even though it’s alluring to increase knowledge, insight alone isn’t enough -- results and impact are the true goal of analytics. To that end, data teams should make sure they’re asking the right questions, not just coming up with more and more answers. 

Rudin has two suggestions for how to create a data process that actually serves business needs: 

1. Hire analysts who are business-savvy, not only academic about data or its tools.

When you interview candidates, don’t focus on just “How do we calculate this metric?” Ask them “In this business scenario, what metrics do you think would be important?”

2. Make data everyone’s thing.

Rudin’s team at Facebook ran a two-week “Data Camp” not just for the data team but for everyone at the company. It gave the wider organization a common language to frame questions that data can answer.

(Some great additional talks by Ken on data and impact here and here.)

A solid data process will be informed by asking good questions that result in a real business impact (Rudin also makes a great point that data teams must be accountable for not just “actionable insights” but the fact that those necessary actions get taken). 

As data provides answers and actions, it helps shape better and better questions. 

As data’s answers translate to impact, people will be incentivized to maintain and even improve data systems, leading to more data capacity, and stronger answers to better questions. 


Growth Benchmarks Are (Mostly) Useless

Written By

Quick note: I wanted to let you know that we are starting to build out the team at Reforge. We are hiring a Sr Product Marketing Manager, Sr Content Developer, and a Content Marketer. If you, or someone you know, is passionate about professional development/education then please get in touch.


Most growth communities, forums, and email lists will inevitably have that thread that goes: 

“Hey, what are the benchmarks everyone’s seeing for X?” 

I constantly find people seeking out benchmarks or pointing to benchmarks, and we’ve all been there -- who doesn’t want some normalizing data to understand whether we’re on track or not?

The desire is further fueled by many companies releasing “benchmark reports.”  I understand why. It makes for sexy content marketing.

But there’s one small issue:

Growth benchmarks are mostly useless. There is a better way, which I'll outline below.

Now, back to benchmarks... we can plot most benchmark information on the 2x2 matrix below:

On the X-Axis, we have sample size, representing the number of data points included in the benchmark.

On the Y-Axis we have similarity, representing how similar the data-producing sample group is to your own product, business, and channel. High similarity would mean that the sample group has the same target audience, a similar product, and a similar business model. 

There are tons of benchmark reports and intelligence tools out there, and I used to recommend a handful whenever I’d get asked the inevitable question about benchmarks.

But the more I thought about the question, the more I realized that these posts and tools are not that helpful. To assess when and how benchmark data can aid growth, let’s go through some common sources of benchmark data and where they fall on the matrix.

Benchmark reports

Most benchmark reports live in the lower right hand quadrant of the matrix. 

They usually draw on a larger sample size, but that sample size scores low on similarity to your business. In the best case scenario, the sample might represent an industry category, but will still include lots of data points for audiences or business models that are completely different from yours.  

There are a bunch of problems with your typical benchmark report, so let’s go through them one by one:

1. Aggregated noise

Most benchmark reports present aggregate numbers for their entire sample. The problem is that 80%+ of the sample is typically low-quality applications or companies, which generates an incredible amount of noise in the data. Most of the people reading this article are probably trying to build venture-backed businesses, and by definition, a venture-backed business needs to be in the top 10% or be a complete outlier.

2. Averages are useless

Most benchmark reports show you metrics in the form of averages, medians, and standard deviations. Averages and medians will be skewed toward the low-quality apps, since they are far more numerous in the sample. The result is that the benchmark stat that ends up being presented is well below where you actually need to be in order to be a high-growth company.

Aggregate stats across a category can help you get a general understanding of what to expect in that category, but their utility stops there. If you are hitting "average" within the category, then you are probably not a venture-backable business.

If you are benchmarking, you naturally want to benchmark against best-in-class competitors, not an aggregate average of a category, but your benchmark report or tool may not show that spread.

For category-wide performance data to be useful, you would need a segmented average of apps, sites, or businesses that have a combination of similarities with one another that you also share. That level of granularity and accuracy typically doesn't exist in publicly available or purchasable form.
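The skew argument above is easy to demonstrate with toy numbers. In this sketch (all retention figures are hypothetical), 90 low-quality apps drag the mean and median far below the bar the top decile actually sets:

```python
import statistics

# Hypothetical day-30 retention for a 100-app benchmark sample:
# 90 low-quality apps around 5%, 10 strong apps around 40%.
# All figures are illustrative, not real benchmark data.
low_quality = [0.05] * 90
top_decile = [0.40] * 10
sample = low_quality + top_decile

mean = statistics.mean(sample)         # pulled down by the noisy majority
median = statistics.median(sample)     # lands on a low-quality app
top_bar = statistics.mean(top_decile)  # the bar that actually matters
# Benchmarking against the 8.5% mean (or the 5% median) says nothing
# about reaching the 40% that the outliers in this toy sample hit.
```

This is the "top 10% or a complete outlier" point in miniature: the aggregate stat describes the noise, not the target.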

3. Same metric, different measurement

CAC is CAC, LTV is LTV, Churn is Churn, right? Nope.

Different businesses measure the same metric completely differently even if they are in the same industry category. I’ve never seen a benchmark report that takes this into account. They usually just ask, “What is your CAC?”  

Different products and business models require different ways to measure customer acquisition cost, and other key metrics that often show up on benchmark reports as uniform. 

Averaging or lumping together CAC can be extremely misleading because it doesn’t take into account your company or product’s specific business model. For example, if you have multiple tiers in your SaaS product, average CAC is a lot less actionable than CAC sliced by your different customer segments (with each segment paying different subscription fees). 
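The multi-tier SaaS point above can be made concrete with a toy calculation. The segments, spend, and customer counts below are all made up for illustration:

```python
# Illustrative only: a two-tier SaaS product where blended CAC hides
# very different unit economics per segment. All figures are made up.
spend = {"self-serve": 50_000, "enterprise": 150_000}  # monthly acquisition spend
new_customers = {"self-serve": 1_000, "enterprise": 50}

cac = {seg: spend[seg] / new_customers[seg] for seg in spend}
blended = sum(spend.values()) / sum(new_customers.values())
# Self-serve CAC is $50 and enterprise CAC is $3,000, while the blended
# figure (~$190) describes neither segment's economics -- the number a
# typical benchmark report would ask for and average across companies.
```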

4. Incomplete picture

The fourth issue we face with industry benchmarks is that these reports and tools often aren’t able to provide enough context on the sample set, since they need to keep the data anonymous -- which apps or products were included, what categories were covered, or the reasons behind their performance.

We end up with a deceptively incomplete picture that shows lots of data but delivers few answers. You might get retention numbers, for example, but you have no idea what their acquisition looks like. One piece of the puzzle leaves an incomplete puzzle.

What to do instead

Now that I’ve bashed on benchmark reports enough, I should say that they can be OK as a starting point, if you also do the following:

1. Take it with a grain of salt
2. Ignore non-segmented benchmarks
3. Only look at the top 10% or upper outliers, if you can identify them
4. Contextualize as much as possible

And of course, always prioritize your own numbers when you have enough data. 




Forum convos

The next most common benchmark source lives in the lower left hand corner. 

I call these “forum convos.” These are small group discussions with others with low similarities to your business. 

Sadly, forum convos aren’t helpful for 3 reasons:

1. Low similarity

The other reference points have so many differences in relation to your business (product, audience, model) that it’s comparing apples to oranges.

2. The problem of averages again

They are typically giving you average numbers for their business which contains all the flaws of averages.  

Again, what you would want instead are segmented numbers, but you typically won’t get that in conversations like this. 

3. Lack of context

Forum convos tend to be casual exchanges with little to no context on performance history, additional factors such as other channels, brand, and key metrics, not to mention the basic who/what/why behind the performance data being presented.

What to do instead

Forum convos can be fun, but ultimately aren’t that useful. As practitioners we commonly get caught in the trap of the “grass is always greener:” 

“I wish I had the virality of x!  Or I wish I had the retention of Y!  Or the model of Z!”

Out-of-context forum threads and random data snippets only make it worse.   

In reality, our businesses are a unique set of variables that we need to figure out for ourselves. If those variables fit together mathematically to make a really good business, it matters a lot less how each variable benchmarks against standalone data from other businesses that may or may not be relevant.

If you are going to have these forum convos, DIG to get more information.  

At minimum, ask these two follow-up questions:

  • Who is the target user?
  • What is the business model?

Competitive convos

In the upper left hand corner, we have 1:1 or small-group data points from businesses with extremely high similarity to yours (i.e., competitors).

Unfortunately these tend to be competitor convos due to the high similarity score. The competitive dynamic means low trust, so the chance of getting reliable, accurate, and updated data is slim.
 
In some industries tools will emerge to help you get this. For example, during the Facebook Platform days there were a number of data services where you could get metrics like DAU, MAU, DAU/MAU, etc of almost any app on the platform.  

But there are two issues with this:

1. First, it’s rare. The existence and ongoing availability of data services depends on the platform making the data publicly available in some form. The platform can change how they report data at any time, have outages that they aren’t incentivized to fix, or even discontinue availability entirely. 

2. Second, you will only get the top level metrics. In the Facebook Platform example above, you could find out MAU, DAU, and their derivatives for a competitive app in your same category, but there’s lots that these top level metrics don’t tell you. Unsegmented data leads to unreliable answers at best. 
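For reference, the DAU/MAU "stickiness" ratio those platform data services exposed is just a division over the top-level counts. The figures below are hypothetical:

```python
# DAU/MAU ("stickiness"): what fraction of monthly actives show up on a
# given day. Figures here are hypothetical, purely for illustration.
dau, mau = 180_000, 600_000
stickiness = dau / mau
# 0.30 stickiness means the average user is active roughly 9 of 30 days,
# but as noted above, this top-level metric is unsegmented and says
# nothing about who those users are or why they come back.
```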

1 to 1 convos

Now we are getting into the really useful stuff. 

1 to 1 convos work best with businesses that have similarities on one or more of the axes (Target Audience, Business Model, Product), but not all three. The fact that the other business isn’t similar on all three axes means that it’s more likely to be non-competitive and offer mutual benefit to exchanging valuable information in confidence. 
 
Here are two examples.

  • If I were Ipsy, I might talk to the likes of Dollar Shave Club, Stitch Fix, or Nature Box.  (similar model, subscription, but different target audiences and product)
  • If I were Pinterest, I might talk with LinkedIn, Facebook or another social product about shared challenges like the logged-in/logged-out experiences or other common pieces of the product. (Similar product and model, social/ads, but different target audiences)

The biggest downside to the 1 to 1 convos is the sample size. As deep as the conversation may go, remember that it is only one data point. This is like trying to get a view on a 3D object but only looking at one dimension. 

What to do instead

1 to 1 convos can still be helpful and a valuable source of deeper benchmarks, without requiring that you get close with a direct competitor.

To make sure it’s valuable, keep the following in mind:

1. Contextualize, contextualize, contextualize
 
Think hard about the product value prop, target audience, and other elements that might impact the numbers. For example, Dollar Shave Club and Ipsy are both subscription ecommerce products with similar cadences, but there are two big differences:

  1. Value prop - DSC is more of a utility, Ipsy is more discovery
  2. Target audience - male vs female 

Both those things will naturally influence all numbers. 

2. Get the complete picture

Try to get segmented numbers as well as a holistic view, not just one number in isolation.

3. Get the ‘Why’

It’s critical to ask “why” they think something worked or didn’t work. Asking “why” leads to learnings that you can apply back to your product, rather than numbers that are specific but not actionable.

The Sweet Spot

The sweet spot is if you can form a group of non-competitive businesses with similarities on those axes we talked about.

It has all the advantages of the 1:1 convos, but with a slightly larger sample size. That larger sample size allows you to look at the picture from multiple angles and to triangulate your data. This ultimately leads to more useful insights for everyone.

Build relationships with the best

I used to do this in Boston with "mastermind" groups. In my experience, it is the only way to get reliable, non-noisy, benchmarks for where you aspire to be.

It sounds simple (“Just get together a group!”) but getting it right requires the right setup and ground rules. Below are the seven takeaways that I learned from my Boston groups.

1. Start small

It starts with just 2 to 3 people you know in your network. Explain what you’re trying to do and organize a meetup with that initial group. If you don’t have anyone within your existing network for that initial group, you can cold email.  

2. Share your own knowledge first

You should do the initial work to seed conversation by creating a presentation that shares some insight, learning or experiment that you have been running lately. If you cold-email, the value-add of this presentation needs to be even bigger. 

3. Confidentiality + warmup

Set an expectation that everything is confidential. To warm up the conversation, you can start with an easy discussion prompt, like: 

  1. One thing you’ve tried recently that has worked
  2. One thing that hasn’t worked
  3. What was the learning
  4. One question or problem you are facing  

This is something simple that people can spend 20 minutes on and everyone can participate in without revealing too much. It has a low barrier and still kicks off a good conversation where people are both giving and receiving. 

4. Expand the group

Once you’ve had a couple of successful meetings with that small group, ask each member if they know one person they’d like to invite in. This shifts the work of growing the group from you to the other members.

5. Rotating ownership

Once there is a rhythm, rotate “ownership” of the meeting between members of the group.  Ownership is figuring out a location, general theme topic, etc. 

6. Regularity / repeat

Rinse/repeat about every one to two months. Repeat exposure is key to building trust and deeper relationships. 

7. Expand the axes
 
Rinse and repeat steps 1 through 6 across different axes of similarity. For example, you might have one group where the model is the common theme (i.e. subscription) and another where the common theme is a channel (i.e. paid acquisition) or another that is target audience themed (i.e. SMB).  

Conclusion

With all their flaws, industry benchmarks can still be an OK starting point for gauging the health of your product or business. But building up a group of experienced practitioners who share both commonalities and helpful differences from one another is the most important pillar of continual mastery of growth, or any other discipline.

 

Solving Mobile Growth & Retention with Andy Carvell, ex Growth at SoundCloud

Written By

Andy Carvell joined SoundCloud in 2012, when the company was just over 80 employees and 10 million monthly active users.  The service now boasts over 150 million registered users, and monthly actives in the high tens of millions. I recently spoke with Andy as part of a 1 hour interview covering:

  • How he brought a web-first product to mobile
  • Activity notifications, rich push, and other techniques for driving mobile growth and retention
  • Andy’s “Mobile Growth Stack” for 2017

How to Embrace Constant Change in Growth

Written By

Quick note: We recently announced the next Growth Series, an 8-week program designed for experienced practitioners in growth (past participants came from Dropbox, Google, LinkedIn, Evernote, Airbnb, SoundCloud, Facebook, and many other companies). It's by application only, and if you're interested, you can learn more here

In the early days of building the growth team at HubSpot, we spent a few months optimizing onboarding in our product and produced some meaningful improvements. As the team expanded, I wanted to dedicate a full-time team to onboarding, but I got a few versions of the following questions from other executives: 

“Why do you want to put a full-time team on that? I thought you guys were done optimizing onboarding?”

The mentality of “done” is the exact opposite of the mentality of high performance growth teams. Change is constant. Change is difficult. Not adapting to change is fatal.

The point is a critical one for any company and is the foundation for this growth principle. By nature, humans resist change because adapting can be hard and it’s usually a lot of work. We like to think of things as “done” so we can check them off our to-do list, and move on to the next thing.  

Any great growth team is ready for and responsive to change, nimble, and always, always adapting. They go beyond adapting and truly embrace change, building their team and process around it. Let's talk about why…

Systematic growth is iterative. 

There are four main reasons that this principle is important:

  1. Product Evolution
  2. Audience Shift
  3. Channel Change
  4. Tactic Fatigue

And constant change is the common thread that ties them all together. 

We must be aware of and accept the constancy of change, integrate it into our culture, and empower our teams to always be adapting. If we don’t, our product’s growth will inevitably stall at some point. If instead we evolve our growth efforts in sync with the changes happening across product, audience, channels, and tactics, we will increase our chances of driving sustainable growth and operate by this principle.

Reason One: Product Is Always Evolving

We should be learning constantly. For product, this means constantly getting better at delivering more of the core value by refining existing features and building new ones based on our learnings. If we don’t, we risk stagnation and being beaten by competitors who are constantly evolving to meet the needs of the market. 

As we refine and change our product, the way our users enter, engage, and exit the product will evolve too, which ultimately affects how we approach growth. 

Think about how interaction with Facebook’s core value has transformed over the years. In the beginning, users updated their profiles, scrolled through their list of friends, clicked through to profiles and left messages on friends’ walls. 

But then, when the company added the newsfeed, user behavior changed in a big way, evolving from comments to the Like Button to current-day Reactions. What users shared also changed from text only, to photos, to links, to videos, to live videos today. 

All of this points to the fact that with every product change, how we onboard, retain, monetize, and generally mobilize users for growth may change. So we can never forget to evolve our growth strategies in lockstep with our product strategy.  

Reason Two:  Audience Is Always Shifting

As the market shifts and the product develops and grows, the composition of our audience, who they are, what they want, and what they respond to, shifts. Across the board, our strategy and tactics from acquisition to activation, retention, revenue and referral need to evolve in sync. There are many ways to segment an audience and so there are many ways an audience, and each of its segments, can evolve. 

The Innovation Adoption Lifecycle

First, let’s think about it in relation to the Innovation Adoption Lifecycle.


Typically as a company grows, its audience shifts from innovators and early adopters to late majority and laggards, all of which behave very differently and respond to different product experiences and messaging.

Twitter offers a useful example. In the early days, Twitter’s onboarding focused on guiding users to write their first tweet. This was the core action innovators and early adopters responded to. 

But as the company grew beyond innovators and early adopters, the onboarding changed to emphasize following and consuming content first. That was an action the early and late majority were more likely to take in the adoption phase.  

Niche to Niche

Another way to think about these changes is that audiences expand from niche to niche, growing in concentric circles. The growth can be based on different facets of the audience:

1. Demographics

Let’s take a look at Facebook’s growth for a moment. It expanded in concentric circles to increasingly large populations based on education.
     
Harvard → All Ivy → All College → College + High School → College + High School + Recent Grads → Everyone

How Facebook Grew From Harvard to the World

2. Geography

Uber expanded by location, starting in San Francisco, then launching in other U.S. cities like New York, Chicago, and DC, and then expanding internationally, moving from the U.S. to other English-speaking countries and then to Europe and beyond.

3. Industry

Workday (and lots of other B2B companies) expanded by industry. It started by serving other technology companies, and then extended its reach from vertical to vertical - moving from technology companies to service-based companies to the financial, healthcare, and education sectors.

These and other niches live in different channels, respond to different messaging, and have different needs. So as our audience expands from niche to niche, we need to adapt our growth strategies and tactics with every shift in focus. 



Reason Three:  Channels Change Constantly

User acquisition channels are changing constantly, evolving at an accelerating rate on both a macro and a micro level. For this reason, companies that adapt quickly and experiment in the newest channels increase their odds of capturing the biggest growth opportunities. 

Macro Changes

On the macro level, more and more channels are emerging, while their lifecycles are simultaneously accelerating. The graph below shows these two dominant trends - you can see that there are a lot more channels than there were 10 years ago and that they’re rising into and falling out of favor much faster than older channels. 

Check out MySpace and Facebook: they both grew at unprecedented rates, shooting up very quickly, only to die (in the case of MySpace) or become a closed platform (in the case of Facebook) even more quickly. As channels come and go, we need to adapt with them.

Zynga experienced stratospheric growth riding on the back of the Facebook platform. But in 2012 Facebook changed the rules of the game and shut off Zynga’s only growth channel overnight. Zynga’s entire business model tanked as a result. This story demonstrates how quickly a channel can change, and how vulnerable companies are when they aren’t positioned to see these changes coming down the pipeline or, at minimum, respond instantly when they’re blindsided.

Micro Changes

When we zoom in to the micro level, we can see that each channel is evolving at an ever-increasing rate. What’s even more challenging is that the platforms upon which we grow - Facebook, Google, Gmail and many more - write the rules of the game. Not only do we not get a say, but many change the rules constantly. 

To stay in the game we have to play by the rules of the platform. The only problem is that they don’t actually tell us what the rules are. Google doesn’t tell us exactly how a page ranks and Facebook doesn’t tell us exactly how a CPC is determined. So we run experiments to deconstruct them, and figure out how to adapt and keep growing. This means that once we find a channel that works for us we need to be adapting our approach constantly to keep up with how the rules are changing and the channel is evolving. 

Reason Four: Tactic Fatigue

Increasingly, we are bombarded by more information than we can process and our brains have developed ways to filter out what we don’t need to survive. This survival mechanism is important for us to understand because it forces all of our favorite acquisition tactics to move through the Growth Tactic Lifecycle depicted in the chart below. 

There are 4 Stages of the lifecycle of any growth tactic:

  1. Tactic Discovered - Someone discovers a new tactic, facilitated by the evolution of an existing platform or channel or the advent of new platforms and channels. Initially, this new tactic performs well because it captures the user’s attention in a novel way.
  2. Tactic Optimized - The innovators and early adopters of this tactic experiment like crazy to optimize it to peak effectiveness and maximize its impact on their growth.
  3. Tactic Adopted by the Masses - Other companies, the early and late majority, see that the tactic is working and they copy it.
  4. Tactic Goes into Fatigue - By this point, users have been exposed to this tactic over and over across multiple products.  The novelty of the tactic has worn off and users start to block it out. The effectiveness of the tactic decreases and it no longer drives the same kind of growth it once did.

For example, how we ask for email addresses has undergone rapid change over the past few years. As users, we’ve experienced this change first hand and as user acquisition professionals we’ve had to learn and adapt quickly to continue driving email capture conversion rates.

As an example of how quickly tactic fatigue can happen, let’s walk through the incarnations of this tactic over the past few years:

  1. Form within a webpage
  2. Slide ins
  3. Lightbox popovers
  4. Full page takeovers
  5. SumoMe Welcome Mat

Take this one little thing - how to collect email addresses on your content - and we can name five different iterations of that ecosystem in just a few years.

When I first started capturing email on my own content, I used a lightbox popover, which few people were doing at the time. My conversion rate was initially around 10%. But as more sites implemented lightboxes to collect emails, consumers became desensitized, the tactic went into fatigue, and my conversion rate leveled off at 3-4%. I recently started using exit-intent full page takeovers and my conversion rate has increased to 6%, but I expect that it won’t last long.

Do These 5 Things To Implement This Principle

1. Build A Culture of Constant Experimentation

People often see constant change as a negative thing and resist the inevitable. Instead, we must welcome change by building a culture of constant experimentation. This allows us to incorporate the constant flow of change into our growth processes and survive and thrive in competitive environments.   

2. Constantly Validate Old Playbooks, Ditch The Ones That Don’t Work Anymore

Eventually, all playbooks will stop working. This is not the real problem, though. The real problem is that over time we become habituated and attached to these playbooks, and we resist throwing them away. Even when all evidence points to the fact that they are no longer relevant, the thought of ditching our old faithfuls and starting anew is intimidating. Do it anyway. 

3. Balance Your Portfolio of Bets

Here I challenge you to think of your set of growth tactics like a financial portfolio to be balanced in terms of risk and reward. When we test new and unproven channels, we take higher risk, higher reward bets. When we optimize proven channels, we take lower risk, lower reward bets. It’s like a game in which we have a pool of resources and we’re shifting them between different bets, depending on the results of various experiments.

Since all channel effectiveness curves flatten eventually (think of my email capture conversion rate leveling off at 3-4%), in a perfect world we aim to hit our stride with a new channel as a proven channel starts to slow down. It may never happen perfectly like this in reality, but we should attempt to layer tactics on top of each other if we want to maintain growth.
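To make the portfolio analogy concrete, here is a minimal sketch of the underlying arithmetic. The bets, probabilities, and payoffs below are entirely hypothetical, invented for illustration; the point is only that a few safe bets plus one long shot can be compared on expected value:

```python
# Hypothetical growth "portfolio": each bet has an estimated success
# probability and an estimated payoff (e.g. new signups) if it works.
bets = [
    {"name": "optimize proven channel", "p_success": 0.70, "payoff": 1_000},
    {"name": "optimize onboarding",     "p_success": 0.60, "payoff": 800},
    {"name": "test unproven channel",   "p_success": 0.15, "payoff": 10_000},
]

def expected_value(bet):
    """Expected payoff of a single bet: probability times payoff."""
    return bet["p_success"] * bet["payoff"]

portfolio_ev = sum(expected_value(b) for b in bets)

for b in bets:
    print(f'{b["name"]}: EV = {expected_value(b):,.0f}')
print(f"Total portfolio EV = {portfolio_ev:,.0f}")
```

Note that in this made-up example the single high-risk bet carries more expected value than either safe bet, which is exactly why a balanced portfolio keeps some resources on long shots even while most experiments fail.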

4. Accelerate Your Growth Team Investment As You Keep Growing

When your growth team first starts out almost all of its time will be spent experimenting.  

growth-team-1.png

Over time you find more and more things that work and the team’s resources will be consumed by running and repeating these successful things.  

growth-team-2.png

The danger is that 100% of your time and resources will get consumed by repeating the successful things and you won’t make time for experimentation.  

If you follow this trajectory, you will eventually get caught with your proverbial pants down.

At scale, it’s all too easy to let experimenting with new ideas fall by the wayside. Instead, we must set aside time and resources specifically dedicated to this effort. This requires us to continue increasing the total pie of resources we dedicate to our growth teams. 

As a rule of thumb, I focus at least 30% of my team’s time on experimenting with new ideas, and the remaining 70% on repeating and optimizing our old faithfuls.

5. Hire For Grit and Resilience

The thought that growth is never done can be exhausting. This is why I hire for grit and resilience. When an experiment fails multiple times in a row, I want people who get right back up, learn from their misfires, bring new ideas to the table, and attack the problem over and over until they solve it. 

When they do succeed, these folks don’t view the job as done, instead they focus on making it even better. And, they have no problem adapting when something they slaved over in the past is no longer working. They’re willing to throw it out and start all over again because they know that’s what it takes to stay on top.

If I Stepped Into Your Company Today As VP Of Growth...

I would establish a baseline to determine where your team and systems fall in relation to the principle - Embrace Constant Change. Walk through the checklist below, assess your team on the principle of Constant Change, and figure out where you’re killing it and where you’re being killed. 

Product

  • How has the product evolved over time?  
  • Did we adapt our growth strategies and tactics? If so, how?
  • What major product changes are upcoming?  How do we plan to adapt our growth strategies?

Audience

  • Who is our audience?
  • How has it evolved over time? 
  • Did we adapt our growth strategies and tactics? If so, how?

Channels

  • What are our channels?
  • How have they evolved over time?
  • Did we adapt our channel strategies? If so, how?

Tactic Fatigue

  • What are the biggest changes coming down the pipeline that threaten the effectiveness of our channels?

Build A Culture of Constant Experimentation

  • Does our team view change as negative? 
  • Does it resist or welcome it?  

Validate Old Playbooks

  • What playbooks are we running?  
  • What has been their trend of effectiveness over time?
  • Do we ever throw out playbooks that no longer produce results? Or do we hold on?

Balance a Portfolio Of Bets

  • How is experimentation time being apportioned? 
  • What percent goes to low risk, low reward bets?
  • What percent goes to high risk, high reward bets?

Accelerate Growth Team Investment

  • What percentage of overall resources and time is spent on experimentation for new ideas?

Hire For Grit and Resilience

  • When an experiment fails does our team learn, bring new ideas to the table, and keep trying to solve the problem? 
  • When an experiment succeeds do we view it as done or focus on making it even better? 

 

Your Average CAC is Lying to You -- What to do Instead

Written By

I recently wrote about the most common mistakes with CAC (customer acquisition cost) that can derail growth efforts before you even get started because CAC is a metric that's foundational to growth strategy.

In this post, we'll look at the faulty myth of "Average CAC," and the most meaningful CAC segmentations that you should be paying attention to instead.

5 Recommendations for Setting Yearly Growth Goals

Written By

We are nearing the year end, and that probably means many teams are finalizing their growth goals for next year. Just the thought of the planning process is probably jacking up your stress levels. After all, setting yearly growth goals in fast-changing environments feels like throwing darts blindfolded. While there are a lot of goal-setting frameworks out there, no matter which one you use, I have a few recommendations to make the process better.

How To Setup A Growth Team For Maximum Impact

Written By

Measuring the success of the growth team is a tougher question. Every team from product, to marketing, to operations indirectly influences growth. This leads to some questions and challenges: “Why have a growth team? Doesn’t everyone own growth?” 

7 Principles To Mastering Growth Marketing

Written By

We have a big addiction problem in our industry: hacktics, the tips, tricks, hacks, tools, and secrets that promise to solve our growth problems. As a result, marketers now get the majority of their “learning” through this hacktic-based content. As addictive as they are, prescription-based tactics and tools won’t solve your problems and, most importantly, won’t help you become an elite marketer.