
The key to unlocking more email revenue


If you could increase your revenue from email just by sending messages more often, you’d do it, right? So, what’s stopping you from communicating more often?

To be fair, that’s a bit of a rhetorical question. I know why many email marketers hesitate to send more messages, and it has nothing to do with revenue.

It’s because, once upon a time, someone on the email team set up a test to determine what would happen if the brand increased email frequency from one campaign per week to two. The results showed them that unsubscribes went up a fraction while open, click and conversion rates went down a percentage point or two.

Although these statistics are par for the course when you increase frequency, they hand plenty of ammunition to the “I told you so” crowd. But are those metrics always the best judge of success? No, they are not, and I will explain why.

Measuring success with the right metrics

If you measure the success of your frequency decisions on opens, clicks and unsubscribes, lower frequency wins the metrics battle every time. But you could also leave piles of money on the table for your competition to grab. Are you in business to get more opens and clicks? Or to make money?

I’m not advocating that everybody suddenly start sending campaign after campaign without regard for audience preferences, campaign goals, time of year and all the other considerations that go into frequency decisions.

Your frequency planning depends on many factors. Your schedule will be unique to your brand, market, customers and the products or services you offer. Here are three principles I have developed over my decades in the email industry that should guide your frequency planning: 

  • Higher frequency can generate more revenue when done mindfully. Keep testing until you find the tipping point — and you will find it.
  • You must choose the right metrics to measure the true impact of higher frequency.
  • A carefully planned email strategy is essential for guiding your journey to higher frequency.

Dig deeper: Email marketing strategy: A marketer’s guide

Theory into practice: A frequency-testing case study

We work with a travel company that relies on email to drive inquiries and bookings. The company is averse to risk. The email team is small but motivated to increase its revenue from email.

One way to do that is to expand its email frequency from twice monthly to one weekly campaign — doubling the frequency without overwhelming customers’ inboxes. 

While I’ve always believed that a properly managed higher frequency will drive more revenue and long-term engagement, I also recognize we need to be realistic about some of the side effects: tolerating changes in engagement metrics and doubling the email work, even with automated tools, to create, test and manage twice as many email campaigns.

Further, a haphazard approach to frequency can annoy customers and turn them against the brand to the point where the brand could run into problems with deliverability and inbox access.

Hence, the need to set up frequency testing before diving into change. We followed a science-based methodology to set up, run and analyze our testing program to give us the most useful and reliable results:

1. We started with a hypothesis

“Sending twice as many campaigns in a period would result in increased revenue.” This hypothesis guided all of the decisions we made in creating and running our tests and in analyzing the results.

2. We set up our control and variant and established a study period and success metrics

The control was the current frequency schedule, which sent two campaigns a month. The variant was a weekly campaign. We ran the test for three months, generating six campaigns for the control group and 12 for the variant, and took the average for each metric.

For our success metric, we chose revenue and total transactions, as these business metrics match the strategic purpose of the test. We also tracked open, click and unsubscribe rates along with the average number of transactions in the testing period.

3. Which test won? 

Here are the results, which were statistically significant at the 99% confidence level:

  • Average open rate: The control (2 campaigns/month) won with a 4% uplift over the variant (4 campaigns/month).
  • Average click rate: The control won with a 16% uplift.
  • Average campaign value: The control gained an additional 27%.
  • Unsubscribes: Identical between the two groups. 
  • Average conversion rate: The control won with a 44% uplift.
  • Revenue: The variant won with a 57% uplift.
  • Total transactions: The variant won with an uplift of 74.79%.
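For readers who want to see how uplift figures like these are calculated, here is a minimal Python sketch. The dollar amounts are hypothetical placeholders — the case study doesn’t publish the raw campaign totals — but the formula is the standard one:

```python
def uplift(variant_value, control_value):
    """Percentage uplift of the variant over the control."""
    return (variant_value - control_value) / control_value * 100

# Hypothetical example: if the control earned $100,000 over the study
# period and the weekly variant earned $157,000, the uplift is 57%.
control_revenue = 100_000
variant_revenue = 157_000
print(f"Revenue uplift: {uplift(variant_revenue, control_revenue):.0f}%")
```

The same function applies to any of the metrics above; a negative result simply means the control won that metric.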

4. Our conclusion

Sending a weekly campaign will generate more revenue and bookings than sending only twice monthly.

Interpreting the results and why the wrong metrics can mislead you

The results from my case study show you what can happen if you rely on engagement metrics to measure the success of a frequency test. Now, let’s get back to why you can be led astray.

If you measure only open, click and unsubscribe rates in a frequency test, the group with fewer campaigns in the testing period will likely win because subscribers have fewer messages to act on. The more campaigns you send, the less likely you are to get the same number of actions on each one.

But as we saw with my client case study above, a much different picture emerges if you take the long view and measure your business metrics over time. Instead of basing decisions on one or two campaigns, let the results accumulate; over time, they will paint a more encouraging picture for these metrics:

  • Open-reach: Total opens for all campaigns sent in the testing period.
  • Click-reach: Total clicks for all campaigns sent in the testing period.
  • Conversion-reach: Total conversions for all campaigns sent in the testing period.
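A quick sketch shows why the per-campaign view and the reach view diverge. The open counts below are purely illustrative — none of them come from the case study — and assume a 10,000-subscriber list where every campaign goes to everyone:

```python
# Hypothetical open counts per campaign over one month (illustrative only).
control_opens = [2000, 2000]               # 2 campaigns/month
variant_opens = [1800, 1750, 1700, 1750]   # 4 campaigns/month

LIST_SIZE = 10_000

def avg_open_rate(opens):
    """Average per-campaign open rate, as a percentage."""
    return sum(opens) / (len(opens) * LIST_SIZE) * 100

def open_reach(opens):
    """Total opens across every campaign in the period."""
    return sum(opens)

print(avg_open_rate(control_opens))  # 20.0 — higher rate per campaign
print(avg_open_rate(variant_opens))  # 17.5 — rate dips per campaign
print(open_reach(control_opens))     # 4000 total opens
print(open_reach(variant_opens))     # 7000 total opens — far more eyeballs
```

The variant “loses” on average open rate but wins decisively on open-reach: more sends put the brand in front of more people more often, even as each individual campaign performs a little worse.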

Let’s look at a couple of the metrics we used in our frequency study to see what’s going on:

Open rate

Suppose your average open rate is 20%. But the people who make up that 20% are not the same from campaign to campaign. Maybe one campaign wasn’t relevant to a segment of your audience, or it’s a busy time of year and people didn’t have time to read your email. But when they are in the market for your products or services and your messages aren’t in their inboxes, they might just find what they need in the arms of your competitors.

The open rate might go down per campaign when you increase frequency, but with more campaigns, more individuals are likely to see them. This is why we use the open-reach to better gauge audience interest over time.

Unsubscribe rate

This is the big scary fearmongering statistic. And yes, when you send more emails, your unsubscribe rate will go up because you’re giving people more opportunities (i.e., campaigns) to opt out. That’s just basic logic. 

But you’re also sending more email campaigns and giving customers more chances to buy from you, so you’ll more than make up any potential lost revenue.

And, besides, let’s stop fearing the unsubscribe! These people weren’t going to buy from you again anyway. One unsubscriber means one fewer potential spam complainer or inactive customer on your list, and you didn’t have to pay a list-hygiene service to remove them.

Email’s ‘nudge’ effect

Your emails need to be in the inbox to be acted on. Just seeing the email can prompt customers to open and act on an older message or send them right to your website. 

If you send fewer campaigns, they’ll get buried under piles of fresh messages. Motivated customers might use inbox search to dig out your emails, but you can lighten their workload by sending more often and keeping your messages visible in the inbox.

Dig deeper: 7 key email metrics to track beyond opens and clicks

Springboarding from frequency tests to more email investment

As with any testing program, frequency testing is not one-and-done. You should test regularly and look for trends. We’re already plotting new tests we can run that will build on what we learned from this first round. This is how you can find your optimum frequency. 

A frequency testing program using the scientific method I outlined earlier also can lead you to bigger and better things, like insights into customer behavior that you can use to increase your customer lifetime value. You aren’t just making more money for the company (an excellent goal, by the way!). You’re also raising the value of an email address. 

Plus, you can show your management team how your comparatively small investment of time and budget paid off a handsome dividend and how an even bigger slice of the budget pie could yield even larger dividends. In other words, everybody wins!

Dig deeper: 7 common problems that derail A/B/n email testing success


Opinions expressed in this article are those of the guest author and not necessarily MarTech.
