
Eight ways to improve your email marketing analysis

By Mark Brownlow, 23 Nov 2010

Success seems such an easy thing to measure in email marketing.

Even the lowliest of campaign software should report "opens" and clicks. And many marketers have access to more important post-click metrics, such as sales, downloads, page impressions and donations.

Most arguments around email metrics concern which numbers are best suited to campaign analysis. But there are two less-discussed issues which are equally important.

First, when an email performs particularly well (or badly), we attribute that success (or failure) to far fewer factors than might actually be the case. Which means we risk drawing the wrong conclusions and making inappropriate changes in future campaigns.

Second, we forget that the typical measures of success we use aren't actually that good at measuring the true impacts of our emails.

In this article, I'll explain each problem in more depth and suggest solutions that will improve the usefulness of your email marketing analysis.

Attributing success

Assuming the audience is relatively unchanged, peaks and troughs in email responses are typically attributed to the offer/content and the subject line.

And, naturally, we take that into account when planning future content/offers and writing future subject lines.

But, of course, a proper analysis should consider all the factors that might impact results. And there are a lot more than we generally think.

So when you come to assess your results, cast a wider net when looking for an explanation. For example, review...

1. Delivery issues

When comparing campaigns, email marketers should adjust numbers - like unique opens, unique clicks, revenue etc. - to account for the number of emails actually delivered to recipients in each campaign.

If no emails get delivered, you can hardly blame the offer for the low conversion rate.

Problem is, you don't normally have a delivery number.

It's not equivalent to the number sent, because we know some emails get bounced back as undeliverable.

Nor is it the number you get when you take these bounced messages away from the sent total.

Not all email that fails to bounce succeeds in reaching the inbox. Some is quietly filtered out at the ISP level, and some goes directly into junk/spam folders at the user account level. It's not unusual for this to happen to up to 20% of opt-in email.

So differences in absolute responses can simply be down to changes in inbox delivery success.
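
To make that adjustment concrete, here's a minimal sketch in Python. The function name, the field names and the inbox placement figures are all invented for illustration; the placement estimate would have to come from something like a seed-list monitoring service, since your ESP won't report it directly.

```python
def adjusted_rates(sent, bounces, unique_opens, unique_clicks, inbox_placement=0.8):
    """Normalise responses by delivered (and estimated inbox-placed) volume,
    not by the raw send count. inbox_placement is an assumed estimate,
    not a figure your ESP reports."""
    delivered = sent - bounces               # what didn't bounce back...
    inboxed = delivered * inbox_placement    # ...minus what was filtered or junked
    return {
        "open_rate_per_delivered": unique_opens / delivered,
        "open_rate_per_inboxed": unique_opens / inboxed,
        "click_rate_per_delivered": unique_clicks / delivered,
    }

# Campaign B's raw opens look worse, but per inbox-placed email it performed
# much the same: the drop was a delivery problem, not a content problem.
a = adjusted_rates(100_000, 2_000, 15_000, 3_000, inbox_placement=0.95)
b = adjusted_rates(100_000, 2_000, 12_000, 2_400, inbox_placement=0.76)
print(f"{a['open_rate_per_inboxed']:.1%} vs {b['open_rate_per_inboxed']:.1%}")
```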

How do you check inbox delivery success?

1. Segment subscribers by address domain and check response rates across each domain. An unusually low response rate from, for example, gmail.com addresses alerts you to a potential delivery issue at Google's webmail service (see the sketch after this list).

2. Use an email analytics tool (e.g. Litmus, MailboxIQ).

3. Use a service that seeds your list with addresses from popular ISPs and then monitors whether your email to those addresses actually arrives. Many ESPs offer this service, or there are standalone options from the likes of Return Path, Pivotal Veracity, EmailReach and Delivery Watch.
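
Here's the sketch for the first check: a minimal Python example, assuming you can export per-recipient delivery and open data (the sample log below is invented).

```python
from collections import defaultdict

def open_rate_by_domain(recipients):
    """recipients: iterable of (email_address, opened) pairs from a campaign export.
    Returns the open rate per address domain so outliers stand out."""
    counts = defaultdict(lambda: [0, 0])      # domain -> [delivered, opened]
    for address, opened in recipients:
        domain = address.rsplit("@", 1)[-1].lower()
        counts[domain][0] += 1
        counts[domain][1] += int(opened)
    return {d: opened / delivered for d, (delivered, opened) in counts.items()}

# If one big domain sits far below the others, suspect a delivery issue there.
sample_log = [("a@gmail.com", False), ("b@gmail.com", False),
              ("c@yahoo.com", True), ("d@yahoo.com", False),
              ("e@hotmail.com", True), ("f@hotmail.com", True)]
for domain, rate in sorted(open_rate_by_domain(sample_log).items(), key=lambda x: x[1]):
    print(f"{domain}: {rate:.0%}")
```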

2. Copywriting approaches

The inherent attractiveness of the email's offer or content is important, but so is the way this content or offer is presented.

Changes in copywriting or presentation can shift responses significantly. Tests regularly show, for example, that even small alterations to calls-to-action have a dramatic impact on response rates.

So an unusually good (or bad) result may simply be down to a change in the call to action, such as:

  • Words used
  • Position (top only? Bottom only? Relative to copy and images?)
  • Repetition (how many times does the same CTA appear?)
  • Competition (how many different CTAs compete for the subscriber's attention?)
  • CTA design (colours, sizes, shapes, fonts, images, buttons?)

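If you want to know whether a tweak like this genuinely explains a swing in results, split test it and check the difference for statistical significance. A minimal sketch using a standard two-proportion z-test, with invented numbers:

```python
from math import sqrt, erf

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided p-value for the difference between two click-through rates."""
    def normal_cdf(x):
        return 0.5 * (1 + erf(x / sqrt(2)))
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, 2 * (1 - normal_cdf(abs(z)))

# Variant A: text-link CTA; variant B: button CTA. Numbers are invented.
rate_a, rate_b, p = two_proportion_p_value(clicks_a=310, n_a=10_000,
                                            clicks_b=395, n_b=10_000)
print(f"A: {rate_a:.2%}, B: {rate_b:.2%}, p = {p:.4f}")
```

A small p-value suggests the CTA change itself, rather than chance, is behind the difference.
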
3. Timing

Most email marketers consider timing as the impact of time of day and day of week. But what about day of month, month and season?

How close was the retail promotion to payday?

What about the weather? Don't laugh: one ESP found close correlations between weather and specific types of content/offers (and I don't just mean sunscreen promotions).

What about other major events that might distract recipients from even the very best emails, or leave them with more incentive to spend time with their inbox? A public holiday? The World Cup final? An election? A royal wedding?

4. The last email

An email's success also depends on the emails preceding it. The same email sent at the same time to the same people will draw a better response if the last few emails were highly valuable, or a lower response if they were repetitive rubbish.

The previous emails condition people's reaction to new messages.

5. Your other marketing

Email doesn't operate in isolation from the rest of your marketing. The potential positive interactions between different channels are one reason people are so interested in integrated, multichannel marketing.

What other marketing were email subscribers exposed to?

Did you up your PPC search engine budget last week?

Did the email follow a big TV and newspaper campaign for the product advertised in the message? How much of the conversion work for this email was already done by other marketing channels?

6. Spam and the competition

Nor does email operate in isolation from everyone else's marketing.

The level of competition in the inbox changes when a big source of spam gets shut down. Or comes back online.

Have your competitors been advertising the same product or service, warming people up for your promotion? Did they offer 10% off and you offered 20% (making your offer more attractive)? Or did they offer 30% off (making your offer less attractive)?

Competitor influence is more likely in particular seasons, like the weeks leading up to Christmas, when it's harder to stand out in a morass of similar-themed messages.

7. Is the list really the same as the last time you sent email?

As we continue to send email, we often assume the qualitative nature of the list remains relatively constant. But that's not true.

Did you finish a big address acquisition campaign that shifted list composition in favour of new subscribers (who typically respond better than "older" subscribers)?

What kind of new subscribers did you add? Trade show visitors who expressed strong interest in your service and newsletter? Or low-quality addresses picked up through a sweepstake promotion?

Did you append data that lets you better segment your list, improving targeting?

Did you delete a slew of "inactive" addresses? Total responses won't change much, but response rates will rise because the denominator used to calculate those percentages is now smaller.
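
A quick illustration of that denominator effect, with invented numbers:

```python
# The same 2,000 responders, before and after pruning 150,000 "inactive"
# addresses from a 500,000-address list: behaviour is unchanged, the rate isn't.
responders = 2_000
rate_before = responders / 500_000
rate_after = responders / 350_000
print(f"{rate_before:.2%} -> {rate_after:.2%}")   # 0.40% -> 0.57%
```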

8. Design issues

Maybe response differences arose from changes at the recipient end of the email chain. If a major ISP or webmail service modifies how it handles HTML email, this can break designs and hurt responses.

This is why it helps to regularly retest existing templates using preview tools. Did a big ISP start blocking images by default? Or did they suddenly change paragraph rendering so your copy became an amorphous lump, rather than a well-spaced and well-paced story?

I daresay you could add some more to this list, once you start thinking beyond the content/offers/subject line trio...

Measuring true success

Even if you can get a clear understanding of why an email is successful or not, there's still the issue of whether you're accurately measuring that success.

Opens and clicks are process metrics that help you understand the chain of events leading from sending an email to getting the desired end result. But they are not usually end goals in themselves.

"True" measures of success are typically some kind of desired (by the sender) action that is directly related to an email click. Like a whitepaper download, a webinar registration or a purchase.

But even if you have tracking that follows someone from an email through to a website "conversion", that's not truly accurate either. There are two core issues here.

First (as with many online channels) we fail to capture all the actions driven by the email. Your promotional message might get a click and an online sale. It might also lead, for example, to the recipient:

  • visiting your website directly and purchasing a product not advertised in the email
  • doing a related search on Google that leads to your site and a sale
  • doing a related search on Google that leads to your site via a PPC ad (costing you money)
  • doing a related search on Google that leads to a purchase at a competitor's site (lost future sale)
  • visiting your high street store to make purchases

It may also affect whether they buy more or less from you in the future through the positive (or negative) brand impression and awareness created.

And if they share the email, all these effects are repeated for people you don't even know exist.

Second, we capture some responses that would have happened anyway, even if you hadn't sent the email: the email simply changes the timing of an action.

An example would be where I already intend to buy a CD from Amazon for my wife's birthday when they send me a gift voucher via email.

In this case, the email actually costs Amazon money, since I would have bought the CD at full price if the email discount hadn't arrived first.

Our existing measures of success allow us to draw reasonable comparisons between emails.

But a more accurate picture of email's contribution to business success is important when it comes to budget allocation, investment decisions and comparing the value of different channels.

Some experts suggest simply comparing, for example, total sales recorded across all channels for newsletter subscribers with those for non-subscribers, and treating the difference as email's impact on purchasing behaviour.

That isn't a fair comparison, though. You'd expect people who sign up for your emails to be better customers anyway.

One promising solution, if you have appropriate access to customer data, is to use holdout groups.

Here you create a control group among your subscribers and do not send them email. Then you compare the "results" (across all channels) generated by the control group with those who do receive email to see the overall lift attributable to email.
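
A minimal sketch of that lift calculation, assuming you can tie revenue across all channels back to individual customers (the function and the figures are illustrative):

```python
def email_lift(mailed_revenue, mailed_count, holdout_revenue, holdout_count):
    """Revenue per customer in the mailed group vs. the holdout group.
    The difference, scaled to the mailed group, is the lift attributable to email."""
    per_mailed = mailed_revenue / mailed_count
    per_holdout = holdout_revenue / holdout_count
    lift_per_customer = per_mailed - per_holdout
    return lift_per_customer, lift_per_customer * mailed_count

per_customer, total = email_lift(mailed_revenue=1_250_000, mailed_count=95_000,
                                 holdout_revenue=58_000, holdout_count=5_000)
print(f"{per_customer:.2f} per customer, {total:,.0f} in total")
```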

This approach is outlined in detail by Kevin Hillstrom here and here.

Even if holdout groups are not an option, an awareness of email's indirect impacts can fine-tune your analysis. You can, for example, map incoming search engine referral volumes against email campaigns and see if there is any association.
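
One rough way to make that mapping, sketched with invented data: compare branded search referrals in the days after each send with the days before it.

```python
from datetime import date, timedelta

def search_referral_lift(daily_referrals, send_date, days_after=3, days_before=7):
    """daily_referrals: dict mapping date -> branded search referral count.
    Returns the ratio of average referrals after a send to the pre-send baseline."""
    after = [daily_referrals.get(send_date + timedelta(days=i), 0)
             for i in range(1, days_after + 1)]
    before = [daily_referrals.get(send_date - timedelta(days=i), 0)
              for i in range(1, days_before + 1)]
    baseline = max(sum(before) / len(before), 1)
    return (sum(after) / len(after)) / baseline

# Toy data: branded search referrals jump in the three days after a 23 Nov send.
referrals = {date(2010, 11, d): n for d, n in
             [(16, 110), (17, 95), (18, 102), (19, 98), (20, 90),
              (21, 88), (22, 105), (24, 160), (25, 150), (26, 140)]}
print(f"lift: {search_referral_lift(referrals, date(2010, 11, 23)):.2f}x")
```

A ratio consistently above 1.0 across campaigns suggests the emails are driving search visits that post-click tracking never credits to them.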

Kevin tells me that "more often than not", email marketing value as measured by overall lift is greater than when measured simply by looking at direct post-click conversions. And who wouldn't want to take that result to the boss?


By Mark Brownlow

Mark Brownlow is a former email copywriter and publisher of the retired Email Marketing Reports site. He now works as a lecturer and writer. Connect with him via Lost Opinions.
