
Why robots can ruin your email split testing results

By Tim Watson, 28 Jun 2011

A reminder to dive deeper into your email testing results

Split testing of email campaigns is a great way to learn and improve your results. Testing works by changing particular campaign elements, sending both the original (control) and new version (treatment), and then measuring the difference in results.
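For concreteness, here is a minimal sketch of how a list might be split randomly into control and treatment cells. The function name and addresses are hypothetical, not from any particular email platform:

```python
import random

def split_test_cells(recipients, treatment_fraction=0.5, seed=42):
    """Randomly assign recipients to control and treatment cells."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * treatment_fraction)
    # First slice gets the treatment; the remainder is the control
    return shuffled[cut:], shuffled[:cut]

control, treatment = split_test_cells(
    ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]
)
```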

However, test results can be ruined when an additional factor affects one test cell's results but not the others. When this happens you can pick the wrong winner and end up decreasing campaign performance and revenue.

I was recently running a test and was hit by an external factor that, without correction, would have led to the wrong conclusions.

When diving into the results of one test cell I noticed that a single email address had clicked five times on every link in the email. On investigation, these turned out not to be clicks from a human but clicks by a corporate spam filter, which was automatically following every link in the email and so registering clicks on the campaign reports.
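As a rough sketch of how this pattern could be spotted, assuming the raw click log can be exported as a CSV with one row per click and email_address and link_url columns (both names are hypothetical):

```python
import pandas as pd

# One row per recorded click; the column names are assumptions
# about how your email platform exports its click log.
clicks = pd.read_csv("campaign_clicks.csv")

total_links = clicks["link_url"].nunique()

# Distinct links clicked and total clicks, per address.
per_address = clicks.groupby("email_address").agg(
    links_clicked=("link_url", "nunique"),
    total_clicks=("link_url", "size"),
)

# An address that clicked every single link, several times each, is far
# more likely to be a spam-filter robot than a human reader.
suspects = per_address[
    (per_address["links_clicked"] == total_links)
    & (per_address["total_clicks"] >= 2 * total_links)
]
print(suspects)
```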

Spam filter robots do this to check that the links do not lead to a virus or a site with inappropriate content. Such automated link clicking is not common and is mostly used by corporate IT, so it will affect B2B campaigns more than B2C.

Under normal circumstances a few extra robot clicks would not be an issue when reviewing the success of your campaign. However, by design split test cells are small, so the activity of a single address has a much greater relative impact.

In this case the one email address added five clicks to each of the 20 links, giving this one test cell a total of 100 additional clicks. This was enough to skew the result and make the cell look like a winner. Once spotted, the fix was easy: remove the clicks from this address before judging the winner.
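Continuing the sketch above, removing the robot's clicks before judging the winner is a simple filter. The test_cell column is a further assumption about how the log records which cell each recipient belongs to:

```python
# Drop every click from the flagged addresses.
clean_clicks = clicks[~clicks["email_address"].isin(suspects.index)]

# Recompute unique clickers per test cell on the cleaned data.
cell_clicks = clean_clicks.groupby("test_cell")["email_address"].nunique()
print(cell_clicks)  # robots excluded, so the comparison is now fair
```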

The take-home is to always look at all metrics and reports when testing. Review the detail as well as the top line, and look for any unexpected patterns that could be due to an external factor.


By Tim Watson

Tim is the founder of email marketing consultancy Zettasphere and an EOS Implementer at Traction Six. His experience includes Operations Director at Email Reaction and fractional Marketing Director in the US, where he delivered a 310% revenue increase to $5m. Tim has 15 years of email marketing expertise with a heavily analytical approach to strategic choices. Connect with Tim via LinkedIn or Twitter.
