As Internet marketers, we love our testing, and one of the greatest benefits of email marketing over direct marketing is the immediacy of testing results. By ‘pre-testing’, we can use that immediacy to improve the performance of our email campaigns.
A typical A/B test involves developing two versions of an email (e.g., different subject lines, with and without personalization), splitting the list of subscribers into two randomly selected groups, and sending a different version to each group. The test is often run multiple times, the results are analyzed, and the findings are used to inform future campaigns.
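To make the mechanics concrete, here is a minimal sketch in Python of that random 50/50 split. The subscriber IDs, the split_ab helper, and the fixed seed are illustrative assumptions, not part of any particular email platform's API.

```python
import random

def split_ab(subscribers, seed=42):
    """Randomly split a subscriber list into two equal-sized groups (A and B)."""
    shuffled = subscribers[:]             # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled) # fixed seed keeps the split reproducible
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Example: 200,000 hypothetical subscriber IDs
subscribers = [f"user_{i}@example.com" for i in range(200_000)]
group_a, group_b = split_ab(subscribers)
print(len(group_a), len(group_b))         # 100000 100000
```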
Most marketers start with what I’ll call ‘macro tests’, which involve larger issues such as testing different layouts or determining the best time of day and day of week to send. These macro tests are very important and establish best practices and guidelines for an email program.
However, there are situations in which elements specific to a campaign need to be tested – I’ll refer to those as ‘micro tests’. For example, maybe the creative director and product manager disagree on which photo should be used as the hero shot in the email, or there are questions about the arrangement of words in the subject line (i.e., which are most important to place toward the front). You could just A/B test the two approaches, sending each version to half of the list. However, if one version significantly outperforms the other, you will have lost opportunity by sending the worse-performing version to 50% of your list.
Let’s look at the results (similar to one of our clients’ recent campaigns) of an email that was A/B tested with 200,000 subscribers and in which version A outperformed version B:
Typical A/B Test Scenario
The good news is that we did 20% better than if we had sent version B to the entire list. The bad news is that we performed 20% worse than if we had sent version A to the entire list. Of course, we didn’t know which version would perform better before the send. Pre-testing allows us to reduce the risk of sending a worse-performing email to a large percentage of our list.
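The scenario above is based on a real campaign; to show the arithmetic with round, hypothetical response rates (say 6% for version A and 4% for version B), here is a quick sketch. The exact percentages depend on the real rates, but a 50/50 split always lands at the average of the two all-or-nothing outcomes.

```python
# Hypothetical response rates chosen only to illustrate the arithmetic;
# the actual rates behind the percentages above are not reproduced here.
LIST_SIZE = 200_000
RATE_A = 0.06   # assumed response rate for version A
RATE_B = 0.04   # assumed response rate for version B

all_a = LIST_SIZE * RATE_A                                    # send A to everyone
all_b = LIST_SIZE * RATE_B                                    # send B to everyone
split = (LIST_SIZE / 2) * RATE_A + (LIST_SIZE / 2) * RATE_B   # 50/50 A/B test

print(f"All A:      {all_a:,.0f} responses")
print(f"All B:      {all_b:,.0f} responses")
print(f"50/50 test: {split:,.0f} responses")
print(f"vs. all B:  {split / all_b - 1:+.0%}")   # upside captured over the loser
print(f"vs. all A:  {split / all_a - 1:+.0%}")   # opportunity lost vs. the winner
```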
A pre-test involves deploying the initial A/B test to a smaller, but statistically significant, percentage of subscribers first and then sending the ‘winning’ version to the remainder of the list. For example, using the same number of subscribers and response rates as in the example above, a pre-test sent to 20% of the list would generate the following results:
Pre-Testing Scenario
In this example, pre-testing improved results by 16% over straight A/B testing. The greater the difference in performance between the two versions, the more benefit (and risk reduction) pre-testing provides.
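Using the same hypothetical 6%/4% rates, the pre-test scenario can be sketched the same way. The 200,000-subscriber list and the 20% pre-test size come from the example above; the response rates are still assumptions, though with these particular rates the improvement happens to work out to the same 16%.

```python
# Same hypothetical rates as the earlier sketch (6% for A, 4% for B).
LIST_SIZE, RATE_A, RATE_B = 200_000, 0.06, 0.04
PRETEST_SHARE = 0.20                      # 20% of the list used for the pre-test

pretest_size = LIST_SIZE * PRETEST_SHARE
remainder_size = LIST_SIZE - pretest_size

# Pre-test: half of the 20% gets A, half gets B; the winner (A) goes to the other 80%.
pretest_responses = (pretest_size / 2) * RATE_A + (pretest_size / 2) * RATE_B
remainder_responses = remainder_size * RATE_A
pretest_total = pretest_responses + remainder_responses

straight_ab = (LIST_SIZE / 2) * RATE_A + (LIST_SIZE / 2) * RATE_B

print(f"Straight 50/50 A/B: {straight_ab:,.0f} responses")
print(f"20% pre-test:       {pretest_total:,.0f} responses")
print(f"Improvement:        {pretest_total / straight_ab - 1:+.0%}")  # +16% with these rates
```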
A few caveats about pre-testing:
- Pre-tests are not suitable for all situations. For example, some tests (like testing a new e-newsletter layout) are ones you will want to run multiple times with as many subscribers in the sample as possible. Also, you need to allow at least 24 hours between the pre-test and the send to the remainder of the list so that you have enough data to reach a conclusion; if the email is time-sensitive, you may not have time for a pre-test.
- Even though you want the pre-test groups to be small, the groups need to be large enough to be statistically significant; a rough sizing formula is sketched after this list. (For more on sample sizes and statistical significance, read Wayde Nelson’s response in a MarketingProfs knowledge exchange answer.)
- To help validate your approach to pre-testing, run a few tests where you conduct a pre-test with your two versions and then deploy an A/B test to the remaining subscribers. If you don’t see consistent results between the pre-test and the full A/B test, you need to pre-test with a larger sample size or check whether something else is impacting results (e.g., day of send).
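As a rough guide to the sample-size question in the second bullet, here is a sketch of the standard two-proportion sample-size formula at 95% confidence and 80% power. The 6%/4% response rates are the same hypothetical figures used earlier, not a recommendation; real-world sizing should reflect your own baseline rate and the smallest lift you care about detecting.

```python
import math

def sample_size_per_group(p1, p2):
    """
    Approximate subscribers needed in EACH pre-test group to reliably detect
    the difference between two response rates, using the standard
    two-proportion z-test sizing formula at 95% confidence and 80% power.
    """
    z_alpha = 1.96   # two-sided 95% confidence
    z_beta = 0.84    # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: detect a 6% vs. 4% response-rate difference.
print(sample_size_per_group(0.06, 0.04))   # about 1,861 subscribers per group
```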