Campaign A/B testing
We have an exciting new feature for you that will help you achieve more efficient campaigns - A/B testing. With A/B testing, two different versions of your campaign are sent to a subset of your recipients first (half gets version A, the other half version B). The more successful of the two versions is then delivered to the remaining recipients.
An A/B test lets you evaluate the success rate of each version and deliver the stronger one, maximizing the overall results of your email marketing.
How does it work?
Simply put, an A/B test works like this:
- Prepare two different versions of your campaign.
- The system sends both versions to an evaluation subset of your mailing list.
- The version that reaches higher success within the preset timeframe is sent to the remaining recipients of your mailing list.
…but let's go step by step.
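To make the split mechanics concrete, here is a minimal sketch in plain Python. The function name and the rounding behaviour are illustrative assumptions, not Mailkit's actual implementation:

```python
def split_test_groups(total_recipients: int, test_percent: int) -> tuple[int, int, int]:
    """Split a mailing list into variant A, variant B and the final group.

    Assumption: the test group is split evenly between the two variants,
    as described above (half gets version A, the other half version B).
    """
    test_size = total_recipients * test_percent // 100
    group_a = test_size // 2
    group_b = test_size - group_a
    remainder = total_recipients - test_size
    return group_a, group_b, remainder

# A 40% test group on a 100-recipient list: 20 get A, 20 get B,
# and 60 wait for the winning version.
print(split_test_groups(100, 40))  # (20, 20, 60)
```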
How to set up a campaign for A/B testing?
The options for versions A and B differ based on your account type. Mailkit Base users can use a different subject and sender while the content remains the same; Mailkit Syndicate and Agency users can also use two different versions of the content as well as different templates. The two versions can also differ in format and image attachment settings.
Settings for A/B testing can be found on the campaign details page.
As soon as you click the Enable A/B test button, you will get access to the A/B testing options.
A/B testing options:
- Percentage of recipients to include in the test group - select the size of your test group. If you choose 40% of recipients to be part of the test group and send your email to 100 recipients in total, then during the test phase 20 recipients will receive variant A and 20 will receive variant B. After the evaluation period has passed, the remaining 60 recipients will get the more successful version.
- Test group evaluation period before final delivery - set the time limit for test delivery evaluation. The minimum time is 10 minutes, the maximum is 4 days. When setting the evaluation period, always consider the size of your test group as well as the evaluation factor (e.g. Opens can be evaluated sooner than Conversions).
- Evaluate the test group results by one of the following methods (illustrated in the sketch after this list):
- Opens - The number of recipients opening each version will decide which version will be used for the final delivery.
- Visits - The number of visits for each version will decide which version will be used for the final delivery.
- Clickthru rate - The click-thru rate - the ratio of visits to opens - for each version will decide which version will be used for the final delivery.
- Conversion rate - The conversion rate for each version will decide which version will be used for the final delivery. Please keep in mind that the conversion code MUST be implemented on your website to measure the conversions correctly. You must also allow a significant amount of time (1 day recommended) for test evaluation.
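For illustration, here is a minimal sketch of how the four evaluation methods might rank two variants. The data structure, function names, and the conversion-rate denominator are assumptions made for the example; the actual evaluation happens on Mailkit's side:

```python
from dataclasses import dataclass

@dataclass
class VariantStats:
    opens: int        # recipients who opened this version
    visits: int       # visits (clickthroughs) generated by this version
    conversions: int  # conversions reported by the conversion code

def metric(stats: VariantStats, method: str) -> float:
    if method == "opens":
        return float(stats.opens)
    if method == "visits":
        return float(stats.visits)
    if method == "clickthru":
        # click-thru rate: the ratio of visits to opens
        return stats.visits / stats.opens if stats.opens else 0.0
    if method == "conversion":
        # conversion rate; visits are assumed as the denominator here
        return stats.conversions / stats.visits if stats.visits else 0.0
    raise ValueError(f"unknown evaluation method: {method}")

def pick_winner(a: VariantStats, b: VariantStats, method: str) -> str:
    return "A" if metric(a, method) >= metric(b, method) else "B"

# The chosen method matters: A wins on opens, B wins on conversions.
a = VariantStats(opens=120, visits=40, conversions=2)
b = VariantStats(opens=100, visits=45, conversions=6)
print(pick_winner(a, b, "opens"))       # A
print(pick_winner(a, b, "conversion"))  # B
```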
Scheduling a campaign with A/B testing
Scheduling a campaign with A/B testing is similar to scheduling a standard campaign. You can schedule an immediate delivery or an unattended delivery at a specified time. If the Schedule button is not available, you have not set the senders for both versions of your campaign.
After a campaign with A/B testing has been scheduled, you get an option to abort the final delivery - that means that while the evaluation group has already received your campaign, the remaining recipients will not.
Later on you can check the reports and analyze the campaign's performance as well as the performance of the individual versions within the evaluation group under the A/B Test tab.
A few helpful tips and ideas before you set up an A/B test:
Consider whether an A/B test can help your campaign.
If your campaign is time-sensitive and you need to reach your recipients right away, A/B testing is not a good fit. If your mailing list is small (tens or low hundreds of emails), A/B testing will have little to no impact.
Consider the evaluation group size.
There is no ideal evaluation group size. Always consider the size of your mailing list and the content of your campaign, and calculate how many recipients will be participating. Example: if you have a mailing list with 1000 recipients and you choose to use 40% for evaluation, the test versions will be sent to 400 recipients (200 will get version A and 200 will get version B) and 600 will get the more successful version after the evaluation.
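Using the hypothetical split_test_groups helper sketched earlier, the same arithmetic can be verified:

```python
print(split_test_groups(1000, 40))  # (200, 200, 600)
```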
Consider the evaluation method.
Always consider the content and your campaign's goals. If it is most important that your recipients open the email, select Opens as the method. If your campaign has a call to action and you want the version that attracts more visitors, select Visits.
Consider the evaluation period.
The same goes for the evaluation period - there is a lot to consider. In general you need less time to evaluate Opens and Visits, while the conversion rate might take a few days to peak. The size of your mailing list needs to be taken into account as well - if your mailing list has 100k recipients, an hour might not be enough to get the right feedback.