A/B testing, or split testing, compares two different versions of a web page to see which one is more successful. Each version is shown to a different group of website visitors at the same time, and the one with the better conversion rate wins.
Every website has a goal that it’s trying to achieve. Some websites want their visitors to buy a product they’re selling, some aim to get visitors to sign up for a trial of a product and eventually become paying customers, while others hope readers will click on ads displayed on their site. Whatever the goal, the rate at which it’s achieved is known as the conversion rate.
Why A/B Testing?
A/B testing makes it possible for you to use the traffic you're already getting to your advantage. Increasing traffic is difficult and costly, while improving the conversion rate of existing traffic costs comparatively little. A/B testing offers a huge return on investment, since even a small change to a website's landing page can significantly increase conversion rates.
What Can You Use A/B Testing For?
You can run A/B testing on practically anything on your website. Here’s a list of the most commonly tested features.
- Subheadlines
- Paragraph text
- Call-to-action text
- Call-to-action button
- Content near the fold
- Social proof
- Media mentions
- Awards and badges
Aside from these, more advanced A/B tests can compare different price structures, sales promotions, trial lengths, free vs. paid delivery, etc.
Now that you have a general idea of how A/B testing can be used to increase website conversions, here are some tips to make the most of your tests.
1. The Control Stands
Many people treat the two variants being tested as equals. However, this isn't likely to bring significant improvements to your conversion rate. Unless a variant truly proves itself the better option, by a significant margin, the control stands. Otherwise, any "victory" for a variant could just as easily be attributed to a small sample size. The goal of A/B testing is to find variants of different variables that each create a significant increase in conversion rate, and then to combine them. A variant that performs only 2% better than the control won't contribute much to a lasting increase in conversions.
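One common way to check whether a variant's lead is real rather than small-sample noise is a two-proportion z-test. The sketch below uses only the Python standard library; the function name, the example numbers, and the conventional 0.05 threshold are illustrative choices, not something this article prescribes:

```python
from math import erf, sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?
    Returns the two-sided p-value."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 100 conversions out of 1,000 visitors; variant: 130 out of 1,000
print(f"p-value: {ab_significance(100, 1000, 130, 1000):.3f}")
```

With 1,000 visitors per version, a jump from 10% to 13% clears the conventional 0.05 bar; a smaller lift on the same traffic would not, which is exactly why the control stands by default.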
2. Make Sure You Have Enough Traffic
To make sure your A/B testing yields statistically significant results, make sure that at least 2,000 people visit the site while the test is running. If you don't have that volume of traffic yet, wait until you do to run A/B tests. Without a large enough audience, your results are likely to be skewed by the particular visitors you happen to get. Of course, the more visitors the better, but 2,000 is a good starting benchmark.
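The 2,000-visitor benchmark is a floor, not a target: the traffic you actually need depends on your baseline conversion rate and the smallest lift you want to detect. A rough sketch using the common 16·p(1−p)/δ² rule of thumb (a standard approximation for a two-sided test at 5% significance with 80% power; the function name is my own):

```python
from math import ceil

def sample_size_per_variant(baseline_rate, min_detectable_lift):
    """Approximate visitors needed per variant, via the
    16 * p * (1 - p) / delta^2 rule of thumb, where delta is
    the absolute difference in conversion rates to detect."""
    delta = baseline_rate * min_detectable_lift  # relative lift -> absolute
    return ceil(16 * baseline_rate * (1 - baseline_rate) / delta ** 2)

# Baseline 5% conversion rate, hoping to detect a 20% relative lift (5% -> 6%)
print(sample_size_per_variant(0.05, 0.20))
```

At a 5% baseline, reliably detecting a 20% relative lift takes on the order of 7,600 visitors per variant, which is why traffic well beyond the 2,000 minimum is always better.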
3. Run Tests For at Least 1 Week
With tests that run for less than a week, the results are too volatile to be conclusive and reliable. Tests can start with a huge improvement within the first day, but end up with a slight improvement or even a decline by the end of the week.
Running each test for a week also ensures that you get a full cycle of data. Website visitors may behave differently at the beginning of a work week than they do by the end, or on a weekend vs. weekday. By waiting a week to collect your results, you ensure that these variations in visitor behavior don’t affect your A/B testing.
4. Kill Tests with a Less than 10% Improvement
Although a 10% improvement over the control is still an improvement, chances are you can find a bigger one with a variant you're not currently testing. Since each test takes time, it's better not to waste it on unproductive tests. Rather than spending six months finding a winner that increases the conversion rate by 10%, it's better to spend that same time cycling through several different tests to find a winner that increases conversions by 25%. The longer a test runs, the higher the opportunity cost gets: if you let unsuccessful tests run too long, you miss out on chances to find big winners in that same amount of time. A 10% improvement just isn't significant enough to keep a test going.
5. Kill Tests Without a Winner After 1 Month
There has to be a point at which you cut off tests without a clear winner. It's important not to get attached to a specific variant when A/B testing; if you're personally rooting for one to win, you're more likely to let the test run longer, wasting precious time. If after a month a variant is not doing significantly better than the control, it's time to kill the test and declare the control the winner. Even if the variant is slightly ahead, that lead is likely to shrink as time goes on.
6. Plan Your Next Test While Waiting for Data
The point of A/B testing is to determine several successful variants, each of which changes a different aspect of the website. In order to do this quickly and efficiently, you should launch the next test right after you finish one. This means designing and building new variants while you still have one test running. This way, you’ll avoid downtime between tests and be able to go through as many variants as possible in a time-efficient manner.