A/B testing should be a huge part of your marketing efforts. When I work with clients, I see them make the same mistakes again and again. Here's how to avoid them.
Have you been A/B testing for a while, but not getting any results out of it?
If yes, then you are likely doing it wrong!
Although many businesses get significant results from A/B testing, plenty of others complain of inconclusive results.
The reason behind this inconsistency is that A/B testing is a highly sensitive process and even one small mistake can completely disrupt it.
In fact, I'd say that badly performed A/B tests can cause more harm than good.
To help you avoid wasting your time, effort, and resources, here are some of the most common A/B testing mistakes I've seen many of my ecommerce clients make.
Let’s get started!
1. Testing the Wrong Page
When it comes to A/B testing for websites, one of the most common mistakes businesses make is that they test the wrong pages.
But how do you figure out which webpages to test and which ones to skip?
Most of the time, you should start by A/B testing the pages of your funnel that contribute most to your conversions.
Think of pages that directly capture leads or generate sales, such as PPC landing pages, category pages, and product pages, or that contribute indirectly by engaging and convincing your visitors, such as the home page and your blog.
You should look at your funnel not only in terms of conversion rate, but also keep in mind the traffic each page is getting.
For example, if most of your traffic comes from PPC and visitors arrive on your product pages, then these are probably the most important pages to A/B test on.
If you have more SEO traffic to your blog, then getting these pages to convert better can only increase your revenue.
And of course, for any ecommerce store, the cart and checkout are primary conversion pages and should be A/B tested on an ongoing basis.
In a nutshell, you should only test the pages of your website that are a key part of your sales funnel.
2. Testing Without a (Valid) Hypothesis
Many of my clients test elements on their website based on a blog post showing someone else's win, or on a whim because a test simply seems like a good idea.
And often, that's why they don't get solid results.
What you need to have is a testing hypothesis before setting up any A/B test.
In case you're not familiar with the term, an A/B testing hypothesis is a statement that highlights potential problems on a webpage, proposes tentative solutions to them, and predicts the impact that applying those solutions will have on your website's performance KPIs.
Simply put, a hypothesis helps you figure out what elements need to be changed, why they need to be changed, and how those changes are expected to contribute to your business.
When it comes to creating a hypothesis for A/B testing, many businesses disregard the step entirely, while others use wrong, weak, or meaningless hypotheses that have no real impact on the testing process.
Testing without or with an invalid hypothesis can significantly reduce your chances of success.
It’s like playing the lottery; the chances that the results will be in your favor are very low.
Here are a few guidelines on how to develop a hypothesis for A/B testing:
- First of all, use analytics software or heatmaps to track and measure visitors’ actions. This will help you see how people behave in your funnel. It will also help you determine the dominant user behavior: are more people signing up for your newsletter? Do the majority of your visitors leave after downloading a free resource? Or are most of your visitors actually converting into customers by completing a purchase?
- The figures will help you determine the weak areas or leaks in the funnel and will also enable you to speculate about why you are facing certain problems. For example, if you are getting a good amount of traffic on your landing page, but only a small number of people fill out the form to download the free resource, there may be a problem with the call to action or the copy; it could be wrong or weak.
- Then, use qualitative tools like Hotjar to survey your visitors and ask them why they are not buying from you.
- The next step is to come up with the changes you think will resolve the issues you uncovered and improve your results. In the scenario above, for example, you could test the landing page with a different call to action.
- Determine the metrics you will measure to conclude whether the change made any difference, such as an increase in signups (in the example above).
Now that you have made an observation, identified the possible reason(s) for the problem you are facing, figured out a possible solution for it, and have decided how you will measure the results, you have formulated a valid hypothesis for your A/B testing.
For example: “We know from Google Analytics that many visitors don't move past the product pages, and from surveying our visitors that many want quick delivery. Our hypothesis is that by highlighting the fact that we use DHL Express, which guarantees 2-day delivery, we will get more sales from these visitors.”
3. Testing Too Many Variations or Multivariate Testing
When going for A/B testing, people often get carried away and test multiple variations at once.
It's all fine when you get millions of visitors per month, because you'll have plenty of traffic going into your test.
However, for most ecommerce stores you should probably focus on a single variation at a time. This will allow you to get results more quickly.
Moving fast and iterating quickly is a key factor in a successful A/B testing campaign.
The same goes for multivariate testing (MVT). These tests change multiple elements at once, multiplying the number of variations, and should be avoided unless you have tons of traffic.
4. Running Too Many A/B Tests at the Same Time
Although running multiple A/B tests at the same time isn’t wrong, there comes a point where you risk skewing your results, for example if you have a dozen tests going.
This is because the more tests you run at a time, the bigger the sample size (traffic) you will need in order to get reliable results.
Also, the tests may interfere with each other, which then will affect the results.
For example, you wouldn't want to test a new 30-day guarantee in the checkout while another test on the product page shows a 7-day guarantee. This will confuse your visitors and probably kill your sales.
Performing fewer tests will take more time, but the insights they provide make them worth the time and effort.
I typically limit live tests to about 5, which enables me to test on many of the key pages of my clients' funnels without damaging the results.
5. Ending the Test Too Early
To get meaningful and reliable results (meaning a statistical significance of 90-95%), every A/B test has to run for a certain amount of time.
This ensures that the results are drawn from a significant amount of data and are reliable enough to base new decisions on.
Many marketers make the mistake of stopping a test as soon as they see a huge lift (or big decrease) or when their A/B testing platform triggers a statistically significant result.
This is often a huge mistake.
To get meaningful results that help you make real improvements, I recommend you determine the sample size before starting the test.
A useful tip for resisting the urge to end the test early is to not check the results before the end of the set time period.
But, how long should you run an A/B test?
The right duration for an A/B test varies across businesses because it depends on the number of variants, the number of visitors per variant, the conversion rate of the page, and the lift the variation shows over the control.
For example, a 50% lift will take much less time than a 10% lift.
I typically recommend running an A/B test for at least seven days, aiming for a minimum of 100 conversions, and waiting until you have 95% statistical significance before calling it.
Others believe that you should run the tests for about three to six weeks and get a 99% statistical significance.
In an ideal world, a higher statistical significance is better, but there is a cost to it. Unless you have a ton of traffic, going from 90% to 95% and even 99% can take a really long time. And that's time spent not running additional tests.
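If you want to put rough numbers on that cost, a standard two-proportion sample-size formula shows how the required traffic changes with the lift you want to detect and the significance level you insist on. Below is a minimal Python sketch; the 3% baseline conversion rate, 500 daily visitors per variant, and 80% statistical power are illustrative assumptions, not figures from any particular store.

```python
# Rough sample-size estimate for a two-variant A/B test, using the standard
# two-proportion z-test approximation. All numbers here are illustrative.
from math import sqrt, ceil
from statistics import NormalDist

def visitors_per_variant(baseline_cr, relative_lift, significance=0.95, power=0.80):
    """Approximate visitors needed per variant to detect a relative lift."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Example: a product page converting at 3%, with 500 visitors per day per variant.
for lift in (0.50, 0.10):
    for significance in (0.90, 0.95, 0.99):
        n = visitors_per_variant(0.03, lift, significance)
        days = ceil(n / 500)
        print(f"{lift:.0%} lift at {significance:.0%} significance: "
              f"~{n:,} visitors per variant (~{days} days)")
```

With these example numbers, detecting a 10% lift takes roughly twenty times as much traffic as detecting a 50% lift, and pushing from 90% to 99% significance nearly doubles the required sample. That's the time cost I'm talking about.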
That's why I typically prefer to be a bit more aggressive: it lets me test more and compound wins on top of other wins.
That's when real business growth happens.
6. Running the Test for Too Long
I've just touched on this, but you can also run A/B tests for too long, and that has the same impact (if not worse) as cutting them short.
In addition to wasting your time and resources, this can possibly ruin your results by increasing the chances of certain external factors impacting the process.
General market trends, competitors’ actions, and changes in the quality of traffic are some external factors that are beyond your control.
And running an A/B test for a very long period significantly increases the chances that these factors pollute the data, which then ruins the results.
7. Testing Too Early
Another common A/B testing mistake is that many businesses start running tests too soon, before they have enough traffic, for example a couple of days after launching a new product or campaign.
But this is just a waste of time and resources, because in such scenarios you don’t have enough data to compare your results against.
Take, for example, a newly launched product where you are running Facebook ads to the landing page and you'd like to test the headline copy.
[NOTE: If you are new to writing copy, I highly recommend checking out Copywriting Secrets.] When you set your campaign live, you are probably sending some traffic that is not qualified for your offer. In other words, you haven't yet cleaned up your Facebook campaign and filtered out the audiences that aren't converting.
So basically, you are sending junk visitors to your landing page, which will in effect hurt your test.
It's typically best to wait a week, until you've filtered out these users, before activating your test.
8. Not Considering Different Traffic Sources
It is common knowledge that your website traffic comes from different sources, and in most cases visitors also see different designs and/or messages (your ads) before they land on your website.
If you monitor user behavior with tools like Hotjar, you will notice that the visitors coming from different sources also interact differently with your website.
All these factors make it necessary that you create different A/B tests for different traffic sources.
If you have too many traffic sources, pick the ones that drive the most traffic and evaluate their conversion rates, bounce rate, pages visited per session, and exit rate.
These metrics will help you figure out your best performing sources, the problems across different sources, and the potential reasons behind them.
All these factors combined will enable you to come up with a valid hypothesis, which will then help you decide whether you need separate A/B tests for different traffic sources and how to set them up.
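As a concrete illustration, here is a minimal sketch of that per-source evaluation, assuming you can export session data to a CSV. The file name and the source, converted, bounced, and pages columns are made up for the example, not taken from any specific analytics tool.

```python
# Minimal sketch: compare visitor behavior by traffic source from a
# hypothetical analytics export (sessions.csv with columns:
# source, converted, bounced, pages).
import pandas as pd

sessions = pd.read_csv("sessions.csv")

by_source = sessions.groupby("source").agg(
    visits=("source", "size"),
    conversion_rate=("converted", "mean"),    # share of sessions that converted
    bounce_rate=("bounced", "mean"),          # share of single-page sessions
    pages_per_session=("pages", "mean"),
).sort_values("visits", ascending=False)

# Focus on the sources that drive the most traffic.
print(by_source.head(5))
```

If one high-traffic source converts noticeably worse than the rest, that's a strong candidate for its own hypothesis and its own A/B test.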
Conclusion
A/B testing mistakes are more frequent than you think.
Almost every business owner and marketer makes mistakes in the beginning, so don’t worry if your initial tests have failed or produced insignificant or invalid results.
However, make sure you learn from your mistakes and remember the tips discussed in this article so you stop wasting your time and resources. Do your homework and plan your tests properly before setting them up; attention to detail and patience are your best friends here.
Have you made any of the A/B testing mistakes we have discussed here?
Let us know how you dealt with them and improved your results!
Next Steps...
1. See which tools I use & recommend to clients:
I've hand-picked the best performing tools and resources I use every day or that I've seen my clients use with great success. To get tactics and copy to build a better funnel, see Dotcom Secrets, Copywriting Secrets and Funnel Scripts. If you do affiliate marketing, I recommend you take a look at The Affiliate Bootcamp, ClickMagick and Active Campaign.
2. Get my FREE funnel:
Get 3 “Done For You” + 100% automated sales funnels. They're free. I will share these with you in PDF and via ClickFunnels.
3. Book your FREE 30 min strategy call:
Since 2012, I’ve helped generate over $100 Million in extra sales for Fortune 500s and SMEs. Let's see how I can help you and what you need to grow your sales and revenue. I use psychology, persuasion and science to get more leads or sales, and I'll be 100% transparent on the best strategy for you, even if that's not me. Book your call now.
Finally, you can also check out my other articles on funnels and Shopify and ecommerce.
PLUS, these results are not typical and your experience will vary based upon your effort, education, business model, and market forces beyond my control. I make no earnings claims or return on investment claims, and you may not make your money back.