B/A testing is the opposite of A/B testing.
In B/A testing, you take the Baseline conversion rate, form data-driven hypotheses, implement changes, and Assess the conversion rate over a reasonable period of time, based on your sales cycle.
If the conversion rate is flat or down, your assessment shows you that tweaks need to be made.
You then go back to the drawing board, form data-driven hypotheses on what you think is lagging, and change those aspects.
Then, again, assess, comparing the Before to the After.
Baseline/assess. Before/after. Baseline/assess. Before/after. B/A. B/A.
You continue this iterative process until you see conversion rates increase.
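If it helps to see the loop spelled out, here's a minimal sketch in Python, assuming you can pull visitor and conversion counts for each period. The function names and numbers below are purely illustrative, not tied to any particular analytics tool:

```python
# A minimal sketch of one B/A cycle. Assumes you can pull visitor and
# conversion counts for each period; names and numbers are illustrative.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate as a fraction, e.g. 0.0232 for 2.32%."""
    return conversions / visitors if visitors else 0.0

def assess(before: float, after: float) -> str:
    """Compare the Before (baseline) rate to the After rate."""
    if after > before:
        return "Rate is up: keep the change and set it as the new baseline."
    return "Flat or down: back to the drawing board with a new hypothesis."

# Baseline period, then the assessment period after implementing changes.
before = conversion_rate(conversions=232, visitors=10_000)  # 2.32%
# ... implement data-driven changes, wait out one full sales cycle ...
after = conversion_rate(conversions=314, visitors=10_000)   # 3.14%
print(assess(before, after))
```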
This solution is particularly good for lower-traffic sites that don’t have the volume to run trustworthy tests but still want to experiment and optimize conversions.
The downside is that all traffic is exposed to the “variant” – the change(s) you’re implementing on the site.
But the upside is that, by monitoring the conversion rate exclusively, you know for certain whether sales are increasing. Or not.
There’s no more guesswork or relying on data you can’t fully trust. Just results.
Here’s a real-life client example of B/A testing in action:
As you can see, over the time period measured, the eCommerce conversion rate went from 2.32% to 3.14%, a solid 35.48% increase for the company, which resulted in many thousands of dollars more over the sales period! And counting.
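If you want to sanity-check the lift math yourself, the relative increase is just (after - before) / before. A quick sketch using the rounded rates shown above; the exact 35.48% reported presumably comes from the unrounded underlying figures:

```python
# Relative lift from the rounded rates shown above.
before, after = 0.0232, 0.0314      # 2.32% -> 3.14%
lift = (after - before) / before
print(f"{lift:.2%}")                # ~35.34% from rounded inputs; the exact
                                    # 35.48% above presumably reflects the
                                    # unrounded underlying data.
```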
And this example isn't isolated.
GuessTheTest has used this method for many happy clients and has seen conversion rates and sales figures increase by 30% or more in just a couple of months.
So, B/A testing may be a viable alternative when A/B testing just doesn’t make sense.
If you’re interested in learning more about how B/A testing could help you, get in touch for a free conversion audit.