Also known as inconclusive A/B tests or non-significant A/B tests.
An A/B test result in which the conversion rate difference between the tested variants (usually version A and B) is too small to be statistically significant.
Because the observed conversion rate difference is so small, it could plausibly be due to random chance; a similar result would be reasonably likely even if the tested variations performed identically.
In contrast, conclusive A/B test results, which are statistically significant, show a conversion rate difference that is unlikely to be due to random chance; such a result would be very unlikely if the tested variations actually performed the same.
Many tools calculate A/B test significance automatically.
However, interpreting the results and putting them in context to make an actionable decision about whether to implement a test design still requires an understanding of statistical concepts.
Knowing when to end a test that may or may not reach the desired confidence level is an important aspect of any testing project.
This guide to A/B testing significance can help you.
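To make this concrete, here is a minimal sketch of the kind of calculation such tools perform behind the scenes, using a two-proportion z-test in Python. The function name and the visitor and conversion counts are hypothetical, and real testing tools may rely on different methods (for example, Bayesian or sequential approaches); this is only an illustration of how a significance check works.

```python
# Minimal illustrative sketch: a two-proportion z-test for the conversion rate
# difference between variants A and B. All numbers below are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return the z statistic and two-sided p-value for the rate difference."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no real difference
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: 10,000 visitors per variant, 500 vs. 530 conversions
z, p = two_proportion_z_test(500, 10_000, 530, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# At a 0.05 significance threshold, p > 0.05 means the result is inconclusive:
# the observed difference could plausibly be explained by random chance.
print("conclusive" if p < 0.05 else "inconclusive")
```

In this hypothetical example the p-value comes out around 0.34, well above the usual 0.05 threshold, so the test would be reported as inconclusive even though variant B converted slightly better in the sample.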
Does an inconclusive result mean the test failed? Not necessarily.
Non-conclusive results happen more often than most marketers would like to admit.
However, inconclusive results don't necessarily mean your test failed.
Instead, they indicate the alternative version is not clearly better than the original. In other words, neither tested version definitively won.
So it's probably best to keep the page as is, rather than implement the alternative design.
That knowledge alone adds value and shows that every test is a useful learning experience, even one that didn't produce a winner.