There’s some bad news and good news.
Let’s get the bad news out of the way first, then move on to the good news.
The bad news is: what you’ve been taught about how to determine whether an A/B test result is valid is probably wrong!
That’s because you’ve probably been taught using Frequentist Statistics.
While there’s nothing wrong with Frequentist Statistics per se, the model doesn’t work that well when applied to A/B testing.
In Frequentist Statistics, the only way to validly answer the question of whether your variant beat the control is by stating a null hypothesis. And here’s where it gets more mind-boggling. . .
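To make the null-hypothesis idea concrete, here is a minimal sketch of the standard Frequentist calculation behind most A/B test calculators: a two-sided, two-proportion z-test. The function name and the conversion numbers are hypothetical, chosen only for illustration; the null hypothesis is that both variants share the same true conversion rate.

```python
from math import sqrt, erfc

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    Null hypothesis: variant A and variant B have the same
    underlying conversion rate, so any observed difference
    is due to random chance.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate, as assumed under the null hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference in proportions under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution:
    # 2 * (1 - Phi(|z|)) == erfc(|z| / sqrt(2))
    return erfc(abs(z) / sqrt(2))

# Hypothetical example: 200/1000 conversions (A) vs 240/1000 (B)
p = ab_test_p_value(200, 1000, 240, 1000)
print(round(p, 4))
```

With these made-up numbers the p-value lands a little above 0.03, so a Frequentist would call the result significant at the conventional 5% level. Note what the p-value actually says: the probability of seeing a difference this large *if the null hypothesis were true*, not the probability that B is better than A.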
Understanding and calculating statistical significance is quite complex. Many experimenters don’t truly know what statistical significance is or how to derive a statistically significant test result. To properly call a winning (or losing) A/B test, it’s essential to clearly understand what a statistically significant result is and what it means. This article, written in plain English, is here to set it all straight for you.