Does including a step-by-step indicator in the checkout help provide clarity and reassurance, getting more people to complete the checkout process? Or does having this visual indicator completely work against conversions, creating distraction and hindering the user experience? Take your best guess on this smart step-by-step checkout study to find out.
To increase form completions, is it better to show a progress bar indicating how far users have come, and how much more is left to complete? Or does doing so turn off visitors -- especially on small mobile devices where screen real estate is already tight? Take your best guess on this progressive progress bar A/B test to find out.
When trying to get more people to fill out your application form, is it best to present more information by breaking the form into multiple short steps? Or does it work better to remove extra steps, keeping the form short and sweet? Take your best guess on the powerful page path test to find out.
One of the most debated testing topics is: how large does a sample need to be to get trustworthy test results? Some argue samples of more than 120,000 visitors per variant are needed before results can be trusted. Ishan Goel of VWO disagrees. What does he think is needed to get trustworthy test results? Listen to this webinar recording to find out.
The testing landscape is about to turn on its head. With the Google Universal Analytics (UA) platform gone, GA4 in, and Google Optimize sunsetting, it's a lot to keep up with! What should you do to stay ahead of all these major changes? Analytics expert Dana DiTomaso clearly spells out your next best steps.
Soon, third-party cookies will be blocked by most major web browsers, leaving optimizers and experimenters scrambling to pick up the pieces. In A/B testing, there will be no easy way to track user events or behaviors -- let alone measure conversions. What's going to happen to A/B testing in a cookieless world?
Join the Best in Test awards ceremony. Submit your best tests and see who wins the testing awards.
A primer explaining the 4 different types of tests you can run, what they mean, and how you can use each to improve your competitive testing advantage.
To get users clicking your content, which format works best: buttons or links? A series of 8 real-life A/B tests suggests one format consistently outperforms the other. Can you guess which version wins? Check out the mini meta-analysis to find out.
How many visitors do I need to run a statistically significant A/B test and achieve valid, reliable results? This article will tell you everything you need to know to correctly calculate sample size and test duration for a split-test. Learn sample size best practices and industry-standard tools.
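The full calculation is in the article, but the standard textbook formula for a two-proportion test can be sketched in a few lines. This is a hedged illustration, not the article's own calculator: the function name and the 10%-to-12% example numbers are assumptions for demonstration.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a change from baseline
    conversion rate p1 to variant rate p2, using the normal-approximation
    formula for a two-sided, two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: detecting a lift from a 10% to a 12% conversion rate
# at the common 95% confidence / 80% power settings.
n = sample_size_per_variant(0.10, 0.12)
```

Note how sensitive the result is to the effect size: halving the expected lift roughly quadruples the required sample, which is why test duration planning matters as much as the raw traffic number.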
Calculating the Minimum Detectable Effect (MDE) can feel like a hair-pulling, speculative exercise. Luckily, this step-by-step guide is here to show you exactly how to accurately calculate your MDE to yield powerful, trustworthy test results.
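For readers who want the gist before the full guide, the MDE can also be approached from the other direction: given the traffic you actually have, what is the smallest lift the test can reliably detect? Below is a minimal sketch using the common pooled-variance approximation; the function name and the 5,000-visitor example are illustrative assumptions, not values from the guide.

```python
from math import sqrt
from statistics import NormalDist

def minimum_detectable_effect(baseline, n_per_variant, alpha=0.05, power=0.80):
    """Smallest absolute lift over `baseline` that a two-sided,
    two-proportion z-test with n_per_variant visitors per arm can
    reliably detect (pooled-variance approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    std_error = sqrt(2 * baseline * (1 - baseline) / n_per_variant)
    return (z_alpha + z_beta) * std_error

# Example: with a 10% baseline and 5,000 visitors per variant,
# only fairly large lifts are detectable at 95% confidence / 80% power.
mde = minimum_detectable_effect(0.10, 5000)
relative_mde = mde / 0.10  # express as a relative lift over baseline
```

Working backwards like this is often less hair-pulling than guessing an effect size up front: traffic is a known constraint, and the MDE tells you honestly whether the test is worth running.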
Low-sample testing is problematic and can produce hugely misleading data, where tests appear to be enormous winners but, in actuality, aren't. This article presents a practical 5-step approach to overcome the traps of low-sample testing and arrive at valid, trustworthy results. What are the 5 steps? Find out here.