A/A Testing - What You Need To Know

By: Deborah O’Malley | Last Updated: Nov. 2022


In this article, you’ll learn the basics of A/A testing and how to apply this information to optimize your website. You’ll learn:

- What an A/A test is
- Why you’d run A/A tests
- The advantages and disadvantages of running an A/A test
- Whether you should run A/A tests
- How to cross-check your data sources

What is an A/A test?

An A/A test is exactly what it sounds like: a split test that pits two identical versions against each other.


To perform an A/A test, you show half of your visitors version 1 and the other half version 2. The trick here is that both versions are exactly the same!
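As a concrete illustration, here’s a minimal sketch of how that 50/50 split might be implemented, assuming each visitor has a stable ID (such as a cookie value). The function name and experiment ID are made-up examples, not any particular platform’s API:

```python
# A minimal sketch of 50/50 traffic splitting for an A/A test.
# Assumes each visitor has a stable ID (e.g., a cookie value); the
# experiment name "aa_homepage" is a made-up example.
import hashlib

def assign_variant(visitor_id: str, experiment: str = "aa_homepage") -> str:
    """Deterministically bucket a visitor into version 1 or version 2."""
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "version_1" if int(digest, 16) % 2 == 0 else "version_2"

# Both arms render the exact same page; the split itself is what's under test.
print(assign_variant("visitor-123"))  # same answer on every visit
```

Hashing the visitor ID, rather than assigning randomly on each page load, keeps every visitor in the same arm across visits.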

So, why on earth would you run a test showing the exact same version to two different groups?


Why run A/A tests?


The reasoning is simple: to validate that you’ve set up and run the test properly, and that the data coming back is clean, accurate, and reliable.

According to Instapage, an estimated 80% of A/B test results are imaginary. They’re based on false positives, which is a fancy, statistical way of saying the results aren’t accurate. If you’re making optimization decisions based on inaccurate results, you’ve got a problem.

The only way to truly validate accuracy is to test the same variant against itself. If you get markedly different results, and one version emerges as a clear winner, you know there’s an issue: you might have noise in your data, or you might not have set things up properly.
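To see why false positives are so common, here’s a quick simulation sketch, assuming identical 5% conversion rates in both arms. The trial and visitor counts are arbitrary illustrations, and numpy and scipy are assumed to be installed:

```python
# A quick simulation, as a sanity check on the false-positive idea: even with
# two truly identical variants, around 5% of comparisons look "significant"
# at the conventional 0.05 threshold. All numbers here are illustrative.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
false_positives = 0
trials, visitors, rate = 2_000, 10_000, 0.05  # identical 5% conversion rate

for _ in range(trials):
    conv_a = rng.binomial(visitors, rate)  # arm A conversions
    conv_b = rng.binomial(visitors, rate)  # arm B conversions (same rate!)
    table = [[conv_a, visitors - conv_a], [conv_b, visitors - conv_b]]
    _, p, _, _ = chi2_contingency(table)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' A/A results: {false_positives / trials:.1%}")  # ~5%
```

At a 0.05 significance threshold, roughly 1 in 20 comparisons of identical variants will look like a winner purely by chance.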

If an A/A test comes back with roughly the same performance for each version, you know things are set up properly, and you’re good to go.

It’s slightly counter-intuitive, but with an A/A test you’re actually looking to ensure there is no difference in results between variants.
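Here’s a rough sketch of what that check might look like on a single A/A readout, using a two-proportion z-test. The conversion counts are hypothetical, and the statsmodels library is assumed:

```python
# A sketch of checking one A/A readout for a spurious difference, using a
# two-proportion z-test. The counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [510, 498]      # conversions in version 1 and version 2
visitors = [10_000, 10_000]   # visitors in each identical arm

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f}")

# For an A/A test you *want* a non-significant result: a p-value below 0.05
# would suggest a setup problem, a tracking bug, or unusually noisy data.
```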


Advantages of running an A/A test

There are a few advantages to running an A/A test. Doing so:

1. Gives you more certainty that you’ve set up and run the test properly, assuming there is little difference in results between variants. This verification is especially valuable if you’re new to setting up and running A/B tests.

2. Rules out the novelty effect. If you want to accurately test a change on a website people visit frequently and are used to, like Facebook, an A/A test helps you better understand how people are reacting. If one version of the same variant performs markedly differently than the other, you know the issue lies in the way your sample is reacting, not in the variant itself.

3. Helps you ensure you don’t have a Sample Ratio Mismatch (SRM) issue, in which the conversion numbers are skewed because one variant receives notably more traffic than the other(s). A quick way to check for this is sketched below.
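Here’s a minimal sketch of an SRM check, assuming a 50/50 intended split. The visitor counts are hypothetical, and the 0.001 threshold is a common convention for SRM alerts rather than a hard rule:

```python
# A minimal sketch of a Sample Ratio Mismatch (SRM) check.
# Assumes a 50/50 intended split; the visitor counts are hypothetical.
from scipy.stats import chisquare

observed = [50_812, 49_188]            # visitors actually seen per arm
expected = [sum(observed) / 2] * 2     # what a true 50/50 split predicts

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"SRM p-value: {p_value:.4f}")

# A very small p-value means the traffic split deviates from 50/50 more than
# chance allows: a red flag for the whole test, whatever the conversion data says.
if p_value < 0.001:
    print("Possible SRM: traffic allocation is skewed. Check your setup.")
```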


Disadvantages of running an A/A test

While running an A/A test can be beneficial, especially if you’ve never set up and run a test before, there are a couple of major disadvantages:

1. Doing so is resource-intensive. Testing takes time and requires resources to set up, design, develop, and run the test. Most organizations and agencies just want to start testing without doing "pre-work" to verify the testing environment is set up properly.

2. Running an A/A test can distract you from running real, valid tests that bring in additional revenue or conversions.


Should you run A/A tests?

My advice to you is this: if you’ve never set up and run an A/B test before, start with an A/A test. Doing so is low-risk and will give you the confidence to move forward with setting up a real test the next time round.

If you’ve run tests before, but feel uncertain about some aspects, and have adequate traffic, consider running an A/A/B test, in which you split traffic three ways: to versions 1 and 2, which are identical, and to version 3, which is different. When analyzing results, you should see no major difference between versions 1 and 2 (A/A) but will, hopefully, see a difference when comparing versions 1 and 2 (A/A) to version 3 (B).

An important word of caution: you’ll only be able to validly set up an A/A/B test if you have enough visitors, as you’ll be splitting traffic not just two, but three ways.
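If you do have the traffic, here’s a rough sketch of how three-way bucketing and the analysis plan might look, extending the earlier assign_variant idea. The experiment name is again hypothetical:

```python
# A rough sketch of three-way bucketing for an A/A/B test.
# The experiment name is illustrative; arms A1 and A2 are identical.
import hashlib

def assign_aab(visitor_id: str, experiment: str = "aab_pricing") -> str:
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return ("A1", "A2", "B")[int(digest, 16) % 3]  # roughly equal thirds

# Analysis plan:
# 1. Compare A1 vs A2 first. A significant difference means the setup,
#    not your change, is driving the results; stop and investigate.
# 2. Only if A1 and A2 agree, compare the A arms against B to read the
#    real effect of the change.
```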


Cross-check your data sources

Additionally, with any test, A/A or otherwise, it’s important not to rely on just one data source to validate your test results.

So, for example, if you’re running a test in a testing platform like VWO, it’s good practice to also set up and cross-check the data in an analytics platform, like Google Analytics, to ensure the results are consistent across both platforms. If there are large discrepancies in the data across platforms, you know there’s an issue, and the results should be further assessed.
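A cross-check can be as simple as comparing per-variant counts from both sources and flagging any gap beyond a tolerance you choose. In this sketch, the counts and the 5% tolerance are illustrative assumptions, not a standard:

```python
# A minimal sketch of cross-checking conversion counts between two data
# sources (e.g., a testing platform and an analytics platform).
testing_platform = {"version_1": 512, "version_2": 497}   # conversions per arm
analytics = {"version_1": 506, "version_2": 471}

TOLERANCE = 0.05  # flag discrepancies larger than 5% (an arbitrary choice)

for arm in testing_platform:
    a, b = testing_platform[arm], analytics[arm]
    diff = abs(a - b) / max(a, b)
    status = "OK" if diff <= TOLERANCE else "INVESTIGATE"
    print(f"{arm}: platform={a}, analytics={b}, diff={diff:.1%} -> {status}")
```

Small gaps between platforms are normal (different tracking methods rarely agree exactly); it’s the large, one-sided discrepancies that signal a problem.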

My recommendation is to set up custom dimensions in Google Analytics. Doing so will enable you to:

1. Be more confident in the test results you obtain

2. Perform further data analysis, beyond what’s provided in the testing platform.

Testing platforms like VWO and Optimizely have built-in Google Analytics integrations, so you can easily set up custom dimensions in Google Analytics. The exact set-up steps vary depending on the testing platform you’re using, so consult your platform’s documentation for configuring the integration.


Summary

A/A tests have utility. They can help you confirm your sample groups are being split and randomized properly, and that your tests are set up correctly.

A/A tests essentially test your tests.

As this real-life GuessTheTest A/A test case study shows, A/A tests can help you uncover testing problems and save you from implementing changes based on inaccurate results.

Hope you’ve found this information useful. Please share your thoughts and comments in the section below.

