A/B Testing



Time: Depends

Difficulty: 🕹 Advanced

Materials: 📦 Two versions of a design; a way to serve different designs and track usage

People: 🕴 Designer, Engineers; 5% of Users

Overview

A/B testing: trying two different versions of a design on users to see which performs better. A/B testing should never be the only research method used on a project.

What

A/B testing: trying two different versions of a design on users to see which performs better.

Multivariate testing: trying multiple different versions of a design on users to see which performs best.

A/B testing collects only quantitative data.

Why

Advantages

  • Great for measuring the effect of design changes on key business metrics.
  • Measures the actual behavior of your customers under real-world conditions. If version B sells more than version A, you can confidently conclude that version B is the design you should show all users in the future.
  • It can measure very small performance differences with high statistical significance because you can throw boatloads of traffic at each design. For example, with enough traffic you can detect a 1% difference in sales between two designs.
  • It can resolve trade-offs between conflicting guidelines or qualitative usability findings by determining which choice is right for your individual circumstances. For example, there is mixed research on prominently displaying coupon-entry fields for online retailers; running your own A/B test resolves this conflict for your own site.
  • Cheap: once you’ve created the two design alternatives (or the one innovation to test against your current design), you simply put both of them on the server and use a tiny bit of software to randomly serve each new user one version or the other. You also typically need to cookie users so that they see the same version on subsequent visits instead of fluctuating pages, but that’s easy to implement (see the assignment sketch after this list). There’s no need for expensive usability specialists to monitor each user’s behavior or analyze complicated interaction design questions. You just wait until you’ve collected enough statistics, then go with the design that has the best numbers.
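
The serving mechanics really can be a few lines of server code. Below is a minimal sketch, assuming a Flask app and illustrative template names (both assumptions, not part of the method), of randomly assigning a new user to version A or B, keeping them on that version with a cookie, and enrolling only a small share of traffic in the test.

```python
# Minimal sketch of sticky random assignment for an A/B test (the Flask app and
# template names are assumptions for illustration, not part of the method).
import random

from flask import Flask, make_response, render_template, request

app = Flask(__name__)
COOKIE_NAME = "ab_variant"   # keeps a user on the same version across visits
TEST_TRAFFIC_SHARE = 0.05    # e.g. only 5% of users enter the experiment

@app.route("/landing")
def landing():
    variant = request.cookies.get(COOKIE_NAME)
    if variant not in ("A", "B", "off"):
        if random.random() < TEST_TRAFFIC_SHARE:
            variant = random.choice(["A", "B"])  # 50/50 split inside the test
        else:
            variant = "off"                      # outside the test: current design only
    page = "landing_B.html" if variant == "B" else "landing_A.html"
    resp = make_response(render_template(page))
    resp.set_cookie(COOKIE_NAME, variant, max_age=60 * 60 * 24 * 30)
    if variant in ("A", "B"):
        # Log an exposure event so later conversions can be attributed to the variant.
        app.logger.info("exposure variant=%s", variant)
    return resp
```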

Disadvantages

  • Can only be used for projects that have one clear, all-important goal, that is, a single KPI. Furthermore, this goal must be measurable by computer, by counting simple user actions: sales completed, clicks, downloads. Many goals are not measurable this way, for example improving brand reputation or how delightful the interface is.
  • Only works for fully implemented designs. It’s cheap to test a design once it’s up and running, but implementation can take a long time: you must fully debug an experimental design before you can expose it to real customers on your live website. A/B testing is thus suitable for only a very small number of ideas. You can combine A/B testing with paper prototyping to evaluate more ideas before committing to a full implementation.
  • Creates a focus on short-term improvements. You need to combine this method with qualitative studies to find bigger issues.

When

  • When you have a fully implemented design option to test
  • When you have time to combine it with another form of testing (examples: paper prototyping, qualitative research) for the most useful data.

Step 1 Make a research plan

Before building an A/B testing plan, research how the product is currently performing. Heatmap tools can show where users spend the most time, how far they scroll, and so on, which helps you identify problem areas in your product. User surveys are another popular research tool; they often highlight issues that aggregate data misses.
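
If your analytics tool can export raw click events, even a small script can surface the busiest parts of each page before you reach for a dedicated heatmap tool. The sketch below is illustrative only; the CSV file and its column names are assumptions.

```python
# Illustrative sketch: count clicks per page element from an exported event log
# to spot problem areas. "clicks.csv" and its column names are assumptions.
import csv
from collections import Counter

def click_counts(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["page"], row["element"])] += 1
    return counts

if __name__ == "__main__":
    for (page, element), n in click_counts("clicks.csv").most_common(10):
        print(f"{page:30} {element:25} {n:6d} clicks")
```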

Step 2 Ready your participants

Get closer to your business goals by logging research observations and creating data-backed hypotheses aimed at increasing KPIs. Without these, your test campaign has no direction. Qualitative and quantitative research tools can only help you gather user behavior data; it is your responsibility to analyze it and make sense of it. The best way to use the data you have collected is to analyze it, make keen observations, and draw product and user insights that you turn into data-backed hypotheses. Once you have a hypothesis ready, test it against various parameters.
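
One way to keep hypotheses data-backed and testable is to record each one in a structured form before the test starts. The fields below are illustrative, not a standard; the example reuses the form scenario discussed in the next step.

```python
# Illustrative structure for recording a data-backed hypothesis
# (field names and example values are assumptions, not a standard).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    observation: str      # what the research data showed
    change: str           # the design change you want to test
    expected_effect: str  # the KPI you expect to move, and in which direction
    metric: str           # how the effect will be measured

form_hypothesis = Hypothesis(
    observation="Many users start the checkout form but do not complete it",
    change="Remove the fields that ask for personal information",
    expected_effect="More users complete the form",
    metric="Form-completion rate",
)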

Step 3 Log and Process

The next step in your testing program is to create a variation based on your hypothesis and A/B test it against the existing version (the control). A variation is a modified version of your current design that incorporates the changes you want to test. You can test multiple variations against the control to see which one works best (multivariate testing). Create the variation based on your hypothesis of what might work from a UX perspective. For example, are too few people completing your form? Does the form have too many fields? Does it ask for personal information? You could try one variation with a shorter form and another that omits the fields asking for personal information.
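
As a minimal sketch of handling more than one variation, the control and variations from the form example can be named explicitly and users bucketed deterministically by hashing their ID, an alternative to the cookie approach shown earlier; all names here are illustrative.

```python
# Minimal sketch: name the control and two variations from the form example,
# then bucket a user deterministically from their user ID (names are illustrative).
import hashlib

VARIANTS = {
    "control":     "current form, all fields",
    "short_form":  "form with fewer fields",
    "no_personal": "form without personal-information fields",
}

def assign(user_id: str) -> str:
    """Hash the user ID so the same user always lands in the same bucket."""
    names = sorted(VARIANTS)
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return names[int(digest, 16) % len(names)]

print(assign("user-1234"))  # the same ID always returns the same variant name
```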

Step 4 Follow-up and Learn More

Kick off the test and wait the stipulated time to achieve statistically significant results. Keep one thing in mind: the conditions under which you administer the test, and the statistical accuracy you require, determine the end results. For example, the timing and duration of the test campaign have to be right. Calculate the test duration keeping in mind your average daily and monthly users, your estimated current conversion rate, the minimum improvement in conversion rate you want to detect, the number of variations (including the control), the percentage of users included in the test, and so on.
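
The calculator mentioned below is Bayesian; as a rough frequentist sketch, the textbook two-proportion sample-size formula turns the same inputs (baseline conversion rate, minimum improvement you want to detect, number of variants, and the share of traffic in the test) into an estimated duration. The numbers plugged in at the bottom are placeholders.

```python
# Rough frequentist sketch of test-duration estimation using a common
# normal-approximation sample-size formula (not the Bayesian calculator
# mentioned below); all inputs at the bottom are placeholders.
from math import ceil
from statistics import NormalDist

def required_days(baseline_rate: float, min_relative_lift: float, daily_users: int,
                  traffic_share: float, n_variants: int = 2,
                  alpha: float = 0.05, power: float = 0.8) -> int:
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_lift)   # smallest improvement worth detecting
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = NormalDist().inv_cdf(power)
    per_variant = ((z_alpha + z_power) ** 2) * 2 * p_bar * (1 - p_bar) / (p2 - p1) ** 2
    total_sample = per_variant * n_variants
    return ceil(total_sample / (daily_users * traffic_share))

# Placeholders: 2% baseline conversion, detect a 10% relative lift, 20,000 daily
# users, 5% of them in the test, two variants (control plus one variation).
print(required_days(0.02, 0.10, 20_000, 0.05))
```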

Use a Bayesian duration calculator to work out how long you should run your A/B test to achieve statistically significant results.

Step 5 Analyze and share

Even though this is the last step, where you find your campaign’s winner, analysis of the results is extremely important. Because A/B testing calls for continuous data gathering and analysis, this is the step where the whole effort comes together. Once your test concludes, analyze the results by considering metrics such as the percentage increase, the confidence level, and the direct and indirect impact on other metrics.

Online calculators for statistical significance in A/B testing can handle this arithmetic for you.
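
If you prefer a rough in-house check, the textbook two-proportion z-test is one option. This sketch assumes you already have visitor and conversion counts for each variant; the counts shown are placeholders.

```python
# Rough sketch of a two-proportion z-test for A/B results (textbook formula;
# the visitor and conversion counts below are placeholders).
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    relative_lift = (p_b - p_a) / p_a             # relative improvement of B over A
    return relative_lift, p_value

lift, p = ab_significance(conv_a=480, n_a=24_000, conv_b=540, n_b=24_000)
print(f"relative lift: {lift:.1%}, p-value: {p:.3f}")
```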

After you have considered these numbers, if the test succeeds, deploy the winning variation. If the test remains inconclusive, draw insights from it, and implement these in your subsequent tests.

A/B testing lets you systematically work through each part of your product to improve it.

Tools

None