Best Practice: How To Improve A/B Testing with UX Insight



Over the past five years A/B (or MV) Testing has rapidly grown in popularity amongst digital professionals. Access to affordable, easy-to-use tools backed by robust mathematics has made it relatively straightforward to incrementally improve website performance through live content experimentation.


Today, the question is no longer whether or not to adopt tools like Optimizely or Maxymiser, but how to run an efficient and systemised programme that a) maximises conversions, b) has a positive ROI and c) avoids performance improvements plateauing after the initial “low hanging fruit” wins.


In this article we examine how UX Insight is supercharging A/B Testing programmes by drawing on real-world implementations from WhatUsersDo’s clients with four best practice recommendations:


  1. Use Customer-struggle Insight to Prioritise Test Plans
  2. Develop Root-cause Hypotheses
  3. Improve Variant Quality
  4. Tackle Tough Problems.

*A/B Testing means both A/B and MV Testing for the remainder of this document.

1. Use Customer-struggle Insight to Prioritise Test Plans

It’s prudent to use multiple sources of insight to prioritise what to A/B Test next. But, in reality, too many revert to their hunches to decide.

On the face of it, prioritising A/B Tests should be straightforward. A quick search on Google reveals how CRO practitioners recommend approaching this; it often boils down to using data to identify the highest-value pages that are easiest to change. But, in reality, many teams tend to rely on hunches – especially those of influential executives – to prioritise their test plan.
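One way to make that data-driven prioritisation concrete is a simple scoring model, such as the well-known PIE framework (Potential, Importance, Ease). A minimal sketch follows; the page names and scores are illustrative only, not real client data:

```python
# PIE-style test prioritisation sketch (Potential, Importance, Ease).
# All page names and 1-10 ratings below are illustrative, not real data.

def pie_score(potential, importance, ease):
    """Average the three 1-10 ratings into a single priority score."""
    return round((potential + importance + ease) / 3, 1)

candidate_pages = [
    # (page, potential for improvement, importance/traffic value, ease of change)
    ("checkout", 9, 10, 3),
    ("product detail", 7, 8, 8),
    ("category landing", 5, 6, 9),
]

# Rank pages by score, highest priority first.
ranked = sorted(
    ((pie_score(p, i, e), page) for page, p, i, e in candidate_pages),
    reverse=True,
)
for score, page in ranked:
    print(f"{page}: {score}")
```

The point of a model like this is not the arithmetic but the discipline: scores from analytics and UX Testing replace the loudest voice in the room.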

This can lead to a plateauing of results after some initial wins (where the first “no-brainer” hunches proved right) because perceived rather than actual customer pain points are being tackled.

A straightforward way to avoid this is to identify actual customer struggle by observing target customers on the key journeys through UX Testing. Insight from this testing helps teams in three ways:

  • to identify the most impactful real-world problems experienced by users that may not be evident from mining site data alone
  • to counter a hunch-driven approach with compelling evidence – showing videos of customers struggling will convince even the most ardent executive
  • to quickly develop root-cause hypotheses (see section 2 below).

Real World Example

A high-end women’s clothes retailer had been using Optimizely for more than 12 months. Their first few A/B Tests, designed to address the hunches of the ecommerce team, resulted in decent uplifts (peaking at 5%). The success of these initial tests led them to continue relying on their own hunches to prioritise future A/B Tests. Over the following months the results were less impressive, even though the volume of testing increased.

Then, following two rounds of cross-device UX Testing on their key journeys, they re-prioritised their test plan to address the points of actual customer struggle that the testing revealed such as:

  • confusing returns policy wording that eroded trust
  • no support for SmartPhone pinch and zoom on product images
  • unclear delivery options.

Left to their own hunches, the team would never have prioritised running A/B Tests in these areas and their results would have continued to plateau.

2. Develop Root-cause Hypotheses

Insight from UX Testing helps optimisation teams develop robust root-cause hypotheses so that the test variants they design address an underlying issue that they understand.

If teams do not understand the root-cause of a conversion problem, they are often tempted to rely on guesswork or best practice to design variants for A/B Tests. This can limit the overall success of an A/B Testing programme, and can even lead to false positives: where an uplift is stumbled upon without the (more lucrative) underlying issue being addressed.
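Guarding against such false positives also means checking that an observed uplift is statistically meaningful rather than noise. A minimal sketch of a two-proportion z-test using only the Python standard library; the conversion counts are illustrative, not a client’s data:

```python
# Two-proportion z-test sketch for an A/B result; figures are illustrative.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: 4.0% vs 4.9% conversion over 5,000 visitors per variant.
z, p = two_proportion_z(conv_a=200, n_a=5000, conv_b=245, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant at the 5% level if p < 0.05
```

Commercial tools like Optimizely perform this kind of check automatically, but understanding it helps teams resist declaring winners too early.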

Real World Example

One online retailer with an increasing bounce rate on product category landing pages surmised that the lack of product filtering options was causing users to abandon. The team then developed design variants based on this hypothesis that had more granular filtering and ran A/B Tests – leading to some improvement, but not addressing the root-cause of the increasing bounce rates.

By observing customers landing on product category pages in a round of UX Testing, the team quickly identified the root cause of the abandonment: it was the sorting options (rather than filtering) that users were struggling with. The team then successfully reduced bounce rates in another round of A/B Testing.

3. Improve Variant Quality

Even with robust root-cause hypotheses, the success of any A/B Test is dependent on the quality of the design variants – how well do they address the root-cause problem?

An easy way to improve the quality of variants is for teams to gather UX Insight on mock-ups or prototypes and improve them during the design phase, before they are A/B Tested. There’s no need to wait for the finished designs and testing can be undertaken rapidly with the design team iterating on the test results.

Being confident that the design variants are of the best achievable quality maximises the likelihood of A/B Testing success.

Real World Example

One retailer identified that the absence of videos on their product pages was limiting the conversion opportunity. As they designed a “B” product page that included manufacturer product videos, they ran UX Tests to validate the variant quality with customers before running live A/B Tests.

The testing revealed that the manufacturers’ videos (highly stylised TV adverts) were of little value to users, who wanted to see products in context. They then developed a variant with videos that showed the product in a real environment (e.g. a kitchen) and used this style of video in the A/B Tests.

This resulted in a dramatic 8% uplift in online sales – an improvement that would not have been achieved had UX Testing not revealed why the team’s initial design was sub-optimal.

4. Tackle Tough Problems

Some complex conversion opportunities require a greater depth of insight before they can even be considered for A/B Testing.

Redesigning a menu structure is the most common example of where more extensive UX Testing is prudent. Menus can be complex and extensive – running Card Sorting and Tree Tests ahead of any live testing can save teams from designing and running what can prove to be very complex A/B Tests.

There are times when running A/B Tests – or, to be more precise, many A/B Tests in the wild – is simply not feasible. This often applies when compliance or consistency of experience is important: for example, the logged-in account area of a bank or utility company.

Real World Example

British Gas wanted to improve their online bills for customers. They developed four variants into visual prototypes. But, knowing that call centre staff would struggle to answer customer queries if they first had to deduce which variant was being served (had all four been tested in the wild), they ran an extensive round of UX Testing to determine a single variant to A/B Test. After gathering insight from over 250 customers, British Gas achieved a significant improvement with the winning variant from UX Testing.


These Best Practice Recommendations demonstrate how embedding UX Testing can make A/B Testing Programmes more successful and efficient, because team decision-making is improved through Insight – eradicating hunches and involving customers at every stage. Contact Us to discuss how this approach can help your organisation.