A/B tests should be used to answer small questions about design and user experience, but they should also be conducted on a continual basis, so that they illuminate the bigger picture and reflect the evolution of external factors (such as culture and technology). The goal of UX design is to create the conditions in which positive, successful user interactions can flourish. How that is done changes from product to product, but A/B testing can be implemented in just about any context.

To A/B test a UX concept, it’s important to first understand how that concept (or variable) fits into the larger user journey. Does the state of this variable affect the user’s ability to complete their mission with ease, and by how much?

From there, priorities can be set to help determine which variables to test first. The specifics of each test depend on the nature of the variable and the goal of the product, but there are general best practices to consider in just about any A/B test, particularly regarding UX.

Best Practices for A/B Tests
  1. Have a Clear Objective – What do you plan to test, and why? What is the desired outcome of the test, and how will it impact the business? Perhaps conversions on the website are down, which is a problem for acquiring new customers. You therefore plan to test the website’s “Request a Demo” form to determine whether it is related to the problem. The objective of the test is to identify issues affecting conversion and to restore or improve the conversion rate.

  2. Have Enough Data – An adequate sample size is needed in order to make sense of shifts in user behavior. There are online tools to help calculate this, including one by Optimizely, which does so based on the existing conversion rate and the desired minimum detectable effect (MDE). The MDE is the smallest relative change from the control’s baseline conversion rate that the test should reliably detect (see the sample-size sketch after this list).

  3. Have a Hypothesis – Once the input data is deemed sufficient and reliable, a hypothesis can be formed. Consider changing a “Subscribe!” button to “Find Out More!”; a potential hypothesis could be: “If the microcopy of a CTA button is changed, conversions will increase by at least 10% within a week.” As long as conditions are similar for the control and the variant, the hypothesis can be supported or refuted (see the significance-test sketch after this list). From there, new hypotheses can be formed, leading to better design and programming decisions that optimize user experience and maximize conversions.

  4. Keep Variables Constant – For an A/B test to matter, it must have the right sample size and duration, and it typically should not test more than four variables at a time. The tester should have a clear sense of the product’s purpose and how user behavior factors into it. With thoughtful, relevant hypotheses, meaningful changes can be made and tested against one another to create the best user experience possible.

  5. Always Be Testing – As we have discussed before with Usability Testing, you need to Always Be Testing, and the same applies to A/B testing. A/B testing lets you test very small tweaks or changes to your product or digital property. A key question to ask once the variant has been tested is whether the change correlated with any change in user behavior. Did it improve conversions at all? Did anything else of note change?

  6. Don’t Discount Small Improvements – Improvements should be considered over time: the compound effect of small monthly gains can be significant when viewed over a year (see the compounding example after this list). If you stumble upon a large monthly increase, great, but more often you are looking to move the needle slightly in the right direction every month.
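
To make the sample-size arithmetic in item 2 concrete, here is a minimal Python sketch of the classic two-proportion power calculation. Online calculators such as Optimizely’s may use different methodologies under the hood, and the 5% baseline rate and 10% relative MDE below are illustrative assumptions, not figures from this article.

```python
from statistics import NormalDist


def sample_size_per_variant(baseline_rate: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in EACH variant to detect the given relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)  # variant rate if the lift is real
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided, 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 80% power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1


# A 5% baseline conversion rate and a 10% relative MDE (5.0% -> 5.5%)
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 visitors per variant
```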
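
Once the test has run, the hypothesis from item 3 can be checked with a significance test. A minimal sketch using a two-proportion z-test follows; the visitor and conversion counts are hypothetical.

```python
from statistics import NormalDist


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value


# "Subscribe!" control vs. "Find Out More!" variant (hypothetical counts)
p_value = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=570, n_b=10_000)
print(f"p = {p_value:.3f}")  # ~0.03: below 0.05, so the lift is unlikely to be noise
```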
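
And a quick illustration of how the small improvements in item 6 compound, assuming a hypothetical 2% lift each month:

```python
# Assumed figure: a 2% relative improvement in conversion each month.
monthly_lift = 0.02
annual_lift = (1 + monthly_lift) ** 12 - 1
print(f"{annual_lift:.1%}")  # -> 26.8% cumulative improvement over a year
```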

