
A/B Testing: The Most Effective Checklist

When marketers create landing pages, write email copy, or design call-to-action (CTA) buttons, it can be tempting to rely on intuition to predict what will make people click (and hopefully convert). However, basing marketing decisions on gut feeling can yield detrimental results. Rather than guessing, you’re much better off running A/B tests.

A/B testing is valuable because different audiences behave, well…differently. Something that works for one company will likely not work for another. Conversion rate optimization (CRO) experts hate the term “best practices.” Why? Well, it may not be the best practice for you.

However, A/B testing can also be complex. If you’re not careful, you could make erroneous assumptions about what people like and what makes them click. These decisions could easily misinform other parts of your strategy.

This post will teach you how to do A/B testing before, during, and after data collection so you can make the best decisions from your results.

What Is A/B Testing?

In order to run an A/B test, you need to develop two versions of the same piece of content with changes to a single variable. You then show these two versions to two similarly sized audiences and analyze which one performed better over a set period. Make sure that period is long enough to draw meaningful conclusions about your results.
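To make that concrete, here’s a minimal sketch in Python of the comparison at the heart of every A/B test: two similarly sized audiences, one changed variable, one shared metric. All of the counts are made up for illustration.

```python
# Minimal sketch of an A/B comparison (all counts are hypothetical).
visitors_a, conversions_a = 5_000, 400   # version A (the control)
visitors_b, conversions_b = 5_000, 475   # version B (the challenger)

rate_a = conversions_a / visitors_a      # 8.0% conversion rate
rate_b = conversions_b / visitors_b      # 9.5% conversion rate

lift = (rate_b - rate_a) / rate_a        # relative improvement of B over A
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  lift: {lift:+.1%}")
```

A raw comparison like this is only the starting point; steps 5 and 6 below cover how many visitors you need and how to confirm the difference is statistically significant.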


A/B testing helps marketers observe how one version of a marketing element performs alongside another. Here are two main types of A/B testing you may conduct in an effort to increase your website’s conversion rate:

Test #1: User Experience

Let’s say you want to see if moving a certain call-to-action (CTA) button to the top of your homepage instead of keeping it in the sidebar will improve its click-through rate.

In order to A/B test this theory, you’d create a second, alternative web page that reflects the CTA placement change. The existing design – aka the “control” – is version A. Version B is the challenger. From there, you’d test these two versions by showing each of them to a set percentage of visitors. Ideally, the percentage of visitors seeing either version is the same.

Test #2: Design

Let’s say you want to find out if changing the color of your CTA button can increase its click-through rate.

In order to test this theory, you’d design an alternative CTA button with a different button color. This new button will lead to the same landing page as the control. If you usually use a red CTA button, and the green variation receives more clicks after your A/B test, this could warrant changing the default color of your CTA buttons to green moving forward.

The Benefits of A/B Testing

A/B testing provides a plethora of benefits to a marketing team. However, it all depends on what you decide to test. Above all, though, these tests are valuable to a business since they’re low in cost and high in reward.

Let’s say your company employs a content creator with an annual salary of $50,000. This content creator produces 5 articles per week for the company blog, which amounts to 260 articles per year. If the average post on the company blog yields 10 leads, it is safe to say that it costs $192 to generate 10 leads ($50,000 salary / 260 articles = $192/article). This is a hefty chunk of change.

Let’s shift gears here. If you ask the same content creator to spend two days developing an A/B test on one article instead of writing two articles in that time period, the test still yields one published article, so you burn through $192 by publishing one less article. However, if the A/B test finds you can increase the conversion rate of each article from 10 to 20 leads, you just spent $192 to theoretically double the number of leads your blog generates.

On the flip side, if the test fails, you lost $192. However, you can run an even more educated A/B test next time around. If that second test succeeds in doubling your blog’s conversion rate, you’ve essentially spent $384 to potentially double your company’s revenue. The lesson here is that no matter how many times an A/B test fails, its eventual success will almost certainly outweigh the cost to conduct it.
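For clarity, here’s that break-even math as a short script. Every figure is the hypothetical one used above, not real data.

```python
# The article's hypothetical numbers.
salary = 50_000          # content creator's annual salary
articles_per_year = 260  # 5 articles/week * 52 weeks

cost_per_article = round(salary / articles_per_year)  # ~$192
print(f"Cost per article: ${cost_per_article}")

# A failed test plus a successful retest costs two articles' worth of output.
print(f"Cost of two tests: ${2 * cost_per_article}")  # $384

# If the winner lifts each article from 10 to 20 leads, every future
# article generates 10 extra leads for that one-time cost.
extra_leads_per_year = articles_per_year * (20 - 10)
print(f"Extra leads per year: {extra_leads_per_year}")  # 2,600
```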

Types of A/B Testing

There are many types of split tests you can run, and the right one depends on what you want to improve. Here are some common goals marketers have for their business when A/B testing:

  • Increased Website Traffic: Testing different blog posts or webpage titles can impact the number of people who click on that hyperlinked title to your website. This, in turn, can increase website traffic.
  • Higher Conversion Rates: Testing different locations, colors, or even anchor text on your CTAs can influence the number of people who click on said CTAs to view a landing page. This can increase the number of people who complete forms on your website, submit their contact info, and convert into a lead.
  • Lower Bounce Rate: If website visitors leave quickly after arriving, you lose the chance to convert them. Testing different blog post introductions, fonts, or featured images can reduce your bounce rate and retain more visitors.
  • Lower Cart Abandonment: Ecommerce sites see 40%-75% of customers leave with items in their shopping carts. This is known as shopping cart abandonment. Testing different product photos, check-out page designs, and even where shipping costs are displayed can lower this abandonment rate.
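Each of these goals boils down to a simple ratio. Here’s a quick sketch, with hypothetical counts, of how those metrics are typically computed:

```python
# Hypothetical counts for one reporting period.
visits = 10_000
single_page_visits = 4_500     # left without viewing a second page
form_submissions = 300
carts_started = 800
checkouts_completed = 350

conversion_rate = form_submissions / visits                 # 3.0%
bounce_rate = single_page_visits / visits                   # 45.0%
cart_abandonment = 1 - checkouts_completed / carts_started  # 56.3%

print(f"conversion: {conversion_rate:.1%}, bounce: {bounce_rate:.1%}, "
      f"cart abandonment: {cart_abandonment:.1%}")
```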

Let’s walk through the checklist for setting up, running, and measuring an A/B test now!


How to Conduct A/B Testing

Before the A/B Test

Let’s take a look at the necessary steps before you start your A/B test.

1.) Pick One Variable to Test

As you optimize your website and emails, you’ll likely find that there are a number of variables you’d like to test. However, to evaluate how effective a change is, you’ll want to isolate a single independent variable and measure its performance. Otherwise, you can’t be sure which variable is responsible for changes in performance. You can definitely test more than one variable for a single webpage or email, but make sure you’re testing them one at a time.

Take a look at the various elements in your marketing toolkit and their potential alternatives for design, wording, and layout. Other things you may consider testing include email subject lines, sender names, and different ways to personalize emails.

Keep in mind that even the most minute changes, like changing the image in your email or the words on your CTA, can cause massive improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.

Quick Sidenote: There are times when it makes sense to test multiple variables rather than a single variable. This is a process known as multivariate testing. If you’re curious about whether you should run an A/B test versus a multivariate test, here’s a helpful article from Optimizely that compares the two methods.

2.) Identify Your Goal

While you’ll measure a wealth of metrics for each test, choose a primary metric to focus on…before you run the test. In fact, do it before you even set up the second variation! This is your ‘dependent variable.’

Consider where you want this variable to be at the end of the split test. You might formulate an official hypothesis and examine your results based on this prediction.

If you wait until afterward to think about which metric(s) are important to you, what your goals are, and how the changes you’re proposing might impact user behavior, then you might not set up the test in the most effective way.

3.) Create a ‘Control’ and a ‘Challenger’

You now have your independent variable, your dependent variable, and your desired outcome figured out. Now it’s time to use this info to set up the base version of whatever you’re testing as your ‘control.’ If you’re testing a webpage, this is the unaltered webpage as it currently exists. If you’re testing a landing page, this would be the landing page design and copy you’re currently using.

From there, construct a ‘challenger’ – the website, landing page, or email you’ll test against your control. For instance, if you’re wondering whether including a testimonial on a landing page would make a difference, set up your control page with no testimonials. Then create your variation with a testimonial.


4.) Split Your Sample Groups Randomly and Evenly

When you test an element where you have more control over the audience – emails, for instance – you need to test with two or more audiences that are equal in order to derive conclusive results. How you go about doing this will vary depending on the A/B testing tool you use.
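Most A/B testing tools handle the split for you, but many rely on deterministic hash-based bucketing under the hood. Here’s a minimal sketch of that idea; the experiment name and user IDs are hypothetical, and real tools add more sophistication (traffic ramping, exclusion groups, and so on).

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-placement") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the (experiment, user) pair spreads users roughly 50/50
    in a way that is effectively random, yet stable: the same user
    always sees the same variant on repeat visits.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: bucket a few (hypothetical) users.
for uid in ["user-101", "user-102", "user-103", "user-104"]:
    print(uid, "->", assign_variant(uid))
```

Stable assignment matters: if a returning visitor bounced between variants, their behavior would contaminate both samples.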

5.) Determine Your Sample Size (if applicable)

The method you employ to determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you’re running. If you’re A/B testing an email, you’ll likely want to send the test to a smaller portion of your list to get statistically significant results. You’ll eventually pick a winner and send the winning variation on to the rest of the list.

If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly impact your sample size. You’ll need to let your test run long enough to obtain a substantial number of views; otherwise, it’ll be hard to tell whether there was a statistically significant difference between the two variations.
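If you want a ballpark number before reaching for a tool, a standard power analysis gets you there. Here’s a minimal sketch using Python’s statsmodels library; the 10% baseline rate and two-point lift are assumptions for illustration.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10  # current conversion rate (hypothetical)
target = 0.12    # smallest lift you'd care to detect (hypothetical)

effect = abs(proportion_effectsize(baseline, target))
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # matches a 95% confidence level
    power=0.80,   # 80% chance of detecting a real effect
    ratio=1.0,    # equal-sized groups
)
print(f"Visitors needed per variant: {n_per_variant:.0f}")  # ~1,900
```

Note how lower baseline rates and smaller lifts both push the required sample size up sharply, which is part of why low-traffic pages take so long to test.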

6.) Determine How Significant Your Results Need to Be

Once you’ve selected a metric, consider how significant your results need to be in order to justify choosing one variation over another. Statistical significance is an essential component of A/B testing that’s often overlooked or misunderstood. Here’s a quick review of statistical significance if you need a refresher.

The greater your confidence level, the more sure you can be about your results. You’ll typically want a confidence level of at least 95%; 98% is even better, especially if the experiment took significant time to set up. However, sometimes it makes sense to use a lower confidence level if you don’t need the test to be as precise.
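Once the results are in, a two-proportion z-test is one common way to check them against your chosen confidence level. Here’s a minimal sketch using statsmodels, with hypothetical counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: [control, challenger]
conversions = [400, 475]
visitors = [5_000, 5_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
alpha = 0.05  # 1 - 0.95 confidence level
if p_value < alpha:
    print(f"p = {p_value:.4f}: significant at 95%, declare a winner")
else:
    print(f"p = {p_value:.4f}: inconclusive, keep collecting data or retest")
```

A p-value below 0.05 corresponds to the 95% confidence threshold mentioned above; for a 98% confidence level, compare against 0.02 instead.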

7.) Make Sure You’re Only Running One Test at a Time on Any Campaign

Testing more than one element for a single campaign – even if it’s not on the exact same asset – can muddy your results. For instance, if you’re A/B testing an email campaign that directs to a landing page at the same time you’re A/B testing that landing page…how can you tell which change caused the increase in leads?

During the A/B Test

Let’s take a look at the steps you’ll take during your A/B test.

8.) Use an A/B Testing Tool

In order to conduct an A/B test on your website or in an email, you’ll need to use an A/B testing tool. Google Analytics’ Experiments is a great option: it lets you test up to 10 versions of a single web page and compare their performance using a random sample of users.

9.) Test Both Variations at the Same Time

Timing plays a major role in your marketing campaign’s results, whether it’s the time of day, week, month, or year. If you run version A one month and then version B the following month, how would you know whether the performance change was caused by a different design or a different month? When you run A/B tests, you need to run both variations at the same time. Otherwise, you’ll be left second-guessing your results.

The only exception here is if you’re testing timing itself – like finding the optimal time to send out emails. This is a great thing to test because the optimal time for engagement varies by industry and target market, depending on what your business offers and who your subscribers are.

10.) Give the A/B Test Enough Time to Yield Relevant Data

You’ll want to let your test run long enough to obtain a substantial sample size. Otherwise, it’s hard to tell whether there was a significant difference between the two variations.

How long is long enough? Depending on your company and how you execute the A/B test, getting significant results could take hours, days, or weeks. A big part of how long it takes depends on how much traffic your site gets: if your business doesn’t get much website traffic, it’ll take much longer to run an A/B test. In theory, you shouldn’t restrict the time in which you gather results.
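For a rough duration estimate, divide the total sample you need by your daily traffic. Here’s a back-of-the-envelope sketch that reuses the hypothetical sample size from step 5 and assumes a traffic figure:

```python
import math

needed_per_variant = 1_916  # e.g., from the power analysis in step 5
variants = 2
daily_visitors = 1_200      # hypothetical traffic entering the test

days = math.ceil(needed_per_variant * variants / daily_visitors)
# In practice, round up to full weeks so every weekday is represented.
weeks = math.ceil(days / 7)
print(f"Run for at least {days} days (round up to {weeks} week(s))")
```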


11.) Ask for Feedback from Real Users

A/B testing generates plenty of quantitative data, but that data alone won’t tell you why people take certain actions over others. While you’re running your A/B test, why not collect qualitative feedback from real users?

One of the best ways to do this is to conduct a survey or poll. You might add an exit survey on your site that asks visitors why they didn’t click on a certain CTA, or one on your thank-you pages that asks why they clicked a button or filled out a form.

You might find that visitors clicked on a CTA leading to an ebook, but didn’t purchase due to the price. This sort of information provides you with a lot of insight into why your users behave a certain way.

Post-Testing Action and Review

Lastly, let’s discuss the steps you take after conducting an A/B test.

12.) Focus on the Goal Metric

Even though you’ll be measuring multiple metrics, keep your focus on that primary goal metric when you analyze the data. For instance, if you tested two variations of an email and chose leads as your primary metric, don’t get caught up in open rate or click-through rate. You might see a high click-through rate alongside poor conversion rates, in which case you might end up selecting the variation with the lower click-through rate in the end.

13.) Take Action Based on Your Results

If one variation is statistically better than the other, you have a clear-cut winner. Complete your test by disabling the losing variation in your A/B testing tool. If neither variation performed better than the other, you’ve just learned that the variable you tested didn’t impact results, and you’ll mark the test as inconclusive. In this case, stick with the original version – or run another test. You can use the data from the failed test to inform a new iteration of your next test.

While A/B tests help you impact results on a case-by-case basis, you can also carry the lessons learned from each test into future efforts. For example, if you’ve conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates, you might want to consider using that tactic in more of your emails.

14.) Plan Your Next A/B Test

The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — but don’t stop there. There’s always room for more optimization.

You can even try conducting an A/B test on another feature of the same web page or email you just did a test on. For example, if you just tested a headline on a landing page, why not do a new test on body copy? Or color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.

Andrew Roche
Andrew Roche is an innovative and intentional digital marketer. He holds an MBA in Marketing from the Mike Ilitch School of Business at Wayne State University. Andrew is involved with several side hustles, including Buzz Beans and Buzz Impressions. Outside of work, Andrew enjoys anything related to lacrosse. While his playing career is over, he stays involved as an official.