A/B testing mistakes

You already know about A/B testing, but have you ever run an A/B test? Sadly, chances are you haven’t. Although conversion optimization has been around for years, it has only become popular recently.

Why?

It’s because people are talking about the results they are getting from conversion optimization. Before you go out there and start running A/B tests in hopes of getting the same results for your business, however, there are a few things you need to know.

Here are 7 A/B testing blunders you need to avoid:

Blunder #1: Believing what you read

Everyone talks about A/B testing results and how one simple change really boosted their conversions. When you see these results, you are likely to want to copy the same tests on your own website. But a lot of people run A/B tests incorrectly, which is why you need to learn to read between the lines.

[Image: Hubspot’s red vs. green call-to-action button test]

For example, if you read about this A/B test from Hubspot, it seems that having a red call-to-action button outperforms a green call-to-action button. But if you read carefully, you’ll notice that they mention that the red button had a 21% increase in click-throughs over the green button. That doesn’t mean that they saw a 21% increase in conversion rates.

Just because someone clicks a button doesn’t mean they are more likely to convert than if they were shown a different variation. Now, the article by Hubspot does mention that they also saw a “21% increase at the bottom”, but it still doesn’t clearly state whether conversions increased by 21%.

By no means am I saying the article by Hubspot isn’t useful; rather, I’m trying to point out that you need to be careful about how you interpret the data. Don’t just assume that because someone ran a test and called it a “winning variation”, it actually is one. And, most importantly, don’t expect to take that same test, implement it on your website and achieve similar results.
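To see why a lift in click-throughs is not the same as a lift in conversions, here’s a rough, hypothetical calculation (the numbers below are invented for illustration; they are not Hubspot’s data):

```python
# Hypothetical funnel numbers -- not Hubspot's actual data.
visitors = 10_000

# Green button: 5% of visitors click, and 20% of clickers go on to buy.
green_clicks = visitors * 0.05          # 500 clicks
green_sales = green_clicks * 0.20       # 100 sales

# Red button: 21% more clicks, but a slightly lower share of clickers buy.
red_clicks = green_clicks * 1.21        # 605 clicks
red_sales = red_clicks * 0.17           # ~103 sales

print(f"Click-through lift: {red_clicks / green_clicks - 1:.0%}")  # 21%
print(f"Sales lift:         {red_sales / green_sales - 1:.0%}")    # ~3%
```

In this made-up scenario, a 21% jump in clicks turns into roughly a 3% jump in sales, which is exactly why you have to read between the lines before copying a test.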

Blunder #2: Ending tests when they are statistically significant

If your A/B testing software shows the results as statistically significant, you should end the test, right? In most cases that would be true, but if you don’t have enough conversions for each variation, you shouldn’t stop the test.

My rule of thumb is each variation should have at least 100 conversions and the test should run for at least two weeks.

When I started running A/B tests on Crazy Egg three years ago, I ran around seven tests in five months. The A/B testing software showed that we boosted our conversions by 41%. But we didn’t see any increase in our revenue.

Why? Because we didn’t run each test long enough. Stopping a test when the winning variation had forty-one conversions and the losing version had fifteen was a bad idea because things can change quickly… especially if the test has only been running for a few days.

Those results didn’t hold true in the long run, which is why we didn’t see revenue increases. Make sure you run your tests long enough.
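If you want to codify that rule of thumb, here is a minimal sketch, assuming a simple two-proportion z-test; the 100-conversion and two-week thresholds come straight from the rule above, while the function names and example numbers are mine:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def safe_to_stop(conv_a, n_a, conv_b, n_b, days_running,
                 min_conversions=100, min_days=14, alpha=0.05):
    """Only declare a winner when the test is significant AND has enough data."""
    enough_data = (conv_a >= min_conversions and conv_b >= min_conversions
                   and days_running >= min_days)
    significant = two_proportion_p_value(conv_a, n_a, conv_b, n_b) < alpha
    return enough_data and significant

# 41 vs. 15 conversions after a few days may look "significant",
# but it fails the minimum-data check, so the test keeps running.
print(safe_to_stop(conv_a=41, n_a=1000, conv_b=15, n_b=1000, days_running=5))  # False
```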

Blunder #3: Expecting big wins from small changes

If small changes are providing huge gains, something else is wrong with your design or copy. The conversion wins small changes provide typically don’t hold.

The biggest conversion boosts are going to come from drastic changes. So if you really want to move your conversion rates, don’t focus on small changes. Instead, focus on drastic changes as they are the ones that boost your revenue.

When you are starting out, you could try small tweaks to your design and copy to see if your conversion rates increase, but eventually you’ll need to focus on the big wins.

What I like to do is focus on the drastic changes first. Once I feel like I’ve maximized their potential, I then focus on the small changes.

[Image: Crazy Egg homepage]

A good example of this was when we first got huge wins with Crazy Egg by changing the homepage to a long sales letter. After we made that drastic change, we then tested calls to action, button colors and other small elements.

Blunder #4: The first step in A/B testing is to come up with a test

If you want to dive right in and start testing variations of your web page, that’s fine. I hope things work out for you, but chances are you will lose some money.

You aren’t the one buying your own product or service; it’s other people. So why should you base your A/B tests on what you think people will want to see?

Instead, you should start off by surveying your visitors. Ask them questions like:

  • What else would you like to see on this page?
  • What can we help you with?
  • Why didn’t you complete the purchase? (great question to use on your checkout page if someone has been idle for more than 30 seconds)
  • What could we have done to convince you to complete the purchase?

Getting qualitative data from your users will help you pinpoint what’s wrong with your messaging. You can then take that data to make changes and test them out to see if you can find a winning variation.

But before you start the test, you need to run an A/A test, in which you test the current version of your website against itself. You’re doing this to check the accuracy of the A/B testing software you’re using and of the data it reports.

Once the A/A test looks good, you can start your first A/B test.
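In case it helps, here’s one hedged way to check whether an A/A test “looks good”: split traffic between two identical copies of the page and confirm the tool doesn’t report a significant difference. The traffic numbers below are invented, and the snippet assumes SciPy is available:

```python
from scipy.stats import chi2_contingency  # assumes SciPy is installed

# Invented A/A numbers: both "variations" are the identical page.
#            conversions  non-conversions
aa_table = [
    [120, 4880],  # copy A
    [131, 4869],  # copy B (same page)
]

chi2, p_value, dof, expected = chi2_contingency(aa_table)
print(f"A/A p-value: {p_value:.2f}")

# With identical pages you expect a high p-value. If the tool crowns a
# "winner" here, distrust the setup: check the traffic split, the tracking
# code and the reporting before trusting any A/B result from it.
if p_value < 0.05:
    print("Warning: identical pages look 'different' -- investigate the setup.")
```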

Blunder #5: Running a lot of tests on a regular basis

Just the other day, I sat down with an entrepreneur who claimed his company was an A/B testing expert because it runs over twenty A/B tests each month.

When I started to dig a bit deeper, I found out that their website had a lot of traffic, which is why they were able to go through so many tests so fast. In addition, they didn’t have a ton of pages, so it was easy for their design team to make changes.

After hearing all of this, I explained to him why it’s bad to run so many tests in a short period of time.

For starters, they weren’t basing their tests on existing data, and they weren’t running them long enough. For this reason, they weren’t seeing big revenue increases.

To make matters worse, they had a lot of losing tests, which was causing them to lose a lot of money. A temporary decrease in conversion rates means you temporarily lose money.

If you can collect enough data, create tests based on the data and then learn from the tests, that’s fine. But it’s unrealistic to do all of that in a very short period of time.

Don’t focus on quantity; focus on quality. Always run tests based on data and learn throughout the process… even if that slows down your A/B testing schedule.

The biggest thing you need to take away from this blunder is how important it is to learn from your tests. Learning takes time, so don’t try to force yourself to run a lot of tests each month.

Blunder #6: The more variables the better

I hate testing too many variables at once, which is why I am not the biggest fan of multivariate tests. I’ve found when you combine all of the winning elements in a multivariate test, your conversion rates don’t go up as much as the software tells you they ought to.

If you modify too many variables at once, without testing each of them, you also won’t know which variables are helping and which ones are hurting. For that reason, you should only make one change at a time.

A good example of this is Tim Sykes’ sales letter. Two weeks ago, he changed the video, the headline, the copy and even the design of the form fields. In the end, he had a huge drop in conversions.

That doesn’t mean the test was a failure. What it does mean is that Tim didn’t know which elements of the new design boosted conversions and which ones decreased conversions. To get a better understanding of this, he should have tested each variable independently instead of testing them all at once.
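To get a feel for why testing everything at once spreads your data thin, here’s a back-of-the-envelope sketch; the element counts, conversion rate and 100-conversion floor are hypothetical, not Tim’s numbers:

```python
# Hypothetical: 4 page elements (video, headline, copy, form design),
# each with 2 versions, all tested at the same time.
elements = 4
versions_per_element = 2
combinations = versions_per_element ** elements  # 16 variations

# If each variation needs roughly 100 conversions at a 2% conversion rate...
conversions_needed = 100
conversion_rate = 0.02
visitors_needed = combinations * conversions_needed / conversion_rate

print(f"{combinations} combinations need roughly {visitors_needed:,.0f} visitors")
# -> 16 combinations need roughly 80,000 visitors just to hit the minimum,
#    and the more combinations you run, the thinner the traffic behind each one.
```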

Blunder #7: Testing micro conversions

Do you remember the blunder at the beginning of this article in which Hubspot tested click-throughs? A click-through is an example of a micro conversion… Basically, Hubspot tried to increase the number of people moving from one part of the funnel to the next.

A macro conversion in that same scenario would be testing if a change impacted conversion rates. So instead of just testing the button color to see if it impacted click-through rates, the macro version of that test would have been testing if the button color impacted conversion rates.

Now, this doesn’t mean you shouldn’t look at micro conversions, but you should focus on macro conversions while keeping micro conversions in mind.

Blunder #8: Launching a new design

I know I said this post only contained seven blunders, but I had to throw in an eighth one…

The biggest blunder you can make is to redesign your website because your design is outdated. I’m a big believer that you should never just redesign things, but instead you should continually tweak and test your design until it’s to your customers’ liking.

At the end of the day, it doesn’t matter what you think of your design. All that matters is that it converts well. If you jump the gun and just change things up because you want to, you can drastically hurt your revenue.

Similar to the example in Blunder #6, Tim Sykes also launched a new design because he wanted something fresh. Within hours of launching the design, he noticed that his revenue tanked. There was nothing wrong with the code, and everything seemed to work, but the design wasn’t performing well. So he had no choice but to revert to the old design.

You have a website to generate business. Creating a new design won’t always boost your revenue, so continually tweak and test elements instead of redoing your whole design at once.

Conclusion

It’s okay if you make mistakes while optimizing your site for conversions. As long as you learn from your mistakes and avoid repeating them, you’ll be fine.

I hope this blog post didn’t discourage you from running A/B tests on your website. Instead, I’m hoping that it encouraged you to run tests the right way.

Are there any other big mistakes you should avoid when optimizing your site for conversions?