Back in April, I wrote a little blog post titled Before you test, think. The post was largely inspired by my frustration at what I see as a widespread misunderstanding of A/B testing software and how to interpret A/B test results.

I received a lot of great feedback on the post (thank you to everyone who messaged), but a few months on, I still see countless examples of the same issue. And it still bugs me.

Yet another example

I recently attended a webinar hosted by a leading conversion agency that I have a huge amount of respect for. It was a really good talk with some great insights, and I thoroughly enjoyed it (mostly).

However, at one point there was a brief mention of an ecommerce site split test where the ‘Add to basket’ button on the product page had been changed to add some shading/bevelling (everything else on the page stayed the same).

The host went on to say that this change alone resulted in a 12% increase in conversion rate for the site in question.

…..

……..

I have to admit, my heart sank a little.

I know exactly what’s happened here: they’ve run an A/B or multivariate test and their A/B testing software (it really makes no difference which one they used) has reported a statistically significant 12% increase in conversion for the new button version.

And if your A/B testing software says with 95% or 99% certainty that the change you’re testing results in a 12% conversion increase, then when you enable that change your conversion rate will definitely increase by 12%, right?

Wrong, wrong and one more for good measure, wrong.

If you want to understand why that’s wrong, don’t forget to read my original post.
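To make the point a little more concrete, below is a minimal simulation sketch (my own illustration, not the agency’s test or any particular tool’s maths) of an A/A test: both versions are identical, yet if you keep peeking at the results and stop the moment the numbers look ‘significant’, the tool will regularly hand you a convincing-looking winner. The 5% conversion rate, batch size, number of peeks and 95% threshold are all illustrative assumptions.

import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a simple two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def run_aa_test(true_rate=0.05, batch=500, max_batches=40, alpha=0.05):
    """Both versions convert at the same true rate; peek after every batch
    of visitors and stop as soon as the difference looks 'significant'.
    Returns the lift reported for the 'winner', or None if none is declared."""
    conv_a = conv_b = visitors = 0
    for _ in range(max_batches):
        conv_a += sum(random.random() < true_rate for _ in range(batch))
        conv_b += sum(random.random() < true_rate for _ in range(batch))
        visitors += batch
        if p_value(conv_a, visitors, conv_b, visitors) < alpha:
            # Relative lift of the 'winning' version over the other one
            return abs(conv_b - conv_a) / min(conv_a, conv_b)
    return None

random.seed(1)
runs = [run_aa_test() for _ in range(1000)]
wins = [lift for lift in runs if lift is not None]
print(f"'Significant' winners found on identical pages: {len(wins)} out of 1000")
if wins:
    print(f"Average lift reported for those phantom winners: {sum(wins) / len(wins):.1%}")

The exact numbers don’t matter; the point is that a tool which only reports ‘significance’ will happily produce results like these from nothing at all, which is exactly why context and a proper testing protocol matter.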

So, whose fault is it?

One of the messages I received back in April asked if I thought A/B testing software providers should be taking more responsibility here. My initial reaction was no: they’re just tools, and it’s up to the user to interpret the results correctly. With hindsight, though, I think the questioner made a really valid point.

A/B testing tools are invariably set up to promote the idea of getting to a “statistically significant” winning version (right down to being able to deploy the “winning” version with a single click) without any mention whatsoever of the critical importance of context or user intent. Without a full understanding of those aspects, a “statistically significant” test result in any A/B testing software is virtually meaningless.

I appreciate it may not be in the commercial interests of any A/B testing software provider to promote a better understanding of this, but it would be refreshing to see one or two try to educate their users. But hey, I’m not going to hold my breath on that one.

Want to learn more?

If you want to know more, including how I achieved a near statistically certain 20% conversion increase on an ecommerce site by changing (spoiler alert) absolutely nothing, then have a read of my original post from April, where I cover all of this in a lot more detail.

As always, if you have any comments or questions, feel free to drop me a line via the contact form or on LinkedIn.
