Filed under Accessibility and Usability.

In the course of researching ahead of my new job (which I am really looking forward to!) at Storm ID, I read this article by Performable, which was highlighted on their Facebook page.

It makes for an interesting read and I encourage all of you to read it. However, the argument it presents has one fatal flaw, even though Performable followed a reasonable scientific method in the way they went about their experiment.

Before I discuss the flaw, I should describe the experiment.

Button colour experiment

The guys at Performable set up an A/B split test, showing their conversion call-to-action button in green to half the users on their site, and in red to the other half.

The article at Performable doesn’t really describe the method they followed (rather, it describes the psychological assumptions about each colour), but there is nothing to suggest that it wasn’t a perfectly well-operated scientific A/B split test.

The results were staggering. Even though, as Performable’s presumptions suggested, the red button might psychologically ‘suggest’ danger to a user, it converted more users. 21% more users!

At first glance it seems clear that in a straight race between red and green conversion buttons, red converts six users for every five converted by a green button. Nothing else has changed on the page, so it has to be the colour, right?
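The “six for every five” figure follows directly from the 21% uplift quoted above; a quick sketch of the arithmetic:

```python
# A 21% uplift means the red button converts 1.21 users for every
# 1 user the green button converts.
uplift = 0.21
red_per_green = 1 + uplift  # 1.21

# Scale to a base of five green conversions: roughly six red ones.
red_per_five_green = round(red_per_green * 5, 2)
print(red_per_five_green)  # 6.05 -- i.e. about six for every five
```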

Well, here’s the flaw in this specific experiment. Look again at the screengrabs above (I have taken them directly from the experiment results blog page at Performable, so credit to them). You’ll see that the design of the page on which the experiment was undertaken has several small splashes of green on it – the Performable logo, and the top-left icon in the “What can you do with Performable” list. The shades of green are close enough to the conversion button’s green to be similar. On the other hand, there are a couple of more minor splashes of red on the page, but the area they take up is smaller than the green areas, and the shades are less similar to the red conversion button.

The result? Even at a glance, the red button seems to draw my eye more than the green button. But I’m not convinced that this is because one is red and the other green, so much as because red is more distinctive within this particular design than the green button is.

My suggestion is that for a true colour comparison, an A/B split test must be done with unique colours – for example, comparing red and green buttons on a page where the only other design elements are blue/grey/black/white would be ideal. Likewise, to get a true colour comparison, multiple A/B tests would need to be run in these circumstances, so one could get a true reflection of whether colour (and indeed colour psychology – which has always been presented as a design-choice argument) does indeed influence a user’s choices.
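When running such tests, it also helps to check that an observed uplift isn’t simply noise. Below is a minimal sketch of a standard two-proportion z-test for comparing conversion rates; the sample sizes are invented for illustration (Performable did not publish theirs), and only the 21% relative uplift is taken from the article.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference between two
    conversion rates statistically significant?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 1,000 visitors per variant, with the red
# variant converting 21% more users than the green one.
z, p = two_proportion_z(conv_a=100, n_a=1000, conv_b=121, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With samples this small, a 21% relative uplift is not significant at the usual 5% level, which underlines the point: a single test, on a single page design, is thin evidence for a general claim about colour.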
