If you’ve been in the optimization space for any length of time, you’ll know that no topic inspires greater debate than that of best practices.
Some claim that there are certain UX patterns – best practices – that ought to be applied to almost every website. Others argue that every UX pattern, no matter how obvious, needs to be tested.
Having run well over 10,000 a/b tests – and seen best practices lose many times over! – we doubt our take on this will come as a surprise.
This week I’m going to share a testing story that highlights the dangers of blindly applying best practices. It also offers a blueprint for what a more effective approach to best practices might look like.
Here goes…
—
We were working with The Motley Fool, a financial media company that provides expert investment advice to help its customers achieve long-term financial success.
Email marketing is critical to The Motley Fool’s growth, so we were tasked with using experimentation to maximize sign-ups for its mailing list.
When it comes to popular newsletters, a common best practice is to add social proof messaging to the sign-up funnel: subscriber counts, customer testimonials, and the like.
The idea is that by showing how many people have already signed up for your newsletter, you evoke the social proof heuristic and thereby increase the likelihood of a sign-up.
(In essence: ‘this many people can’t all be wrong’!)
So, anyway, The Motley Fool has an absolutely enormous mailing list – more than 121 million subscribers.
We thought this would be an easy win. So one of our first experiments was to apply best practice and highlight the size of Motley’s readership.
To do this, we created a variation of the signup modal that included the number of subscribers. We then ran this against the original page as an a/b test.
Best practice devotees might expect to see a reliable uptick in sign-ups here, but this isn’t what happened at all: the sign-up conversion rate fell by 11.2% in the variation.
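(A quick technical aside for anyone wondering how we judge a result like this: a drop only counts once it clears statistical significance. Here’s a minimal Python sketch of a two-proportion z-test, using invented visitor and sign-up numbers purely for illustration rather than Motley’s actual data.)

```python
from math import sqrt
from statistics import NormalDist

# Invented numbers for illustration only (not The Motley Fool's actual data).
visitors_control, signups_control = 20_000, 1_000   # 5.00% conversion
visitors_variant, signups_variant = 20_000, 888     # 4.44% conversion (~11.2% relative drop)

p_control = signups_control / visitors_control
p_variant = signups_variant / visitors_variant

# Pooled two-proportion z-test: is the gap bigger than chance alone would explain?
p_pooled = (signups_control + signups_variant) / (visitors_control + visitors_variant)
std_err = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_control + 1 / visitors_variant))
z_score = (p_variant - p_control) / std_err
p_value = 2 * NormalDist().cdf(-abs(z_score))

print(f"Relative change in conversion rate: {(p_variant - p_control) / p_control:+.1%}")
print(f"z = {z_score:.2f}, p = {p_value:.4f}")  # p < 0.05 -> unlikely to be random noise
```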
There were a number of potential explanations for this result:
1. Social proof is ineffective on this website
2. This specific form of social proof (highlighting the number of subscribers) is ineffective on this website
3. Something else is going on
In a bid to rule out #2, we decided to run a follow-up a/b test on the primary landing page of the email sign-up funnel.
This time, rather than highlighting the size of the readership, we instead opted to add a customer testimonial to the page.
When we ran this as an a/b test, it once again resulted in a significant reduction in the conversion rate.
From these two tests, a clear signal was beginning to emerge: social proof was reducing users’ tendency to sign up for the newsletter.
This ran against all of the conventional wisdom in our industry, so we decided to run some usability testing to try and understand the why behind the result. Here’s what we found…
People sign up to The Motley Fool’s newsletter because they are looking for exclusive insights that they can use to gain an edge on the market. By highlighting the size of Motley’s readership, we had inadvertently reduced this sense of exclusivity, which was why the conversion rate had tanked so badly.
OK, so, in the case of The Motley Fool: social proof = bad?
Not quite.
As it turns out, The Motley Fool was already using lots of social proof at the bottom of the funnel and on many of its order pages.
Given what we now knew about the effect of social proof on newsletter sign-ups, we hypothesized that removing some of this social proof content would increase the sense of exclusivity and thereby ratchet up the number of orders Motley receives.
We ran an a/b test where we removed customer testimonials from the order pages.
Guess what happened.
Conversion rates, average order value, and revenue per session all dropped significantly.
After a bit of head-scratching, we decided to run some more follow-up research. Here’s what we found:
In bottom-of-funnel contexts, where users are asked to part with money and where anxiety is therefore high, social proof was actually serving as an effective means of alleviating anxiety and offering reassurance.
By removing social proof from the order page, we’d inadvertently increased user anxiety, which was probably why we saw such a significant conversion rate drop.
If we’d simply rolled out social proof across the site, as best practice proponents would recommend, some of these implementations would have improved the conversion rate, others would have harmed it, and the net result would have been flat.
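(To make that “net flat” point concrete, here’s a tiny Python sketch using invented traffic shares and lift figures, not Motley’s real numbers: opposite effects on different page types can cancel each other out, so a blanket rollout measured only in aggregate can look like it did nothing at all.)

```python
# Invented traffic shares and lifts, purely to illustrate how opposite
# per-page effects can wash out in a single site-wide number.
pages = {
    # page type: (share_of_traffic, relative_lift_from_adding_social_proof)
    "newsletter_landing": (0.5, -0.11),  # exclusivity harmed -> sign-ups down
    "order_page":         (0.5, +0.11),  # anxiety eased      -> orders up
}

blended_lift = sum(share * lift for share, lift in pages.values())
print(f"Site-wide blended lift: {blended_lift:+.1%}")  # ~0.0%, i.e. looks like 'no effect'
```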
By running these tests, we were able to separate the good implementations from the bad – and to retain the conversion rate uplifts while rejecting the downturns!
Maybe even more importantly, we’d been able to unearth some powerful insights about Motley’s users:
Social proof harms exclusivity, but it alleviates anxiety
On areas of the website where anxiety is low, e.g. the primary newsletter landing pages, social proof harms exclusivity more than it alleviates anxiety, netting out at a decrease in conversions
On areas of the website where anxiety is high, e.g. order pages, social proof alleviates anxiety more than it harms exclusivity, netting out at an increase in conversions.
All in all, these insights were used both to fuel the experimentation program itself and to inform decision-making across a number of other areas of the business.
So, back to the question that I started this post with:
Should you apply best practices to your website without testing?
Yes – but only if you don’t care about customer insights or conversion rates!
Tl;dr
If you have the traffic and the resources, we would always recommend testing best practices.
Even if your best practice is a winner, you may well find that its impact varies across different areas of your site and across different user segments.
By testing these changes, you will have an opportunity to learn about your users, which may well pave the way for future winning test ideas.
We’d love to discuss how we can support you with your UX research and CRO needs.