Confirmation bias is a scourge of clean data, and everyone is susceptible to it. Marketers, like anyone else, have to work through their biases, and they have more data on their hands than most. That data, for all it has done to increase personalization and relevance, is incredibly easy to game.
“The more data we have, the more likely it is to find a correlation with what you want to see,” Michael Aagaard, senior conversion optimizer at Unbounce, said during the closing keynote at Unbounce’s Call to Action (CTA) conference in Vancouver. “If you torture the data long enough, it will confess anything.”
Torture, macabre as it may be, is an especially apt metaphor for the way marketers manipulate data to bolster their narratives. Anyone who works with data in any capacity understands that raw numbers mean little without context, and it is the analyst behind the data who gives the numbers that context.
It’s widely acknowledged that marketing has become a more scientific profession in the age of big data. Numbers abound, and marketers live and die by them.
When it comes to reporting performance, spinning a bad quarter can be as simple as pulling averages from a different field in the spreadsheet. Aagaard offered the hypothetical example of a marketer reframing a revenue report by switching from revenue-per-user to revenue-per-visitor, showing how easy it is to manufacture big gains from data that hasn't been properly contextualized. In these situations, marketers enjoy the benefits of data but miss a key part of what makes scientists and other data-driven professionals successful: the scientific process.
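To make the trick concrete, here is a minimal sketch with invented figures (the numbers are ours, not Aagaard's): the very same quarter reads as a loss or a win depending on which denominator the report uses.

```python
# Invented quarterly figures: revenue is essentially flat, but the choice
# of denominator flips the story the report tells.
q1 = {"revenue": 100_000, "users": 4_000, "visitors": 25_000}
q2 = {"revenue": 99_000, "users": 4_400, "visitors": 22_000}

def rate(q, denominator):
    """Revenue divided by the chosen denominator (users or visitors)."""
    return q["revenue"] / q[denominator]

def pct_change(before, after):
    return (after - before) / before * 100

print(f"revenue per user:    {pct_change(rate(q1, 'users'), rate(q2, 'users')):+.1f}%")
print(f"revenue per visitor: {pct_change(rate(q1, 'visitors'), rate(q2, 'visitors')):+.1f}%")
# revenue per user:    -10.0%  (the quarter was bad)
# revenue per visitor: +12.5%  (the same quarter, spun)
```

Nothing in the underlying data changed; only the denominator did, which is exactly why a metric chosen after the fact proves so little.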
“Scientists have the scientific process to stop confirmation bias. As marketing becomes more scientific, we have to understand the [scientific process],” Aagaard said.
Marketers do have tactics that check and balance the insights derived from data, but whether they are using those tactics to their full potential is another matter.
“Is A/B testing a way to get rid of [confirmation bias]? Yes, if done right. But that’s the biggest ‘if’ I’ve ever seen,” Aagaard said.
What often happens with A/B testing, according to Aagaard, is that marketers run a test for a week or two, see lift, and stop the test, falsely attributing that lift to what was tested. The data from such a test could point toward a lift of some triple-digit percentage, but Aagaard argues that if the test hasn't run long enough for results to stabilize, visitors may simply be reacting to something new.
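A minimal sketch of that trap, with invented counts: a week's worth of traffic can show a triple-digit lift that a basic two-proportion z-test (one common way to check significance, not necessarily the method Aagaard uses) simply does not support.

```python
# Invented counts for an early-stopped A/B test: a huge apparent lift
# that a standard significance check says is consistent with noise.
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Normal CDF via erf, then the two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# One week of traffic (hypothetical): 120 visitors per variant.
conv_a, n_a = 4, 120   # control: 3.3% conversion
conv_b, n_b = 9, 120   # variant: 7.5% conversion

lift = (conv_b / n_b) / (conv_a / n_a) - 1
print(f"observed lift: {lift:+.0%}")  # +125%
print(f"p-value: {two_proportion_p_value(conv_a, n_a, conv_b, n_b):.2f}")  # ~0.15
```

With so few visitors per arm, an apparent 125% lift is well within the range of plain chance; committing to a sample size up front and running the test to completion is what separates a real effect from a novelty bump.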
Aagaard challenged the audience at CTA 17 to question the logic behind some of these scenarios. Can changing the color of a button really generate 600% lift? “300% or 600% lift just doesn’t happen in the real world,” he said.
So what needs to happen? Aagaard's session was largely a cultural critique of the marketing world, and he was spot on with much of it, but cultural change comes slowly and usually follows a catalyzing action. For marketers, according to Aagaard, that catalyzing action is solidifying which metrics matter to them and their organizations.
“Define the metrics you care about and don’t stray. Before testing, ask why you think you need to change, what do you want to change, what impact do you expect to see, when do you expect to see results,” Aagaard said. “Write down the questions you want to ask and don’t stray. Ask critical questions before you start analyzing data. Maybe there isn’t that one solution that is always possible. Get out of the marketing bubble.”
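In code form, that advice amounts to writing the plan down before the data arrives. The sketch below is purely illustrative; the field names and figures are our assumptions, not an Unbounce recommendation.

```python
# A minimal sketch of "write it down before you test": pre-register the
# question, the expected impact, and the stopping rule, then don't stray.
# Every field name and value here is illustrative, not an industry standard.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the plan can't be quietly edited mid-test
class TestPlan:
    why_change: str           # why do you think you need to change?
    what_changes: str         # what do you want to change?
    expected_impact: str      # what impact do you expect to see?
    sample_size_per_arm: int  # commit to a sample size up front
    primary_metric: str       # the one metric you care about

plan = TestPlan(
    why_change="Session recordings show visitors missing the signup CTA",
    what_changes="Move the CTA above the fold",
    expected_impact="+5-10% signup conversion, not +600%",
    sample_size_per_arm=8_000,
    primary_metric="signup conversion rate",
)
```

The point of freezing the plan is exactly Aagaard's: once the questions are fixed in advance, the data can answer them but can no longer be tortured into confessing something else.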
Unbounce covered DMN’s expenses to attend CTA.