If I were to ask you what the most significant moments of your life have been to this point, what would you say? Would you mention the day you were married? The day you graduated college?
That first “real job” offer? If you’re a parent, maybe the day(s) your child(ren) were born? And unfortunately, these significant moments aren’t always positive, so maybe it was the death of a parent? Or the day your divorce was final? Whatever the life event, I suspect it was something big that came to mind easily. These things that change our lives, those we consider to be significant, can be monumental.
Now, let’s shift gears a bit. For anyone who works in the marketing research industry, or is a consumer of marketing research, the word “significance” takes on a different kind of meaning. In this context, it (roughly) represents a value that is statistically shown to be unlikely to have occurred by chance or pure randomness. For many, it is a term used for determining “what matters and what doesn’t.” When a research study of any sort is performed, there are typically a large number of data points to consider. There is never a shortage of potential relationships (within the data) to test and pursue for possible “significant” findings. Unfortunately, this often leads to a situation where only those relationships flagged as significant are considered for further investigation. The consequence of this type of approach is a very short-sighted view of what the results could reveal about the truth of a situation. Subsequently, the actions that could or should be taken to address some business situation might be missed. Or even worse, an incorrect action could be taken.
This phenomenon is not unique to the marketing research industry. In recent months, there’s been a lot of discussion among academics and practitioners in many fields about the use of p-values that arise from statistical hypothesis testing. Without getting heavy into the math, the p-value is used to determine whether a finding is “significant” or not. It roughly represents the probability of seeing a result at least that extreme if nothing but chance were at work. If that number is sufficiently low (say, less than .05, or 5%), the finding is interpreted as “significantly different” from whatever it is being compared to. Over the years, p-values have become the default criterion for managers, publishers and others to determine whether a finding is “worthy” of acting upon or publishing. In some cases, this has led to the unethical practice of “p-hacking”, where researchers manipulate the data or analysis until the findings come out “significant”, thus ensuring the research will be considered for publication.
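To make the “occur just by chance” idea concrete, here is a minimal sketch in plain Python (no statistics library; the normal approximation and the 1,000-test setup are my own illustrative choices, not anything from a specific study). It runs many two-group comparisons where both groups are drawn from the exact same distribution, so any p-value below .05 is a false alarm. Roughly 5% of the comparisons come out “significant” anyway, which is precisely why screening purely on p < .05 across many tested relationships will flag some noise as findings.

```python
import math
import random

def two_sided_p(t_stat):
    # Two-sided p-value using a normal approximation to the t distribution
    # (a reasonable simplification for large samples).
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t_stat) / math.sqrt(2.0))))

def t_statistic(a, b):
    # Welch-style two-sample t statistic.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(42)
n_tests, n = 1000, 100
false_positives = 0
for _ in range(n_tests):
    # Both groups come from the SAME distribution: any "significant"
    # difference found here is pure chance.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if two_sided_p(t_statistic(a, b)) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons flagged 'significant'")
```

The count lands near 50 out of 1,000, i.e. about the 5% false-positive rate the .05 threshold guarantees even when there is nothing real to find.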
The idea of p-hacking and other misuses of statistical hypothesis testing even led the American Statistical Association (ASA) to release a statement earlier this year on the proper use and context of p-values. The document seeks to reground statisticians, researchers and consumers of research in the basic meaning, interpretation and best practices for the use of p-values and hypothesis testing.
There is a statement toward the end of the document that reads, “No single index should substitute for scientific reasoning.” This is where my earlier thoughts in this article come back around. With so much information available to us, human nature is to find shortcuts that get us to “the important stuff”. The use of p-values has, all too often, become this shortcut. The problem, however, is that an enormous amount of information gets disregarded because it doesn’t pass the p-value test, all of it deemed not significant simply because it didn’t meet some mathematical threshold. And it is often just this information, when digested more holistically, that is necessary to make the next right business decision. We become so focused on the minutiae that we miss the forest for the trees; we don’t recognize the big-picture answers the data is trying to reveal to us. There could be something monumental taking place. Significance means much more than just a mathematical calculation.
So, as many around me have heard me say lately, don’t put all of your eggs in the “p-value equals significance” basket! Use it as a screening tool, sure. Use the approach to help you weed through the data and pull out some clues and nuggets. But, by all means, pay attention to the other data as well. Realize that there is a big picture story – a macro truth if you will – that exists if you just take the time to seek it out. Mental shortcuts are nice, but not if they result in missing the big picture and making the wrong decision for your business.
~ Bud Sanders