Election Lessons

With the midterm elections a couple of weeks behind us, I’d like to take a few minutes to discuss some of the things we observed and learned, as well as the possible implications for our work as marketing researchers.

It is enough that the people know there was an election. The people who cast the votes decide nothing. The people who count the votes decide everything.

~ Joseph Stalin

Political polls use the same basic methodology as much of the statistical hypothesis testing we do in marketing research. We may want to know whether one brand of widget does a better job of widgeting than another, while pollsters want to know whether one candidate is far enough ahead of another to predict a win on election day. Underlying the methodology are some basic assumptions about the distribution of responses and the representativeness of the sample used to make the estimates.
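To make the parallel concrete, here is a minimal sketch of the kind of test both fields rely on: a two-proportion z-test asking whether the difference between two observed percentages is large enough to call. The brand names, sample sizes, and percentages below are hypothetical, purely for illustration.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z-statistic for the null hypothesis that two underlying
    proportions are equal, using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical study: 55% of 400 respondents prefer Brand A,
# 48% of a separate 400 prefer Brand B.
z = two_proportion_z(0.55, 400, 0.48, 400)
print(round(z, 2))  # prints 1.98; |z| > 1.96 is significant at the 95% level
```

The same arithmetic, with candidates in place of brands, is what sits behind a pollster’s claim that one race is decided and another is “too close to call.”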

Prior to the recent election, there was a strong belief that Republicans might be able to take over the majority of the Senate. A number of races, though, were projected to be very tight and too close to call (or, as political pundits like to say, “within the margin of error”). When all was said and done, however, Republicans easily gained eight seats to end up with 53. (They still have the possibility of gaining one more, as the incumbent Democrat in Louisiana is in a runoff that will take place in early December.) And many of their wins were not nearly as close as expected. Much of the political talk following the election has focused on how this happened; that is, how did the polls not predict this result?
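When pundits say a race is “within the margin of error,” they usually mean the familiar 95% margin for an estimated proportion. A quick sketch, with hypothetical poll numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error (as a proportion) for an estimated
    share p from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: a candidate at 48% among 1,000 likely voters.
moe = margin_of_error(0.48, 1000)
print(round(100 * moe, 1))  # prints 3.1 -> roughly +/- 3 points
```

Note that this figure only quantifies random sampling error. The misses discussed below, such as likely-voter screens and late movement, are systematic errors that no margin-of-error calculation captures.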

One reason involves the “representativeness of the sample” assumption. A good poll tries to ensure that the people being interviewed are the same people who will actually vote. Nearly every poll asks respondents how likely they are to vote in the coming election, and only those who say they are at least “likely to vote” are allowed to proceed. It appears that many respondents who said they would vote, particularly Democrats this time around, did not actually do so, which biased the polling results toward Democrats.

Related to this, because so much political news broke in the weeks and days leading up to the election, many voters may have made up their minds late in the game. Another reason the polling community missed the mark was the tendency of some pollsters to stop updating their polls too early. Once they felt the results had “converged” to a solid prediction of the winner, they ceased polling. In doing so, they missed any late movement among the electorate, of which there was probably a great deal this year, and thus misprojected the races at hand.

Another representativeness issue involved who was contacted. Almost all political polling is done via landline telephone. Estimates vary, but roughly 40% of American households no longer even have a landline phone! That makes it pretty tough to argue that polling only people with landlines is representative of the nation as a whole.
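One standard way researchers try to repair this kind of undercoverage is post-stratification weighting: up-weight the groups the sample under-represents so the weighted sample matches known population shares. A minimal sketch, with made-up counts and assumed population shares (the real shares would come from census or CDC phone-status data):

```python
def poststratify_weights(sample_counts, population_share):
    """Weight each group so that the weighted sample
    matches the known population shares."""
    n = sum(sample_counts.values())
    return {group: population_share[group] / (sample_counts[group] / n)
            for group in sample_counts}

# Hypothetical sample of 1,000: 850 landline and 150 cell-only
# households, versus assumed population shares of 60% / 40%.
weights = poststratify_weights(
    {"landline": 850, "cell_only": 150},
    {"landline": 0.60, "cell_only": 0.40},
)
print({g: round(w, 2) for g, w in weights.items()})
# prints {'landline': 0.71, 'cell_only': 2.67}
```

Weighting only helps if the missing group is reached at all; a frame that excludes cell-only households entirely leaves nothing to up-weight, which is why the design itself has to be fixed first.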

This representative sample issue is huge for us as researchers. With every study, we must not take the design portion of the research for granted. We must make the best possible effort to contact the relevant individuals, and to do so in a representative way. If a portion of our population of interest cannot be reached in a traditional way, we must get creative to make sure they have an equal chance of having their voice heard. Otherwise, we risk looking like we don’t know what we’re doing when our recommendations don’t work out or our projections are way off. A ResearchWISE® approach always ensures that the foundations and assumptions going into the research are solid, resulting in methodologically sound results and recommendations.

For years I have been spouting off (to anyone who would listen) about the framework of beliefs leading to decisions leading to outcomes. If you want better outcomes, make better decisions; and if you want better decisions, improve your beliefs. While marketing research has traditionally informed the belief component, more and more we are helping with the decision aspect.

As researchers, we need to understand that our roles are morphing, if they haven’t completely done so already. Clients look to us not just for information (data), but for guidance on what to do with that data. We must embrace this and leverage the fact that our history of handling the data piece makes us uniquely qualified to help our clients navigate the decision-making process. Fortunately, my colleagues at Marketing Workshop not only get this, we live and breathe it every day. It’s all part of our ResearchWISE® philosophy.

 

~ Bud Sanders
