Why Marketers Should Be On the Lookout For Unjustified Survey-Based Conclusions

This is the second post in my three-part series on the many issues that can affect the validity of survey results and/or the credibility of survey reports. As I wrote in my last post, surveys and survey reports have become important marketing tools for many types of B2B companies, including providers of marketing-related technologies and services. As a result, many B2B marketers are now both producers and consumers of survey-based content.

Whether they act as producers or consumers, marketers need to be able to evaluate the quality of survey-based content resources. Therefore, they need a basic understanding of the issues that can affect the validity of survey results and make survey reports more or less credible and reliable.

This post will discuss an issue that can easily undermine the credibility of a survey report. My next post will focus on some of the issues that can affect the validity of survey results.

The ‘sin’ of unjustified conclusions

A prerequisite for any reliable survey report is that it should contain only conclusions supported by survey data. This sounds like basic common sense – and it is – but unfortunately many survey reports include explicit or implicit conclusions that the survey findings do not actually support. In my experience, most unwarranted conclusions arise from blurring the lines between correlation and causation.

One of the basic principles of data analysis is that correlation does not establish causation. In other words, survey results may show that two events or conditions are statistically correlated, but this alone does not prove that one event or condition caused the other. Many survey reports emphasize the associations in the survey data, but few reports remind the reader that association does not necessarily imply causation.

The following chart provides an amusing example of why this principle is so important. The graph shows that from 2000 through 2009 there was a strong association (r = 0.992568) between the divorce rate in Maine and per capita consumption of margarine in the United States. (Note: To see this and other examples of nonsensical links, take a look at the Spurious Correlations website.)

I doubt any of us would argue that there is a causal relationship between the divorce rate in Maine and consumption of margarine despite the strong correlation. These two “variables” have no logical relationship.
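
The correlation itself is perfectly real, however. As a quick illustration, here is a minimal Python sketch (the numbers are illustrative approximations of the chart above, not authoritative figures) showing how two series that merely share a downward trend can produce a near-perfect Pearson r:

```python
import numpy as np

# Illustrative values roughly tracing the chart: both series
# simply decline over the decade 2000-2009.
divorce_rate  = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1])
margarine_lbs = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])

# Pearson correlation coefficient between the two series
r = np.corrcoef(divorce_rate, margarine_lbs)[0, 1]
print(f"r = {r:.3f}")  # close to 1, despite no causal link whatsoever
```

Any two quantities that happen to drift in the same direction over the same period will score this way, which is exactly why a high r, by itself, tells us nothing about causation.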

But when there is a plausible logical relationship between two events or conditions that are also closely statistically related, we humans are strongly inclined to conclude that one caused the other. This tendency can lead us to see a cause-and-effect relationship in cases where no actual causal relationship exists.

Here is an example of how this problem can appear in the real world. A well-known research company conducted a survey that focused mainly on capturing data about how the use of predictive analytics affects demand generation performance. The survey asked participants to rate the effectiveness of their demand generation process, and the survey report includes results (presented in the following table) showing the relationship between the use of predictive analytics and demand generation performance.

Based on these survey responses, the survey report states: “Overall, less than a third of the survey respondents reported having a B2B demand generation process that meets its goals well. However, when predictive analytics are applied, process performance goes up, and it effectively meets its set goals over half the time.” (emphasis in original)

The survey report doesn’t explicitly say that predictive analytics was the cause of the improved performance, but it does come very close. The problem is that the survey data doesn’t really support this conclusion.

The data show that there is a relationship between the use of predictive analytics and the effectiveness of respondents’ demand generation processes. But the survey report – and most likely the survey itself – did not address other factors that may have affected demand generation performance.

For example, the report does not indicate whether survey respondents were asked about the size of their demand generation budget, the number of demand generation programs they run in a typical year, or their use of marketing automation in their demand generation programs. If these questions had been asked, we might well have found that all of these factors are also related to demand generation performance.
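
A small simulation makes it easy to see how a hidden factor like budget could manufacture exactly this kind of association. The sketch below is purely hypothetical (the variable names, effect sizes, and thresholds are mine, not the research company’s): a larger budget makes a team more likely to adopt predictive analytics and independently improves performance, while adoption itself has zero direct effect. A naive comparison still shows adopters scoring noticeably higher:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden confounder: demand generation budget (arbitrary units).
budget = rng.normal(100, 20, n)

# Teams with bigger budgets are more likely to adopt predictive
# analytics -- but adoption itself does nothing in this model.
adopts_analytics = rng.random(n) < (budget - 60) / 80

# Performance depends ONLY on budget (plus noise); the direct
# causal effect of analytics is exactly zero here.
performance = 0.5 * budget + rng.normal(0, 5, n)

# Yet a naive comparison shows adopters outperforming:
print("mean performance, adopters:    ",
      performance[adopts_analytics].mean())
print("mean performance, non-adopters:",
      performance[~adopts_analytics].mean())
```

In this toy model the adopters outperform the non-adopters even though adoption changes nothing for any individual team; the entire gap is the budget’s doing. A survey that never asks about budget has no way to rule this explanation out.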

The bottom line is, when marketers conduct or sponsor surveys, they need to ensure that the survey report contains only those claims or conclusions that are legitimately supported by the survey data. And as consumers of survey research, marketers must always be on the lookout for unwarranted conclusions.

Top image courtesy of Paul Mison via Flickr CC.
