“People don’t think what they feel, they don’t say what they think, and they don’t do what they say.”
David Ogilvy, founder of Ogilvy & Mather
Research surveys have become a prominent feature of the B2B marketing landscape in recent years. The growing need to produce effective thought leadership content and easy access to free or inexpensive survey tools have led many B2B marketers to make surveys an integral part of their marketing efforts.
Existing survey technology makes it relatively easy to build questionnaires and conduct online surveys. But these tools do not eliminate the challenges involved in designing surveys that produce accurate results. Survey research is a complex subject, and experience is required to do it well. The validity of survey data can be affected by many factors, many of which are not intuitively obvious.
This is my third post dealing with a range of issues related to research surveys and survey reports. The first two posts in the series are available here and here.
In this post, I will focus on a range of issues that can affect the accuracy of survey answers. These issues are by no means the only factors that can impair the validity of survey data, but they are important for marketers to understand as producers and consumers of survey-based content.
Experienced researchers have long been aware that survey respondents do not always answer survey questions accurately or honestly. Response bias is the general term used to describe the various tendencies of survey participants to respond inaccurately to survey questions. These biases can be conscious or unconscious, and they can have a significant impact on the validity of survey data.
Response biases can occur for a variety of reasons, and social scientists have identified several forms of response bias. Here are some of the most common.
Acquiescence bias (sometimes called agreement bias or yea-saying) – Acquiescence bias refers to the tendency of some survey respondents to choose a "positive" response option (e.g., agree rather than disagree) regardless of their actual opinion or preference. This bias is more likely to appear when a survey question asks for an opinion and offers two opposing options, such as agree/disagree or true/false.
Demand characteristics – This term refers to a type of response bias in which survey participants change their responses simply because they are taking a survey. Social scientists argue that this bias arises because some survey participants try to determine the purpose of the survey and then, often unconsciously, adjust their responses to fit that perceived purpose.
Question order bias – This term refers to the tendency of some survey participants to answer questions differently depending on the order in which the questions are asked. For example, research has found that if survey participants are first asked about their general interest in a topic, their answers indicate greater interest than if they are first asked a specific or technical question about the same topic.
Extreme response bias – Extreme response bias refers to the tendency of some survey participants to choose only the most extreme response options available. For example, if a survey presents a series of statements and asks participants to rate their agreement or disagreement with each statement on a 5-point scale, some participants will only give answers of 1 or 5.
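One hypothetical way to screen collected data for this pattern is to compute, for each respondent, the share of answers that fall at the scale endpoints. The sketch below assumes 5-point Likert responses stored as lists of integers; the respondent data and the 0.8 flagging threshold are illustrative assumptions, not a research standard.

```python
# Sketch: flag respondents whose Likert answers cluster at the scale
# endpoints (1 or 5), a possible sign of extreme response bias.
# The data and the 0.8 threshold below are illustrative assumptions.

def endpoint_share(answers, low=1, high=5):
    """Fraction of a respondent's answers at the scale endpoints."""
    return sum(1 for a in answers if a in (low, high)) / len(answers)

def flag_extreme_responders(responses, threshold=0.8):
    """Return IDs of respondents whose endpoint share meets the threshold."""
    return [rid for rid, answers in responses.items()
            if endpoint_share(answers) >= threshold]

responses = {
    "r1": [1, 5, 5, 1, 5, 1],   # all endpoint answers (share = 1.0)
    "r2": [2, 3, 4, 3, 2, 4],   # midscale answers (share = 0.0)
    "r3": [5, 5, 4, 1, 5, 1],   # mostly endpoints (share = 5/6)
}

print(flag_extreme_responders(responses))  # → ['r1', 'r3']
```

A flagged respondent is not necessarily biased, of course; this kind of screen only identifies response patterns worth a closer look.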
There are many steps that survey designers can take to mitigate the impact of response biases, but none of these approaches are likely to be 100% effective.
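One such mitigation step, randomizing question order across respondents so that any order effect averages out over the sample instead of pushing every respondent the same way, can be sketched as follows. The question texts and respondent IDs are invented for illustration.

```python
import random

# Sketch: give each respondent a reproducible random question order.
# Seeding the RNG with the respondent ID means the same person always
# sees the same order, while different people see different orders.
# Question texts below are invented examples.

QUESTIONS = [
    "How interested are you in data privacy overall?",
    "How familiar are you with end-to-end encryption?",
    "How often do you review app permissions?",
]

def ordered_questions(respondent_id, questions=QUESTIONS):
    """Return a per-respondent random ordering of the questions."""
    rng = random.Random(respondent_id)  # deterministic per respondent
    shuffled = list(questions)
    rng.shuffle(shuffled)
    return shuffled

print(ordered_questions("alice"))
print(ordered_questions("bob"))
```

Randomization dilutes question order bias but does not address biases rooted in the wording of individual questions, which is one reason no single mitigation is fully effective.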
Social desirability bias
One type of response bias has become particularly relevant for marketers in today's business environment. Social desirability bias refers to the tendency of survey participants to answer survey questions in a way they believe will be viewed favorably by others, as opposed to what they actually think, feel, or do.
This bias is more likely to exist when survey questions relate to sensitive personal topics – such as alcohol consumption or drug use – or social “hot button” topics – such as climate change or racial diversity. Social desirability bias can lead to overreporting of “good” or socially acceptable attitudes and behaviors and underreporting of attitudes and behaviors that survey participants perceive as “bad” or socially unacceptable.
It is important for marketers to be aware of social desirability bias because a growing number of experts now argue that corporate social responsibility (CSR) has become an important factor in customer buying decisions, and that companies should therefore include "social purpose" messages in their marketing programs.
At first glance, this argument appears to be supported by a large body of research. Numerous surveys conducted in recent years have found that many people – especially young people – are becoming more socially aware and increasingly expect business organizations to play a more active role in addressing social problems. Here are some examples of this research:
- In Edelman’s 2021 Trust Barometer survey of consumers in 27 countries, 86% of respondents said they expect the brands they buy to take one or more of several actions. These actions included facing social challenges, creating positive change in society, dealing with political issues, and making our culture more accepting.
- In Accenture Strategy’s 2019 Global Consumer Pulse survey of consumers in 36 countries, 74% of younger consumers (mainly Millennials and Generation Z) said they want companies to take a stand on issues “close to their heart,” and more than 50% of them said they have shifted part of their spending away from a provider when the company disappointed them with its words or actions on a social issue.
- In a 2017 survey of consumers in 16 countries by BBMG and GlobeScan, nearly two-thirds (65%) of respondents said they wanted to support companies with a strong purpose.
The real takeaway here is that marketers should be careful about relying on survey research that is susceptible to social desirability bias. Questions about social values and corporate purpose are precisely the kind that invite socially desirable answers, so the attitudes stated in surveys like these may overstate how much social purpose actually drives buying behavior.