Blunder. Miscalculation. Goof. Whatever you call it, a mistake in survey design spells trouble, degrading the accuracy of your data and of any conclusions you draw from it.
Unfortunately, the distinction between ‘good’ and ‘bad’ survey design isn’t always obvious. To the untrained eye, a double-barrelled question is just a question. But to a market researcher grounded in survey design best practices, a double-barrelled question is a missed opportunity to reliably gauge a consumer’s opinion.
What exactly are survey errors, and why do they matter?
Simply put, survey errors are mistakes made during the construction and implementation of a survey instrument. A redundant questionnaire, for instance, may prompt straight-lining – a phenomenon that occurs when respondents lose motivation and begin giving the same answer to every question.
Since the goal of surveying is to make inferences about a larger population of interest using a sample, straight-lining and other types of response bias compromise a data set’s predictive power. This means you could invest loads of time, energy, and money in a survey, only for it to fall flat.
What causes survey errors?
There are two general ‘flavours’ of survey errors: sampling and non-sampling.
Sampling error is the extent to which the sample deviates from the population. Sadly, even a well-designed survey won’t eradicate sampling error entirely. However, it can be reduced by drawing a larger sample from a representative audience.
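For the statistically minded, a rough way to quantify sampling error is the familiar margin-of-error formula. The sketch below assumes simple random sampling, a proportion-type question, and a 95% confidence level:

```latex
% Margin of error for an estimated proportion p with sample size n,
% assuming simple random sampling (z = 1.96 for 95% confidence).
\[
  \mathrm{MOE} = z \sqrt{\frac{p(1-p)}{n}}
  \qquad\text{e.g. } p = 0.5,\; n = 1000 \;\Rightarrow\;
  \mathrm{MOE} \approx 1.96\sqrt{\frac{0.25}{1000}} \approx 3.1\%
\]
```

Note that doubling the sample only shrinks the margin of error by a factor of about 1.4, which is why the representativeness of the sample matters as much as its sheer size.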
By contrast, non-sampling error occurs when questionnaires stray from survey design best practices. As is the case with conformity bias and acquiescence bias, the way questions are worded affects the generalisability of the resulting data. Fortunately, non-sampling errors can be mitigated through good survey design.
The consequences of survey errors
Survey errors come with some pretty tough consequences. If there are problems with your surveys, the quality of your data will likely suffer because of bias in the responses. Here are just a few types of survey response bias to consider:
Response bias
Response bias occurs when survey respondents provide answers that are not entirely truthful or accurate. This can happen due to various reasons, such as social desirability bias, where participants provide responses that they believe are socially acceptable or desirable. Response bias can significantly impact the validity of survey results and lead to misleading conclusions.
Non-response bias
Non-response bias refers to the situation where a significant number of invited participants do not respond to the survey. This can introduce bias into the data, as those who choose not to participate may have different opinions or characteristics compared to those who do respond. It's important to minimise non-response bias through strategies such as reminders, incentives, and follow-up communication.
What common survey design mistakes contribute to survey errors and how can you mitigate them?
1. Not catering to mobile responders
Gone are the days when respondents completed surveys only on desktop computers. Today, 30 to 40% of people are answering questionnaires on their mobile devices, often during fragmented bits of leisure time researchers call ‘time confetti.’
In short, if you are not optimising your surveys to engage mobile responders, you are only capturing feedback from a narrow segment of your audience. This is an example of sampling bias. Get full representation by rolling out mobile-friendly surveys that render well on any device and screen size.
2. Ignoring best practices for survey length
The war for consumer attention has reached fever pitch. The proliferation of smartphones has brought a wave of distractions: a cacophony of social media notifications, emails, and phone calls that easily divert respondents’ attention.
The solution? Keep surveys short – preferably under 12 minutes, though 10 minutes is better. The longer the survey, the higher the dropout rate. Kantar has found that a survey that takes over 25 minutes loses more than three times as many respondents as one that is under five minutes.
3. Redundancy, redundancy, redundancy
A voluntary questionnaire has no room for excess. Asking questions that prompt near-identical responses annoys consumers and triggers survey drop-out. A redundant survey asks fundamentally the same question again, just in different words.
Consider this: if you ask respondents to rate a grocery product on a Likert scale for “Healthy” and then on another scale for “Nutritious,” you will get very similar results for the pair. Likewise, you will get similar data if you ask a respondent to rate an online retailer on its “Convenience” and “Ease of Use” separately.
Options with overlapping meanings are redundant: the survey becomes unnecessarily long, and the survey-taker is likely to become frustrated and abandon it altogether.
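If you have pilot data, one way to spot this kind of overlap is to check how strongly paired items correlate. Here is a minimal sketch; the column names and ratings are hypothetical, and the 0.8 cut-off is just an illustrative rule of thumb:

```python
import pandas as pd

# Hypothetical pilot responses: 1-5 Likert ratings for two candidate items.
pilot = pd.DataFrame({
    "healthy":    [5, 4, 4, 3, 5, 2, 4, 5],
    "nutritious": [5, 4, 5, 3, 5, 2, 4, 4],
})

# A very high inter-item correlation suggests the two questions are
# measuring the same thing, so one of them can probably be dropped.
corr = pilot["healthy"].corr(pilot["nutritious"])
if corr > 0.8:
    print(f"Correlation {corr:.2f}: items look redundant - consider keeping only one.")
```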
4. Asking questions respondents can’t necessarily answer
Assumptive questions presume knowledge or experience the respondent may not have. For instance, a survey might ask in-depth questions about a bank’s customer service, mobile app, security measures, and so on, when respondents are only aware of the brand.
Just because a respondent has heard of the bank doesn’t mean they have direct experience of using it. When faced with questions they can’t answer, many respondents will abandon the questionnaire or bend the truth.
This problem is easily resolved by first asking whether a respondent has experience with the bank, then applying routing so that those who don’t skip the detailed questions.
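In practice this is simple skip logic. A minimal sketch of how such routing might look is below; the question IDs and the screening condition are hypothetical and not tied to any particular survey platform:

```python
# Hypothetical skip logic: only customers of the bank see the detailed block.
def route(respondent_answers: dict) -> list[str]:
    """Return the list of question IDs this respondent should see."""
    questions = ["q1_aware_of_bank", "q2_is_customer"]

    # Screening question: only route to the in-depth block if the
    # respondent reports direct experience with the bank.
    if respondent_answers.get("q2_is_customer") == "yes":
        questions += ["q3_customer_service", "q4_mobile_app", "q5_security"]

    return questions

# Example: a brand-aware non-customer skips the detailed questions.
print(route({"q1_aware_of_bank": "yes", "q2_is_customer": "no"}))
```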
5. Failing to pare down verbose questions
Let’s face it – we are all pressed for time. So, when faced with an overly wordy question or a question with a long list of responses, survey-takers are likely to drop out or simply select a random answer. Neither is ideal.
For better results, trim the question by removing superfluous instructions or unnecessary introductory text.
Look at this overly-worded example: “Now thinking of the last time you purchased a cold beverage (such as cordial, soda, juice, bottled water etc.). This could have been from a café, restaurant, supermarket or gas station. Please rate on a scale of 1-10 where 1 is strongly disagree and 10 is strongly agree. How important the following factors were in influencing your decision?”
Instead ask, “Thinking about the last time you purchased a cold beverage, rate how important the following factors were when deciding to buy it.”
The question is easier to read and digest, and the rating scale can be shown alongside the question rather than spelled out in the text. Respondents are more likely to answer honestly and accurately.
6. Using biased and leading language
‘Did you enjoy our delicious ice cream?’ This question seems innocuous enough. After all, who wouldn’t enjoy a product described as ‘delicious’? But when subjective adjectives are incorporated into surveys, they prime the respondent to offer a positive answer. Even worse, leading questions can make survey-takers feel as if they are being manipulated into offering a certain response.
Instead, explore an empathetic approach to survey design. Help respondents feel more comfortable telling the truth. Refrain from using adjectives like ‘delicious’ and instead use plain, easy-to-understand language.
7. Asking complicated or overly personal questions
Some survey questions are nearly impossible to answer honestly. For example, if a question asks respondents to estimate how much money they have spent on alcohol products over the past six months, most would only be able to hazard a rough guess. Even if they do know, they may feel embarrassed to offer an honest answer.
Rather than asking survey-takers to reflect on their own behaviour, ask them to report on the behaviour of people they know. For example, you may ask respondents to predict how much money a friend may spend on alcohol during a night out.
This approach is derived from a classic psychology experiment. One group of students was asked whether they thought they would clean up after a meeting, and half said they would. Another group was asked to predict what percentage of students would clean up after a meeting, and they guessed 15%. The actual figure was 13%. In short, this projective technique can help you get closer to the truth.
Subscribe to learn more
Interested in learning more about designing effective online surveys? Subscribe using the form below to receive a monthly research tip on online survey design, sampling, and data integration. You'll also be notified when new Online Survey Training Modules are released.