Survey design seems simple enough. Ask questions, get answers, and analyze data. But if you’ve ever stared at a blank document wondering how to write that first question, you know it’s not straightforward. And while everyone talks about data quality in market research, practical guidance on survey design often feels like trying to find customer service contact information – you know it should be there, but good luck finding it.
Many researchers learn survey design through trial and error, picking up habits (both good and bad) along the way. These habits accumulate over time, leading to questionnaires that might feel right but actually compromise data quality from the very first question.
Instead of giving you another theoretical framework or adding to your growing collection of bookmarked “best practices” articles, we got practical. We took a deep dive into survey design to reveal five cardinal “sins” that can compromise your data quality. Even experienced researchers occasionally fall into these traps, often without realizing it. Here are the five most common mistakes we see in survey design:

1. Using unnecessarily complex language
2. Letting the survey run too long
3. Asking compound (double-barreled) questions
4. Letting bias leak into the questions
5. Ordering questions without a logical flow
You might glance at this list and think, “I would never do that,” but no one ever sets out to create a flawed survey. Each of these mistakes is made for a reason that seems like a good idea at the time. One by one, they stack up to create a survey that delivers poor-quality data, and no one can build great insights from that. Let’s look at some examples.
Researchers don’t set out to confuse survey participants with unnecessarily complex language. This can happen entirely by accident, which means it’s worthy of special attention in consumer market research. We frequently see two common causes: fancy language and jargon.
When looking at an individual survey question, it can seem like a good time to flex those vocabulary muscles. Read this: “Which pharmaceutical dispensary is in closest proximity to your primary residence?” Does anyone talk like that? No, and it’s confusing. It can and should be worded more simply, using commonplace language. An online survey question yields better, more natural answers when it is written the way people actually speak.
Industry jargon and obscure terminology are common culprits. For example, health and beauty experts may use the term “spoolie” to refer to the tool created for applying mascara. While this may be an important term to your R&D department, what you need to be asking people about is a “mascara wand.” When the results come back from your survey, feel free to translate how humans speak back into the language your team needs, but when writing a questionnaire, be sure to use common terms.
Many researchers have used the 8th-grade rule with success. Checking your questions for comprehension at this reading level helps – not because market research participants cannot read past it, but because simplicity brings comfort, and comfort creates flow as people read and answer questions. When participants struggle to understand what you’re asking, they spend more time puzzling over the survey and less time sharing their thoughts, beliefs, needs, and wants. To gauge how participants will respond to your language, read the question to someone you know who is not in the industry you’re researching and watch their reaction. If no friend is readily available, ask your generative AI platform of choice to act as a member of your target audience and assess your writing for clarity and understandability.
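If you prefer a programmatic check, the 8th-grade rule can be approximated with the Flesch-Kincaid grade-level formula. Below is a minimal Python sketch, not a production readability tool: the syllable counter is a naive heuristic, and the two example questions reuse the jargon discussion above.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count runs of consecutive vowels."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually doesn't add a syllable.
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Approximate U.S. reading grade level of the given text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

jargon = "Which pharmaceutical dispensary is in closest proximity to your primary residence?"
plain = "Which drug store is closest to your home?"

print(f"jargon: grade {flesch_kincaid_grade(jargon):.1f}")  # well above grade 8
print(f"plain:  grade {flesch_kincaid_grade(plain):.1f}")   # well below grade 8
```

Dedicated readability libraries use more careful syllable counting, but even this rough check flags the jargon-laden version as college-level reading while the plain wording lands comfortably under the 8th-grade threshold.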
Surveys are notorious for scope creep. What starts as a focused 3-minute questionnaire often balloons into a 32-minute marathon. While additional stakeholder input can bring valuable perspective, each new question should be weighed against the survey’s original purpose. Every extra minute increases participant fatigue and the risk of dropout.
Think about it: How many times have you seen a survey grow from “just a few quick questions” into something that rivals a college entrance exam? The reality is that longer surveys don’t necessarily mean better insights. In fact, they often lead to lower completion rates and less thoughtful answers as participants lose focus and rush to finish.
Stay focused on your original research objectives. Before adding new questions, ask yourself: Does this directly help answer our core research questions? Will this information actually be used in the analysis? Short, purposeful surveys typically yield better data than lengthy ones trying to cover every possible angle. If you must add questions, consider breaking the research into multiple shorter surveys rather than creating one exhaustive questionnaire.
The compound question trap is an easy one to fall into. You might think you’re being efficient by asking about multiple things at once: “How satisfied are you with the cost, speed, and reliability of your home internet service?” But which part of that question are participants actually answering? Are they happy with the speed but not the cost? The answer becomes meaningless because you don’t know what aspect drove their response.
These double- or triple-barreled questions force participants into mental gymnastics as they try to weigh multiple factors simultaneously. While you might think you’re saving space in the survey, you’re actually gathering less usable data. Even worse, these complex questions often lead to higher dropout rates as participants encounter questions that feel impossible to answer accurately.
Break compound questions into separate, focused inquiries. Yes, this means more individual questions, but each one will provide clear, actionable data. If you need to understand multiple aspects of a service or product, ask about each separately. This makes the questions easier to answer and provides much more valuable data for analysis.
Surveys are created to understand people. That very notion is predicated on the fact that we can’t read the minds of others, and so we must ask and explore their thoughts, beliefs, behaviors, and experiences to know them. But as humans, we make assumptions about others all the time. It’s understandable, but when you put your researcher’s hat on, this has to be curbed.
When your own opinions show through in the survey, you are subtly manipulating the data. Here’s an example: “On a scale of 1-5, how incredibly frustrating is it when you can’t find the customer service contact information that should be clearly displayed on the website?” This scale becomes meaningless when the question already presumes the respondent’s frustration.
Unbiased responses are the goal, so review every question critically for any (hopefully) unintentional bias that has crept into the survey. Three fundamental principles that seasoned survey methodologists emphasize for minimizing bias: use neutral wording, never presume how a participant feels or behaves, and keep answer options and scales balanced.
Survey flow matters more than many researchers realize. A common mistake is jumping between topics without any logical progression, making participants feel like they’re playing mental ping-pong. One minute they’re answering questions about their morning routine, the next about their retirement plans, then suddenly back to what brand of toothpaste they use.
Even worse is when we dive into specific details before establishing basic context. Asking someone about their detailed preferences for a product feature before confirming they’ve ever used the product creates confusion and potentially unusable data.
Think of your survey as a conversation. Just as you wouldn’t jump between unrelated topics when chatting with someone, your survey should follow a natural flow. Start with broader questions to establish context before drilling down into specifics. For example, begin with “Which of these streaming services do you use?” before asking detailed questions about viewing habits on each platform.
Group related questions together and use clear transitions when changing topics. This helps participants stay engaged and provide more thoughtful responses, ultimately leading to better quality data.
At dtect, we know that quality insights don’t happen by accident. Proper design, sourcing, and field management all play a key role in delivering quality data to analyze and in ensuring the reliability of insights. We work tirelessly to keep bad actors, bots, and the unengaged out of your surveys through effective survey fraud prevention. However, the first step – writing a high-quality survey – should not be overlooked; its importance cannot be overstated.
When you have that polished survey ready to go, we’ll be here to help you make sure that fraudulent participants don’t respond to it!