You take pride in your work, so it can be tough going when things don’t go as planned. But getting a survey designed and in field is just the beginning. The real win is getting a project completed, feeling confident in the quality of participants’ answers, and doing an analysis that makes it all make sense.
There are many ways things can go wrong in a survey, and the list has grown over the years. A few of those undesirables in quantitative research include:
- Biased questions
- Lengthy questionnaires
- Inadequate sample sizes
- Inaccurate targeting
- Non-mobile optimized design
- Poorly written open-ends
- Inconsistent rating scales
When things go wrong, you have to learn the hard way. Often, these failures are kept private or shared only internally. But real learning comes from openness. It takes guts to overcome that pit in your stomach and let your clients know when things didn’t go as planned.
At dtect, that pang of dread is our inspiration. Our platform’s mission is to prevent problematic participants and bots from entering surveys, solving some of these problems before they start. A few of our industry colleagues took the brave step of sharing their own moments when things did not go as planned, in hopes that we all learn and continually improve.
As a result, we had an honest conversation with the client about what happened and presented our plan to move forward. Ultimately, we had to keep a much closer eye on the data as we collected it and couldn’t collect as many completes as previously planned. Still, at the end of the project, we were confident in the data quality and the results and recommendations we delivered. Our client also felt good that we were diligent about data quality and had their best interest in mind.”
John Holmes, Senior Director of Research at MDRG
Thoughts on Survey Fraud
John’s team encountered a common issue many professional market researchers face—high incentives attract both legitimate and problematic participants. While such incentives can be an effective strategy to field projects quickly or to fairly compensate participants for longer, more in-depth surveys, in this case they also attracted bad actors. These fraudsters were likely connected to a survey farm, as evidenced by identical open-ended responses that couldn’t have been coincidental.
It’s crucial to identify this fraud. In this example, the MDRG team spotted duplicate answers while cleaning and prepping the data. Another way professionals assess response validity is to review survey completion times. Extremely short completion times may indicate speeding through the survey, while unusually long times could suggest distraction or multitasking. Consistent, unnaturally rapid completion times across multiple respondents might suggest bot usage. These data quality checks – such as looking for duplicate answers and analyzing completion times – are ingrained in research processes because the industry has come to expect some level of bad data.
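The two checks described above can be sketched in a few lines of code. This is a minimal illustration, not dtect’s actual detection logic; the respondent records and the 120-second speeding threshold are hypothetical and would vary by survey length.

```python
from collections import Counter

# Hypothetical respondent records: (id, open-end answer, completion seconds).
responses = [
    ("r1", "I like the bold flavor and the price.", 412),
    ("r2", "Easy to find in stores.", 390),
    ("r3", "I like the bold flavor and the price.", 95),   # verbatim duplicate
    ("r4", "Too expensive for what you get.", 93),          # suspiciously fast
    ("r5", "Packaging could be smaller.", 505),
]

def flag_suspects(records, min_seconds=120):
    """Flag IDs whose open-end text repeats verbatim across respondents
    or whose completion time falls below a speeding threshold."""
    counts = Counter(text.strip().lower() for _, text, _ in records)
    flagged = set()
    for rid, text, secs in records:
        if counts[text.strip().lower()] > 1:  # duplicate open-ended answer
            flagged.add(rid)
        if secs < min_seconds:                # likely speeding through survey
            flagged.add(rid)
    return sorted(flagged)

print(flag_suspects(responses))  # → ['r1', 'r3', 'r4']
```

In practice, teams would also look for near-duplicates (not just exact matches) and for clusters of unnaturally consistent completion times, which can indicate bots.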
Researchers rely on panel suppliers to provide thoroughly vetted participants, but each supplier has its own standards and leverages technology to varying degrees. While commonplace and often necessary, drawing on multiple sample sources without a consistent way to assess quality across suppliers opens up more opportunities for fraudulent data to affect a survey. When developing partnerships with sample providers, it is crucial to understand their recruiting processes. It is equally important to know what happens after someone joins a panel: how are participants continually vetted and validated? Though it is impossible to eliminate all fraud in survey research, understanding the quality control methods your suppliers use is critical to choosing the right partners.
The journey to high data quality requires diligence throughout the research process. Internally, the first step is creating a data quality management (DQM) strategy. Outlining standard operating procedures gives team members a consistent process to follow, creating a shared vocabulary and a focus on delivering high-quality data. A platform like dtect can be a strong component of any DQM plan, providing powerful capabilities to prevent fraud from entering surveys and reducing pressure on post-collection data cleaning. Such platforms can also provide insight into suppliers’ historical performance to inform decisions about which partners to include in the sample. We celebrate the savvy researchers spending less time cleaning up data collection fails and more time delivering wins for their clients.
Scott Farrell, Chief Operating Officer at Gazelle Global
Engaging Participants Effectively
No one likes to be asked the same question twice, but repetition can be part of a data quality check. If spot-check questions need to be added, it is all the more important that researchers focus on designing concise and engaging surveys. Best practices:
- Always start with the objective when writing a questionnaire
- Don’t frustrate people with long screeners or paths to qualify
- Think of screeners as a funnel that narrows down your audience to those who can answer the questions you’ll ask
- Use multiple-choice questions when possible. Answers chosen from a list are more reliable, plain and simple
“With decades of experience in global survey work, I am constantly reminded how important it is to have partners who stay on top of the new technologies that emerge to defraud the survey industry. Chasing the latest ‘trend’ in tech or platform usage is not a good idea, which is why we partner with dtect and always recommend that our clients include the dtect platform in their projects. Fraudsters are constantly upping their game, and potential pitfalls cannot easily be seen from our vantage point. While we stay focused on our clients’ work, we need to know we employ cutting-edge technology to help us deliver the highest data quality for better business outcomes.”
Delivering rigorous research is a tough job. Every day, we reflect on the gnawing unease researchers feel when encountering bad data. And this drives our mission to prevent survey fraud before it starts.
Got a story to share about when things have gone wrong? Want to talk about how to avoid that happening again? We’d love to hear from you!