
Market Research Pros Reveal Data Collection Fails [and What They Did to Fix It]


You take pride in your work, so it can be tough going when things don’t go as planned. But getting a survey designed and in field is just the beginning. The real win is getting a project completed, feeling confident in the quality of participants’ answers, and doing an analysis that makes it all make sense.

There are many ways things can go wrong in a survey, and the list has grown over the years. A few of those undesirables in quantitative research include:

  • Biased questions
  • Lengthy questionnaires
  • Inadequate sample sizes
  • Inaccurate targeting
  • Non-mobile optimized design
  • Poorly written open-ends
  • Inconsistent rating scales

When things go wrong, you have to learn the hard way. Often, these failures are kept private, or shared only internally. But learning comes from openness and sharing. It takes a lot of guts to overcome that pit in your stomach and let your clients know when things didn’t go as planned.

At dtect, that pang of dread is our inspiration. Our platform’s mission is to keep problematic participants and bots from entering surveys, solving some of these problems before they start. A few of our industry colleagues took the brave step of sharing their own moments when things did not go as planned, in the hope that we can all learn and continually improve.

“We were once looking to find a niche audience for one of our travel clients. Our target was limited by age, income, geography, travel behaviors, and perceptions of specific types of travel. We knew finding who we were looking for would be challenging, so we started with a high incentive. We were doing a pretty good job collecting data over the first week in the field and thought we’d get out of the field on time (projected two weeks). Something seemed wrong when we started looking more closely at the data in preparation for an update with the client. The data wasn’t consistent, but the real tell was that answers to our open-ended questions were everywhere. Some responses were in other languages. We found two or three respondents with the same responses to every question. We thoroughly reviewed the data and found that nearly half of the data we collected was fraudulent, most likely due to the high incentive.

As a result, we had an honest conversation with the client about what happened and presented our plan to move forward. Ultimately, we had to keep a much closer eye on the data as we collected it and couldn’t collect as many completes as previously planned. Still, at the end of the project, we were confident in the data quality and the results and recommendations we delivered. Our client also felt good that we were diligent about data quality and had their best interest in mind.”

John Holmes, Senior Director of Research at MDRG

Thoughts on Survey Fraud

John’s team encountered a common issue many professional market researchers face: high incentives attract both legitimate and problematic participants. While such incentives can be an effective strategy to field projects quickly or to fairly compensate participants for longer, more in-depth surveys, in this case they also attracted bad actors. These fraudsters were likely connected with a survey farm, resulting in identical open-ended responses that couldn’t have been coincidental.

It’s crucial to identify this fraud. In this example, the MDRG team spotted duplicate answers while cleaning and prepping the data. Another way professionals assess response validity is to review survey completion times. Extremely short completion times may indicate speeding through the survey, while unusually long times could suggest distraction or multitasking. Consistent, unnaturally rapid completion times across multiple respondents might suggest bot usage. These data quality checks – such as looking for duplicate answers and analyzing completion times – are ingrained in research processes because the industry has come to expect some level of bad data.
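
To make these checks concrete, here is a minimal sketch in Python (pandas), assuming a survey export loaded into a DataFrame with hypothetical columns respondent_id, open_end, and completion_seconds; the duplicate-verbatim logic and timing thresholds are illustrative, not prescriptive:

  import pandas as pd

  def flag_suspect_responses(df: pd.DataFrame) -> pd.DataFrame:
      # Flag respondents whose open-ends or completion times warrant review.
      out = df.copy()

      # Duplicate open-ends: identical verbatims across respondents
      # (ignoring case and whitespace) are a classic survey-farm tell.
      normalized = out["open_end"].fillna("").str.strip().str.lower()
      out["duplicate_open_end"] = normalized.duplicated(keep=False) & normalized.ne("")

      # Completion-time outliers: extremely fast completes suggest speeding
      # or bots; extremely slow ones suggest distraction or multitasking.
      median_time = out["completion_seconds"].median()
      out["too_fast"] = out["completion_seconds"] < median_time / 3
      out["too_slow"] = out["completion_seconds"] > median_time * 3

      out["review_flag"] = out[["duplicate_open_end", "too_fast", "too_slow"]].any(axis=1)
      return out

In practice, flagged records are reviewed by a researcher rather than dropped automatically, since legitimate respondents occasionally give short, common answers or step away mid-survey.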

Researchers rely on panel suppliers to provide thoroughly vetted participants, but each supplier has its own standards and leverages technology to varying degrees. While commonplace and often necessary, drawing on multiple sample sources without a consistent way to assess quality across suppliers opens up more opportunities for fraudulent data to affect a survey. When developing partnerships with sample providers, it is crucial to understand their recruiting processes. It is equally important to know what happens after someone is recruited to a panel: what is done to continually vet and validate participants? Though it is impossible to eliminate all fraud in survey research, understanding the quality control methods your suppliers use is critical to choosing the right partners.

The journey to high data quality requires diligence throughout the research process. Internally, the first step is creating a data quality management (DQM) strategy. Outlining standard operating procedures gives team members a consistent process to follow, creates a shared vocabulary, and keeps the focus on delivering high-quality data. A platform like dtect can be a strong component of any DQM plan, providing powerful capabilities to prevent fraud from entering surveys and reducing the pressure on post-collection data cleaning. Such platforms can also provide insight into suppliers’ historical performance to inform decisions about which partners to include in the sample. We celebrate the savvy researchers who spend less time cleaning up data collection fails and more time delivering wins for their clients.

“In the past, we asked for a zip code or state up front, and then whichever was asked up front, we asked again at the back and compared. It usually caught a small percentage of mismatches, which we might reject. To provide a less annoying [read: better] experience for survey participants, we followed the advice of the panels and exchanges, and instead of asking for the zip code, we piped it from the original database. Then, the spot-check question was moved to the middle of the survey. With this minute change, our rejects went up a lot – by about four times. For a while, no one put together what was happening (mainly because the people reviewing the data were not necessarily part of the move to pipe the zips and didn’t think back to it being mentioned to them). It eventually got to my desk, where it was plain as day that the panel database was either incorrect or simply out of date. We now recommend that if a demographic is integral to project analysis, it should not be a data point pulled from the sample database. Even at the risk of a longer interview length (LOI) or spending more time and money in field, this is a better approach than relying on the panel database to be accurate.”

Scott Farrell, Chief Operating Officer at Gazelle Global
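
Scott’s spot-check boils down to comparing a value piped from the sample database against the same value collected in-survey. A small illustrative sketch in Python (pandas), assuming hypothetical columns panel_zip (piped from the supplier’s database) and asked_zip (collected mid-survey):

  import pandas as pd

  def zip_mismatch_rate(df: pd.DataFrame) -> float:
      # Normalize both sources so values like "7030" and "07030" compare equal.
      panel = df["panel_zip"].astype(str).str.strip().str.zfill(5)
      asked = df["asked_zip"].astype(str).str.strip().str.zfill(5)
      # A sudden jump in this rate (like the roughly fourfold increase Scott
      # describes) can point to stale or incorrect panel data rather than to
      # dishonest respondents.
      return panel.ne(asked).mean()

Tracking this rate per supplier over time makes it easier to tell whether mismatches are coming from participants or from the sample database itself.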

Engaging Participants Effectively

No one likes to be asked the same question twice, but this can be part of a data quality check. If spot-check questions need to be added, it is all the more important that researchers design concise and engaging surveys. Best practices:

  1. Always start with the objective when writing a questionnaire
  2. Don’t frustrate people with long screeners or paths to qualify
  3. Think of screeners as a funnel that narrows down your audience to those who can answer the questions you’ll ask
  4. Use multiple-choice questions when possible. Answers chosen from a list are more reliable, plain and simple

With decades of experience in global survey work, I am constantly reminded how important it is to have partners who stay on top of the new technologies that emerge to defraud the survey industry. We partner with dtect because chasing the latest “trend” in tech or platform usage on our own is not a sound strategy. We always recommend that our clients include the dtect platform in their projects. Fraudsters are constantly upping their game, and potential pitfalls cannot easily be seen from our vantage point. While we stay focused on our clients’ work, we need to know we employ cutting-edge technology to help us deliver the highest data quality for better business outcomes.

Delivering rigorous research is a tough job. Every day, we reflect on the gnawing unease researchers feel when encountering bad data, and that feeling drives our mission to prevent survey fraud before it starts.

Got a story to share about when things have gone wrong? Want to talk about how to avoid that happening again? We’d love to hear from you!

Keep yourself and your data up-to-date

Subscribe