
Questions to ask during data analysis: 3 expert researchers weigh in

At dtect, we don’t do research – we make research better. While our platform works tirelessly to maintain data quality by keeping fraudulent participants out of your surveys, we’re constantly learning from researchers about what happens after data collection. Understanding how analysts approach their work helps us build better tools to protect data quality from the start. That’s why we reached out to three seasoned researchers to share their analytical approaches.

What we learned might surprise you. Or maybe not, if you’ve spent any time staring at a dataset wondering if that outlier is trying to tell you something important or if it’s just another bot having a particularly creative day. In the spirit of market research excellence, here’s what these experts had to say.

Challenging your assumptions: the devil’s advocate approach

Dominic Carter, CEO of The Carter Group, brings up a crucial point about confirmation bias in analysis:

“In analysis, when I’m digging into the data or reviewing source materials and I think I’m onto a promising story, I always ask myself, ‘What’s the opposing point of view? What evidence can I find against my story about people and the forces acting on them?’ You should keep asking those questions until you feel really confident that you have thought about an issue from all sides.”

– Dominic Carter, CEO, The Carter Group

This approach is particularly relevant in today’s research landscape, where the pressure to find compelling narratives can sometimes override methodological rigor. Carter’s method of actively seeking contradictory evidence serves as a built-in quality check for analysis. It’s not just about finding a story in the data – it’s about finding the right story.

The devil’s advocate approach becomes even more critical when dealing with large datasets. When you have thousands of responses, it’s tempting to focus on the patterns that confirm your initial hypotheses. However, this is precisely when you need to dig deeper and look for evidence that might challenge your assumptions.
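One way to put this into practice with a large dataset is to recompute a headline finding within each subgroup and see whether any segment tells the opposite story. The sketch below is a minimal illustration in Python with pandas, not part of any expert’s actual workflow; the DataFrame and column names (segment, price_sensitivity, churn_intent) are hypothetical stand-ins.

```python
import pandas as pd

def check_opposing_evidence(df: pd.DataFrame, group_col: str,
                            x_col: str, y_col: str) -> pd.DataFrame:
    """Compute the headline correlation between x_col and y_col, then
    recompute it within each subgroup to surface segments whose
    direction contradicts the overall story."""
    overall = df[x_col].corr(df[y_col])
    by_group = (
        df.groupby(group_col)
          .apply(lambda g: g[x_col].corr(g[y_col]))
          .rename("correlation")
          .to_frame()
    )
    # A subgroup contradicts the story when its correlation has the
    # opposite sign to the overall correlation.
    by_group["contradicts_story"] = (by_group["correlation"] * overall) < 0
    print(f"Overall correlation: {overall:.2f}")
    return by_group

# Hypothetical usage: does the headline "price sensitivity drives churn
# intent" pattern hold inside every segment, or only in aggregate?
# report = check_opposing_evidence(survey_df, "segment",
#                                  "price_sensitivity", "churn_intent")
```

A check like this won’t prove a story wrong, but it forces you to look at the evidence a skeptic would reach for first.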

This kind of thorough analysis only works when you start with clean data. After all, you can’t effectively challenge your assumptions if you’re not confident in the data’s integrity to begin with. That’s where preventative measures against survey fraud become crucial – they ensure that the opposing viewpoints you find are genuine participant perspectives, not artifacts of poor data quality.

Finding the human story: empathy in numbers

Hannibal Brooks, Senior Insight Manager at Olson Zaltman, takes a uniquely human-centered approach to data analysis:

“One main question I like to start with is ‘How does this data let us empathize with this segment of consumers?’ If we can’t look at the data and realistically imagine how it relates to how a decision-maker is thinking or feeling in a specific moment or area of life, we need to reconfigure our approach, or possibly, ask a different question.

When the results seem off, I take a different tack. First, it’s to look for how systematically wrong something is. Basically, ‘Is this error telling a story, or a series of random lies?’ Because when data is off, it *could* just be a matter of perspective.”

– Hannibal Brooks, Senior Insight Manager, Olson Zaltman

This perspective shifts the focus from pure statistics to genuine understanding.

It’s particularly valuable in an age where automated analysis and AI-generated responses can make data feel sterile and disconnected from real human experiences. Brooks reminds us that behind every data point is a real person making real decisions. When the data feels removed from human reality, that can itself be a sign of quality issues.

Brooks also offers a practical approach to handling suspicious data: “Is this error telling a story, or a series of random lies?” This distinction is crucial. Systematic errors often point to specific issues in survey design or sampling, while random inconsistencies might indicate fraudulent responses or disengaged participants.
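One way to make that distinction concrete is to ask whether flagged responses cluster around a design variable (which points to a systematic problem) or spread evenly across it (which looks more like noise or scattered fraud). The sketch below is a rough illustration using pandas and SciPy, with column names (failed_consistency_check, sample_source) invented for the example.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def error_pattern_report(df: pd.DataFrame, flag_col: str, by_col: str) -> None:
    """Cross-tabulate flagged responses against a design variable such as
    sample source or question block. A strong association suggests a
    systematic issue (design or sampling); a flat spread looks more like
    random noise or scattered low-quality responses."""
    table = pd.crosstab(df[by_col], df[flag_col])
    chi2, p_value, dof, expected = chi2_contingency(table)
    flag_rate = df.groupby(by_col)[flag_col].mean().sort_values(ascending=False)
    print(f"Chi-square test of independence: p = {p_value:.4f}")
    print("Flag rate by group:")
    print(flag_rate)

# Hypothetical usage, with column names invented for the example:
# error_pattern_report(survey_df,
#                      flag_col="failed_consistency_check",
#                      by_col="sample_source")
```

If the flag rate spikes in one source or one question block, the error is telling a story; if it’s smeared evenly across the sample, you’re more likely looking at random lies.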

The key to finding authentic human stories in your data is ensuring those stories come from real humans in the first place. When survey farms and bots infiltrate your sample, they don’t just add noise to your data – they actively obscure the genuine human insights you’re trying to uncover.

Building trust through verification: a systematic approach

Chris Connolly, VP Research Services at The Logit Group, brings a structured perspective to data validation:

“As the primary advocate for my clients here at Logit, I use the following data integrity checks to ensure that the data collected is both accurate and actionable. This is fundamental in building and maintaining the trust and confidence of my clients. By conducting cross-validation, I make sure the data is logically coherent, avoiding any surprises for my clients during analysis. When I screen for contradictions, I protect the integrity of the dataset by flagging responses that could mislead the insights. Through consistency checks, I ensure that respondents answer in a way that aligns with the survey structure, reinforcing the reliability of the survey. Performing an IR reality check allows me to confirm that the screening logic and survey setup are functioning as expected, minimizing the risk of unanticipated data gaps. Lastly, historical comparisons help me highlight trends or shifts, giving my clients the context they need to interpret the findings within their broader strategic framework.

By taking these proactive steps, I show my dedication to delivering high-quality data and act as a safeguard for my clients’ research investments. This level of diligence strengthens my role as their trusted advocate, ensuring their objectives are met with clarity and precision.”

– Chris Connolly, VP Research Services, The Logit Group

Taken together, cross-validation, contradiction screening, consistency checks, IR reality checks, and historical comparisons form a comprehensive verification process that serves multiple purposes. Beyond simply identifying problematic responses, it builds confidence in the insights being delivered to clients. Each check adds another layer of assurance that the findings accurately represent the target population.
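The specifics of these checks depend on each questionnaire, so the following is only a minimal sketch of what contradiction screening and consistency checks can look like in code; the column names are made up for illustration and are not taken from Connolly’s actual process.

```python
import pandas as pd

def basic_integrity_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative contradiction and consistency screens on a survey
    DataFrame. Column names below are hypothetical placeholders; real
    checks come from the actual questionnaire logic."""
    flags = pd.DataFrame(index=df.index)

    # Contradiction screen: claims of never buying online paired with
    # a nonzero reported monthly online spend.
    flags["contradiction_online"] = (
        (df["buys_online"] == "Never") & (df["monthly_online_spend"] > 0)
    )

    # Consistency check: straight-lining across a grid of rating items.
    rating_cols = [c for c in df.columns if c.startswith("rating_")]
    flags["straight_liner"] = df[rating_cols].nunique(axis=1) == 1

    # Consistency check: reported age outside the screener's quota range.
    flags["age_out_of_range"] = ~df["age"].between(18, 99)

    flags["any_flag"] = flags.any(axis=1)
    return flags

# Hypothetical usage:
# flags = basic_integrity_flags(survey_df)
# print(f"{flags['any_flag'].mean():.1%} of respondents flagged for review")
```

In practice these rules are written against the live survey logic, and flagged cases are reviewed rather than dropped automatically.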

Connolly’s approach underscores the importance of proactive quality measures. While post-collection cleaning is necessary, the most effective strategy is preventing bad data from entering your dataset in the first place. This aligns with what we’ve learned at dtect – the earlier you catch quality issues, the less they cost in terms of time, money, and client trust.

The beauty of systematic verification is that it creates a feedback loop. Each check not only validates the current dataset but also informs future research design. When you know what types of issues commonly arise in your data, you can take steps to prevent them in future studies.

Putting it all together

These three perspectives – challenging assumptions, finding human stories, and systematic verification – form a powerful framework for protecting the integrity of your research. But they all depend on one crucial factor: starting with clean data.

Think of it this way: you wouldn’t trust a witness who’s been caught lying, no matter how compelling their story. Similarly, you can’t trust insights derived from fraudulent or low-quality survey responses, no matter how sophisticated your analysis.

That’s why at dtect, we focus on data quality management by preventing survey fraud before it enters your data stream. Our platform stops bots, location spoofers, and other bad actors at the door, ensuring your analysis starts with reliable data. Because when you’re confident in your data quality, you can focus on what really matters: uncovering meaningful insights that drive business decisions.

Ready to spend less time cleaning data and more time analyzing it? Let’s talk about how dtect can help protect your survey integrity.
