
From Vibes to Verification: Why it’s time to stop choosing sample suppliers like it’s 1999

By Cassandra Fawson, Data Quality Co-op


There’s a strange disconnect in our industry right now. On one hand, we are obsessed with rigor: demanding statistically sound methodologies, sophisticated analytics, and airtight recommendations from every research project we run. On the other hand, when it comes to selecting the suppliers who feed into that very work, we’re still flying blind.

Ask someone how they choose a sample provider, and the answer will often fall somewhere between “they took me out to dinner” and “I’ve just always used them.” Sure, they might say “quality,” but dig deeper and that usually means, “We had fewer scrubs last time,” or “That one project went badly with Vendor X.” It’s all anecdotal.

We’ve started calling it “choosing vendors based on vibes,” and unfortunately, that’s not far off. This wouldn’t be such a problem if we weren’t in the business of data. But we are. So it’s more than ironic; it’s a credibility issue. If we expect brands to trust the insights we deliver, we need to be just as rigorous behind the scenes as we are in front of them.

When vibes fill a data void

In the absence of usable, standardized metrics, it’s human nature to default to what feels familiar. You remember who was easy to work with. You remember who ghosted you during a tight deadline. But “ease of partnership” and “data quality” aren’t always correlated, and they certainly aren’t interchangeable.

The reality is that many teams simply don’t have the right tools or benchmarks to evaluate supplier performance objectively. So anecdotal wins and losses become the de facto decision criteria. Is one spike in scrubs actually reflective of systemic issues, or just a one-off with a sub-supplier? Without aggregated trend data, you’re guessing. And guessing isn’t good enough anymore, not when fraud rates are climbing, audiences are harder to reach, and the rise of synthetic data is looming.

What if your suppliers had credit scores?

Imagine applying for a mortgage and the bank says, “We’re not going to check your credit score. We’re just going with our gut.” It would be laughable. But that’s how many researchers are sourcing the very data they stake their reputations on.

What if you had a supplier “credit report” instead? One that combined scrub rates, duplications, attention metrics, and client satisfaction into a single scorecard, tracking performance over time, not just project to project. Suddenly, your sourcing decisions wouldn’t have to be reactive. They’d be proactive, data-informed, and defensible.
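To make the idea concrete, a supplier “credit report” could be sketched as a simple weighted roll-up of quality signals tracked across studies. The metric names, weights, and scale below are illustrative assumptions, not an industry standard or a description of any specific product:

```python
# Hypothetical supplier scorecard: combines a few quality signals into a
# single 0-100 score so performance can be tracked across studies.
# Metric names and weights are illustrative assumptions only.

def supplier_score(scrub_rate, dupe_rate, attention_fail_rate, satisfaction,
                   weights=(0.35, 0.20, 0.25, 0.20)):
    """Return a 0-100 score.

    The three rate arguments are fractions in [0, 1] where lower is better;
    satisfaction is a fraction in [0, 1] where higher is better.
    """
    w_scrub, w_dupe, w_attn, w_sat = weights
    score = (
        w_scrub * (1 - scrub_rate)          # fewer scrubs -> higher score
        + w_dupe * (1 - dupe_rate)          # fewer duplicates -> higher score
        + w_attn * (1 - attention_fail_rate)  # fewer attention failures
        + w_sat * satisfaction              # happier project managers
    )
    return round(100 * score, 1)

# Tracking the score study over study is what separates a trend from a
# one-off: a single bad project stands out against the history.
history = [
    supplier_score(0.12, 0.03, 0.08, 0.85),  # typical study
    supplier_score(0.30, 0.05, 0.15, 0.70),  # the bad study you remember
]
```

The point isn’t these particular weights; it’s that once the signals are defined and aggregated the same way every time, a sourcing decision can cite a trend rather than a memory.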

We’re building toward that future now, creating ways to aggregate in-survey and technical quality signals, track them across studies, and pair them with satisfaction scores from real project managers. That’s how you get beyond vibes without losing the nuance of a real working relationship.

Stop treating data quality like it’s subjective

Part of the challenge here is that we don’t even speak the same language. Everyone says they care about “quality” and “fraud,” but individual definitions vary widely.

To some, a removal is someone screened out at the front door. To others, it’s a manual kickout mid-survey. Even terms like “scrub” or “attention check” can mean wildly different things depending on the platform or person. Without shared definitions, how can you expect to compare vendors or trust the metrics you're looking at?

We’ve had to create glossaries and cheat sheets just to get aligned with clients. And while that helps, it also highlights how immature our frameworks are compared to other industries. Sectors like healthcare, finance, and logistics all have clear standards and shared terminology. Market research is lagging behind on this front.

But that’s slowly changing. As researchers start demanding more from their supply chain, and as the tools become available to support that demand, standardization will follow. And with it, a clearer path toward transparency, consistency, and actual accountability.

It’s time to bring our back-end choices up to the same standards as our front-end deliverables

If you wouldn’t accept “just a feeling” as a valid insight, don’t let it guide how you source your data. We’re at a turning point: we need better infrastructure to support smarter, more reliable decision-making.

We need to support human judgment with signals that go beyond memory and perception. Because when you finally have the tools to see vendor performance clearly, you’ll wonder how you ever operated any other way.

So let’s start holding ourselves to the same standard we expect from our work: data first, decisions second (and good vibes only when they’re backed by facts).

About the Author:
Cassandra Fawson bridges science and market research as head of Client Success at Data Quality Co-op (www.dataqualityco-op.com), where she ensures research and technical teams stay aligned on data quality goals. A former biotech researcher, she now applies her analytical expertise to advancing data quality in market research. When Cassandra isn't crunching numbers, you can find her coaching the local high school mountain bike team or taking her three children skiing.
