ReDem Quality Days 2025 – Day 3 Recap

Can the Industry Keep Up? A Look at Standards, Benchmarks & Global Action

As data fraud becomes more sophisticated and pervasive, the insights industry faces a crucial challenge: Can it keep pace? At ReDem Quality Day #3, our closing session of this powerful three-part series, industry leaders came together to answer that question—not with panic, but with purpose.

This final event spotlighted two core elements: a keynote by Debrah Harding, Managing Director of MRS, on how the industry is tackling the data quality crisis through global collaboration, and a panel discussion on the question “AI in Market Research: Blessing, Curse or Wake-Up Call?”

Talk 1: Meeting the Data Quality Challenge

Speaker: Debrah Harding (Managing Director, MRS)

In a compelling and data-rich presentation, Debrah Harding walked us through the latest findings of the Global Data Quality (GDQ) Initiative, an international cross-association movement uniting major research organizations worldwide—including MRS, Insights Association, ESOMAR, VMÖ, Global Research Business Network, SampleCon, and others.

Debrah Harding made it clear: the current ecosystem is marked by inconsistent language, fragmented tools, and a tendency to shift blame rather than solve problems.

Key Observations:
  1. £209 million per year: the estimated cost of poor data quality to research agencies in the UK alone. This figure includes re-fielding costs, lost staff time, license fees, and compensation paid to clients for unusable results.
  2. Wave 0 Benchmarking Results (USA, 2024): The GDQ benchmarking study was launched by the Insights Association in 2024. Its aim is to define what quality benchmarks look like and to provide industry benchmarks against which companies can assess their own performance and data quality. Over several waves, the GDQ will track these metrics over time and report whether data quality is improving (or not). Conducted in the United States, the Wave 0 benchmark study collected data from 35 companies (14 agencies and 21 sample providers) and covered more than 2.5 million interviews. The initial base wave results reveal:
    • It’s striking that, on average, more than 43% of low-quality data still has to be removed during the survey, even after initial pre-survey checks. This includes responses from both disengaged participants and fraudulent actors.
    • The study also revealed that end-link encryption is not yet standard practice across the industry: only 62% of research agencies currently use it, highlighting clear room for improvement on the agency side. A minimal sketch of how such link protection can work follows this list.

  3. 30% of data removed by buyers in B2B studies: Removing a large share of cases is common practice, particularly for B2B samples, with one-third of buyers on average removing 30% or more of cases.

  4. Overall buyer satisfaction is low in B2B: 38% of B2B sample buyers say their overall satisfaction with the online sample they bought is low or very low. Satisfaction is higher among B2C sample buyers.

  5. Low sample quality drives dissatisfaction: While B2C sample buyers report higher overall satisfaction, mainly due to better ratings on speed and price, both B2C and B2B buyers express below-average satisfaction when it comes to the quality of the sample itself.
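In practice, end-link protection usually means making the survey’s completion redirect tamper-evident, for example by signing it with a keyed hash. The sketch below, in Python, shows one common way to do this with an HMAC; the domain, parameter names, and shared secret are hypothetical illustrations, not any specific vendor’s implementation.

```python
import hashlib
import hmac

# Hypothetical shared secret agreed between the survey platform and the panel.
SECRET_KEY = b"replace-with-a-real-shared-secret"

def sign_end_link(respondent_id: str, status: str) -> str:
    """Build a tamper-evident completion URL by appending an HMAC signature."""
    payload = f"rid={respondent_id}&status={status}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://panel.example.com/end?{payload}&sig={signature}"

def verify_end_link(respondent_id: str, status: str, sig: str) -> bool:
    """Recompute the signature on the receiving side and compare in constant time."""
    payload = f"rid={respondent_id}&status={status}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

# A forged "complete" without a valid signature fails verification.
url = sign_end_link("abc123", "complete")
print(verify_end_link("abc123", "complete", url.split("sig=")[1]))  # True
print(verify_end_link("abc123", "complete", "0" * 64))              # False
```

Because only parties holding the secret can produce a valid signature, respondents cannot fabricate completion redirects, which is the gap the 62% adoption figure points to.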

“This is not a journey anyone can take on their own—we need to move in a single direction on this data quality journey so we continue to improve.” - Debrah Harding

Resources to Support Quality in Practice

The GDQ and the MRS have developed various resources to help companies act:

  • Transparency Checklist: Key questions every buyer should ask suppliers before commissioning a project.

  • Internal Approaches Guide: 30+ quality techniques for addressing data quality challenges, including open-end checks (e.g., copy-and-paste detection), link security, and data analysis checks (e.g., participant behavior in the survey). A small open-end example follows this list.

  • Glossary of Terms: A common vocabulary to align clients, agencies, and platforms.

  • GDQ Pledge: A public list for suppliers who commit to transparency and data quality standards.
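To make the open-end checks above concrete, here is a minimal sketch of a copy-and-paste check that flags near-duplicate open-ended answers across respondents. The 0.9 similarity threshold and the sample answers are illustrative assumptions; the guide’s actual techniques combine many more signals.

```python
from difflib import SequenceMatcher

def flag_duplicate_open_ends(answers: dict, threshold: float = 0.9) -> set:
    """Flag respondents whose open-ended answer nearly duplicates another's.

    `answers` maps respondent IDs to open-end text; pairs whose normalized
    similarity meets `threshold` are treated as suspected copy-and-paste.
    """
    flagged = set()
    items = [(rid, text.strip().lower()) for rid, text in answers.items() if text.strip()]
    for i, (rid_a, text_a) in enumerate(items):
        for rid_b, text_b in items[i + 1:]:
            if SequenceMatcher(None, text_a, text_b).ratio() >= threshold:
                flagged.update({rid_a, rid_b})
    return flagged

# Example: the second and third answers are near-identical and get flagged.
sample = {
    "r1": "I mostly buy this brand because of the price.",
    "r2": "Great product, highly recommend it to everyone!",
    "r3": "Great product, highly recommend it to everyone.",
}
print(flag_duplicate_open_ends(sample))  # {'r2', 'r3'}
```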

Panel Discussion: AI in Market Research: Blessing, Curse or Wake-Up Call?

The closing session of our ReDem Quality Days series brought together five thought leaders to tackle a pressing question:
AI in Market Research: Blessing, Curse or Wake-Up Call?

Moderated by Julia Mittermayr (COO, ReDem), the panel featured:

  • Debrah Harding (Managing Director, MRS) – Champion of industry-wide standards through the Global Data Quality initiative
  • Hartmut Scheffler (ex-Managing Director, Emnid/TNS/Kantar) – Long-time advocate for research rigor and transparency
  • Florian Kögl (CEO, ReDem) – Expert in real-time fraud detection using behavioral AI
  • Shifra Cook (CEO, Ayda) – Specialist in fraud detection through analyzing incentive payment details
  • Oscar Carlsson (CEO, Milo Advisory) – Strategy advisor and bridge between research buyers, platforms, and agencies
1. Opportunities: How AI Can Improve Data Quality
  • Chasing the “perfect” sample source is exhausting and NOT the solution.
  • We can’t rely on sample providers to do what they say and say what they do.
  • Until we start validating people’s ID, the future is curation.
  • Set up your own process, use your own tools, ask for proof.
  • Take ownership of the quality assurance process.
2. Threats: What Keeps Experts Up at Night
  • AI-generated survey responses: Survey participants use AI tools like ChatGPT to generate answers, making inattentive respondents and bots harder to catch.
  • Lack of visibility: Many data buyers are unaware of how much AI-generated or low-quality data is entering their studies.
  • No shared quality definitions: What counts as “poor quality” varies widely across providers, creating inconsistent data quality standards.
  • Synthetic data risk: AI-generated datasets trained on flawed survey input can amplify biases and produce misleading trends. Hartmut Scheffler: “If your foundational data is poor, synthetic data just makes it worse.” Oscar Carlsson: “We need to understand (it) a lot better before it’s released in the wild.” The panel warned that synthetic data might become the industry’s next major crisis.
  • “Computers talking to computers”: The risk of entire studies being filled and analyzed without any real human touch. 
3. Responsibility: What Needs to Change
  • Buyers’ power to request transparency: Clients must demand visible fraud metrics and make data quality a central procurement criterion. Oscar Carlsson: “If buyers don’t ask for quality, no one will invest in delivering it.”
  • Helping buyers ask the right questions: Florian Kögl: “Many suppliers say they’re doing a lot for data quality. But unless buyers know how to ask specific questions, like ‘What exactly are you doing to detect AI-generated responses?’, those claims don’t mean much. We need to help buyers ask the right questions, so suppliers can’t hide behind vague assurances.”
  • Standards and transparency: Panelists called for visible metrics such as cleanout rates, abandon rates, and the fraud detection methods and tools used (a small example of such metrics follows this list).
  • Data cleaning is becoming more time-consuming and complex: Debrah Harding shared an example of a client in the financial sector who now spends 60% of their time on data cleaning, a shift she described as “unrecognizable” compared to just three years ago. This reflects how quality has moved to the center of client-side responsibilities, especially in regulated industries like finance and pharma. In contrast, many other buyers still treat quality as solely a supplier issue.
  • Professional oversight: It’s not enough to have “a human in the loop”—they need to be skilled in research logic, fraud signals, and AI tools. Hartmut Scheffler: “We need the professional in the loop.”
  • Clearer definitions of fraud: Especially in qualitative research, where the line between poor fit and actual fraud is more subjective. Shifra Cook: “We need a shared understanding of what fraud actually means in qual.”
  • Limited accountability: Without audits or disclosure requirements, poor-quality suppliers can operate unchecked. Industry-wide standards should become the default, not optional.
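To show what reporting such metrics could look like in practice, the sketch below computes cleanout and abandon rates from per-interview status labels. The label names and formulas are illustrative assumptions rather than an agreed industry definition, which is precisely what the panel called for.

```python
from collections import Counter

def quality_metrics(statuses: list) -> dict:
    """Compute cleanout and abandon rates from per-interview status labels.

    Each entry in `statuses` describes one started interview: "complete",
    "removed" (failed a quality check), or "abandoned" (dropped out).
    """
    counts = Counter(statuses)
    started = len(statuses)
    finished = counts["complete"] + counts["removed"]
    return {
        "cleanout_rate": counts["removed"] / finished if finished else 0.0,
        "abandon_rate": counts["abandoned"] / started if started else 0.0,
    }

# Example: 100 started interviews, 60 kept, 25 removed, 15 abandoned.
statuses = ["complete"] * 60 + ["removed"] * 25 + ["abandoned"] * 15
print(quality_metrics(statuses))  # {'cleanout_rate': 0.294..., 'abandon_rate': 0.15}
```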
4. The Future: Skills and System Changes Ahead
  • New skills are required (a small example of the first follows this list):
    • Anomaly detection
    • Prompt engineering for survey logic
    • Understanding algorithmic bias
    • Data governance and compliance with evolving AI rules
  • Shared transparency culture:
    • Report fraud metrics openly
    • Use shared glossaries and frameworks (e.g. GDQ)
    • Stop treating QA methods as proprietary black boxes
  • Stronger cross-industry alliances: Buyers, platforms, and suppliers must align on definitions, thresholds, and minimum quality standards.
  • AI won’t replace researchers—but it will change the rules of the game.
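As a small example of the anomaly-detection skill listed above, the sketch below flags “speeders” whose completion time falls far below the sample average. The z-score cutoff of 2 is an illustrative assumption; real quality systems combine many behavioral signals.

```python
from statistics import mean, stdev

def flag_speeders(durations_sec: list, z_cutoff: float = 2.0) -> list:
    """Return indices of interviews with anomalously short completion times.

    An interview is flagged when its duration sits more than `z_cutoff`
    standard deviations below the sample mean.
    """
    mu, sigma = mean(durations_sec), stdev(durations_sec)
    return [
        i for i, d in enumerate(durations_sec)
        if sigma > 0 and (d - mu) / sigma < -z_cutoff
    ]

# Example: one 90-second interview among roughly 10-minute ones gets flagged.
durations = [610, 580, 640, 90, 600, 625, 590]
print(flag_speeders(durations))  # [3]
```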

Key Takeaways

  • AI is a double-edged sword: It introduces new risks but also offers the strongest tools yet for detecting fraud.
  • The industry must act fast—especially on the buyer side—to demand transparency and specific, advanced quality controls.
  • Education is part of the solution: Empowering teams to ask the right questions and use the right tools is just as important as the tools themselves.
  • The future belongs to those who invest in both technology and training. We can keep up—but only if we make data quality a shared priority.

Closing Remarks

If there’s one message to take away from this third ReDem Quality Day, it’s this: the industry can keep up—but only if we stop treating data quality as someone else’s job. Whether you’re a buyer, a researcher, or a platform, the responsibility to safeguard insights is now shared. AI gives us the tools to do it better than ever before. And we must use them because data quality is no longer a background concern—it’s become a strategic issue for the entire insights industry. AI is forcing us to rethink old processes, take fraud more seriously, and raise the bar on transparency. While some buyers and providers have already started adapting, the panel discussion highlighted that collective progress will depend on stronger alignment, shared standards, and buyer-side pressure. What happens next will define the credibility of research for years to come.

Catch up on the full webinar session at your convenience. We will send the recording link to your inbox. 

Julia Mittermayr