ReDem Quality Days 2025 – Day 2 Recap

Real-Life Fraud Lessons and Fixes

After exposing the scope and mechanics of survey fraud in Day 1, ReDem Quality Day 2 took a bold step forward: from theory to practice. Titled “Real-Life Fraud Lessons and Fixes,” this session offered a deep dive into what happens when fraud meets reality — and how experienced researchers are adapting on the frontlines.

Two sessions. One mission: fight back with smarter tools, layered defenses, and battle-tested insight.

Talk 1: Evolving Survey Fraud: New Challenges & Responses

Speakers: Florian Kögl (CEO, ReDem) & Brian Kirby (Director Sampling, mindline)

This session focused on the practical realities of survey fraud as it exists today: no longer limited to inattentive respondents or isolated cases, but driven by structured, scalable fraud behaviors — often powered by AI. The presentation combined Florian Kögl’s technical perspective on how fraud has evolved with Brian Kirby’s operational view of managing it in real projects.

Key Observations:
  1. Fraud happens in two stages: First, fraudsters bypass verification measures—like email, phone, document, or face/video checks—to gain access to a panel. Then, once inside, they flood the system with fake responses to complete as many surveys as possible and maximize their rewards.
  2. Fraud is increasingly organized: Fraudsters today use tools and workflows that resemble professional operations:
    • Services like TextVerified or Receive SMS help them bypass SMS/email verification.
    • Face reenactment apps are used to trick photo ID verification.
    • AI models like Claude or ChatGPT, and tools like “AI Form Fill” are used to auto-fill survey answers — especially open ends.
    • These methods are becoming more accessible, and their output more difficult to detect with conventional checks.

Picture: AI bots gave similar answers with varied phrasing but recurring keywords. Source: mindline

  3. Three fraudster types dominate:
    1. Overmotivated humans taking dozens of surveys a day
    2. Script-based bots
    3. Sophisticated AI bots

  4. Even moderate levels of bad data severely skew results: In a brand tracker for a major beer brand, ReDem identified and removed 16% bad responses. Without this cleanup, the results would have been significantly distorted:
    • A 26% inflation in TV ad recognition
    • A 10% inflation in brand consideration
    • A 20% drop in brand awareness

Enough to mislead decision-makers and derail strategy.
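The mechanics of that distortion can be illustrated with simple mixture arithmetic. The genuine and fraudster answer rates below are assumed for illustration, not figures from the study; the sketch only shows how a 16% bad-response share shifts an observed metric.

```python
# Illustration (hypothetical rates): how a fraudulent subset inflates a
# survey metric. The observed rate is a mixture of genuine and fraudulent
# answer rates, weighted by their share of the sample.

def observed_rate(genuine_rate: float, fraud_rate: float, fraud_share: float) -> float:
    """Blend genuine and fraudulent answer rates by sample composition."""
    return (1 - fraud_share) * genuine_rate + fraud_share * fraud_rate

# Assume 40% of genuine respondents recognize a TV ad, while fraudsters
# (who tend to answer "yes" to seem engaged) claim recognition 90% of the time.
genuine = 0.40
fraud = 0.90
share = 0.16  # 16% bad responses, as in the brand tracker example

contaminated = observed_rate(genuine, fraud, share)
inflation = (contaminated - genuine) / genuine

print(f"Observed recognition: {contaminated:.1%}")  # 48.0% instead of a true 40.0%
print(f"Relative inflation:   {inflation:.1%}")     # 20.0%
```

Even this modest contamination moves the headline number well outside any normal tracking fluctuation.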

"Fraudsters are already using AI to fill in open-ended responses that are nearly impossible to tell apart from real human answers." — Florian Kögl

“It’s not just about protecting data. It’s about protecting our decisions. Bad data doesn’t only waste money — it sends you in the wrong direction.” — Brian Kirby

How can we catch fraud and use AI to our advantage?

Florian Kögl and Brian Kirby presented a robust framework that moves beyond traditional QC into real-time, behavior-based and AI-assisted checks.

Pillar 1: Open-End Checks

  • AI can be used to read, assess and categorize answers to open ends.
  • Detects nonsense, duplicates, wrong language, excessive length, or AI-generated content.
  • This can be applied during or after fieldwork, and helps filter out low-quality entries at scale.
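A minimal rule-based sketch of such open-end checks (not ReDem's actual implementation, which uses AI) — flagging duplicates, vowel-free keyboard mashing, and suspiciously long answers:

```python
# Hypothetical open-end screening rules: duplicate answers, gibberish,
# and excessive length (a possible sign of pasted AI output).
import re
from collections import Counter

def check_open_ends(answers: list[str], max_words: int = 150) -> dict[int, list[str]]:
    """Return {answer_index: [flags]} for low-quality open-end responses."""
    flags: dict[int, list[str]] = {}
    normalized = [re.sub(r"\s+", " ", a.strip().lower()) for a in answers]
    counts = Counter(normalized)
    for i, norm in enumerate(normalized):
        issues = []
        if counts[norm] > 1:
            issues.append("duplicate")          # identical answer seen elsewhere
        if not re.search(r"[aeiou]", norm):
            issues.append("gibberish")          # no vowels: likely keyboard mashing
        if len(norm.split()) > max_words:
            issues.append("excessive_length")   # unusually long: possible paste
        if issues:
            flags[i] = issues
    return flags

sample = ["Great taste and value", "sdfg qwrt", "Great taste and value"]
print(check_open_ends(sample))
# {0: ['duplicate'], 1: ['gibberish'], 2: ['duplicate']}
```

Rules like these catch the crude cases cheaply; AI-based scoring is what extends detection to fluent, human-looking fabricated text.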

Pillar 2: Typing Behavior Analytics

  • Analyzes the keystrokes and typing behavior of respondents.
  • Helps distinguish between humans, bots, semi-automated setups and copy/paste answers.
  • ReDem has built a Behavioral Analytics Playground to illustrate how these checks work.
  • This layer adds behavioral context that pure answer-based checks cannot capture.
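The intuition behind such checks can be sketched with two simple heuristics (assumed for illustration; this is not ReDem's Behavioral Analytics Playground): pasted text arrives with far fewer keystrokes than characters, and bots often type with unnaturally uniform gaps between keys.

```python
# Hypothetical keystroke-timing heuristics for classifying a response.
from statistics import mean, pstdev

def classify_typing(answer: str, key_gaps_ms: list[float]) -> str:
    """Classify a response from inter-keystroke gaps (milliseconds)."""
    if len(key_gaps_ms) < max(1, len(answer) // 4):
        return "paste_suspected"   # far fewer keystrokes than characters
    if len(key_gaps_ms) >= 5 and pstdev(key_gaps_ms) < 0.1 * mean(key_gaps_ms):
        return "bot_like"          # humans rarely type this evenly
    return "human_like"

# Irregular gaps, roughly one keystroke per character: looks human.
print(classify_typing("I liked the ad",
                      [120, 95, 210, 80, 150, 190, 110, 130, 175, 90, 160, 140, 100]))
# human_like
```

Production systems would weigh many more signals (corrections, pauses, focus changes), but the core idea is the same: timing metadata exposes automation that the answer text alone hides.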

Pillar 3: Interview Coherence Checks

AI excels at identifying whether responses across a survey actually make sense together—a task that’s nearly impossible to scale manually. Unengaged or fraudulent participants often skim questions, resulting in answers that contradict each other or clash with reality.

AI flags these inconsistencies:

    • Internal contradictions: A respondent claims Spotify is their favorite podcast platform—but fails to recognize the brand later in the survey.
    • External implausibilities: A participant reports driving a 2025 car model not yet released, or claims to commute via subway while living in a remote area with no transit access.

Spotting these contradictions is crucial for identifying poor-quality respondents, but doing so manually is time-consuming and costly. That’s where AI gives us a major edge, enabling fast and scalable detection.
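The logic of a coherence check can be mirrored with simple rules, using the Spotify and commute examples above (a hedged sketch; the real systems described use AI rather than hand-written rules, and all field names here are hypothetical):

```python
# Hypothetical rule-based coherence check for a single interview.

def coherence_flags(response: dict) -> list[str]:
    """Flag internal contradictions and external implausibilities."""
    flags = []
    favorite = response.get("favorite_podcast_platform")
    recognized = response.get("recognized_brands", set())
    if favorite and favorite not in recognized:
        flags.append(f"claims '{favorite}' as favorite but did not recognize it")
    if response.get("commute_mode") == "subway" and not response.get("area_has_transit", True):
        flags.append("commutes by subway but lives in an area with no transit")
    return flags

suspect = {
    "favorite_podcast_platform": "Spotify",
    "recognized_brands": {"Apple Podcasts", "YouTube"},
    "commute_mode": "subway",
    "area_has_transit": False,
}
print(coherence_flags(suspect))  # two contradictions flagged
```

Hand-coding every such rule per questionnaire is exactly the scaling problem; an AI model can evaluate cross-question consistency without enumerating the rules in advance.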

"AI is inherently neutral — it can serve as both a weapon and a shield. While fraudsters use it to generate responses that mimic real humans, we must harness its power to protect data quality. Our job is to make AI the guardian, not the threat." — Florian Kögl

Talk 2: Fighting Fraud One Layer at a Time – Lessons from a Large-Scale Study

Speaker: Karine Pepin (The Research Heads / CASE4Quality)

Karine Pepin, co-founder of The Research Heads and key member of CASE4Quality, presented insights from a large-scale international survey where she implemented a layered fraud detection approach. She shared practical learnings on which methods proved effective—and which fell short.

What made this study unique:

  • It spanned multiple countries and languages with varied fraud risks
  • It combined cutting-edge AI with traditional QC checks
  • It transparently shared quantified outcomes of each tool used

The Swiss Cheese Model

Karine Pepin’s fraud prevention strategy mirrors a “Swiss cheese” model — each layer has holes, but stacked together, they form an effective barrier:
  1. Pre-survey filters (Layer 1)

    • Two external tools were used to block suspicious respondents before entry. One tool used digital fingerprinting (managed by the sample provider) and the second tool redirected participants to an external platform for various pre-screener checks and digital fingerprinting. 

    • Tool A blocked 20–40% of traffic. Tool E caught 30–65%.

    • However, panels reduced traffic to the projects when blocked responses weren’t rewarded.
      “If panels can’t make money from your survey, they’ll stop sending people to it.” — Karine Pepin

  2. In-survey fraud detection (Layer 2)

    • Three other tools assessed open ends and worked via API integration. 

    • These tools caught between 3% and 23% of the qualified participants.

  3. Post-survey AI review (Layer 3)

    • Two other tools were used to conduct AI-assisted open-end reviews after data collection.

    • Of the participants reviewed, these tools flagged between 35% and 75%.

    • The downside of AI checks: They can be unforgiving and potentially too strict. 
    • A manual check added nuance but was time-intensive and subjective — highlighting the importance of automating what’s automatable.
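The layered setup above lends itself to a back-of-the-envelope calculation: if the layers acted independently, the fraud that slips through is the product of each layer's miss rate. The catch rates below are illustrative midpoints loosely drawn from the ranges reported, not exact study figures.

```python
# Swiss cheese arithmetic: fraction of fraud surviving every layer,
# assuming (simplistically) that layers catch fraud independently.

def slip_through(catch_rates: list[float]) -> float:
    """Fraction of fraud passing all layers (independence assumed)."""
    result = 1.0
    for rate in catch_rates:
        result *= (1.0 - rate)
    return result

layers = [0.30,  # Layer 1: pre-survey filters (illustrative midpoint)
          0.15,  # Layer 2: in-survey open-end tools
          0.55]  # Layer 3: post-survey AI review
print(f"Fraud surviving all layers: {slip_through(layers):.1%}")  # 26.8%
```

In reality the layers overlap (they often catch the same fraudsters), so the true residual is higher than the independent product — which is precisely why no single layer, however good, is sufficient on its own.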

Sophisticated fraud needs smarter detection

  • Incidence rates: fraudsters matched the targeting criteria in 92% of cases—compared to just 13% for real participants.
  • Fraud tends to hide the extremes – it flattens the tails of your data.
  • Traditional quality checks only detect unengaged and inattentive participants – not fraud!
  • You can’t fix what you can’t see – fraud skews results in unknown ways.
  • When bad data follows a pattern, it doesn’t just add noise, it creates bias.

"Fraudsters easily bypass traditional quality control traps." — Karine Pepin

"Inspect what you expect" — Recommendations by Karine Pepin:
  • Chasing the “perfect” sample source is exhausting and NOT the solution.
  • We can’t rely on sample providers to do what they say and say what they do.
  • Until we start validating people’s ID, the future is curation.
  • Set up your own process, use your own tools, ask for proof.
  • Take ownership of the quality assurance process.

Closing Remarks

Traditional cleaning techniques like speed checks or trap questions still have a place, but are no longer sufficient. Fraudsters have adapted, and in many cases, their responses now look credible on the surface. The real risk lies in the false sense of security that comes from relying solely on these older methods.

What’s needed instead is a layered approach that addresses fraud at different points in the survey lifecycle: before, during, and after the interview. This includes pre-survey identity checks, in-survey monitoring of typing behavior and answer quality, and post-survey coherence checks that evaluate the internal logic of responses. AI can support all of these stages — not as a single solution, but as one component in a broader quality assurance system.

The discussion also highlighted an important shift in mindset. Fraud detection is no longer just about data hygiene — it’s about the integrity of insights. If poor-quality data makes it through to analysis, it can quietly distort outcomes and lead to faulty decisions. The case study on brand tracking clearly demonstrated that even relatively small percentages of low-quality responses can significantly impact key metrics.

In short, survey fraud is no longer a marginal or manageable problem. It is an operational reality that requires structured, ongoing attention. The tools are evolving — both on the side of fraud and defense — and so must the workflows and expectations of those who manage data collection.

Catch up on the full webinar session at your convenience. We will send the recording link to your inbox. 

Julia Mittermayr