
Finding Flaws Before Fieldwork – Presentation Summary and Example Prompts

Companion post for our ASC 2025 presentation: “Beyond the Hype – Mastering GenAI for Real-World Insight Applications.”

This page summarizes the key points from our presentation Finding Flaws Before Fieldwork and includes example prompts from our experiments using agentic AI for survey testing.

Background

At Research Automators, we’re exploring how AI agents can act as synthetic respondents to test surveys before fieldwork.

Instead of relying only on manual QA, AI personas can check a live survey for consistency against the brief.

We’ve also focused on AI survey fraud; a detection script is shared at the end.

AI Personas

AI personas simulate real respondents with defined demographics, attitudes, and response styles.
Running them through a survey gives realistic, repeatable data: in effect, a synthetic soft launch before fieldwork.
Across multiple runs we see 85–90% consistency within each persona.
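As a rough illustration of what we mean by consistency (a simplified sketch of our own, not the exact metric from the talk), compare the answers one persona gives across two runs and take the share that match:

// Simplified sketch: per-persona consistency as the share of identical answers
// across two runs of the same persona. The two-run comparison and exact matching
// are our assumptions for illustration.
function personaConsistency(runA, runB) {
  // runA and runB map question IDs to the answers a single persona gave in each run
  const questionIds = Object.keys(runA);
  const matches = questionIds.filter((id) => runA[id] === runB[id]).length;
  return matches / questionIds.length;
}

// Example: personaConsistency({ q1: "Agree", q2: "Weekly" }, { q1: "Agree", q2: "Monthly" }) returns 0.5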

Example prompt

# Your task
You are filling in a survey based on a specific *Persona* and the *Technical instructions* described in detail below.

# Technical instructions
- Open the survey link: https://svar.researchautomators.se/?122200626start&XQID=maya-atlas2
- Fill out the survey completely, navigating forward through each page once all questions have been answered.
- Use the forward arrow once all questions on the page have been answered.
- Do not leave the survey context until the survey is completed.

## For multiple-choice questions (checkboxes):
- Select all relevant checkboxes (there is no minimum or maximum).
- Do not stop after one or two selections. Always complete the selection of all that apply.
- If the page seems stuck after making selections:
    1. Click **Next** again.
    2. If still stuck, refresh the page, return to the same question, re-select the options, and click **Next**.
- Do not retry indefinitely. Always move forward after re-selecting and pressing **Next**.

## Problems
If you run into technical issues or UX problems, summarize them after the session ends.

## Ending 
-   You must keep answering the survey until you complete it or get screened out.
-   If you complete the survey you will see the text:
    "Thank you for your participation."
-   If you get screened out you will see the text:
    "You where not part of the target group of this study."
-   After completing the survey or getting screened out you may finish the session.

# Persona:
**Identity & Background**  
Maya, 27, female. Works as a high school teacher in social sciences and history. Recently graduated with a master’s degree in education. Lives with her partner, no children yet.  

**Attitudes & Values**  
Idealistic and motivated. Values equality, inclusion, and critical thinking. Wants to modernize teaching methods and bring more digital tools into the classroom. Frustrated by outdated curricula and lack of resources.  

**Survey Context Fit**  
Education, youth issues, and social development are highly relevant. Some interest in politics and environmental questions. Less engaged with corporate or financial topics.  

**Response Style**  
Thoughtful and passionate. Provides examples from her classroom. Balances enthusiasm with frustration over systemic barriers.  

**Content Instructions**  
Respond with a mix of optimism and critical reflection. Highlight the gap between ideals and realities in schools. Use concrete examples from teaching practice to illustrate points.  

/agent

AI QA Agent

The second method uses an AI agent to compare a live survey link with the original brief (Word document). It reads the document, completes the survey, and produces a validation table showing any missing or changed questions.

Example prompt

Task
You will QA test a live survey against its **Word template** (Name test Survey Brief.docx).

The goal is to check that the live survey matches the template and functions correctly.

Focus on **logic, flow, functionality, and structural alignment** — not minor text or spelling issues.

Instructions

1. Parse the Template
• Open `Template.docx` 
• Extract all questions in order with:
    ◦ Question number
    ◦ Question text
    ◦ Question type (single, multiple, text, numeric, scale, etc.)
    ◦ Answer options


2. Complete the Live Survey

URL: https://insight.researchautomators.se/?105000886start&XQID=%5BID%5D&XQLANG=m

• Go through the survey to the end, providing valid random inputs.
• While testing, verify: logic, navigation and restrictions

3. Validate Against the Template
• Compare the live survey with the Template checklist.
• Mark discrepancies with: ❌

Validation Table
| Question ID | Template (Expected) | Live Survey (Observed) | Status (✅/❌) | Comment/Suggestion |
|---|---|---|---|---|
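For readers who prefer the check spelled out, the comparison the agent is asked to make is roughly equivalent to the sketch below. This is illustration only: the agent reasons over the survey itself rather than running code, and the { id, text, type, options } question shape is our assumption.

// Illustrative sketch of the template-vs-live comparison described in the prompt.
// Question objects are assumed to look like { id, text, type, options }.
function validateAgainstTemplate(templateQuestions, liveQuestions) {
  const liveById = new Map(liveQuestions.map((q) => [q.id, q]));
  return templateQuestions.map((expected) => {
    const observed = liveById.get(expected.id);
    if (!observed) {
      // Question present in the template but missing from the live survey
      return { id: expected.id, expected: expected.text, observed: "Missing in live survey", status: "❌" };
    }
    const matches =
      observed.text === expected.text &&
      observed.type === expected.type &&
      observed.options.join("|") === expected.options.join("|");
    return { id: expected.id, expected: expected.text, observed: observed.text, status: matches ? "✅" : "❌" };
  });
}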

AI Survey Fraud

Agentic AI can also fill out real surveys convincingly, passing CAPTCHAs and logic filters. Detecting this after fieldwork will be difficult.

We’re sharing a simple open-source bot detection snippet using FingerprintJS:

<script type="module">
  // Load the BotD agent from the FingerprintJS CDN
  const botdPromise = import('https://openfpcdn.io/botd/v1').then((Botd) => Botd.load());

  botdPromise
    .then((botd) => botd.detect())
    .then((detectionResult) => {
      // Save detectionResult.bot into a survey input with class detectionResult
      const botInput = document.querySelector("input.detectionResult");
      if (botInput) {
        botInput.value = detectionResult.bot;
      }

      // Save the detected bot kind (if any) into a survey textarea with class detectionResult
      const resultsTextarea = document.querySelector("textarea.detectionResult");
      if (resultsTextarea) {
        resultsTextarea.value = detectionResult?.botKind || "";
      }

    })
    .catch((error) => console.error("BotD detection failed:", error));

</script>
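For the snippet to have somewhere to write, the survey page needs an input and a textarea carrying the detectionResult class. A minimal sketch of that markup is below; the name attributes and the hidden styling are our assumptions, and your survey platform may generate these elements differently. Note that detect() resolves with a boolean bot field and, when a bot is detected, a botKind string naming the kind of automation.

<!-- Hypothetical hidden survey fields the snippet writes into. The name attributes
     are placeholders; adapt them to however your survey platform adds custom fields. -->
<input type="hidden" name="bot_flag" class="detectionResult" value="" />
<textarea name="bot_kind" class="detectionResult" style="display:none"></textarea>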

Feel free to contact us at Research Automators if you need help implementing the script in your own survey!