Introduction
AI is no longer a side project in research. It shows up in how we design questionnaires and discussion guides, in how we build sample frames, and in the software that helps us turn thousands of comments into something decision-ready. For research buyers, that can make the market feel crowded and confusing, with “AI” being offered at almost every stage of the process. At Impact, we genuinely welcome AI and can see the huge benefits it brings, but we’re equally careful about how and where we use it.
We use AI in many parts of our business: in our marketing, in the way we write proposals and thought pieces, and even in drafting blogs like this one. On projects themselves, it’s especially helpful in shaping research materials – from questionnaires to discussion guides and sample frames – and in qualitative work it helps us code, theme and synthesise large volumes of feedback. Done well, that means clearer designs, faster turnaround and better value.
But value is only half the story. For research buyers, the real questions are: Where does AI fit? Where doesn’t it? And how do you know it’s being used responsibly? This blog sets out what our clients are telling us, and how we use AI across the research journey while keeping quality front and centre.
What our clients are saying
- Curious but cautious
Recently, we asked some of our clients – across consumer, healthcare, energy and water – how they feel about AI in market research. The picture that emerged was very clear: there’s a lot of curiosity, but also a healthy dose of caution.
Across sectors, clients are already experimenting. In one study, everyone we spoke to was trialling AI tools of some kind, and over four in five told us they feel curious about what AI could do for their insight work. But none had fully adopted AI into their research workflows yet. As one client put it, “AI has not proven itself yet, human experience can spot nonsense answers or data.”
The main worries weren’t about science fiction scenarios – they were about the basics of good research:
- the general accuracy of AI-generated information
- whether outputs are repeatable and consistent
- the risk that AI simply bakes in existing bias
As one client said:
“AI uses the data it has and data is inherently biased in many ways, so there is a risk that AI will perpetuate and reinforce bias.”
- Sector differences
Our healthcare clients add an interesting perspective. In that sector, around two-thirds told us they would be comfortable with an AI bot conducting interviews – a strikingly high level of openness. That’s perhaps not surprising. Pharma and healthcare businesses have been early adopters of AI in areas such as drug discovery, clinical development and medical information, where the potential benefits are substantial. As a result, AI is already woven into many of their internal processes, and that confidence is beginning to trickle into how they think about market research too.
When we spoke specifically to water companies, a similar pattern of curiosity and caution emerged, but with a different emphasis. They were positive about using AI to summarise complex datasets and pull out top-line messages quickly – “a useful tool, particularly for drawing out summary level insights from multiple/complex data sets” – but worried about losing depth and nuance: “it still can lose the sentiment and means you don’t take in the true understanding of what’s being said.”
There were also concerns about AI asking clumsy or inappropriate follow-up questions in qual, which could frustrate customers and weaken the overall experience.
- Their dream AI tool
Interestingly, when we asked people to imagine their “dream” AI tool, the wish list was very focused on what good research should already do: “generate clear actionable insight”, “analyse data, connect the customer journey, provide predictive outputs” and “create insight and error check fast.” In other words, clients want AI that helps them join the dots, spots errors quickly and gives them a more complete, predictive picture of their customers – without lowering the bar on quality or context.
That balance of excitement and caution mirrors our own view at Impact: AI should help us generate better, more actionable insight – but it has to be reliable, transparent and firmly anchored in human judgement.
Quality and human oversight: our non-negotiables
If there’s one thread running through everything our clients told us, it’s this: they’re open to AI, but not at the expense of quality. Accuracy, bias, data integrity, depth of understanding – those are the non-negotiables.
For us, that’s exactly where trained researchers come in. AI can draft, summarise and sort at speed. Human experts decide what’s fit for purpose, what needs refinement, and what should be discarded entirely. The real question isn’t “Where can we plug AI in?” but “Where does AI help without weakening the standards we apply to all good research?”
Those standards don’t change just because AI is in the mix. We still hold ourselves to clarity, fairness, robustness, inclusion and transparency – the same principles you’ll find in professional codes and regulatory expectations. AI can help us get there more efficiently, but only if the people using it:
- understand research methods deeply,
- know the sectors they’re working in,
- can recognise when something “looks wrong”, and
- feel confident pushing back on automated outputs.
That’s where trained, experienced researchers earn their keep. They don’t just use AI; they manage it – deciding when it’s helpful, when it needs correcting, and when it should be ignored.
The research journey
We think about that management across the research journey: design, fieldwork and analysis. At each stage, we deliberately set out how AI can help, what could go wrong, and which quality checks our researchers put in place.
| Stage | AI’s role | Benefit | Human oversight | Human’s role |
| --- | --- | --- | --- | --- |
| Design | Drafts objectives, questions and guides from briefs | 💡 High | 🧠🧠 Needed | Align to decisions, remove bias, right-size burden |
| Fieldwork – quant | Flags suspect cases, patterns and routing issues | ⚡ Medium | 🧠🧠🧠 Critical | Judge exclusions, protect sample, brief suppliers |
| Fieldwork – qual | Runs simple chats, suggests probes to moderators | ⚡ Medium | 🧠🧠🧠 Critical | Build rapport, handle sensitivity, lead discussion |
| Analysis – quant | Highlights patterns, suggests interpretations, drafts exec summaries | 💡 High | 🧠🧠 Needed | Check stories against tables, apply stats rigour, shape the narrative |
| Analysis – qual | Transcribes, clusters themes, drafts neutral summaries | 🚀 Very high | 🧠🧠🧠 Critical | Correct errors, surface nuance, sign off insights |
The sections that follow explain what that looks like in practice.
- Design: protecting question quality and fairness
AI is genuinely helpful in the early stages of design. It can turn long stakeholder documents into draft objectives, suggest question structures and offer more conversational wording. That saves time.
But question quality is where research lives or dies. Even the best-looking questionnaire will fail if the questions are biased, unclear or misaligned with the business problem. This is where trained researchers make the difference. An AI assistant can summarise what’s on the page; it can’t sit in a briefing and pick up on the politics, legacy issues or regulatory sensitivities that shape what really needs to be asked.
Our researchers use those conversations to challenge and refine AI-generated drafts so the design reflects real decisions. They look for leading language and unfair assumptions – especially in utilities and water, where fairness and vulnerability are under scrutiny – and balance robustness with respondent burden. Every questionnaire and discussion guide is ultimately owned, edited and signed off by a researcher who understands both good design and the sector.
- Fieldwork – quantitative: keeping data credible
In quant, one of the biggest AI-related risks is poor data: AI-generated or AI-assisted responses that look plausible but mean very little. The same technology that helps us draft can also be used to flood surveys with coherent but meaningless answers.
To protect quality, our fieldwork suppliers use automated and technical checks to flag speeding, straight-lining, odd response patterns and suspicious open-ends, plus device, IP and routing issues. On top of that, Impact’s researchers review samples of data by hand, looking for subtler signs that something isn’t right and making the final call on exclusions.
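To make that concrete, here is a minimal sketch of what those automated flags can look like in code. It is not our production tooling – the column names, thresholds and data structure are purely illustrative – but it shows the kind of checks that run before a researcher reviews anything by hand.

```python
import pandas as pd

def flag_suspect_cases(df: pd.DataFrame, grid_cols: list[str],
                       min_seconds: float = 180.0) -> pd.DataFrame:
    """Add simple quality flags; a researcher still makes the final call."""
    flags = pd.DataFrame(index=df.index)

    # Speeding: completing far faster than a plausible reading/answering pace.
    flags["speeder"] = df["duration_seconds"] < min_seconds

    # Straight-lining: identical answers across every item in a rating grid.
    flags["straight_liner"] = df[grid_cols].nunique(axis=1) == 1

    # Suspicious open-ends: implausibly short or duplicated verbatims.
    open_end = df["open_end"].fillna("").str.strip()
    flags["thin_open_end"] = open_end.str.len() < 10
    flags["duplicate_open_end"] = open_end.duplicated(keep=False) & (open_end != "")

    flags["any_flag"] = flags.any(axis=1)
    return df.join(flags)

# Usage (illustrative column names): review flagged rows by hand rather than
# excluding them automatically.
# flagged = flag_suspect_cases(survey, grid_cols=["q5_1", "q5_2", "q5_3"])
# print(flagged.loc[flagged["any_flag"], ["speeder", "straight_liner", "thin_open_end"]])
```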
In this part of the journey, AI is an early-warning system. People who understand the topic, audience and stakes still decide what stays in the dataset and what doesn’t.
- Fieldwork – qualitative: speed versus depth
In qual, AI chatbots can run simple interviews or tasks more quickly and cheaply than humans. For some low-stakes, exploratory exercises, that can be a sensible trade-off – provided clients know they won’t get the same depth, nuance or flexibility they would from a trained moderator.
The risks are clear: clumsy or inappropriate follow-up questions, tone-deaf responses in sensitive situations, and a tendency to smooth out the more complex or emotional parts of what people say. Our position is straightforward: we may use AI to support text-based interactions or suggest extra probes, but for anything involving emotion, vulnerability or regulatory sensitivity, moderation stays firmly in human hands.
Trained moderators are better at building rapport, reading between the lines and knowing when to push – and when to hold back. AI can assist in the background; it doesn’t take the lead.
- Analysis – quantitative: faster interpretation, same standards
On the quant side, we still rely on established statistical tools and tables for cleaning, weighting and chart creation. Where AI helps most is in making sense of the numbers: suggesting ways to interpret patterns, highlighting possible stories in the data, and drafting first-cut executive summaries.
In short, AI can speed up interpretation and first drafts; people make sure the conclusions are robust, balanced and ready to stand up in front of a board.
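As one small, hypothetical example of that rigour: before an AI-suggested headline such as “satisfaction is higher in segment A” makes it into a debrief, a researcher checks whether the difference in the tables could plausibly be noise. The figures below are made up for illustration, and the test shown (a simple two-proportion z-test) is just one of the checks that might apply.

```python
from math import sqrt
from scipy.stats import norm

def two_prop_z_test(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)               # pooled proportion under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))

# Illustrative figures: 186/300 (62%) satisfied in segment A vs 165/300 (55%) in segment B.
p_value = two_prop_z_test(186, 300, 165, 300)
print(f"p = {p_value:.3f}")  # ~0.08 here, so this "story" needs more support before it leads
```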
- Analysis – qualitative: speed with safeguards
Qualitative analysis is where AI can make the biggest difference to pace – and where strong safeguards matter most.
We use AI to generate transcripts from interviews and groups as soon as sessions are complete, then our interviewers read and correct them, checking against the audio, fixing misheard words and making sure each voice is assigned correctly. Only then do we treat the transcript as data.
From there, AI is very helpful for suggesting themes and clustering similar comments across large volumes of feedback. But turning those patterns into reliable insight remains a skilled job. Our researchers refine code frames, check themes against the raw data, deliberately surface minority and dissenting views, and connect what’s being said back to the original questions and sample design.
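For readers curious what “clustering similar comments” can look like under the hood, here is a minimal sketch using off-the-shelf scikit-learn rather than any particular AI product. The verbatims and cluster count are illustrative; in practice a researcher reviews, merges and renames every candidate theme against the raw data.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative verbatims; a real project would have hundreds or thousands.
comments = [
    "The billing process was confusing and slow",
    "I couldn't understand my bill this month",
    "Customer service was friendly and sorted my issue quickly",
    "The agent on the phone was really helpful",
    "The website kept crashing when I tried to pay",
    "The online payment page wouldn't load",
]

# Turn free text into numbers, then group similar comments together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Candidate themes for a researcher to check, merge, rename or reject.
for cluster in sorted(set(labels)):
    print(f"Candidate theme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)
```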
AI also helps draft neutral summaries – which we treat as a sketch. Analysts test every headline against the evidence, bring back nuance where needed and make sure limitations are clear. If an “insight” can’t be traced back to real data or real voices, it doesn’t make it into the final debrief.
Conclusion
Across all of this, the pattern is the same: AI does the heavy lifting on volume and speed, while our researchers stay responsible for judgement, context and the final story. That’s how we’re choosing to use AI at Impact – not as a replacement for people, but as a set of tools that help us deliver clearer, faster, high-quality insight that you can trust.
Want to talk about AI in your research?
If you’re exploring how to use AI in your insight work, or how to make what you’re already doing safer and more effective, we’d be happy to chat. We can review your current workflow, show where AI can genuinely help and where human oversight is critical, and leave you with a simple action plan.