In qualitative research, newcomers to generative AI and its critics often assume the technology is here to replace interpretation. Many stop before they start because they think “analysis” means pressing a button. When their first attempts don’t produce finished, nuanced insights, they conclude AI can’t do qualitative work.
The problem isn’t the tool. The problem is the expectation.
AI isn’t a coding engine. It’s a junior collaborator that needs orientation, context, refinement, and check-ins. You’re still the researcher. You make the decisions. You provide the standards. You judge relevance. You guide the sense-making.
If that sounds familiar, it should. Qualitative analysis has always been iterative: memoing, refining sensitizing concepts, comparing segments, asking better questions of the data. Working with AI is no different. You’re still doing the steering.
You now have a small analytic team at your elbow. Not someday. Today.
Think of an AI assistant the way you’d think about a new research assistant: capable, sometimes surprising, and never fully right on the first pass. You don’t dismiss a human assistant over one weak memo. You ask them to revisit it, sharpen the concepts, and bring evidence. An AI assistant deserves the same engagement.
In other words: you’ve become a supervisor. Your job is to guide the analytic conversation.
This is not one-shot prompting. It’s iterative movement, and for qualitative researchers, iteration is second nature.
1. Name the collaborator
Some researchers find themselves talking to AI like they’re typing into a search box. Short. Abrupt. Little context. In a recent workshop, someone admitted she became less polite than she would ever be with a human assistant. It made her realize she wasn’t collaborating at all. She was issuing commands.
A simple shift helps. Give your AI assistant a name. Call it Sandra or Tom. Then watch what changes.
Many researchers notice that once they name their AI partner, they start explaining their study context, clarifying their aims, and stating what matters and why. The tone becomes more conversational. They ask better questions. They probe. They revise. Responses improve because the collaboration is clearer.
Anthropomorphism may feel odd in research, yet treating the system as a co-analyst encourages the habits qualitative work depends on: clarity, context, and curiosity. Naming just nudges you to show up differently. And that difference shapes the quality of the exchange.
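To make this concrete, here is a minimal sketch of what “naming the collaborator” looks like when you talk to a model through an API, assuming the OpenAI Python client. The study details, the file name, and the name Sandra are all illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Orientation for "Sandra": persona, study context, and standards,
# stated once so every later exchange builds on the same footing.
messages = [
    {
        "role": "system",
        "content": (
            "You are Sandra, a junior qualitative research assistant. "
            "The study: semi-structured interviews with twelve nurses "
            "about night-shift handovers. We care about how trust is "
            "negotiated, not just which topics come up. Quote the "
            "transcript whenever you make a claim."
        ),
    },
    {
        "role": "user",
        "content": "Here is interview 3. What stands out about trust?\n\n"
        + open("interview_03.txt").read(),  # illustrative file name
    },
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```

Whether you work through an API or a chat window, the move is the same: orientation before instruction.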
2. Schedule the review
AI can surface patterns in minutes, not weeks. The bottleneck becomes your time to read, critique, and develop follow-up questions. Set time aside. Without review, there’s no analysis — only output.
3. Give feedback, iterate
First responses are drafts. We know this from transcripts, fieldnotes, and memos. Tell the AI what’s missing. Ask it to sharpen a category, return more contrasts, bring in deviant cases, or look across metadata. This mirrors constant comparison; it isn’t new work, just new support.
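Continuing the sketch above, feedback is simply the next turn in the same conversation: keep the history, then say what to revisit. The follow-up wording is illustrative:

```python
# Keep the assistant's draft in the history, then critique it,
# just as you would ask a human assistant to revisit a weak memo.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({
    "role": "user",
    "content": (
        "This stays too close to the surface. Sharpen the category "
        "'guarded trust': bring two contrasting excerpts, and check "
        "whether interview 7 reads as a deviant case."
    ),
})

revision = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revision.choices[0].message.content)
```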
4. Raise expectations
If you approach AI as a summarizer, it will summarize. The work stays shallow.
Ask for more. Request conceptual depth, relationships between themes, supporting excerpts, and contrasting voices. The output improves when expectations increase.
Published critiques of generative AI in qualitative research often report the same issues. The AI did not align with the researchers’ own interpretations. It produced surface summaries. Results felt incomplete. Summaries sounded bland and missed nuance.
These outcomes usually reflect how the system was guided. Low demands produce low-value output. When researchers expect deeper interpretation and steer toward it, the quality shifts.
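The contrast is easy to see side by side. Both of the following are prompts you could send to the same assistant over the same data; only the second raises expectations (the wording is illustrative):

```python
# Low demand: invites a summary, and a summary is what you will get.
shallow = "Summarize the main themes in these interviews."

# High demand: asks for conceptual depth, relationships between themes,
# supporting excerpts, and contrasting voices.
demanding = (
    "Identify the main themes, then go further: how do the themes "
    "relate to one another? Support each claim with two excerpts, "
    "and bring in at least one participant whose account cuts "
    "against the pattern."
)
```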
5. Extend the benefit of the doubt
If a response feels off, don’t stop. Clarify. Add context. Share examples. Redirect. The next iteration is usually closer to what you meant.
This mirrors everyday analytic work: questioning, refining, comparing, interpreting. Nothing about this is new.
When the output improves, it’s not because the system suddenly became more capable. It’s because the steering improved.
With a human collaborator, a misunderstanding doesn’t end the relationship. You clarify, offer guidance, and try again. The same stance works here. Assume capability, provide direction, and keep shaping the work rather than concluding it cannot be done. That's steering in practice.
CA to the power of AI formalizes this dialogic process. Rather than coding the whole dataset up front, we work through deliberate questioning. We begin with descriptive responses, then probe. We write memos synthesizing what we learn. We examine and re-examine patterns. We push ideas upward toward theoretical explanation. This is abductive movement in practice.
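Continuing the earlier sketch, that dialogic loop can be written out in a few lines. The `ask` helper and the stage prompts are illustrative; in real work each probe would respond to what the previous answer actually said:

```python
def ask(question: str) -> str:
    """Send one turn to the assistant, keeping the running history."""
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Descriptive pass -> probe -> memo: each stage builds on the last.
description = ask(
    "Describe how participants talk about handovers. Stay close to the data."
)
probe = ask("Where do those descriptions conflict? Bring the excerpts.")
memo = ask(
    "Draft a short memo: what might explain the conflict, "
    "and what should we examine next?"
)
print(memo)
```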
The AI assistant is not deciding for you. It is accelerating your access to material, surfacing contrasts, and helping you bring order to complexity. You still evaluate quality and decide what matters.
QInsights provides a structured space for exactly this kind of guided analysis: you question the data, probe the responses, and write memos as you go. Everything stays tied to the source material, so you can review what supports each insight.
We are not replacing qualitative reasoning. We are supporting it.
AI is not a threat to qualitative practice. It’s a collaborator that requires the same habits we already value: reflexivity, iteration, curiosity, and evidence.
The moment you expect a final answer from a single prompt is the moment you’ve left qualitative work. The moment you treat AI as a thinking partner — one that needs supervision, context, and feedback — is the moment you begin doing qualitative analysis with AI.
The question isn’t whether AI will replace interpretation. It won’t. The real question is whether we will learn to supervise it well.
This shift is already happening. The researchers who lean into co-analysis will move faster, ask better questions, and develop more robust insight.
The practice remains ours. The tools have multiplied. Now let's guide them.
This blog was inspired by Jeremy Utley's post "You're a Manager Now (But Most Don't Know It)".