Susanne Friese, October 26, 2025

Response to the Open Letter that Opposes the Use of Generative AI for Reflexive Qualitative Research

Last week, an open letter against the use of generative AI for reflexive qualitative research made the rounds. At the time of writing, about 400 qualitative researchers had signed it.

Open Letter

Here is their position:

“We hold the position that analytic approaches such as reflexive thematic analysis are human research practices requiring a subjective, positioned and reflexive researcher and therefore the use of AI in such approaches is not methodologically congruent. We additionally reject AI for reflexive qualitative approaches on the grounds of social and environmental justice.”

In their letter, they explain their position in a bit more detail and give three reasons that are meant to support it. I will discuss each reason in turn:

Reason No. 1: GenAI as simulated intelligence is incapable of meaning making

This statement reflects a misunderstanding of how GenAI is used in qualitative work when applied by researchers with AI literacy and methodological awareness. It assumes that proponents of GenAI expect the technology to perform meaning-making on behalf of the researcher or to transfer interpretive authority to the machine. While such concerns may apply to naive automation claims found in some domains, they do not justify rejecting methodologically grounded, researcher-led applications.

Thus, the critique is built on a false premise: that GenAI is being proposed as a replacement for the researcher’s reflexive sense-making role. When someone uses this premise to reject GenAI entirely, they reveal more about their misunderstanding of how GenAI can be properly integrated than about any inherent limitation of the technology.

Rejecting GenAI because “it cannot do meaning-making” is like rejecting highlighters because “they cannot interpret a text.” Of course they can’t. 

Instead of arguing that GenAI should be banned because it lacks consciousness, we should argue that qualitative researchers must remain in control, drawing on GenAI as a dialogic catalyst, not an epistemic authority.

The problem is not GenAI.

The problem is naive expectations and use.

The open letter is fighting the wrong enemy.

Simulated Intelligence

Before I continue to reason no. 2, let’s take a closer look at the term “simulated intelligence”. It implies that the system does not think or understand.

It gives the appearance of intelligence, but it is not “intelligent” in the human or cognitive sense.

The word “intelligence,” even when prefaced with “simulated,” evokes associations with agency, reasoning, interpretation, understanding, and meaning-making. This can subtly reinforce the idea that GenAI has some kind of internal cognitive process, only slightly diminished by the qualifier “simulated.”

So even though the term tries to downgrade AI’s capability, it still frames the technology in the terminology of intelligence, which is philosophically and scientifically loaded. This is problematic for four reasons:

1. It presupposes that we all agree on what counts as “intelligence”—and we don’t.

2. It conflates performance with cognition.

3. It frames AI as “like intelligence but fake,” which still centres the intelligence metaphor rather than focusing on what it actually is: pattern-based language modelling.

4. It risks reinforcing both inflated expectations (in laypeople) and reactionary fear (in opponents).

Thus, if “simulated” already implies no thinking, no understanding, and only the appearance of intelligence, then to follow that with “therefore it cannot create meaning” is simply restating the premise.

In other words: The authors of the open letter define GenAI as something that cannot understand or create meaning. They then argue it should not be used because it cannot understand or create meaning.

This style of reasoning suggests the authors are not engaging with actual use cases where GenAI plays a supportive or catalytic role in human meaning-making. Instead, they build a scenario where researchers are supposedly expecting GenAI to perform deep interpretive work autonomously—a scenario that reflects some automation-oriented approaches in engineering or commercial contexts, but not how methodologically informed qualitative researchers with AI literacy engage with the technology.


Further discussion in the open letter regarding Reason No. 1:

“First, GenAI remains simulated intelligence only, based on statistical predictive algorithms without any understanding of the world, or the meaning of the language that constitutes the data being analysed, or indeed the meaning of the resulting themes produced when simulating qualitative analysis.”

This statement appears to respond to a subset of publications—largely from computational and engineering domains—that either attempt to operationalize the six steps of thematic analysis within AI- or NLP-driven workflows or apply models like ChatGPT to code data through prompting (e.g., Anakok et al., 2025; de Paoli, 2024; Deiner et al., 2024; Flanders et al., 2024; Goyanes et al., 2024; Nyaaba et al., 2025; Turobov et al., 2024; Wen et al., 2025; Zhang et al., 2024).

If this is what they are reacting to, then a legitimate and more productive critique would be to analyze those attempts and demonstrate why such mechanistic implementations fail to align with reflexive thematic analysis. Instead, the authors generalize from poor use cases to a total rejection of GenAI in any role within meaning-based inquiry.

The references they cite under Reason No. 2 show a notable absence of contributions from qualitative researchers working within the epistemological traditions they are defending, overlooking emerging work that explores responsible, researcher-led human–AI collaboration (e.g., Chubb, 2023; Friese, 2025; Hoffmann et al., 2025; Hayes, 2025; Izani & Voyer, 2024; Krähnke et al., 2025; MacGeorge, 2025; Morgan, 2025; Nguyen-Trung & Nguyen, 2025; Perkins & Roe, 2024; Schäffer & Lieder, 2023; Thominet et al., 2024; Walsh & Pallas-Brink, 2023). The absence of engagement with such work raises the question of whether their position is being shaped in dialogue with qualitative innovation or whether they are simply rejecting GenAI on principle, based on limited or inappropriate implementations.

“While genAI with human involvement might be able to produce something that superficially resembles reflexive qualitative analysis (through a simulation of the methodological process), it cannot be reflexive, because, by definition, reflexive qualitative analysis is an inherently meaning-based technique.”

This argument continues the above line of reasoning. Ideas about how generative AI can be used appropriately and in a methodologically congruent way are ignored; instead, the authors base their argument on the weaknesses of attempts to simulate current methodological processes. By doing so, they also fail to question why these processes have been necessary up until now and overlook opportunities to rethink methodology in light of the new technology now available to us.

Looking ahead to their third reason, it appears they may not even want to entertain this possibility. Their rejection of AI seems driven more by a broader resistance to the technology than by a careful, methodologically grounded critique of how it might be used appropriately.

Let’s look at their further argumentation for reason no. 1:

“Just as the meaning-based requirement of reflexive thematic analysis, for example, distinguishes it methodologically from word-counting techniques such as content analysis (which can be automated), so too it must also exclude genAI on the basis that genAI is fundamentally incapable of genuinely making meaning from language (Webster, 2025). Failure to recognise these limitations of genAI risks analyses that reinforce dominant paradigms and biases.”

This paragraph is loaded with assumptions.

1. It assumes that GenAI is equivalent to automated, surface-level methods like word counting.

2. It assumes that using GenAI necessarily leads to passive acceptance of AI-generated outcomes.

3. It assumes that the existence of bias in AI models inevitably leads to reinforcing dominant paradigms, with no room for human critical intervention or bias interrogation.

4. It does not consider GenAI as a dialogic or interrogative tool that can be used critically within meaning-based inquiry.

Wouldn’t most experienced qualitative researchers agree that there is considerable space between content analysis and reflexive thematic analysis? And content analysis itself is not simply about counting words. In fact, basic word counts aren’t even something Large Language Models are particularly suited for—you would use conventional text-processing tools designed for that task. I was under the impression this open letter was directed against the use of generative AI. Here they seem to be equating GenAI with crude frequency-based text analysis.

The open letter continues:

“That is, the algorithmic patterns upon which genAI operates predisposes genAI to identify, replicate and reinforce dominant language and patterns; risking the further quieting of marginal voices and practices, including those of critical scholars. The voices and practices of people who live/breathe/feel/imagine/construct knowledge in the maroons of life - along with their stunning/quirky/complex/unpredictable ways - may be lost or worse; sacrificed.”

Concerns about LLMs reinforcing dominant Western discourses are legitimate when models generate interpretations primarily from their pretraining data. In those cases, marginalized perspectives can be overshadowed by statistically dominant voices embedded in global corpora.

However, in a dialogic, data-grounded setup—such as those discussed in the emerging qualitative scholarship largely overlooked in the open letter—the LLM is not asked to replace the researcher’s interpretive work or produce culturally neutral judgments. Its role is restricted to retrieving and reorganizing meaning within the interview material supplied by the researcher, for example via Retrieval-Augmented Generation (RAG). In this configuration, the interpretive space remains anchored in the actual voices of participants, including marginalized voices present in the dataset, rather than in generalized patterns from the model’s pretraining.
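To make this more concrete, here is a minimal sketch of what the retrieval step of such a data-grounded setup might look like. It is illustrative only: the embed function below is a toy bag-of-words stand-in for a real embedding model, and in an actual RAG pipeline the retrieved segments would be passed, together with the researcher’s question, to a language model for rephrasing, contrasting, or summarizing, while the interpretive work stays with the researcher.

```python
# Illustrative sketch only: a toy retrieval step for a data-grounded (RAG-style) setup.
# Assumptions: embed() is a bag-of-words stand-in for a real embedding model;
# the interview segments are the researcher's own data.

from collections import Counter
import math


def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, segments: list[str], k: int = 3) -> list[str]:
    """Return the k interview segments most similar to the researcher's question."""
    q = embed(query)
    return sorted(segments, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]


# The researcher's question is answered only from their own interview material:
# whatever a language model later does with these segments, the source text
# remains the participants' own words.
interview_segments = [
    "I never felt my concerns were taken seriously at the clinic.",
    "The support group gave me language for what I was going through.",
    "Travel costs made it impossible for me to attend every session.",
]
print(retrieve("barriers to attending support sessions", interview_segments, k=2))
```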

RAG is not a perfect safeguard. No technical measure can fully eliminate bias or prevent the model from subtly drawing on broader linguistic priors. But grounding the system in participant language greatly narrows its interpretive range and ensures that the analysis remains primarily tied to what respondents actually said. If marginal voices are in the data, they are accessible; the model does not “refuse” to retrieve them. The key determinant is how the researcher engages with those outputs—what is asked, what is challenged, what is taken forward.

In this mode, the LLM operates as a pattern amplifier within the researcher’s own data rather than as an external cultural authority. The researcher, who holds contextual, relational, and embodied understanding of participants—especially when working with vulnerable or marginalized communities—remains responsible for assessing resonance, relevance, and ethical sufficiency. Used critically, an LLM does not overwrite marginalized perspectives with homogenizing generalities; it supports the researcher in noticing connections, tensions, and contrasts that are already present in the material.

Rejecting naïve uses of general-purpose chatbots is sensible. But conflating such practices with all uses of GenAI obscures more careful, methodologically congruent approaches that preserve human interpretive agency and remain grounded in participant voices.

Reason No. 2: Qualitative research should remain a distinctly human practice

Open letter statement:

“Second, reflexive qualitative research is a distinctly human practice, undertaken by humans, with or about humans (e.g., through interviews, focus groups or textual data), and for the benefit of humans. The central tenet of social science research is to more deeply understand people and social processes, and to explore and interrogate meaning-making. Researchers often do this through connecting with and observing social others. While some researchers suggest that GenAI-supported qualitative analyses are helpful, so long as a human is included in the analytical ‘loop’, they also warn that our desire for GenAI to be reliable reduces our capacity to critically appraise GenAI outputs (Gamieldien, Case, & Katz, 2023; Lixandru, 2024; Tornberg, 2024; Xiao, Yuan, Liao et al., 2023). Others emphasize that uncritical use of GenAI introduces epistemic risks to the interpretive meaning-making core of qualitative research (Nguyen & Welch, 2025). We hold the position that only a human can undertake reflexive qualitative analytical work, and therefore use of GenAI is inappropriate in all phases of reflexive qualitative analysis, including initial coding. Researchers must anchor the process of making strong psychodynamic interpretations in their own humanity.”

To break it down and create a simple overview, this is how the argument unfolds:

1. Reflexive qualitative research is fundamentally and exclusively a human practice.

2. It involves humans, studies humans, and is done for human understanding.

3. The core of qualitative research is meaning-making rooted in human relationality and reflexivity.

4. Therefore, only humans can carry out reflexive qualitative analysis.

5. GenAI is inappropriate in all phases, including early stages such as coding.

6. Researchers must anchor interpretation “in their own humanity.”

Sure, one can take this position. However, it is presented as a self-evident truth rather than as one defensible stance within a broader epistemological debate.

Their citations are selective and methodologically weak. Most of the referenced studies (e.g., Gamieldien et al., Lixandru, Törnberg, Xiao et al.) are technical explorations by computer scientists rather than contributions grounded in reflexive or interpretivist qualitative traditions. Even the Nguyen & Welch paper, while correctly warning against shallow chatbot-style interactions, evaluates generic prompting rather than methodologically guided, abductive dialogue. As a result, their conclusions critique casual “chatting” rather than serious, structured AI-supported analysis as practiced by qualitative researchers with sufficient AI literacy and experience in using such tools.

Furthermore, their argument presupposes that the only valid model of qualitative interpretation is one where meaning is generated exclusively within an isolated human consciousness.

This excludes established concepts such as distributed cognition (Hutchins, 1995), sociomaterial and practice-based perspectives (Orlikowski, 2007; Fenwick, Edwards, & Sawchuk, 2011), posthumanist and new-materialist thinking (Barad, 2007; Braidotti, 2013), and assemblage or rhizomatic approaches (Deleuze & Guattari, 1987; Masny, 2016).

These traditions position meaning-making as relational and co-constructed across human and non-human agents, tools, discourses, and environments, and they pre-date generative AI. They do not deny the human researcher’s ethical responsibility but allow for cognitive scaffolding, external triggers, and mediated thinking. An AI assistant in this context remains a non-agentic cognitive artifact that supports human reasoning, not an interpretive subject.

The authors of the open letter assume, or so it appears, that “anchoring analysis in one’s humanity” means total methodological isolation from non-human partners. But human reflexivity does not require solitude. It can involve external stimuli, conceptual provocations, dialogic prompts, or “thinking with” artifacts, texts, theories—and potentially AI outputs—while maintaining human interpretive agency.

My assumption is that even reflexive thematic researchers are influenced by their environment throughout the meaning-making process. Reflexivity is not performed in cognitive isolation. Interpretation is shaped by encounters with artifacts (e.g., images, texts, objects), embodied experiences in physical and sensory spaces, prior engagements with theory, emotional resonance triggered by conversations with peers or participants, and even unexpected associations that emerge through memory or intuition. In that sense, meaning-making is already distributed across human experience, tools, spaces, and symbols.

Given this, it is difficult to see why a technological artifact such as an AI tool should be categorically excluded from this wider ecology of thinking. If a researcher can think with a photograph, a memo, a whiteboard brainstorm, or a theoretical concept, why is it inherently illegitimate to think with an AI-generated rephrasing, contrast, or suggestion—provided the researcher remains interpretively responsible and critically reflexive? The issue is not whether AI participates in meaning-making as a conscious subject (it does not), but whether it can act as a cognitive stimulus within a researcher-led reflexive process, much like any other non-human resource that researchers routinely engage with.

Reason No. 3: The established manifold harms of genAI, especially to the environment and workers in the Global South

The open letter:

“Third, we draw your attention to the concerning exploitative, colonialist and extractivist practices in which big AI corporations engage, which have harmful impacts on humans and the planet due to exposure to electronic waste and the increased use of water and energy, land clearing, devastation of habits and greenhouse gas emissions, by the data centres being built to service genAI expansion. We are concerned about these serious ethical and health issues. As qualitative researchers concerned with social justice, and bound by ethical obligations to minimise harm, we note that several prominent researchers have raised concerns about the negative impact of increased use of GenAI both on our environment and on fellow humans. Critics have pointed to the extractivist, racist, imperialist and exploitative ethos motivating Big AI Tech in their quest for profit (Hanna and Bender, 2024; Brennan et al., 2025; Tacheva and Ramasubramanian, 2023; Mejias and Couldry, 2024), and which is transforming epistemic agency in higher education (Lindebaum et al., 2025). For example, Galaz and colleagues show that AI has rapid and extensive uptake in multiple industries including farming, forestry, aquaculture, and – ironically – climate change. Yet this uptake poses significant harms, such as AI-bias-driven increased inequity and food insecurity, cascading failures, and AI-driven irreversible changes in ecosystems (Galaz et al., 2021). The genAI boom is accompanied by the expansion of massive infrastructural components to support it, including data centres and under-sea cabling (Wang et al., 2024; Hogan, 2024). These infrastructures expose humans and other elements of ecosystems to significant habitat disruption and environmental hazards from land-clearing, deep sea tunnelling, greenhouse gas emissions and impacts caused by its water and energy use (Hosseini et al., 2025; Lupton, 2025; Osmanlliu et al., 2025). Another way in which genAI poses considerable harms to human health is through exploitation of workers working on training or moderating digital data content. Researchers have identified the psychological effects on AI data workers in the Majority World who are tasked with helping train large language models to detect and filter toxic content (Mejias and Couldry, 2024; Tacheva and Ramasubramanian, 2023). While this third point is more concerned with ethical objections rather than a methodological concern, we see them as interconnected and warn against ignoring these complex negative impacts of our choices on others, especially in light of points 1 and 2. For these reasons, we oppose GenAI for reflexive thematic analysis and other reflexive qualitative approaches.”

The environmental and labour issues raised in the open letter are real and deserve serious attention. However, the letter moves to an absolute position: even if AI worked well, we should reject it because it harms the environment and exploits vulnerable workers. This implies a total moral prohibition rather than a call for responsible, regulated use. This stance has two main issues:

The Open Letter lacks proportional comparison

The open letter treats AI’s environmental footprint as uniquely unacceptable, but it does not contextualize those harms within broader debates on proportionality, mitigation, and responsible innovation. Yes, data centres consume water and energy, and these impacts should be taken seriously. However, environmental policy typically evaluates technologies not through absolute rejection but in terms of harm reduction, regulation, and benefit-to-impact balance.

Existing research shows that AI’s water and energy consumption, while non-trivial, is significantly lower than sectors such as agriculture, transportation, animal-based food production, or residential heating (see video link below). This does not excuse AI’s footprint but places it in a realistic policy landscape where the appropriate response is not categorical abstinence, but targeted intervention: prioritizing models hosted in renewable-powered data centres, supporting efficiency-focused architectures, and advocating for transparency and regulation of resource-intensive deployments.


The Open Letter directs responsibility to the wrong actors

Refusing to use GenAI in qualitative research will not change the behaviour of large AI corporations, just as one small academic group going vegan would not meaningfully affect global meat production. The problem is systemic — tied to how AI is built, governed, and incentivized under current capitalist conditions. Take a look at this video for a more balanced perspective.

Video: “Should I feel guilt about using AI?”



The creator of the video, Simon Clark, acknowledges the environmental and ethical problems of GenAI without downplaying them, but also points out that whether we as individuals use generative AI or not has only a minor impact on our overall footprint. In fact, making different lifestyle choices—such as adopting a vegan diet or taking one flight less per year—would have a far greater effect.

What can an individual researcher do instead?

We cannot solve systemic extractivism through individual abstinence from research tools. But individuals can act meaningfully: by prioritizing models hosted in renewable-powered data centres, supporting efficiency-focused architectures, advocating for transparency and regulation of resource-intensive deployments, and helping to develop methodologically rigorous frameworks for responsible use.

The broad ethical objection in reason No. 3 effectively shuts down any serious engagement with dialogic, reflexive, or cognitively distributed approaches. Because the authors conclude that GenAI should be rejected outright on moral grounds, they do not meaningfully consider whether responsibly used, researcher-led, dialogic AI could operate within more sustainable and ethically governed frameworks. As a result, they approach conversational AI methods with minimal AI literacy, prompting superficially, observing poor outcomes, and then using those outcomes as proof that AI cannot support qualitative reasoning. In other words, their ethical rejection reinforces a methodological refusal to explore how qualitative researchers with appropriate epistemic training and responsible usage practices might develop iterative, reflexive, human-led analysis in ways that do not depend on uncritical automation or extractive scaling.

Concluding Remarks

The open letter raises important ethical questions, but its categorical rejection of generative AI rests on questionable assumptions, selective engagement with the literature, and a refusal to distinguish methodologically flawed use from responsible, researcher-led, reflexive use. Treating AI as inherently incompatible with meaning-making ignores established traditions of distributed cognition and sociomaterial practice, where thinking does not occur in isolation from tools, environments, or conceptual provocations. It is not the existence of AI that threatens reflexive inquiry, but its uncritical use.

If we accept that reflexive qualitative research is grounded in human subjectivity, then the real task is not to ban external cognitive stimuli, but to ensure that humans remain epistemically and ethically responsible for the interpretive process. Properly used, GenAI does not replace reflexivity; it can act as a catalyst that provokes abductive insight, stimulates critical questioning, and helps researchers surface tensions, contradictions, and silences within their own dataset—always under human evaluation.

Environmental and labour harms deserve serious attention, but the appropriate response in research practice is not abstinence by prohibition, but engagement through harm reduction, regulatory advocacy, and the development of methodologically rigorous frameworks for responsible use. Qualitative researchers are well positioned to interrogate sociotechnical systems reflexively rather than retreat from them.

The choice is not between human-centered reflexivity and AI, but between rejecting new analytical scaffolds on principle and shaping them critically from within our epistemological traditions. A blanket ban forecloses methodological evolution and silences nuanced, practice-based innovation already emerging within the field. Instead of closing down inquiry through fear-based absolutism, we should be asking: under what conditions, within what limits, and guided by which reflexive commitments can AI be used to deepen—not dilute—qualitative understanding?

References

Anakok, I., Katz, A., Chew, K. J., & Matusovich, H. M. (2025). Leveraging generative text models and natural language processing to perform traditional thematic data analysis. International Journal of Qualitative Methods, 24. https://doi.org/10.1177/16094069251338898

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Braidotti, R. (2013). The posthuman. Polity Press.

Brennan, K., Kak, A., & Myers West, S. (2025). Artificial Power: AI Now 2025. Landscape Report. https://ainowinstitute.org/publications/research/ai-now-2025-landscape-report

Chubb, L. A. (2023). Me and the Machines: Possibilities and Pitfalls of Using Artificial Intelligence for Qualitative Data Analysis. International Journal of Qualitative Methods, 22. https://doi.org/10.1177/16094069231193593

Deleuze, G., & Guattari, F. (1987). A thousand plateaus: Capitalism and schizophrenia (B. Massumi, Trans.). University of Minnesota Press. (Original work published 1980)

De Paoli, S. (2024). Further Explorations on the Use of Large Language Models for Thematic Analysis. Open-Ended Prompts, Better Terminologies and Thematic Maps. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 25(3). https://doi.org/10.17169/fqs-25.3.4196

Fenwick, T., Edwards, R., & Sawchuk, P. (2011). Emerging approaches to educational research: Tracing the sociomaterial. Routledge. 

Flanders, S., Nungsari, M., & Loong, M.C. (2025). AI Coding with Few-Shot Prompting for Thematic Analysis. ArXiv, abs/2504.07408.

Friese, S. (2025, April 27). Conversational Analysis with AI - CA to the Power of AI: Rethinking Coding in Qualitative Analysis. SSRN. https://ssrn.com/abstract=5232579 or http://dx.doi.org/10.2139/ssrn.5232579

Galaz, V., Centeno, M. A., Callahan, P. W., Causevic, A., Patterson, T., Brass, I., ... & Levy, K. (2021). Artificial intelligence, systemic risks, and sustainability. Technology in Society, 67, 101741.

Gamieldien, Y., Case, J., & Katz, A. (2023). Advancing qualitative analysis: An exploration of the potential of Generative AI and NLP in thematic coding. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4487768

Goyanes, M., Lopezosa, C., & Jordá, B. (2024). Thematic Analysis of Interview Data with ChatGPT: Designing and Testing a Reliable Research Protocol for Qualitative Research. SocArXiv. https://doi.org/10.31235/osf.io/8mr2f_v1

Hanna, A., & Bender, E. M. (2024). Theoretical AI harms are a distraction: Fearmongering about artificial intelligence's potential to end humanity shrouds the real harm it already causes. Scientific American, 330(2). https://doi.org/10.1038/scientificamerican0224-69

Hayes, A. S. (2025). “Conversing” With Qualitative Data: Enhancing Qualitative Research Through Large Language Models (LLMs). International Journal of Qualitative Methods, 24. https://doi.org/10.1177/16094069251322346

Hoffmann, S., Lieder, F. R., & Rundel, S. (2025). Hybride Forschungswerkstätten in der Erziehungswissenschaft. Soziotechnische Interpretation mit generativen Sprachmodellen [Hybrid research workshops in educational science: Sociotechnical interpretation with generative language models]. Erziehungswissenschaft, 36(70), 47–54. https://doi.org/10.25656/01:33609

Hosseini, M., Gao, P., & Vivas-Valencia, C. (2025). A social-environmental impact perspective of generative artificial intelligence. Environmental Science and Ecotechnology, 23. https://www.sciencedirect.com/science/article/pii/S2666498424001340

Hogan, M. (2024). The fumes of AI. Critical AI, 2(1). https://doi.org/10.1215/2834703X-11205231

Hutchins, E. (1995). Cognition in the wild. MIT Press.

Izani, E., & Voyer, A. (2024). The augmented qualitative researcher: Using generative AI in qualitative text analysis [Preprint]. SocArXiv. https://osf.io/preprints/socarxiv/gkc8w_v1

Krähnke, U., Pehl, T., & Dresing, T. (2025). Hybride Interpretation textbasierter Daten mit dialogisch integrierten LLMs: Zur Nutzung generativer KI in der qualitativen Forschung [Hybrid interpretation of text-based data with dialogically integrated LLMs: On the use of generative AI in qualitative research]. SSOAR. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-99389-7

Lindebaum, D., Nolan, E., Ashraff, M., Islam, G., & Ramirez, M. F. (2025). The transformation of epistemic agency and governance in higher education through Large Language Models – toward a future of organized immaturity. Organization Studies, 0(ja). https://doi.org/10.1177/01708406251392002

Lixandru, D. (2024). The use of artificial intelligence for qualitative data analysis: ChatGPT. Informatica Economica, 28(1), 57–67. https://doi.org/10.24818/issn14531305/28.1.2024.05

Lupton, D. (2025). Towards a digital planetary health perspective: generative AI and the digital determinants of health. Health Promotion International, 40(5). https://doi.org/10.1093/heapro/daaf153

MacGeorge, R. B. (2025). Conversations With my Data: Exploring the Potential of Large Language Models in Qualitative Futures Research. World Futures Review, 0(0). https://doi.org/10.1177/19467567251330210

Mejias, U. A., & Couldry, N. (2024). Data Grab: The New Colonialism of Big Tech and How to Fight Back. University of Chicago Press.

Masny, D. (2016). Problematizing qualitative research: Reading a data assemblage with rhizoanalysis. Qualitative Inquiry, 22(5), 379–387. https://doi.org/10.1177/1077800415617203

Morgan, D. L. (2025). Query-Based Analysis: A Strategy for Analyzing Qualitative Data Using ChatGPT. Qualitative Health Research. https://doi.org/10.1177/10497323251321712

Nguyen, D. C., & Welch, C. (2025). Generative Artificial Intelligence in Qualitative Data Analysis: Analyzing—Or Just Chatting? Organizational Research Methods. https://doi.org/10.1177/10944281251377154

Nguyen-Trung, K., & Nguyen, N. L. (2025, March 4). Narrative-Integrated Thematic Analysis (NITA): AI-Supported Theme Generation Without Coding. SocArXiv. https://doi.org/10.31219/osf.io/7zs9c_v1

Nyaaba, M., Min, S., Apam, M. A., Acheampong, K. O., Dwamena, E., & Zhai, X. (2025, March 11). Optimizing generative AI's accuracy and transparency in inductive thematic analysis: A human–AI comparison. SSRN. https://doi.org/10.2139/ssrn.5174910

Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at work. Organization Studies, 28(9), 1435–1448. https://doi.org/10.1177/0170840607081138

Osmanlliu, E., Senkaiahliyan, S., Eisen-Cuadra, A., Kalla, M., Kalema, N. L., Teixeira, A. R., & Celi, L. (2025). The urgency of environmentally sustainable and socially just deployment of artificial intelligence in health care. NEJM Catalyst Innovations in Care Delivery, 6(8). https://catalyst.nejm.org/doi/full/10.1056/CAT.24.0501

Perkins, M., & Roe, J. (2024). The use of generative AI in qualitative analysis: Inductive thematic analysis with ChatGPT. Journal of Applied Learning & Teaching, 7(1), 390–405. https://doi.org/10.37074/jalt.2024.7.1.22

Schäffer, B., & Lieder, F. R. (2023). Distributed interpretation – Teaching reconstructive methods in the social sciences supported by artificial intelligence. Journal of Research on Technology in Education, 55(1), 111-124. https://doi.org/10.1080/15391523.2022.2148786

Tacheva, J., & Ramasubramanian, S. (2023). AI Empire: Unraveling the interlocking systems of oppression in generative AI's global order. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231219241

Thominet, L., Amorim, J., Acosta, K., & Sohan, V. K. (2024). Role Play: Conversational Roles as a Framework for Reflexive Practice in AI-Assisted Qualitative Research. Journal of Technical Writing and Communication, 54(4), 396–418. https://doi.org/10.1177/00472816241260044

Törnberg, P. (2024). Large language models outperform expert coders and supervised classifiers at annotating political social media messages. Social Science Computer Review, Article 08944393241286471. https://doi.org/10.1177/08944393241286471

Turobov, A., Coyle, D., & Harding, V. (2024). Using ChatGPT for Thematic Analysis. ArXiv, abs/2405.08828.

Wang, P., Zhang, L.-Y., Tzachor, A., & Chen, W.-Q. (2024). E-waste challenges of generative artificial intelligence. Nature Computational Science, 4(11), 818–823. https://doi.org/10.1038/s43588-024-00712-6

Walsh, S., & Pallas-Brink, J. (2023). The Ethnographer in the Machine: Everyday Experiences with AI-enabled Data Analysis. Ethnographic Praxis in Industry Conference Proceedings.

Wen, C., Clough, P., Paton, R., & Middleton, R. (2025). Leveraging large language models for thematic analysis: a case study in the charity sector. AI & SOCIETY. https://doi.org/10.1007/s00146-025-02487-4

Xiao, Z., Yuan, X., Liao, Q. V., Abdelghani, R., & Oudeyer, P.-Y. (2023). Supporting qualitative analysis with large language models: Combining codebook with GPT-3 for deductive coding. Companion Proceedings of the 28th International Conference on Intelligent User Interfaces, 75–78. https://doi.org/10.1145/3581754.3584136

Zhang, H., Wu, C., Xie, J., Rubino, F., Graver, S., Kim, C., Carroll, J.M., & Cai, J. (2024). When Qualitative Research Meets Large Language Model: Exploring the Potential of QualiGPT as a Tool for Qualitative Coding. ArXiv, abs/2407.14925. https://doi.org/10.48550/arXiv.2407.14925

Written by

Susanne Friese
