Susanne Friese, March 29, 2023

Will we soon talk about the AI interviewer effect?

Recently, I came across three pieces of information. The first came in the form of a brochure in which a company advertised the fantastic new ways AI will transform marketing research. The second was a presentation of a new tool incorporating AI. The third was a Wall Street Journal interview with Prof. Gerd Gigerenzer, director of the Harding Center for Risk Literacy at the University of Potsdam, Germany.

Gigerenzer's research highlights that complex algorithms, such as deep neural networks, excel in stable and well-defined situations. However, when it comes to uncertain or unstable problems, like predicting consumer preferences for strategic planning, humans outperform algorithms due to their ability to pick up on essential cues for making good decisions.

To me, it almost sounds cynical when Gigerenzer says that if we want to get the most out of AI, we have to make the world more predictable, including human needs and behaviour. 

Deep artificial neural networks, while adept at finding regularities and correlations, lack the causal thinking, theories, intuitions, and common sense that human intelligence possesses. These networks are powerful calculating machines but do not understand text or context as humans do.

The Twist

However, marketing research tech companies tell us otherwise. They state that their new AI-powered tools can give us instant results to UNDERSTAND customers and consumer experience better.

While I was watching the presentation of the tool Canvs AI, I told myself to just take it in and be open: forget about my understanding of what “understanding” means and go with the flow. Given that, I liked what I saw.

If you have very structured unstructured data, as is the case with open-ended questions from surveys, and you have enough of it (1,000+ responses), the tool, I believe, will give you good results.

Their newest gimmick wasn’t yet ready for release, but they promised that you could ask the tool questions like: “What are all the pros and cons people have mentioned about product X?” So this is exciting stuff. I think it will work reasonably well, as the AI is trained on previous data from various industries.

But remember, we are dealing here with fairly structured data, standardized questions and years of experience with people’s responses to those questions.

False expectations

I'm less optimistic about the following promise I read in the brochure. The claim is as follows:

AI will take over conducting interviews and focus groups. The disadvantages of human-conducted interviews are costs and geographical limitations. Further, they are not scalable, and they are subject to bias from multiple sources: “the people asking the questions, those recording the data, and those ultimately writing the report. A moderator can—wittingly or unwittingly—influence the input they receive through their tone and body language, as well as the questions they choose to ask or avoid asking.” “AI, on the other hand, is wholly unbiased so long as it is trained correctly. AI can analyze responses while interviews and focus groups are in progress, offering real-time feedback which researchers can use to adjust their approach as needed to gain the most valuable insights.”

Let’s take a closer look at this – AI is entirely unbiased AS LONG AS IT IS TRAINED CORRECTLY. The same could be said about researchers. If researchers were trained adequately, they would introduce less bias.

Elsewhere, I have written about the low quality that seems to be standard in qualitative marketing research – and now AI is supposed to fix it all? Marketing researchers seem happy to write notes and discuss those instead of working with transcripts and actually analysing the data: it takes too much time, the client won’t pay for it, blah, blah, blah.

Yes, of course, there is bias if you take notes, as it is impossible to remember everything that was said. Today, however, it takes no effort to get a transcript. THIS is a valuable application of AI.

Regarding the alleged witting or unwitting influence: where there is interaction, there is also reaction. Even if you sit still and don’t grin, smile, or make a funny face, the respondent will react. The same happens if AI conducts an interview. This also constitutes an interaction, and thus there will be a reaction influencing the data you get.

Regarding data analysis – I have already shown what kind of analysis AI can do (see the article “AI powered versus human powered qualitative data analysis”). What kind of analysis do you expect to get in REAL TIME when the data collection process is still ongoing?

If you are skilled and know how to ask interview questions, you can learn something in the first minute of an interview that helps you understand something said 20 minutes later. Of course, not the second you hear it. You need the transcript and need to sit with the data for a while to analyse it and gain this insight (= deep understanding).

So don’t be fooled by those advertising messages. Yes, AI might be able to conduct interviews. But the quality will be different, there will be bias, and there will be AI interviewer influence.

I am looking forward to the first reports where AI interviewer influence is discussed :-)

When to Use and Not to Use AI

According to the insights that Gigerenzer gained from his research at the Max Planck Institute, complex algorithms perform well if you have a situation that is stable and well-defined. Examples are chess, Go, and Jeopardy. If you have a situation or problem that is unstable, like predicting future market developments or consumer behaviour, then his recommendation is to keep your hands off complex algorithms.

I have repeatedly expressed that I use AI tools myself and experiment with them. So I am not opposing their use. I am convinced they will revolutionize the world we live in, not only marketing research. But do consumers behave like the pieces on a chessboard? Do they follow the rules of the game? Are they rational decision-makers? And do we live in a stable, predictable world?

In other words, what do we get if we let deep artificial neural networks, i.e. machines, take over and let them collect and analyse data?

Let's see whether ChatGPT-4 has an opinion about it

I had a little chat with ChatGPT-4 on this topic. It seems to follow my line of argument, being more realistic about what it can and can’t do. I especially liked the part about establishing rapport 😊.

ChatGPT-4:

"Deep learning models, like those based on natural language processing (NLP), have made significant advancements in recent years, enabling them to understand and generate human-like text. However, conducting live qualitative interviews with consumers using deep learning models still has its limitations:Nuance and context: While NLP models can understand and respond to text inputs, they may still struggle with capturing the nuances, emotions, and contextual cues that are essential for in-depth qualitative interviews. Human interviewers are better equipped to respond to these subtleties and adjust their approach accordingly.Ethical considerations: Using an AI model to conduct live interviews may raise ethical concerns, particularly regarding informed consent and transparency. Participants should be made aware that they are being interviewed by an AI and have the option to decline or withdraw.Complex probing: In qualitative interviews, interviewers often need to probe further into participant responses, asking follow-up questions to explore the topic more deeply. While NLP models have progressed in understanding context and generating relevant responses, they may not be as effective as human interviewers in probing complex or emotionally charged topics.Building rapport: One of the key aspects of qualitative interviews is establishing rapport and trust with the participants. This helps interviewees feel more comfortable sharing their thoughts, feelings, and experiences. AI models may struggle to establish the same level of rapport and empathy as a human interviewer."That said, AI models can still be used to support qualitative research in several ways:Transcription and analysis: AI models can be used to transcribe and analyze the content of interviews, helping researchers identify key themes and patterns more efficiently.Chatbots for preliminary interviews: AI chatbots can be used to conduct simple preliminary interviews, gathering basic information about the participant and their experiences, which can then be used to inform a more in-depth, human-led interview.In conclusion, while deep learning models have made significant progress in understanding and generating human-like text, they are not yet capable of entirely replacing human interviewers in conducting live qualitative interviews. AI can, however, play a supportive role in various aspects of qualitative research." 

I then questioned ChatGPT-4 about its ability to analyse qualitative data. I love how it tends to agree with me at first (maybe a US American bias?), followed by a HOWEVER… or, as above, a “That said, AI models can still…”

ChatGPT-4:

“You are correct that there is a distinction between what AI models can do in terms of analysis and the deeper, more contextual understanding that a human researcher can provide. AI models, particularly NLP models, are primarily focused on identifying patterns, themes, and correlations in text data, which can be helpful for processing and organizing large amounts of information.

However, human researchers are often better equipped to analyze qualitative data to understand the underlying meaning, intentions, and context behind the text. They can draw upon their own experiences, intuition, and domain knowledge to interpret the data in a way that AI models may struggle to replicate.

AI models can still play a complementary role in the analysis process by:

Assisting with the initial coding: AI models can help researchers identify recurring themes, keywords, and phrases in the text, which can then be used as a starting point for human-led analysis.

Visualizing patterns and trends: AI models can help visualize the relationships between different themes, codes, and topics, providing researchers with an overview of the data that can inform their subsequent analysis.

Reducing the time spent on manual tasks: By automating certain aspects of the analysis process, AI models can save researchers time and effort, allowing them to focus on more complex and nuanced aspects of the data.

In conclusion, while AI models have limitations in terms of providing the same depth of analysis as human researchers, they can still play a valuable role in supporting the research process. The most effective approach is often a combination of both AI-assisted analysis and human-led interpretation, where the strengths of each method can be leveraged to provide a more comprehensive understanding of the data.”

I hope you enjoyed my little conversation with ChatGPT-4 as much as I did :-)

Just yesterday, the first CAQDAS software announced the integration of generative AI. I am, of course, checking it out and will be testing it. Stay tuned for more.
