AI-Moderated Interviews: If, When, and How to Use Them
Maria Rosala
January 30, 2026
Summary:
AI interviews offer faster feedback at scale, but they're not a replacement for in-depth, human-led semistructured interviews.
In the last year, AI-moderated interviews became available for the first time: AI can now facilitate an interview for you. But how well do these tools really work?
We conducted a study with ten participants and two AI interviewers — Marvin and UserFlix — to find out.
In This Article:
TL;DR
How AI-Moderated Interviews Work
Can AI Interviewers Build Rapport?
Use Cases for AI-Moderated Interviews
When to NOT Use an AI Interviewer
About Our Study
TL;DR
- AI-moderated interviews can help you collect structured input at scale. They’re best when you already know what to ask — like product feedback, recruitment screening, or multilingual interviews without translators.
- They follow the script, not the insight. AI interviewers may probe when an answer is short or unclear (or when you instruct them to), but they don't chase unexpected insights, and they won't skip or reframe weak or irrelevant questions.
- The experience can feel “almost conversational,” but still unnatural. Summaries help participants feel heard, yet interruptions, long pauses, repetitive questions, and poor time management are common.
- Use them to supplement, not replace, human moderation. For messy problem spaces, high-stakes decisions, or studies that require deep domain knowledge and real-time judgment, human interviewers still outperform. These tools are early and will improve, but they're not yet suited for semistructured, in-depth discovery interviews.
How AI-Moderated Interviews Work
An AI-moderated interview is a live interview with a participant facilitated by an AI system with a synthetic voice.
Currently, most AI interviewers on the market are voice-only. In the future, they may incorporate a human-like avatar.
An AI interviewer is a voice that asks questions. Participants can see themselves on camera, and sometimes the question or dialogue is transcribed on the screen. (UserFlix is shown above.)
Like in unmoderated usability testing, participants can complete their interview at a time of their choosing, without a human facilitator. This scheduling flexibility makes it convenient for both participants and researchers.
To set up an AI interview, you must share your research goals or an interview guide with the tool. UserFlix has an AI assistant that helps you craft an interview guide, and Marvin allows you to specify the level of probing for each question.
[UserFlix has an AI assistant that you can chat to when setting up your AI-moderated interview.]
When creating a new AI-moderated interview study, UserFlix’s AI assistant can help you craft an interview guide (if you haven’t got one already).
[In Marvin, you can select how much probing is done for each question. For each question, you can select "Keep it brief", "Probe a little", or "Probe more"]
Marvin allows you to specify how much you want the AI interviewer to probe for each of your questions.
The AI interviewer asks the questions in your guide and “listens” to the response. The system can follow up on the last response, if needed, based on your guide or research goals. For example, you might instruct an AI interviewer to request more detail if a response to an important question is very short.
Can AI Interviewers Build Rapport?
Rapport is built when participants feel understood and accepted by the interviewer, which allows for free-flowing, honest conversation. To build rapport, the facilitator needs to listen carefully to what the participant says, watch their nonverbal cues (such as body language and facial expressions), and appropriately communicate to the participant that they are being heard and understood.
AI interviewers don’t have faces, and they can’t read facial expressions. As any good interviewer knows, we gather and communicate a great deal of information not just from what is said, but from how it’s said and from nonverbal cues (like facial expressions).
As one participant in our study put it:
“If I look … [at] you like this [pulling a displeased face] you're going to notice if something's wrong or if I'm trying to tell you something or expressing something.”
Naturally, we were curious to see how well a synthetic voice can build rapport with real participants.
Participants Felt Heard
Despite the lack of nonverbal feedback from the interviewer (like nodding, smiling, or eye contact), some participants felt they had built rapport with the AI interviewer.
“I remember this was one of the [survey] questions. Do you feel understood? (...) yes, I did. Definitely. This was the most outstanding (...) idea I had about the overall experience.”
“It did a good job of (...) mirroring back to me what I had said so that I felt (...) heard and understood (...) that felt reflective of what a real conversation feels like with another human.”
The AI's ability to condense and summarize what participants had shared was the biggest contributing factor to this feeling, and it was the most common piece of positive feedback about the experience. AI interviewers made participants feel heard, and some participants were impressed with how well and concisely the AI interviewers summarized their thoughts.
Here are some examples of the AI interviewer’s summarization:
UserFlix: “Thank you for that clear rundown. It sounds like you're deeply embedded with your product managers, UX designers, content designers, and engineers, ensuring that everyone stays aligned and informed. Now I’d love to shift gears a bit…”
Marvin: “Yeah, I get your point. That kind of deep in-depth psychological nuance reading, body language, adapting in the moment, AI's not quite there. It's clear that human expertise is still crucial for that level of research. Thanks for sharing that example…”
Summarization is an important facilitation technique. However, when human interviewers use summarization, they paraphrase and then check in with participants to ensure that their understanding is correct. Unfortunately, neither AI interviewer gave participants a chance to weigh in; they summarized and moved swiftly on to the next question.
One participant repeatedly interrupted the AI interviewer to confirm its summaries, which made the conversation stilted and awkward. She explained,
“I wasn't sure whether the AI was looking for confirmation (....) [it] changed the topic right away. (…) I would have expected (...) a little bit of a pause to, you know, kind of allow me to say ‘Yeah, (...) that's how I meant it’.”
At times, the AI interviewer even expressed empathy to participants. For example, UserFlix reassured a participant who apologized for the length of her answers to the questions.
Participant: “Oh my gosh, I feel like my answers are really long, I'm sorry.”
[...]