AI & UX Research: Friend or Foe?

We once viewed research as exclusively human territory. That is quickly changing. More people are using AI as their research assistant. However, this raises eyebrows and prompts an important question: is AI a friend or a foe to UX research?
I believe AI is ushering in what could be a golden age of research. In the past, doing research well was often slow, expensive, and limited to well-funded teams. Many companies made product decisions based on hunches, rushed feedback, or poorly interpreted analytics. The result? Wasted time. Failed launches. User frustration.
AI is changing that. It is making high-quality research faster, more scalable, and more accessible to teams of all sizes. It automates the tedious parts of the process: transcription, note-taking, video tagging, theme clustering, and even initial synthesis. What once took hours can now happen in minutes. That shift frees up time for researchers to do what they do best: find meaning, identify patterns, and advocate for users.
AI can even help with research planning. Tools like ChatGPT and Claude can assist researchers in brainstorming interview guides, writing unbiased survey questions, and generating ideas for participant recruitment. AI can flag outliers in data, summarize interview themes, and visualize sentiment (not always well, but at scale). The technology is not perfect, but it is incredibly useful.
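To make this concrete, here is a minimal sketch of what AI-assisted theme summarization can look like in practice. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and sample notes are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch: asking an LLM to draft theme summaries from interview notes.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

interview_notes = [
    "P1 struggled to find the export button and gave up after two minutes.",
    "P2 said the onboarding felt 'endless' and skipped the tutorial.",
    "P3 exported successfully but did not trust that the file was complete.",
]

prompt = (
    "You are a UX research assistant. Group the following interview notes "
    "into candidate themes. For each theme, give a short label, the "
    "supporting notes, and a confidence caveat. Do not invent quotes.\n\n"
    + "\n".join(f"- {note}" for note in interview_notes)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would work here
    messages=[{"role": "user", "content": prompt}],
)

# Treat the output as a first-pass draft for a human researcher to verify,
# not as finished analysis.
print(response.choices[0].message.content)
```

The point of a sketch like this is the division of labor: the model drafts candidate themes in seconds, and the researcher verifies, refines, and interprets them.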
This changes the game. For smaller teams, it lowers the barrier to entry. For bigger teams, it allows for speed and scale. Most importantly, it challenges the outdated perception that research is slow, expensive, and optional. With AI, we are closer than ever to making rigorous, continuous research a standard part of product development.
But the picture is not all rosy. While AI can help us do better work, it can also invite ethical risks and critical blind spots. As researchers, we must look these risks in the eye.
Let us start with consent. Are participants aware that AI is involved in the research process? If an AI is analyzing their voice, facial expressions, or tone, they deserve to know. More than that, they deserve the right to opt out. Transparency must be non-negotiable. We cannot afford to treat participants as data points. If anything, AI should make us more human in our research ethics, not less.
We also need to ask ourselves where the data is going. Many AI research tools are third-party platforms. What are their privacy policies? How is the data stored, and who has access? If you are running research through a platform that uses AI, you need to know what is happening behind the scenes. UX research often involves sensitive information—especially when working with vulnerable groups. Protecting that data is not just good practice. It is your responsibility.
Then there is the issue of bias. AI does not eliminate bias. It can amplify it. Most AI tools are trained on large datasets, many of which carry embedded biases. If these tools are interpreting user data, they might misrepresent marginalized voices or reinforce harmful stereotypes. That is not hypothetical. It is already happening in AI image generation, voice recognition, and language modeling. So we need to stay critical. We need to audit our tools, question their outputs, and diversify the data we train them on.
There are other things AI cannot replicate: empathy. Trust. Body language. The unspoken. The subtle. These are core ingredients in qualitative research. They are what help us connect with our participants and read between the lines. AI may be able to transcribe an interview, but it cannot feel the discomfort in someone’s pause or the hesitation in their voice. It cannot earn someone’s trust. That still requires human presence.
AI can augment our work. It can speed it up. But it should not replace the human heart of UX research. There is a danger in thinking of AI as a replacement for real insight. Insight does not come from the volume of data alone. It comes from interpretation, empathy, and context. That is something only people bring.
We also need to remember that AI itself needs research. We are designing and deploying AI tools faster than we can study their impact. How do users feel when they are talking to an AI? What do they trust, and what do they fear? What expectations do they have? What harms might they experience? These are not abstract questions. They are design questions, and we need research to answer them.
So, is AI a friend or a foe to UX research?
It depends on how we use it. In the right hands, with the right ethics, AI can be a powerful research partner. It can democratize insights, speed up delivery, and bring rigor to more teams. But we have to stay grounded in our values. We have to protect consent, privacy, and truth. We have to use AI with intention.
UX research is, at its core, about understanding people. AI is a tool that helps us do that better—but only if we never forget the human at the center of the work.
Let us use AI to bring us closer to our users, not further away.
