Guardrails and guidance: ensuring the safe use of AI in qualitative research

This blog is the first from Transformation Partners in Health and Care (TPHC) and the Institute of Public Care at Oxford Brookes University (IPC). By combining TPHC’s NHS-facing transformation and analytics expertise with IPC’s academic rigour and leadership in research and evaluation, we’re exploring how emerging technology can be used thoughtfully and safely. Starting with AI in qualitative research, we’ll be sharing how we’re approaching its use in our own teams and offering tips on what to watch out for as you navigate these challenges in your own organisations.

Qualitative research is profoundly insightful. It gives us the nuance, depth and context that numbers alone can’t reach, telling us compelling stories about the services we’re working with. It helps us understand lived experience, explore complexity and — at its best — amplify voices that are often overlooked. In health and social care especially, this human-centred insight is not a “nice to have”; it’s essential. 

But qualitative research also takes time. Rigorous analysis requires skill, reflexivity and careful judgement, particularly for more intensive methods such as grounded theory or phenomenology. Even lighter-touch qualitative work can feel resource-heavy when teams are under pressure to move quickly and, in today’s climate, to do so with fewer resources.

Enter artificial intelligence. 

Our teams are now successfully using AI tools to transcribe interviews, summarise free-text responses, and surface themes across large datasets. Used well, these tools save time and extend analytical capacity. Used badly, they can introduce new risks that undermine the very strengths of qualitative research. This blog focuses on those risks and, importantly, what can be done about them.

Why AI changes the risk profile for qualitative research 

AI introduces challenges that are quite different from those we’re used to managing, so we need to adapt rather than play catch-up.

Confidentiality and data protection 

Qualitative data is often rich, personal and sensitive. Uploading transcripts or open-text responses into AI tools can create risks if data is stored, reused or processed outside agreed governance arrangements and the agreements made with the participants who provided that data*. Not all tools are suitable for health and social care data, and “easy to access and use” does not always mean “safe to use”.

*One example is the Microsoft AI researcher data exposure (2023): while attempting to share open-source training data on GitHub, Microsoft researchers accidentally exposed 38 terabytes of private data through a misconfigured SAS token, including internal passwords, private keys and more than 30,000 internal Microsoft Teams messages.

Bias and partial perspectives 

The large language models used in AI tools are trained on vast and imperfect datasets. That means they can reproduce existing biases and dominant narratives. In qualitative analysis, this can subtly shape which themes are emphasised and which experiences are sidelined, particularly when working with marginalised groups. It is a challenge we humans face too.

Surface-level interpretation 

Qualitative analysis is not just about spotting patterns. It involves interpretation, challenge, theory-building and reflexivity. AI can generate plausible summaries very quickly, but plausibility is not the same as validity. Without careful human oversight, outputs can oversimplify meaning, miss context or, in some cases, simply be wrong. 

False confidence 

AI outputs often sound authoritative and polished. This can make them harder to challenge, especially for less experienced researchers. The risk is not just error, but misplaced trust. 

So, what does “safe use” actually look like? 

None of this means AI should be avoided altogether. It does, however, mean that clear guardrails are essential. Our teams have developed guidelines and processes that help mitigate the risks, and we can help you do the same.

Start with data safety 

Before using any AI tool, teams should be clear about what data is being shared, how it is anonymised, where it is processed (or stored) and whether this aligns with organisational policies and ethical commitments. 
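As a purely illustrative sketch (not a substitute for proper anonymisation or information-governance sign-off), a first-pass redaction of obvious identifiers before anything is sent to an external tool might look like this in Python. The name list and patterns are hypothetical and would need tailoring to your own data:

```python
import re

# Hypothetical list of direct identifiers agreed for this project;
# in practice this would come from your participant records.
KNOWN_NAMES = ["Jane Smith", "Sam Patel"]

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
UK_PHONE = re.compile(r"\b0\d{3,4}\s?\d{6,7}\b")       # e.g. 07700 900123
NHS_NUMBER = re.compile(r"\b\d{3}\s?\d{3}\s?\d{4}\b")  # e.g. 943 476 5919

def redact(transcript: str) -> str:
    """Replace obvious direct identifiers before any text leaves your environment."""
    text = EMAIL.sub("[EMAIL]", transcript)
    text = UK_PHONE.sub("[PHONE]", text)
    text = NHS_NUMBER.sub("[NHS NUMBER]", text)
    for name in KNOWN_NAMES:
        text = re.sub(re.escape(name), "[PARTICIPANT]", text, flags=re.IGNORECASE)
    return text

print(redact("Jane Smith (jane.smith@example.org, 07700 900123) said the clinic felt rushed."))
```

Even a pass like this only catches well-formatted identifiers; indirect identifiers in the narrative itself still need human checking.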

Be transparent about AI use 

AI should never be invisible in the research and evaluation process. Being open about where and how it has been used — for example, in transcription, coding support or summarisation — helps maintain trust and allows findings to be interpreted appropriately. 

For transparency, this blog has been created entirely by humans!

Keep humans firmly in the loop 

AI should support human analysis, not replace it. Teams need to actively interrogate outputs, test alternative interpretations and bring contextual understanding that AI simply does not have. 
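One way to make this concrete (a sketch only, assuming Python and whatever review workflow your team already uses) is to record each AI-suggested theme alongside the evidence checked and a named reviewer’s decision, so nothing enters the final analysis without explicit human judgement:

```python
from dataclasses import dataclass

@dataclass
class ThemeReview:
    """One AI-suggested theme and the human judgement made about it."""
    theme: str                    # theme as suggested by the AI tool
    supporting_quotes: list[str]  # quotes checked against the original transcripts
    reviewer: str                 # named person accountable for the decision
    decision: str                 # "accept", "revise" or "reject"
    notes: str = ""               # alternative interpretations considered

review = ThemeReview(
    theme="Carers feel excluded from discharge planning",
    supporting_quotes=["'Nobody asked me what support we had at home.'"],
    reviewer="A. Researcher",
    decision="revise",
    notes="Theme conflates discharge planning with follow-up; split into two codes.",
)

# Only reviewed themes should feed the final write-up.
assert review.decision in {"accept", "revise", "reject"}
```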

Use tools that are fit for purpose 

Not all AI tools are designed for qualitative research. Understanding what a tool can and cannot do, and using it only for appropriate tasks, is critical to maintaining rigour. 

Practical questions to ask before using AI 

For teams considering AI in qualitative work, a few simple questions can be surprisingly powerful: 

  • What problem are we actually trying to solve? 
  • Is AI the right tool for this task? 
  • What are the risks to participants, data and interpretation? 
  • How will we quality-assure the outputs? 
  • Who remains accountable for the final analysis? 

Developing shared guidance, ethical checklists or review processes can help ensure AI use is consistent, thoughtful and proportionate rather than ad hoc.  
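For teams that keep project governance notes in a structured form, a minimal sketch of such a checklist might look like the following. This is illustrative only, and the format matters far less than the conversation it prompts:

```python
# A minimal pre-use checklist mirroring the questions above. How answers are
# recorded (a form, a document, a file in the project folder) matters less
# than making sure they are answered by a named person before AI is used.

CHECKLIST = [
    "What problem are we actually trying to solve?",
    "Is AI the right tool for this task?",
    "What are the risks to participants, data and interpretation?",
    "How will we quality-assure the outputs?",
    "Who remains accountable for the final analysis?",
]

def ready_to_proceed(answers: dict[str, str]) -> bool:
    """Return True only if every question has a non-empty, recorded answer."""
    missing = [q for q in CHECKLIST if not answers.get(q, "").strip()]
    for q in missing:
        print(f"Unanswered: {q}")
    return not missing
```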

We can work with you to create and disseminate these within your teams and organisations in a way that fits how you work now and how you want to work in the future. If you would like to talk about your needs, or just to have a chat, email: rf-tr.tphc-communication@nhs.net.

Looking ahead: this is just the start 

AI is evolving at an extraordinary speed, and its role in qualitative research is only going to grow. The challenge is not whether to engage with it, but how to do so without losing the rigour, ethics and humanity that sit at the heart of qualitative approaches. 

This blog is the first in a short series exploring the safe and effective use of AI in research and evaluation. Let us know what AI topics you’d like to hear more about.  

Upcoming posts will look in more depth at: 

  • Prompting with purpose – why prompt design matters so much in qualitative analysis, and how poor prompts can distort findings 
  • Quality assurance and human judgement – practical ways to test, challenge and validate AI-supported analysis 
  • From policy to practice – examples of how organisations are building governance, guidance and capability around AI use in research 
  • Harnessing technology to transform research and evaluation – reflecting on the benefits of technology to improve efficiency in research and evaluation

Our TPHC expert

Dr Alex Mears, Research Specialist

Alex has over 20 years’ experience in and around the NHS and holds qualifications in law, research methods, and psychology, including a PhD. His expertise includes evaluation, research methodology, statistical analysis, and curriculum development.

He is a peer reviewer for the National Institute for Health Research and several academic journals. 

Alex has delivered projects for national and regional clients, serving as PMO lead, subject matter expert in digital, and evaluation lead. He is also trained in PRINCE2, agile (Scrum Master certified), and consultancy.