Using AI to support service delivery

Introduction

AI is creating opportunities to extend the reach and impact of support services. The question is no longer whether to start, but how to use AI meaningfully to enhance delivery, improve experiences for staff, and improve outcomes for end users.

But according to the 2024 Charity Digital Skills Report, only 22% of charities feel prepared to respond to AI opportunities. Similarly, from Healthia’s work across the sector, we know that many organisations have yet to start and those that have are in the very early stages of experimentation.

There isn’t a manual on how to work with AI, and right now, the best way to learn is from peers, through knowledge sharing and experimentation. So we have conducted research to explore:

  • How people are really getting started
  • Where AI has helped organisations do more with less
  • Opportunities, risks and ethical considerations

Our research involved in-depth interviews with people already using AI in service delivery, a short survey of our network, and desk research into examples and case studies from organisations such as Macmillan Cancer Support and RNIB.

In this report we share practical insights and lessons on navigating AI’s potential - whether it’s improving internal processes, tailoring user support, or enabling more accessible services.

Executive summary

People we spoke to recognise that AI is moving fast but are often working with limited resources, so it can be hard to keep up. There is a gap between current activity and what people think could be possible.

People talked about the potential for AI to:

  • Accelerate digital service transformation on a budget
  • Make services more user-centred and accessible
  • Boost internal process efficiency
  • Allow staff more time to focus on human connections

Through the research, three clear themes emerged:

  • Top-down vs bottom-up, where to start with AI?
    While some organisations are adopting top-down AI strategies, supporting frontline staff to experiment and learn can spark practical solutions and uncover scalable innovation.
  • Digital vs. human support, do you need to choose?
    How do we retain the human touch as the world becomes digital? Technology should be an enabler not a replacement for what people do best.
  • Opportunity vs. risk, how to strike the right balance?
    Concerns about unpredictable and sometimes inaccurate results are being mitigated by starting small and taking an iterative approach.

We include case studies from Macmillan Cancer Support, RNIB, Scope, and Breast Cancer Now, exploring AI integration with services.

And lastly we share principles and ideas that have emerged around ways to get started with AI in service delivery.

Findings

1. Top-down vs. bottom-up: where should we start?

AI is moving fast

AI is moving fast, so it’s hard for organisations with limited resources to keep up.

The 2024 Charity Digital Skills Report revealed a gap between organisations investing in digital and those that aren’t. It highlighted a growing divide between small and large charities, with larger charities advancing while smaller charities are more likely to be just starting out.

The report found that 64% of small charities with an income of up to £1m are at an early stage with digital (‘curious’ or ‘starting out’), compared to 26% of large charities. In comparison, 74% of large charities have a strategy for digital (‘advancing’ or ‘advanced’ stage), compared to 36% of small charities. 

The pace of AI development risks increasing this gap.

In addition, charities aren’t typically seen as early adopters and are often overlooked and excluded from conversations driving technological development.

Implementing a small-scale, bottom-up approach

AI is multifaceted and continually evolving so it’s hard to know where to start when you are thinking about service delivery. This can lead to an overly risk-averse approach and a focus on developing a detailed, all-encompassing organisational strategy that tries to document all of the possible risks and mitigations before allowing experimentation.

However, some organisations have taken a different approach. They are supporting a process of small-scale experiments, within a governance framework, where staff can test and evaluate ideas that may warrant further large-scale exploration.  

There’s also no need to reinvent the wheel. AI developed for one use case or service can be adapted and reused elsewhere. Rachael Gilthorpe from Action for Children told us their approach is to “Build once and implement twice”.

Creating an environment for innovation

Some organisations have created powerful ways to foster innovation. For example, by providing research and development (R&D) time for staff, to support people to explore and test ideas using AI. Those closest to service delivery are often more likely to uncover challenges that could be addressed with technological solutions.

Giving staff the space to experiment is a great way to get started, but it also needs open-door policies from cross-functional leadership to allow ideas with potential to scale safely. Robust policies, governance, and assessment models uncover potential blockers and risks early in the process.

Case study: Macmillan

Macmillan have developed an AI-powered semantic search tool to enable users to find what they need…no matter the question. 

“A visitor to our website might search for ‘Who will look after my goldfish?’. Our standard keyword search would return zero results for that phrase, but our new Smart Search, which is an AI-driven semantic search, will successfully return a link to our ‘Pet Care’ page, even though the word ‘goldfish’ does not appear on the page.”

Howard Bayliss – Macmillan

Macmillan creates space for innovation by providing dedicated R&D time, and an open-door policy with Technical Leadership, which encourages staff to try out new ideas. Using this time, the digital team developed an AI tool to improve the website experience for users. Through experimentation, they developed a semantic search tool that supports users in finding the information they need on the Macmillan website, even if the questions they ask are obscure or contain spelling errors or slang. The semantic search tool was designed to avoid the risks of AI returning false information or exposing sensitive data: it was constrained to answer only with links to publicly available content on Macmillan’s website.
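The idea behind semantic search can be sketched in a few lines. In the toy example below, hand-crafted word vectors stand in for a real embedding model (which is what a production tool would use), and the pages, URLs and function names are illustrative assumptions, not Macmillan's actual implementation:

```python
import math

# Toy word vectors: in production these would come from a trained
# embedding model; the values here are hand-crafted purely to
# illustrate that "goldfish" sits close to "pet" in meaning-space.
WORD_VECTORS = {
    "goldfish": [0.9, 0.1, 0.0],
    "pet":      [0.8, 0.2, 0.1],
    "care":     [0.1, 0.9, 0.1],
    "look":     [0.2, 0.7, 0.2],
    "after":    [0.1, 0.6, 0.3],
    "money":    [0.0, 0.1, 0.9],
    "benefits": [0.0, 0.2, 0.8],
}

def embed(text):
    """Average the vectors of known words (unknown words are skipped)."""
    vectors = [WORD_VECTORS[w] for w in text.lower().split() if w in WORD_VECTORS]
    if not vectors:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each "page" is represented only by a title and a link: the tool
# answers with links to existing content, never with generated text.
PAGES = {
    "/pet-care": "pet care",
    "/money-and-benefits": "money benefits",
}

def smart_search(query):
    """Return the page whose title is semantically closest to the query."""
    query_vec = embed(query)
    return max(PAGES, key=lambda url: cosine(query_vec, embed(PAGES[url])))

print(smart_search("who will look after my goldfish"))  # -> /pet-care
```

A keyword search would score zero for this query against the pet-care page; the vector comparison still finds it because "goldfish" and "pet" point in similar directions.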

‘Smart Search’ is now live and Macmillan is gathering real-time data on user satisfaction and effectiveness.

What value does this bring?

  • It enables users to ask questions in a way that feels natural to them, resulting in better engagement and support
  • It saves Macmillan time on signposting and uses content that is already available

Starting small still requires a safe, best-practice approach

‘Just starting’ with AI needs to go hand-in-hand with an understanding of risks and limitations. Changes need to be made in an open and collaborative way. We discuss this more in ‘Opportunity vs. risk: how do we strike the right balance?’.

Taking advantage of integrated AI solutions

For organisations with limited time and access to in-house AI expertise, it’s possible to pilot off-the-shelf AI tools integrated with software already in use (for example, Microsoft’s Copilot or Google’s Gemini). This means it’s possible to experiment with AI to support service delivery behind the scenes, without the need for developer time. These tools can support you to:

  • Write emails, reports, or funding applications
  • Generate meeting summaries with minutes and actions
  • Tailor messaging to different audiences (for example, customising fundraising campaigns)
  • Create draft presentations in a charity’s unique branding or tone of voice

2. Digital vs. human support: do we need to choose?

"The most important thing for us is to stay true to the needs of our users. People want to speak to a 'real person' so we need to find out how to optimise service delivery whilst answering that need." 

Rachael Gilthorpe, Digital Services and Service Design Lead at Action for Children

We need to do more with less

Charities are stretched more than ever before, with demand for services growing whilst budgets are increasingly restricted by the challenging economic climate.

But what if AI can support transformation, allowing more face-to-face support for those in need without having to compromise?

Finding out what AI can and can’t do

Face-to-face interventions tend to be prioritised by service providers as it's often the best way to personalise support and help people to feel cared for. This can result in digital being deprioritised.

But a user-centred approach—prototyping and testing changes with end users and staff—can identify where AI is fit-for-purpose and importantly, where it’s not. 

Jonathan Chevallier, chief executive of Charity Digital said: “You should always be user-led in your designs [...] Too many businesses before were maybe just saying, how do we stop these calls coming in and save a lot of money? And they created massively frustrated users.” (source: Third Sector Podcast: Charities and ChatGPT).

Can AI make services more user-centred?

AI can’t fix everything. Starting with a focus on users and truly understanding the problem is key to creating services that help both staff and those who rely on them.

What if charities could use AI to:

  • Extend service provision out-of-hours?
  • Identify the most vulnerable people so their needs can be prioritised?
  • Spot insights that may go unnoticed by humans?
  • Limit the exposure of staff to distressing data?

Enhance, don’t replace

AI offers new ways to manage, make sense of and enhance our use of data to better serve people. Through our conversations, we found that AI – in the form of chatbots or dynamic questionnaires – could help services to enhance their front door, making their services more accessible to people even out-of-hours.

A proactive chatbot could also help gather more information about users and their needs to help spot hidden needs earlier in the journey. This means that when staff get back to users they’re not starting from scratch and already have case notes to work with.
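As a sketch of how such an intake flow might work, the branching script below records answers as case notes and flags urgent cases for staff to prioritise. The questions, flow structure and field names are illustrative assumptions, not any specific charity's tool:

```python
# A minimal branching intake flow: each node asks a question, and the
# user's answer decides which node comes next.
FLOW = {
    "start":   {"question": "What do you need help with?",
                "next": {"money": "urgency", "housing": "urgency"}},
    "urgency": {"question": "Do you need help in the next 48 hours?",
                "next": {"yes": "done", "no": "done"}},
}

def run_intake(answers):
    """Walk the flow with a list of answers, returning structured case notes."""
    notes, node = [], "start"
    for answer in answers:
        step = FLOW[node]
        notes.append(f"{step['question']} -> {answer}")
        node = step["next"].get(answer, "done")
        if node == "done":
            break
    # Flag urgent cases so staff can prioritise when they're back online.
    urgent = any("48 hours? -> yes" in n for n in notes)
    return {"notes": notes, "priority": "high" if urgent else "routine"}
```

When staff return, they see the recorded answers and a priority flag instead of a blank slate; a generative chatbot could make the questioning more adaptive, but even a scripted flow like this captures the "front door stays open" idea.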

Treating AI like a member of the team

Carefree’s AI chatbot FIN AI has not only allowed them to provide support to their users when staff aren’t available, it has also enabled them to respond to the growing demand without needing more resources. Charlotte Newman and Miruna Harpa from Carefree said “We treat FIN AI like a member of the team [...]. Our values are a very important part of our process so the bot needs to talk like us and be warm, caring, proactive and supportive."

Carefree’s FIN AI demonstrates that the opportunity with AI isn’t to replace staff, but to help small organisations scale and grow.

Some user needs can’t be met by people

In the same way that sometimes technology isn’t the answer, the opposite can also be true. In some specific contexts or for particular user groups, digital-first is the answer. For example, through our co-design work for the NHS, we spoke to teenagers who were more comfortable opening up about their mental health struggles online instead of attending a clinic drop-in session.

Offering multiple front doors into a service increases the chance to ‘meet’ people on their terms when they’re ready to get help.

Content tailoring and accessibility

AI tools can help organisations tailor content more easily and speed up the creation of accessibility features. As technology advances, it can go beyond simple tasks like text-to-speech, adding user-focused features like applying Plain English to online content or automatically creating subtitles for videos and transcripts for podcasts.

The opportunity to increase knowledge about your users and respond to their needs quickly

Some organisations are already using AI to increase their knowledge about users and their needs, expanding the opportunities for conducting user research. For example, Parkinson’s UK used AI to monitor and analyse users' conversations and trending topics across digital channels, and then used this data to shape content strategies and tailor resources to what users really need.

Balancing human and digital support

Overall, it’s important to understand the opportunities and limitations of AI-powered technology to establish where it is fit for purpose and where it isn’t. Building context-specific knowledge and solutions based on user needs increases the chances of making a positive impact on people’s lives.

Case study: Scope

Scope is exploring how AI can support their research team.

“We have to manually input a lot of emotionally challenging, qualitative data into our database…could AI help with this? This could help take some of the emotional load off the team, and allow us to focus on other parts of our work” 

Nina M. - User Researcher at Scope

Scope is running small experiments to explore how AI can assist user researchers in the manual aspects of processing and sharing user insights. 

The content design team is investigating how AI can help reduce time spent on manual tasks. They have been conducting experiments with mock data to see how AI can support their needs and processes.

One experiment is focusing on developing an internal AI chat tool to enable other departments to quickly access user insights, which the team currently provides manually. Another experiment is exploring whether there is potential for AI to assist in uploading emotionally challenging qualitative data to their database, which would reduce effort as well as the emotional burden on the team.

What value does this bring?

  • Reducing time spent on manual tasks can allow the team to focus on the work where their skills add most value, for example, uncovering research insights and writing user stories.
  • Easy access to user insights within the organisation will inform evidence-based decision making.

Case study: RNIB

RNIB are using AI to make bank statements accessible 

“We see ourselves as pushing the boundaries of what’s possible. By combining user research and AI solutions, we are able to provide a more tailored, helpful audio experience for blind and partially sighted bank customers.”

Aidan Forman - Director of Technology and Digital Transformation at RNIB

The Royal National Institute of Blind People (RNIB) found that blind and partially sighted users struggled to get essential information quickly from transcribed documents. Traditional audio reads every word in order, without any prioritisation, making it tedious to access details. Imagine every bank transaction being read aloud, one by one.

To address this, RNIB conducted research to understand what information is important to users across different document types. The team then developed an AI solution using Azure OpenAI to interpret each document and provide a more tailored, helpful audio experience for users.
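The prioritisation idea can be illustrated with a rule-based sketch: rather than reading every transaction in order, surface the balance, the totals, and the largest payment first. RNIB's production tool uses an LLM for this; the hand-written summary logic and field names below only mirror the principle and are not their implementation:

```python
# Illustrative sketch: produce a short, prioritised audio-ready summary
# of a bank statement instead of reading every line in order.
def summarise_statement(balance, transactions):
    """transactions: list of (description, amount) tuples; spending is negative."""
    spent = sum(-amount for _, amount in transactions if amount < 0)
    received = sum(amount for _, amount in transactions if amount > 0)
    # The largest outgoing payment is usually what users want to hear first.
    biggest = min(transactions, key=lambda t: t[1])
    return (f"Your balance is {balance:.2f} pounds. "
            f"You spent {spent:.2f} and received {received:.2f}. "
            f"Largest payment: {biggest[0]}, {-biggest[1]:.2f} pounds.")
```

An LLM-based version can go further, answering follow-up questions about the document, but the user-research step is the same: find out which details matter, then put them first.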

What value does this bring?

Access to information is a fundamental right and this innovation improves experiences for blind and partially sighted people, going beyond providing simple ‘access’. Through a combination of research to identify a problem worth solving and innovation with AI, RNIB have created a solution that ensures users can get to the information they need as quickly as possible.

3. Opportunity vs. risk: how do we strike the right balance?

Understanding the risks and limitations of AI

Like any technology, AI comes with its own set of risks and limitations. These include ethical and privacy concerns, such as the risk of bias or data breaches, especially in the context of customer-facing tools.

For some, those risks still outweigh the opportunities, and a lack of trust can lead organisations to ban AI altogether. This can mean teams bypass restrictions and deploy their own solutions to meet their needs.

Instead, people told us they have had more success through increasing AI literacy across an organisation while simultaneously educating staff around risks and limitations. Some organisations are already providing AI training for their staff through guidance documentation and ‘lunch and learn’ sessions.

The importance of data governance and AI policy

Everyone we spoke to said that a strong data governance model and AI policy were the bedrock for the ethical implementation of AI. It’s important to involve as many stakeholders as possible in this process, from IT and data security to digital, marketing, accessibility and customer support.

Services are the sum of their parts and sometimes only those closest to the work will be able to spot specific issues. As well as causing security risks, a poorly designed AI tool could alienate users through its use of language or a lack of awareness of condition-specific context.

Policies need to be regularly reviewed and updated to keep up with AI development and an evolving understanding of user needs. There are templates and guidance available online that can help organisations get started with creating an AI policy. An important consideration is that such policies and governance are key enablers of bottom-up innovation as discussed in the 'Top-down vs. bottom-up: where should we start?' section.

Working in the open 

By working openly and learning from others, both within and outside their field, organisations can better identify and manage risks, helping them move forward. Although it may seem counterintuitive, involving more risk-averse teams early on can encourage innovation through teamwork, inclusion, and co-creation.

Engaging with individuals and teams from across the whole organisation invites individuals to share thinking, experience and perspectives on the considerations and decisions around AI. 

Collaboration can also accelerate the process. For example, Comic Relief has joined an AI collaborative working group alongside other UK charities. They meet regularly to learn from each other, build skills and discuss best practices in a safe, open setting.

Restricting AI by design to minimise risk

"Necessary governance and restriction can lead to innovation" 

Howard Bayliss - Senior Developer at Macmillan

AI isn’t always accurate - hallucinations are well documented. Some risk can be mitigated through design: it’s possible to build tools that restrict the data the AI references and control the type of answers shared with users. You can read more about this in the Macmillan case study.
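One way to restrict a tool by design is a post-filter: whatever the underlying model produces, only links from an allow-list of published pages ever reach the user. The URLs, messages and function name below are hypothetical, used only to illustrate the guardrail pattern:

```python
import re

# Allow-list of pages the tool may link to (hypothetical URLs).
ALLOWED = {
    "https://example.org/pet-care",
    "https://example.org/financial-help",
}
FALLBACK = "Sorry, I couldn't find a page for that. Please contact our support team."

def constrain(model_output):
    """Keep only allow-listed links from a model's raw output.

    The model (not shown) is prompted to answer with links; this filter
    guarantees that nothing outside the published site reaches the user,
    even if the model hallucinates a URL.
    """
    urls = re.findall(r"https?://\S+", model_output)
    safe = [u for u in urls if u in ALLOWED]
    if not safe:
        return FALLBACK
    return "You may find these pages helpful: " + ", ".join(safe)
```

Because the filter sits outside the model, it holds even when the model misbehaves: a hallucinated or malicious URL simply never makes it to the user.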

Case study: Breast Cancer Now

“Charities are thinking 'should we or shouldn't we' with AI. But I think this is the wrong question. The question is ‘how is AI impacting the people we serve and how should we respond to that now and in the future?’”

Cath Biddle - Associate Director of Digital, Breast Cancer Now

Breast Cancer Now is focusing on developing AI literacy across the organisation, including all staff, leadership and trustees. They want to adopt AI systems that are effective and efficient, but that also avoid discrimination or harm.

They are doing this by hosting lunchtime learning sessions and writing internal guidance to help increase knowledge around the use of AI. Through open dialogue, they want to engage everyone in the journey, and ensure everyone is aware of the risks and limitations of AI. They are also piloting ‘off the shelf’ tools so that they can learn through experience.

What value does this bring?

  • Staff can use AI more confidently and responsibly when they understand its risks and limitations.
  • Involving everyone also brings diverse perspectives, helping to uncover important considerations, such as those related to ethics, accessibility or data biases, that might otherwise be missed.

Getting started

Drawing on what we heard from charities involved in this research, as well as further desk research conducted by the team, we have developed the following set of principles to help you get started with or accelerate your existing work with AI.

Five principles

  1. Start with people and prioritise problems worth solving
  2. Understand and mitigate the potential risks
  3. Create an environment for innovation
  4. Start small and iterate
  5. Don’t do it alone

1. Start with people and prioritise problems worth solving

With so many things AI could do, how do you know which problems are the right ones to solve? By starting with user needs and evaluating how well particular solutions meet particular needs, we reduce the risk of investing in the wrong solution.

Don’t be afraid to start with staff needs: efficient internal processes will also improve experiences for the people you support. Solutions tailored to specific user needs will have the biggest impact.

2. Understand and mitigate the potential risks

People who had already started their journey with AI all said that a strong data governance model and AI policy were the foundation of secure and ethical AI development. Are the right voices involved early in the process? Given the pace at which AI technology is evolving, these need to be regularly reviewed against new features and criteria.

3. Create an environment for innovation

Innovation doesn’t happen in a vacuum. Strong leadership is necessary to drive change. By educating and encouraging staff to test new ideas, we can expedite learning. Having open-door policies or creating multi-disciplinary AI working groups will foster the right environment for innovation.

4. Start small and iterate

You don't need to be an AI expert to start experimenting or simply to start the conversation. It can also be hard to keep up with the pace of AI development. It’s OK to start with small-scale pilots using off-the-shelf solutions and see where they take you.

5. Don’t do it alone

Embrace collective power! Collaborate with people inside and outside your organisation. By sharing insight, ideas, challenges and solutions we can accelerate AI development that ultimately increases service impact.

Ten ideas for ways AI can help with service delivery

Below we have summarised some potential use cases for AI within charity service delivery that were shared or referenced during our research.

  1. Casework management - AI can simplify tedious casework management and free up time for face-to-face support. For instance, HelpFirst uses AI to help staff prioritise the most vulnerable users and summarise case notes.
  2. Out-of-hours support - AI can keep the service front door open with enhanced scripts and questionnaires, providing support even outside regular hours. This potential is further explored in our section ‘Can AI make services more user-centred?’
  3. Online customer insight tracking - AI can monitor and analyse users' conversations and trending topics across digital channels and use this data to shape content strategies and tailor resources to what users really need. Parkinson's UK, for example, used AI to better listen to and respond to the needs of their online communities.
  4. Funding applications - AI can help with drafting and refining grant applications, suggesting structure and language enhancements based on previous, successful submissions. More on this can be found in our section on 'Taking advantage of integrated AI solutions'.
  5. Accessibility - AI can improve access to information for people with access needs. For example, RNIB uses AI to enhance its accessible document service. Read more in our RNIB case study.
  6. Volunteer matching - AI can help managers to coordinate volunteers by matching their availability, preferences and skills to opportunities. LiveImpact’s blog discusses how AI can enhance volunteer management in non-profits.
  7. Customer support - AI, such as chatbots, can help facilitate some aspects of initial contact and out of hours support. Mind uses Limbic’s virtual referral assistant, while Carefree uses AI to help with certain aspects of its customer support, automating messages and tailoring them to user behaviour.
  8. Educational resources in emergencies - AI can offer real-time, tailored answers to urgent questions based on extensive data and guidance. Save the Children, for instance, is developing an AI tool to help parents, teachers, and caregivers get quick, context-specific child protection advice for emergencies and daily situations.
  9. Website navigation and search - AI-powered search tools can help users navigate charity websites, to find essential information quickly and efficiently. As discussed in our case study, Macmillan has developed an AI-powered semantic search tool to help users to find what they need online.
  10. Adding accessible content - AI can automatically generate subtitles for videos and transcripts for podcasts. Additional insights are available in our section on content tailoring and accessibility.

About Healthia

Healthia is a strategic research and service design partner working with organisations to support needs-based transformation. Our work unlocks decisions based on evidence not guesswork and builds a solid foundation for change.

Clients include NHS England, NHS ICBs, Public Health Wales, Macmillan Cancer Support, Breast Cancer Now, Scope and WithYou.

Typically, organisations choose to work with us when they:

  • are starting a transformation project,
  • need to improve a service or launch new services,
  • are unsure about where to invest for impact,
  • lack structured evidence around what people need.

We work to:

  • uncover unmet needs,
  • identify opportunities for where digital and AI can help,
  • co-design change,
  • prototype opportunities for innovation,
  • build capability across an organisation,
  • build watertight cases for investment.

We achieve these outcomes:

  • clarity and vision for needs-based transformation,
  • confidence around where and how to invest,
  • better experiences for all people involved.

ServiceShift: AI innovation sprint

Is your organisation struggling to navigate AI's complexity, unsure how to separate real opportunities from hype? Maybe there’s uncertainty around where to begin exploring AI, how to identify valuable applications, and how to effectively manage risks and costs.

Healthia’s ServiceShift AI Innovation Sprint is a concise one-week, people-centric approach to help organisations identify and prioritise impactful AI opportunities. 

The sprint includes co-creation sessions to explore service challenges, ideation to generate potential AI solutions, and a focused evaluation phase to assess feasibility and impact. By the end of the week, teams have defined roles, clear priorities, and an actionable roadmap to guide AI initiatives forward.

Outcomes

  • Alignment: We foster a cross-functional team with shared goals, ensuring collaborative ownership of AI opportunities.
  • Focus: We prioritise high-impact, feasible AI applications aligned with organisational objectives.
  • Momentum: We build enthusiasm and buy-in across stakeholders, creating internal advocates for AI solutions.
  • Action: We establish a roadmap with defined next steps, guiding confident progression toward meaningful AI-driven service improvements.

We’d love to work together. Get in touch to see if we'd be a good fit for your situation. Contact gareth.fryer@healthia.services, 07984 972 234.

Appendix

Authors and editors of this white paper:

  • Molly Northcote - Consultant
  • Cécile Pujol - Senior Consultant
  • Gareth Fryer - Consulting Director
  • Claire Reynolds - Consulting Director
  • Sam Menter - Managing Director and Founder

Resources for charities

Guidance on AI and data protection (Information Commissioner’s Office)

AI policy template for charities (Platypus Digital and William Joseph)

AI hub (Charity Comms)

Evaluating generative AI text tools using an experimental framework (Neontribe via CAST’s Shared Digital Guides)

References

Amar, Z. (2024) How is AI changing organisations?

Amar, Z., Ramsay, N. (2024) Charity Digital Skills Report

Bayliss. H (2024) AI Semantic Search 

Charity Digital (2023) How are charities using artificial intelligence in service delivery?

Good Innovation (2024) AI Charity Collaboration

Joseph Rowntree Foundation (2024) Grassroots and non-profit perspectives on generative AI

Latham, P. (2024) Charities and Artificial Intelligence

Scurr, D. (2024) Charities harness AI for greater impact: Insights from the Deloitte Digital Connect panel session

Tanner, J. (2023) Five tips for leading in the AI era

Third Sector (2023) Third Sector Podcast: Charities and ChatGPT

Twentyman, J. (2021) Parkinson’s UK turns to AI to head off the ‘charity crunch’

Disclaimer

Any tools mentioned in this white paper are not endorsements. The examples and case studies are for illustration only, and readers should conduct their own research before adopting similar approaches or tools. The contributions from the charities we spoke to reflect their professional experiences, but don’t necessarily represent the views of their employers or any related organisations. All information is shared in good faith to encourage thoughtful discussion and to help you deliver Healthia® services.

Thank you to the organisations who have contributed to this research including: