
Why Your Charity’s AI Projects Keep Stalling (And What Actually Works)
If you work in a small or medium‑sized charity, you have probably approved at least one new “AI‑powered” tool, attended a webinar about AI’s potential – and then watched very little actually change in the day‑to‑day work of your team.
You are not alone. In 2024, only 15% of charities were using AI in their service delivery, with most usage happening “behind the scenes”; the main barriers were a lack of skills and training and concerns about quality and ethics. Separate research with 1,500 small charities found that 79% were not using AI tools at all, with top barriers including lack of technical skills, limited understanding of what AI can do, and worries about data privacy.
Meanwhile, many staff are quietly experimenting with ChatGPT or Copilot without any policy, guidance, or strategic direction.
This article explains, in plain English, why AI projects in charities keep stalling – and sets out a practical sequence that actually works for organisations with 15–60 staff and limited capacity.
The uncomfortable truth: AI is already in your charity, just not strategically
Even in organisations that say they “aren’t using AI yet”, individuals often are. In interviews for a recent operations audit of a UK charity, staff described using ChatGPT to draft grant applications, press releases, and social media posts – without any formal training or policy. Some had pasted beneficiary‑related summaries into free AI tools to help with report writing, again with no organisational guidance.
Sector‑wide data shows the same pattern. The Charity Digital Skills Report 2025 found that concerns about data privacy, GDPR, and security are now one of the top barriers to AI adoption (43%), alongside limited digital skills and lack of training. Yet AI adoption is growing: commentary from Zoe Amar notes that AI use “shows no sign of slowing down”, with more smaller charities using AI than the previous year.
In other words, AI is already present – but as fragmented, individual experiments rather than a deliberate, organisation‑wide approach.

Five reasons AI roll‑outs fail in small and medium charities
1. Tool‑first, not problem‑first
The most common failure pattern is simple: the organisation leads with a tool, not a problem. A demo looks impressive, a discounted licence is offered, trustees are keen to show they are “keeping up” – so the tool is bought, training is arranged, and everyone hopes it will help.
Outside the charity sector, large studies mirror this: AI projects that start with “let’s try AI” rather than “let’s solve this specific problem with AI” are far more likely to stall. Without a clear link to a business outcome or ROI, initiatives lose momentum and leadership attention.
Inside charities, the dynamic is the same. If a Director cannot answer the question “Which precise process will change when we introduce this tool?” the project is already on shaky ground.
2. No behavioural change plan
Technology alone does not change how work is done. In the Thrive Together audit, Salesforce had been implemented three years earlier, but fundraising staff still maintained their own spreadsheets; the CRM became a sixth data source rather than the single source of truth it was meant to be.
The Database Coordinator described the situation clearly: “Nobody trusts the central system, so everyone keeps their own version of things. And because everyone keeps their own version, the central system never gets updated properly.”
This is not unusual. Research on digital change in charities notes that capacity and change management – not technology – are often the biggest barriers to adoption. If there is no plan for how behaviours will change (what will be retired, what will be mandatory, who will support people through the shift), staff default back to what they know.
3. Fragmented data and systems
AI depends on data. Many charities have highly fragmented systems: donor data in multiple unconnected tools, beneficiary data in individual spreadsheets, reporting numbers scattered across email threads and file shares.
In one medium‑sized UK charity, six separate repositories of donor information were identified: the official CRM, a personal spreadsheet, JustGiving, Stripe, a legacy campaign sheet, and figures embedded in finance reports. Creating a simple lapsed donor list – something that should take 10 minutes in a well‑managed CRM – took most of a morning, with lingering doubts about accuracy.
When the underlying data is fragmented and inconsistent, AI projects struggle; integrating clean, usable data often becomes the hidden, unbudgeted task that stalls progress.
4. No clear owner for AI governance
For most charities, AI is nobody’s job. Digital or IT leads are already stretched, operational leaders are consumed by service delivery, and boards are still getting to grips with basic digital oversight.
The Charity Digital Skills Report highlights that data privacy and governance concerns are major barriers, particularly for small charities, alongside a lack of training and digital skills. Parallel work on the “state of AI in charities” notes that senior leaders often lack the bandwidth and confidence to build any kind of plan for responsible AI adoption.
Without someone clearly accountable for AI policy, risk assessment, and internal communication, usage remains ad hoc and risky.
5. Staff anxiety about jobs and GDPR
Many staff associate AI with job losses and surveillance. Surveys show that charity workers already face high levels of job insecurity and burnout; more than one in four fear losing their job this year, and recruitment is challenging for over half of charities.
Layer on top of that legitimate concerns about data protection: 31% of small charities in one survey cited data privacy fears as a barrier to AI use, and wider sector research shows GDPR and security worries high on the list.
In interviews, staff spoke about being uncomfortable raising how they were using AI because they expected pushback or did not feel equipped to navigate the data protection questions. This combination of anxiety and uncertainty quietly undermines adoption.

What the charities that do get results do differently
The charities that are seeing real benefits from AI – even with very modest investment – follow a different sequence.
They start with diagnosis, not technology
Instead of asking “Which AI tool should we use?”, they ask “Where is our team losing time, and what is that costing us?” The focus is on identifying high‑friction, repeatable tasks and quantifying the staff time involved.
This diagnostic work can be as simple as structured interviews and basic time estimates, but it forces clarity about what problem is being solved.
They map real workflows, not imagined ones
A technique that works well is the “yesterday morning” method: asking staff to walk through what they did step‑by‑step, rather than describing their job in abstract terms.
In practice, this reveals hidden workarounds, duplicate data entry, and unrecorded tasks that never appear in process maps but consume hours each week – like repeatedly exporting and reformatting CSVs between unconnected systems.
They quantify the cost in hours and pounds
Once processes are identified, leading organisations put numbers against them: hours per week, people affected, and an approximate loaded hourly rate (salary plus on‑costs).
In the Thrive Together case, a series of such calculations revealed an estimated £47,000–£58,000 per year in recoverable staff time, across reporting, donor data management, onboarding, and internal communication inefficiencies. Similar process‑mapping work from sector accountants shows that even simple improvements can release significant capacity in finance and operations teams.
Without this quantification, AI projects stay in the realm of “innovation” rather than “stopping the bleed”.
They separate operational AI from beneficiary data
Effective organisations draw a firm line between operational data (e.g., donor communications, internal documents, anonymised aggregates) and sensitive beneficiary information.
Most of the early gains come from automating or supporting tasks around reporting, fundraising copy, internal admin, and non‑personal data transformation – areas where GDPR risk is far lower. This allows charities to move forward with AI while taking a slower, more careful approach to any potential use near casework.
A simple 3‑step sequence you can steal without hiring anyone
If you do not have budget for external support yet, you can still adopt the core sequence that underpins successful AI adoption in charities.
Step 1: Interview your people
Pick 5–8 people across services, fundraising, operations, and finance. Over two weeks, schedule 45‑minute conversations with each. Use consistent questions such as:
“Walk me through yesterday morning – what did you do first, then what?”
“Which tasks felt like a poor use of your skills?”
“What do you do more than once a week that follows the same pattern every time?”
“Where do you copy‑and‑paste data between systems?”
This mirrors the stakeholder interview approach used in structured operational audits.
Step 2: Turn conversations into a list of processes
After each interview, list:
Every manual process mentioned
Any task that sounded repetitive or templated
Tools and systems used
A rough estimate of weekly time per task
AI tools like Perplexity or ChatGPT can help summarise transcripts and flag recurring patterns, but the key is that the list is grounded in what staff actually do, not what policy documents say they do.
Step 3: Build a basic ROI view
Create a simple spreadsheet with columns for:
Process name
Time per week (hours)
Staff involved
Estimated hourly cost
Annual hours (time per week × 52)
Annual cost (annual hours × hourly cost)
Sort by annual cost. The top 5–10 items are your “expensive problems”. Many will be candidates for automation or AI support; others may be fixable by standardising templates, clarifying responsibilities, or consolidating systems.
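If someone on your team is comfortable with a little scripting, the spreadsheet logic above can be sketched in a few lines of Python. The process names and figures below are purely illustrative placeholders, not data from any real charity audit:

```python
# Minimal sketch of the Step 3 ROI view. All names and figures are
# illustrative placeholders -- substitute your own interview-derived estimates.

WEEKS_PER_YEAR = 52

processes = [
    # (process name, total hours per week, staff involved, loaded £/hour)
    ("Monthly reporting pack",   6.0, 2, 22.0),
    ("Donor data re-entry",      4.5, 3, 18.0),
    ("Routine supporter emails", 3.0, 1, 20.0),
]

rows = []
for name, hours_per_week, staff, hourly_cost in processes:
    annual_hours = hours_per_week * WEEKS_PER_YEAR  # time per week x 52
    annual_cost = annual_hours * hourly_cost        # annual hours x hourly cost
    rows.append((name, hours_per_week, staff, annual_hours, annual_cost))

# Sort by annual cost, highest first: the top items are the
# "expensive problems" worth examining for automation or AI support.
ranked = sorted(rows, key=lambda r: r[4], reverse=True)

for name, _, staff, annual_hours, annual_cost in ranked:
    print(f"{name}: {annual_hours:.0f} hrs/yr, ~£{annual_cost:,.0f}/yr ({staff} staff)")
```

A spreadsheet works just as well; the point is that the formula stays simple, transparent, and conservative, so trustees can interrogate the assumptions rather than take the totals on faith.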
This exercise gives you a grounded, charity‑specific case for change – something that generic AI training or tool demos cannot provide.
When to bring in an external diagnostic (and what good looks like)
There is a place for external help, particularly when:
You need an independent view for trustees.
Internal relationships or politics make it hard to surface issues.
You lack capacity to analyse interviews and model ROI.
A good external diagnostic for a small or medium charity should:
Be time‑boxed (e.g. two weeks) rather than open‑ended consulting.
Start with interviews and process mapping, not a product demo.
Quantify recoverable time and cost, with conservative assumptions.
Separate “quick wins” from longer‑term changes.
Include AI and automation opportunities, but only where appropriate and safe.
Provide a board‑ready summary, not just a technical report.
Free “AI assessments” from software vendors can appear attractive, but their primary purpose is to sell a product. Tools have their place; diagnosis should come first.
Questions to ask before approving the next AI proposal
Before signing off on any AI or digital spend, consider asking:
Which process will this change, exactly?
If the answer is vague (“our marketing”), the proposal is not ready.
Where does this process live today?
Ask which systems, spreadsheets, and teams are involved now.
What will we stop doing if we buy this?
If nothing will be retired, you are layering complexity on top of existing work.
What behavioural changes are required, and how will we support them?
Training once is not enough; consider ongoing support and ownership.
How will we measure success at 90 days?
Agree specific metrics (e.g., hours saved, turnaround time reduced, error rates) up front.
If a proposal cannot withstand these questions, you are likely looking at another stalled initiative.

The bottom line
Charity AI projects do not stall because charities are “behind” or resistant. They stall because tools are introduced without a clear, quantified problem to solve, without behaviour change plans, on top of fragmented data and anxious, overstretched teams.
Start with diagnosis. Map how your organisation actually works. Quantify the cost of inefficiency. Then – and only then – decide where AI belongs.
