
Have you ever looked at the wave of AI developments around you and thought, “It all looks great, but I’m not sure this will work for my organisation”? Or, conversely, you may have thought, “Everything seems possible with AI, so where do we begin?” If you have, you’re not alone. Many organisations in the public and non-profit sector are interested in what AI has to offer, but feel that it is simply out of reach – too complex, too risky, or demanding resources they don’t have.

ADC Consulting recently worked with the War Child Alliance and Rutgers, a Dutch centre of expertise on sexuality, to develop an AI Assistant designed to support their fundraising and reporting workflows. Through this project, we realised something important: the real barriers to AI adoption aren’t primarily technical. They’re human and organisational – and with the right approach, they can be overcome.

In this article by Elianne Anemaat (Senior Manager), we will explore a set of common barriers we frequently encounter in the public and non-profit sector – questions and concerns that often stand in the way of AI adoption. We believe that the approach and collaboration between ADC, War Child Alliance and Rutgers offer valuable insights into how to navigate those questions effectively.

What if we don't know enough about AI to even begin?

One of the first hurdles we often encounter in the public and non-profit sector is a perceived lack of expertise. Many people feel they need to become AI specialists before they can responsibly engage with the technology.

Our solution was to meet everyone where they were. Crucially, both Rutgers and War Child had internal AI ambassadors – curious, open-minded individuals who were ready to explore and experiment. That curiosity was instrumental in kickstarting the project and shaping its direction.

By starting with real people describing real problems, we created relatable personas and use cases. Through a co-creation process, teams from Rutgers and War Child participated in scoping, design, testing, and rollout. This process yielded immediate results, and AI literacy grew naturally out of engagement rather than theory. Ownership of the tool increased alongside confidence and capability.

Crucially, this co-creation process helped build trust – not just in the tool, but also between developers and users. Newcomers to AI can feel overwhelmed by consultants. We approached our work together as a partnership, learning from each other along the way. That shift fostered mutual understanding, which was essential in shaping a tool tailored to the organisations’ context and needs.

What if I can't trust what the AI gives me?

Another barrier is distrust of AI. Given that many NGOs work with highly sensitive data, introducing an AI tool posed potential risks. There was also concern about not being able to trust or understand the system’s outputs.

We recognised that demystifying AI starts with understanding how it actually works, so we spent time during the development process educating teams about generative AI and its capabilities and limitations. We also prioritised transparency. We designed the AI Assistant using a Retrieval Augmented Generation (RAG) approach, ensuring it pulled from the organisations’ own databases and processed all data within a secure environment. More importantly, the AI Assistant clearly displayed which sources informed each output. This technical solution provided traceability and accountability, and built confidence in the tool.
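A minimal Python sketch of this pattern is shown below. The Document class, the keyword-overlap retriever and the call_llm placeholder are illustrative assumptions rather than the assistant’s actual implementation; the point is simply how retrieved source ids can be returned alongside the answer so users can trace each claim.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    title: str
    text: str


def retrieve(query: str, documents: list[Document], top_k: int = 3) -> list[Document]:
    """Rank documents by keyword overlap with the query (a stand-in for a vector search)."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(d.text.lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]


def call_llm(prompt: str) -> str:
    """Placeholder for the call to the hosted language model."""
    return f"(model answer based on a {len(prompt)}-character grounded prompt)"


def answer_with_sources(query: str, documents: list[Document]) -> dict:
    """Retrieve from the curated knowledge base, ask the model to answer using only
    those passages, and return the source ids that are shown to the user."""
    sources = retrieve(query, documents)
    context = "\n\n".join(f"[{d.doc_id}] {d.title}\n{d.text}" for d in sources)
    prompt = (
        "Answer using only the numbered sources below and cite them by id.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return {"answer": call_llm(prompt), "sources": [d.doc_id for d in sources]}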

What if we don't have the capacity or resources?

Limited funding and time are everyday realities for NGOs and public organisations alike, and launching a large-scale AI initiative alone is often unrealistic. Recognising this, we explored collaborative funding models to reduce the barrier to entry.

War Child, Rutgers and ADC combined their resources to fund the development of the AI Assistant. Pooling funding proved to have many benefits:

  • It enabled the creation of a high-quality, tailored solution that neither organisation could have afforded on its own.
  • It enriched the use cases, as each organisation brought a different perspective to the same problems.
  • It offered an opportunity to network and learn with a peer organisation, catalysing potential new collaborations that can continue after the project is done.

At the end of the day, the solution we developed saves time and money and provides localised results, all of which are priorities in the sector.

What if our data isn't good enough?

Concerns around data quality and structure emerged during the design of the tool – particularly in relation to how it would select relevant information. It became clear to us that connecting the AI Assistant to an organisation’s entire SharePoint or database would likely cause issues, including conflicting document versions, low-quality content, or access restrictions.

So, we decided to be pragmatic. Rather than waiting for “perfect” data, we worked with what was available. The RAG solution was based on a user-selected database, allowing teams to decide which documents should be included. By ensuring that the AI’s outputs were based on carefully chosen, reliable sources, we improved both the quality and relevance of the results. At the same time, it’s important to emphasise that even with the flexibility of LLMs to handle unstructured inputs, the old adage of “garbage in, garbage out” still holds true. Good governance, clear data management, and accessible databases are essential foundations for any AI project.
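To make the idea of a user-selected database concrete, the short, hypothetical Python sketch below shows an ingestion step in which only documents a team has explicitly approved are passed on to the retrieval index, rather than an entire SharePoint site. The document names and the approved flag are illustrative assumptions, not the actual setup.

# Hypothetical ingestion step: only documents the team has explicitly approved
# feed the retrieval index; old drafts and restricted files stay out.
all_documents = [
    {"doc_id": "strategy-2024", "approved": True},
    {"doc_id": "proposal-draft-old", "approved": False},  # superseded version, excluded
    {"doc_id": "donor-report-q3", "approved": True},
]

knowledge_base = [d for d in all_documents if d["approved"]]
print([d["doc_id"] for d in knowledge_base])  # only curated sources reach the assistant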

What if our organisation just isn’t ready for AI?

Finally, we reflected on a broader challenge we see across many public and non-profit organisations: the question of where AI should live within the organisation. The project sparked a valuable conversation about AI ownership. Too many organisations see AI development as the responsibility of the IT department rather than as a strategic priority that cuts across all departments.

To counter this, we worked to frame AI as a capability that should be present in all teams and one that requires clear ownership across the organisation. AI needs to be part of all strategic discussions, with involvement from all relevant teams, from programme management to operations and fundraising. When leaders understood that AI wasn’t just a technical upgrade but a catalyst for broader innovation and organisational change, momentum built more quickly.

And finally, perhaps one of the best results: the development of this first AI tool unlocked the creativity of colleagues and spawned a number of new and highly relevant use cases. From AI-curious to AI-innovators in a matter of weeks.

Key takeaways for leaders

  • You don’t need perfect data or deep technical expertise to start.
  • Start small, stay close to users, and build transparency into your solutions.
  • Co-creation builds trust, skills, and adoption.
  • Pooling resources can make innovation accessible and sustainable.
  • AI intrapreneurs can emerge from all parts of your organisation.
  • Leadership should treat AI as a strategic lever, not an IT side project.

Finally...

The greater risk isn’t experimenting with AI. It’s staying still, waiting for the perfect moment, and never gaining the hands-on experience and understanding that only comes through practical use.

You don’t have to know everything before you begin. Learning happens by doing, preferably in small, controlled environments where teams can build understanding and confidence through experience. In every organisation, there are individuals ready to explore new ground. Supporting those AI-curious pioneers can be the spark that turns interest into impact. Start small, stay intentional, and let your teams lead the way.

Continue the Conversation

Would you like to learn more about implementing AI in your organisation? Or do you have questions regarding the content of this article? Reach out to Elianne Anemaat to discuss.


