In April, Dalberg and IDinsight co-hosted the first of a three-part webinar series, ‘Accelerating Your Mission’, on how social impact organizations are moving from AI curiosity to AI capability. We were joined by Ben Brockman (Head of AI, Clinton Health Access Initiative) and Jeannie Annan (Senior Vice President for Research & Innovation, AI, and People & Culture, International Rescue Committee), two leaders actively embedding AI inside large, mission-driven organizations.
The premise of the series is simple: there is no shortage of high-level commentary on AI in our sector, but there is limited practical guidance on how change actually happens inside mission-driven organizations. This first session focused on the earliest part of that journey: getting started, navigating skepticism and resistance, and finding credible entry points. The blog below distills what we took away.
Below are five practices that came up repeatedly, both during the webinar and in our broader work advising impact-oriented organizations on initiating effective AI transformations.
1. Write guardrails simple enough to fit on an index card
In a recent survey, 80% of nonprofits reported having no AI acceptable-use policy.1 This poses a significant risk not only to their reputation but also to program delivery. Among organizations that do have an AI policy in place, however, the most common mistake we see is an excess of rules. Organizations sense the risk, draft a 30-page AI policy, and then watch it sit unread while staff continue to paste sensitive data into the free version of ChatGPT. Ben from CHAI offered some useful advice on this: “You only need three or four bright red lines that you could fit on an index card.” The idea is a light-touch but firm set of guardrails that strikes the right balance between minimizing risk and encouraging innovation and experimentation.
For example, one of those bright red lines internally at Dalberg has been: “You should never put something out that has not been vetted by a human first.” What this means in practice is that responsibility for an output never transfers to the AI. It is a simple but powerful rule. Heavy policies, by contrast, feel responsible, but we have seen how they can slow learning and push experimentation underground. Light, memorable guardrails let an organization move quickly while keeping the genuinely high-risk behaviors clearly out of bounds.
2. Find your champions and water the seeds where they’re already growing
We have found that the single biggest predictor of whether AI sticks in an organization is not the technology budget. It is whether a small group of credible people inside the organization has the time and mandate to move this work forward. As one of the panelists memorably put it: “water the seeds where they’re already growing.” This was a consistent thread across the panel, and it matters most for organizations that cannot hire a dedicated AI team.
Don’t try to launch transformation everywhere at once. Find the teams already pulling on the thread, give them resources and recognition, and let their wins create permission and curiosity in the rest of the organization. Three things make champions effective in our experience:
– Protected time. This is the one that gets cut first and matters most. If your AI champion is doing this on top of a full load, they will burn out and the work will stall. Even 10–20% of someone’s time, formally protected, changes the trajectory.
– A direct line to leadership. Champions need to be able to escalate blockers such as procurement or data access without going through five layers of hierarchy. This is where the bottom-up model needs the top-down to meet it.
– Permission to fail visibly. Champions are running experiments. If the cultural cost of a failed experiment is high, they stop trying anything ambitious, and the organization loses the upside.
3. Set up the enabling environment and pick one AI system to converge on
To build an AI-ready organization, you first need to set up the right enabling environment for AI use to flourish. IRC’s approach was to identify the enablers explicitly and resource each one: a clear policy, accelerated data transformation, enterprise-grade tools, and an upskilling program. They also set up an internal AI accelerator fund, with money raised from a donor specifically to seed cross-functional use cases like knowledge management and business development.
Secondly, both panelists agreed on the importance of converging on a single primary enterprise AI tool. This does not mean an organization should only ever use one tool; different workflows will always call for specialized applications, and that is fine. But a shared primary platform is what turns AI from a collection of individual habits into genuine organizational infrastructure: prompts, bots, workflows, and institutional knowledge become shared assets that compound over time.
4. Make AI use non-optional and formally integrate it into your systems
At some point, an organization has to move from “AI is encouraged” to “AI fluency is part of how we work here.” This was one of the strongest signals from the conversation, and it matters because the alternative, letting AI remain a personal preference, quietly entrenches the very inequality between staff (and between organizations) that the sector should be trying to close. Jeannie shared that IRC requires every department to set key metrics for how it will use AI to drive efficiency. They did not dictate what those metrics had to be; each department defined how it would use AI to improve its own effectiveness. But the requirement to set a target was non-negotiable.
Ben also offered a useful analogy: “In the year 2026, it’s not possible to just opt out of the internet in a professional white-collar environment. AI is heading that direction.” An organization cannot guarantee that staff who are 100% resistant to AI will not see their roles affected as the technology becomes core to knowledge work. However, mandating AI use without genuinely investing in training, time, and psychological safety is a recipe for disaster. The “AI is not optional” message needs to be paired with “and we are giving you the support you need to get good at it.”
5. Lead with the mission, including in the harder conversations about the negative impacts of AI
Two forms of skepticism come up reliably in social sector AI conversations: the climate footprint of AI and the risk of job displacement. Both deserve direct, honest engagement, not deflection. Both panelists grounded their responses in the organization’s mission and values, which, in our view, is exactly right. AI is, for our sector, a potential force multiplier on the social and environmental missions we already hold.
The question is not whether AI has costs and risks (it does) but whether we can deploy it responsibly enough that the gains for the people we serve outweigh those costs. This is what Jeannie from IRC meant when she said, “We are not just trying to use AI because a donor wants us to or because it’s shiny. We are trying to use it to amplify the impact that we have.” Mission-grounding turns out to be more than a slogan; it is the operational filter that decides which use cases to invest in, which risks are worth running, and how to talk to staff honestly about the trade-offs.
Moving forward
The next webinar in this series, ‘From familiarity to fluency’, will explore how organizations move from initial adoption to programmatic use. Ultimately, we plan to look at scaling AI for long-term impact and the funding architecture needed to sustain it.
If your organization is somewhere along this journey—whether at the “we should probably start” stage or the “we’ve started and we’re stuck” stage—we’d love to hear what you’re learning. The collective intelligence of this sector is one of its underused assets, and we are at a moment that rewards sharing.
Watch episode 2 and read insights from the webinar.
1. TechSoup and Tapp Network, ‘The State of AI in Nonprofits’, 2025.