Recently, Dalberg and IDinsight co-hosted the second session of our three-part series, ‘Accelerating Your Mission’, on how social impact organizations are moving AI from internal experimentation into program delivery. We were joined by Maureen Trantham (COO, GiveDirectly) and Rikin Gandhi (CEO and co-founder, Digital Green)—two leaders building AI-enabled programs that reach people directly, in some of the most resource-constrained contexts in the world.

In the first session of this series, we looked at how to get started with AI for impact, with a focus on internal upskilling, capacity building, and organizational transformation. This session turned to the question that often follows those internal concerns: how do social impact organizations identify where AI can meaningfully improve program delivery, and how do they test those ideas responsibly in the real world?

While 92% of nonprofits say they use AI in some form, only 7% report changes that meaningfully shift what their teams can deliver.1 The pattern behind those numbers is that most adoption is in back-office functions—drafting, summarizing, and grant-writing. Yet the social sector’s biggest constraints are not in its back office; they are in delivery. Programmatic applications require deeper investment and more thoughtful adaptation than off-the-shelf tools can provide, but they are also where the most distinctive value lies.

Where the shift is already happening 

AI-fluent organizations are already finding innovative ways to integrate this technology into program delivery across very different fields, from helping farmers access more timely, tailored advice, to getting resources to families before a crisis has fully unfolded. GiveDirectly and Digital Green offer two early examples of what this shift looks like in practice. 

For GiveDirectly, AI is being used to test faster and more anticipatory models of cash delivery. Instead of relying only on slower, in-person processes to identify and enroll recipients, GiveDirectly is exploring how data and machine learning can help locate need, support remote enrollment, and move resources before a crisis has fully played out. The goal is to deliver assistance fast enough to prevent losses, rather than only compensate for them.

For Digital Green, AI is changing the shape of agricultural extension. FarmerChat gives farmers a way to ask questions in local languages, using voice, text, or images, and receive advice that is more tailored to their crops, conditions, and timing. The innovation is not just lowering the cost of advisory support; it is shifting from a one-way model of information delivery to a more responsive, farmer-led model. 

These examples show that programmatic AI is already working in the social sector. For organizations looking to bring AI into program delivery, the practices below—shared by both GiveDirectly and Digital Green—lay a strong foundation for making sure AI applications are useful, safe, and scalable. 

1. Start with the problem, not the technology 

The most consistent thread of the conversation was also the simplest: do not start with AI. Start with a delivery problem that limits impact, then ask whether AI is actually the right tool to solve it. 

For GiveDirectly, that problem was speed. “In many of the places we work, it still takes a really long time for cash to reach people in crisis,” Maureen said. Those delays have consequences: families sell livestock, take on debt, or go without food while support is still making its way through the system. Digital Green’s journey followed a similar trajectory. After years of using farmer-produced videos to spread advice, they still faced the challenge of making this information more accessible, timely, and responsive to the diversity of farmers’ needs.  

2. Invest in field context and center users from the start 

Utilizing AI in the social sector is not as simple as taking a powerful model off the shelf and placing it into an impact program. Maureen warned that models can “systematically exclude or miss vulnerable populations”, the very people social impact organizations are trying to reach. Rikin described a similar gap: frontier models are often trained on data that does not represent the farmers, languages, and agroecologies our sector serves. Across domains, generic AI struggles with low-resource languages, local terminology, and nonstandard units.  

That is where mission-driven organizations have a distinctive role to play. Major technology providers are improving rapidly, but they are not optimizing for the type of users at the center of social programs. Impact organizations bring proximity, field knowledge, user relationships, implementation capacity, and human oversight needed to make AI useful in those realities, as well as to build the language and data infrastructure the market is unlikely to prioritize on its own. 

3. Unlock partnerships  

Programmatic AI is rarely a single-organization endeavor. GiveDirectly’s anticipatory flood response in Nigeria made this visible. Google brought satellite, weather, and vulnerability data to help predict where flooding was likely to hit. GiveDirectly brought the implementation capacity: the field experience, relationships, and payment infrastructure needed to pre-enroll households and trigger cash before the flood arrived. As Maureen put it, “neither of us could do it on our own.” 

Digital Green’s work shows a similar pattern in agriculture. Its AI tools are built around partnerships with Ministries of Agriculture, NGOs, and extension agents who already support farmers on the ground. Those partners help tune the models to local realities and create a bridge of trust before tools move more directly to farmers. Partnerships also matter for access: Rikin described working with telecom operators to zero-rate data costs so farmers are not paying to submit voice notes or photos.  

4. Invest in product design, onboarding, and iteration 

When AI sits in front of an end user, usefulness has to be obvious quickly. Early interest is not enough; people need a reason to come back. Maureen described this as one of GiveDirectly’s clearest lessons from early chatbot pilots. Recipients asked substantive, practical questions, indicating real demand for guidance, but sustained engagement was harder. In Malawi, fewer than 10% of users returned for repeat interactions, prompting them to rethink the product experience to focus on stronger onboarding, voice-led UX, and user journeys that give people a concrete reason to return. 

The broader lesson is that programmatic AI cannot be launched like a static tool. As Maureen put it, “you can’t necessarily just build it and then set it free.” The experience must keep proving its value: helping users by making the interaction feel relevant and giving them a reason to return when the next decision or problem arises. Rikin made a similar point from Digital Green’s work, noting that suggested questions from nearby farmers and peer examples helped users move beyond the blank text box and see the tool as connected to their own reality. 

5. Build ethical practices in from the start 

The risks of programmatic AI are different from the risks of internal use. When AI shapes who receives support, what guidance people get, or how sensitive data is used, the stakes are higher, especially for communities that may already have reason to mistrust institutions. As an example, GiveDirectly’s responsible AI framework focuses on four risks: bias and exclusion, transparency and trust, over-automation, and data privacy. Their approach includes using multiple data sources to reduce the risk of missing vulnerable households, explaining consent and data use in plain language, and keeping humans responsible for final decisions. Responsible AI does not have to mean slow AI. As Maureen put it, this allows GiveDirectly “to move forward with more confidence.”

Moving forward 

We aim to use these insights to continue sharing lessons across the nonprofit and global development ecosystem. If your organization is taking AI into a programmatic context or has started and is working through the same questions Maureen and Rikin shared, we would love to hear what you are learning. 

Watch episode 1 and read insights from the webinar.

  1. Virtuous, Fundraising.AI, ‘The State of AI Adoption & Transformation for Nonprofits’, 2026.

