Generic AI models fail in the field, not because the technology is at fault, but because the models are deployed without local language or context.
This is one of the takeaways from the second episode of a three-part webinar by Dalberg and IDinsight on how social impact organizations are using AI.
In this instalment, co-hosts and moderators Mallika Sobti (Director, Internal Learning and Strategy, IDinsight) and Sana Ziccardi (Associate Partner, Dalberg Advisors) are joined by guests Maureen Trantham (COO, GiveDirectly) and Rikin Gandhi (CEO and Co-founder, Digital Green). They discuss how AI is being applied in program delivery, where it is working, and the guardrails being put in place to ensure the work delivers the intended impact.
A number of insights emerged from the conversation:
- AI should be used to solve a specific societal problem, not deployed for its own sake.
- Human escalation is critical; without oversight, biased data can exclude the most vulnerable people.
- Access barriers such as electricity access, data costs, and digital literacy need to be considered, as they are among the factors that determine the adoption of AI tools in the global development space.
- Evaluation at every level is key, from assessing the model and checking whether the product is doing what it is meant to do, to ensuring that the user experience is true to the design.
Watch the first episode here.