Grant funding AI initiatives
Bryan Pon
As the new coin of the realm, AI is everywhere, including in grantmaking programs.
Philanthropic organizations across the spectrum have spun up funding programs aimed at supporting AI initiatives that could impact their mission. And suddenly, lots of organizations happen to be using AI in some way.
Unsurprisingly, many AI technology companies, including OpenAI and Google, have launched grant programs supporting AI, but so have more generalist organizations, from the Gates Foundation to the GitLab Foundation. While many of these AI-focused grant programs have different priorities or focus areas, they are often broadly scoped and aimed at general positive social impact, or, in Google’s case, “harnessing the power of generative AI to unlock potential for everyone, everywhere” (https://impactchallenge.withgoogle.com/genaiaccelerator/).
New capital for the social impact sector is always welcome, but I worry that this approach often conflates two desired outcomes: impact and experimentation. Put another way, while all of these programs seek impact, I believe many also seek technical experimentation, i.e., exploration of the solution space of AI. That tension is not new, and it is not confined to AI, but the general-purpose nature of AI makes the tension especially difficult to reconcile in a grant program.
Which initiative would you fund?
Some specifics will help here. Let’s take two interventions as examples. The first is a digital credit product aimed at financially underserved customers; it has adopted generative AI in its chatbot to improve the customer service experience for its 5 million users. The second is a startup testing a mobile application that uses AI computer vision to help smallholder farmers identify and remediate pests and disease; it has 500 users.
All else being equal (e.g., the cost or grant amount being equivalent), how should a funder evaluate the potential impact of these interventions? In the first case, the new AI chatbot measurably improves the user experience for millions of customers while also reducing costs for the business. If we assume those cost savings help the business reach more users, improve the product, or reduce its fees, the total positive impact could be significant. Yet the application of AI in this instance is not core to the product or service offering; I would call this an ancillary application of AI.
In the second case, AI is a fundamental part of how the product works and delivers value to the user; I would call this a core application of AI. Yet in this example, the scale of the user base means its overall impact is quite limited, especially in an ROI comparison with the first intervention.
The problem I’ve seen in my work advising foundations that run these programs is that the intervention already at scale will often win the funding award, despite adopting AI in only an ancillary fashion, because it typically touches more lives, is better able to quantify its benefits, or has a better cost-to-impact ratio.
But I would argue that awarding grants to these initiatives simply because they’ve adopted AI, when that adoption is only ancillary, misses the point.
Grant funding emerging tech
Funders create domain-specific programs (climate, social justice, etc.) because they have a specific mission, and that usually comes with a specific theory of change for that domain. Technology-specific programs aren’t always as well designed (mobile, cloud, big data, blockchain, and now AI have each had their turn in the grant-programming spotlight), but arguably they should also carry a technology-specific theory of change. With AI, I think we’re still so early, and the solution space is so wide, that articulating any change or impact hypothesis for “AI” in general is really hard.
But more importantly, and this is why I think funding initiatives that use the technology only in an ancillary way misses the point: no technology is emergent forever. The reason we no longer have funds for “mobile” or “cloud” isn’t that those technologies have disappeared, but that they’ve been absorbed into the general technical tapestry of modern products, services, and the economy at large. And so it will be with AI.
The implication is that every transformative technology goes through an emergent phase in which we are still exploring what it can and can’t do well. Markets do a great job of that exploration up to a point, but they disregard equity, leaving us to rely on the state and philanthropy to drive more inclusive outcomes. This gets at the role of philanthropic capital in general: it should be de-risking initiatives that are focused on positive social outcomes and therefore can’t attract market capital. Applying that lens to technology-specific programming really means focusing on early-stage explorations of the technology and how it can be used for impact.
Back to our examples: An incremental reduction in customer service costs is great, but what does it tell us about the transformative potential of AI? We can learn much more by investing in initiatives that use AI as a core component of the product or service, i.e., where the AI is critical to the value proposition for users. Yes, many of these initiatives will fail completely. Yes, even those that don’t fail will deliver less present-day impact than growth-stage interventions. And yes, this focus is no substitute for a theory of change for AI in social change. But the fastest way to understand the deep, transformative potential of any emerging technology in the work that we do is to support the interventions that are pushing the boundaries of what can be done.
I’m not against technology-specific programming. Especially when sector-agnostic, it can offer a new tent pole to bring together organizations and funders from different domains, helping to cross-pollinate learning, form new partnerships, and identify common causes.
But funders need to think carefully about the outcomes they seek from these funds, and be clear-eyed in evaluating how the technology is actually applied in the interventions they consider funding.