Grant-Funding AI Initiatives

Bryan Pon

As the new coin of the realm, AI is everywhere, including in grantmaking programs. 

Philanthropic organizations across the spectrum have spun up funding programs aimed at supporting AI initiatives that could impact their missions. And suddenly, lots of organizations happen to be using AI in some way.

Unsurprisingly, many AI technology companies—e.g., OpenAI and Google—have launched grant programs supporting AI, but so have many more generalist organizations, from the Gates Foundation to the GitLab Foundation. While many of these AI-focused grants have different priorities or focus areas, they are often broadly scoped and aimed at general positive social impact, or, in Google’s case, “harnessing the power of generative AI to unlock potential for everyone, everywhere” (https://impactchallenge.withgoogle.com/genaiaccelerator/).

New capital for the social impact sector is always welcome, but I worry that this approach often conflates two distinct desired outcomes: impact and experimentation. Put another way, while all of these programs seek impact, I believe many also seek technical experimentation, i.e., exploration of the solution space of AI. That tension is neither new nor confined to AI, but the general-purpose nature of AI makes it especially difficult to reconcile within a grant program.

Which initiative would you fund?

Some specifics will help here. Let’s take two interventions as examples. In the first, a digital credit product aimed at financially underserved customers adopts generative AI in its chatbot to improve the customer service experience for its 5 million users. In the second, a new startup is testing a mobile application that uses AI computer vision to help smallholder farmers identify and remediate pests and disease; it has 500 users.

All else being equal (e.g., the cost or grant amount being equivalent), how should a funder evaluate the potential impact of these interventions? In the first case, the new AI chatbot measurably improves the user experience for millions of customers, while also reducing costs for the business. If we assume those cost savings help the business reach more users, improve the product, or reduce its fees, the total positive impact could be significant. Yet the application of AI in this instance is not core to the product or service offering; I would call this an ancillary application of AI.

In the second case, AI is a fundamental part of how the product works and delivers value to the user; I would call this a core application of AI. Yet in this example, the scale of the user base means its overall impact is quite limited, especially in an ROI comparison with the first intervention.

The problem I’ve seen in my work advising foundations running these programs is that oftentimes the intervention that is already at scale will win the funding award—despite the fact that it’s adopting AI in an ancillary fashion only—because it typically touches more lives, is better able to quantify benefits, or has a better cost-to-impact ratio.

But I would argue that awarding grants to these initiatives simply because they have adopted AI, when that adoption is only ancillary, misses the point.

Grant funding emerging tech

Funders create domain-specific programs (climate, social justice, etc.) because they have a specific mission, and that usually comes with a specific theory of change for that domain. Technology-specific programs—mobile, cloud, big data, blockchain, and now AI have all had their turn in the grantmaking spotlight—aren’t always as well designed, but arguably should also carry a technology-specific theory of change. With AI, I think we’re still so early, and the solution space is so wide, that articulating any hypothesis of change or impact for “AI” in general is really hard.

But more importantly, and this is the reason I think funding initiatives with only ancillary usage of the technology misses the point: no technology is emergent forever. The reason we no longer have funds for “mobile” or “cloud” isn’t because those technologies have disappeared, but because they’ve been absorbed into the general technical tapestry of modern products/services and the economy at large. And so it will be with AI as well.

The implication is that every transformative technology goes through an emergent phase in which we are still exploring what it can and can’t do well. Markets do a great job of that exploration up to a point, but they disregard equity, leaving us to rely on the state and philanthropy to drive more inclusive outcomes. And this gets at the role of philanthropic capital in general—it should be de-risking initiatives that are focused on positive social outcomes and therefore can’t attract market capital. If you apply that lens to technology-specific programming, it really means focusing on early-stage explorations of the technology and how it can be used for impact.

Back to our examples: An incremental reduction in customer service costs is great, but what does that tell us about the transformative potential of AI? We can learn much more by investing in those initiatives that are using AI as a core component of the product or service, i.e., where the AI is critical to the value proposition for users. Yes, many of these initiatives will fail completely. Yes, even those that don’t fail will deliver less present-day impact than growth-stage interventions. And yes, this approach is no substitute for an actual theory of change for AI in social change. But the fastest way for us to understand the deep, transformative potential of any emerging technology in the work that we do is by supporting those interventions that are pushing the boundaries of what can be done.

I’m not against technology-specific programming. Especially when sector-agnostic, it can offer a new tent pole to bring together organizations and funders from different domains, helping to cross-pollinate learning, form new partnerships, and identify common causes.

But funders need to think carefully about the outcomes they seek for these funds, and be very clear in evaluating the application of the technology in the interventions they consider funding. 

The Coming Shift: From Social to Agentic Web — and What It Means for the Poor

Marissa Dean

We are standing at the edge of another major shift in how the world connects, learns, and earns. For more than two decades, the internet has evolved through distinct eras, each one defined by what people could do within it.

In the beginning, Web 1.0 allowed us to read — to access information that had never before been so widely available. Web 2.0 invited us to write and share, ushering in the age of social media and user-generated content. Web 3.0 introduced the idea of ownership through decentralization, giving rise to blockchain and digital assets.

Now, we are entering what many describe as Web 4.0 — a world where users can execute actions through intelligent agents that understand intent and carry it out on their behalf.

AI, Commerce, and the Reinvention of WhatsApp

Meta is rapidly transforming WhatsApp from a messaging platform into an integrated, AI-enabled ecosystem for communication, commerce, and connection. At the same time, foundation model developers, including Meta, OpenAI, and Anthropic, are expanding what chat interfaces can do at remarkable speed.

The interface of the future will not be a website or an app; it will be a conversation. Instead of typing into search engines, people will ask questions, give instructions, and trust digital agents to act. Soon, many of us will simply say, “Order more toothpaste,” and the AI will find the product, compare prices, and complete the purchase — all within a chat thread presented visually or audibly.

The experience will feel seamless and intuitive. Yet, for those of us working in digital inclusion and livelihoods, it raises critical questions about who stands to benefit and who may be left behind as the web becomes increasingly agentic.

When Chat Becomes the Marketplace

Inside the chat-based platforms that now anchor daily life for millions, something new is taking shape. As AI integrates more deeply, commerce is being rebuilt around conversation. Product recommendations, comparisons, and purchases are no longer separate steps in separate apps. They happen within the same thread — in real time, guided by algorithms that learn what we want before we even know to ask.

This is what some have begun calling shoppable AI: a blend of conversation, personalization, and transaction. For users with stable connectivity and purchasing power, this promises convenience and time saved. For low-income users or microentrepreneurs, it may create both opportunity and risk.

Learning from the Past

We have been here before. Web 2.0 connected the world, but it also concentrated control and profit in a handful of platforms that monetized our attention and data. As AI becomes the organizing layer of the new web, that pattern could repeat. This new layer could also unintentionally exclude those already at the margins: for example, low-income microentrepreneurs who rely on visibility within WhatsApp to reach customers, or non-English speakers navigating systems trained on English data.

For low-income users, particularly women and informal workers, this could take several forms:

  • Exclusion from visibility. Agentic responses may prioritize large or paying merchants, pushing smaller sellers further to the margins.

  • Rising costs of participation. Agentic features may begin to rely on heavier data use. For users with prepaid bundles or limited connectivity, participation in the “AI-rich” web may simply be unaffordable.

  • Digital dependency. As monetization opportunities expand within WhatsApp, changes to platform policies or pricing could alter the economics for millions of livelihoods overnight.

  • Loss of data privacy. This risk extends to all users, but it carries sharper consequences for those with limited recourse. AI-driven chat interfaces collect rich behavioral and contextual data — tone, timing, emotional cues, and purchasing patterns — that can easily be monetized or misused without consent or understanding.

These are not abstract possibilities. They are near-term realities for people who use WhatsApp as their storefront, classroom, and community hub.

Stepping into the Next Chapter with Intention

The most hopeful part of this story is that we are still early enough to shape it. The decisions being made today — by AI labs, regulators, investors, and global development actors — will determine whether this next web empowers or excludes.

To build a more inclusive digital future, we will need to:

  • Strengthen digital and AI literacy so that users understand what they are agreeing to and what data they are sharing.

  • Advocate for fair visibility and discoverability for small sellers and women-led enterprises.

  • Push for affordability in both data and tools so that adoption is not limited to those who can pay for premium access.

  • Insist on transparency in how AI systems recommend, rank, and reward.

If we can get this right, the agentic web could open extraordinary opportunities — not just for convenience, but for empowerment. It could allow individuals, especially those historically left behind, to delegate routine tasks and focus more energy on creativity, care, and connection.

The challenge before us is not simply to innovate, but to do so with integrity and foresight. The web has always reflected human intention. The question is whether, this time, we will be intentional enough to ensure it serves everyone.

When Caution Is Risky

Bryan Pon

Many of the social-impact organizations we work with are taking a deliberate, cautious approach to adopting AI. Their concerns about model bias, data privacy, output errors, environmental impacts (and more) are absolutely valid, and we typically encourage our clients to take a conservative approach—after they establish an AI usage and governance policy.

Because if your careful approach to AI adoption is delaying you from establishing usage guidelines, that caution is actually creating a lot of risk.

While you wait, your staff are invariably already using AI tools, and that usage is growing week by week. Avoidance only creates a vacuum of best practices and practical guidance, which not only leaves your staff high and dry in terms of the training and policies they need, but also leaves you with no liability coverage or recourse were one of your staff to mishandle data with an AI tool.

For most organizations, taking a “wait-and-see” approach to large-scale technological transformation makes sense. The move to cloud computing was a slow, inexorable transition that didn’t really punish laggards. But staff weren’t experimenting with cloud infrastructure on the side without you knowing, putting your data and reputation at risk. In the AI era, the safe assumption is that all staff are using AI personally and probably professionally, whether explicitly or not.

With this in mind, one of the most important ways to reduce risk is to get out ahead of these practices with formal AI usage and governance policies that can support your employees and protect your organization’s most important assets.
