When Caution Is Risky

Bryan Pon

Many of the social-impact organizations we work with are taking a deliberate, cautious approach to adopting AI. Their concerns about model bias, data privacy, output errors, environmental impacts (and more) are absolutely valid, and we typically encourage our clients to take a conservative approach—after they establish an AI usage and governance policy.

But if your careful approach to AI adoption is delaying you from establishing those usage guidelines, that caution is itself creating significant risk.

While you wait, your staff are invariably already using AI tools, and that usage is growing week by week. Avoidance only creates a vacuum of best practices and practical guidance, which not only leaves your staff without the training and policies they need, but also leaves your organization without liability coverage or recourse if an employee mishandles data with an AI tool.

For most organizations, taking a “wait-and-see” approach to large-scale technological transformation makes sense. The move to cloud computing was a slow, inexorable transition that didn’t really punish laggards. But staff weren’t experimenting with cloud infrastructure on the side without your knowledge, putting your data and reputation at risk. In the AI era, the safe assumption is that all staff are using AI personally and probably professionally, whether explicitly or not.

With this in mind, one of the most important ways to reduce risk is to get out ahead of these practices with formal AI usage and governance policies that can support your employees and protect your organization’s most important assets.
