What I Got Wrong About AI and Nonprofits

And what I've learned since.

 

I'll be honest with you. For a long time, I filed AI under "not for us."

 

Not because I thought it was bad. Just because everything I read about it seemed to be written for tech companies, venture-backed startups, or corporations with full IT departments and six-figure software budgets. None of that was my world, and I suspected it wasn't yours either.

 

But over the past couple of years, something shifted. I started paying closer attention to what was actually happening inside nonprofits. Not the headlines about AI taking over the world, but the quieter stories: the development coordinator who stopped dreading grant season. The executive director who finally felt like she could keep up with her inbox. The five-person team that started showing up to meetings less frantic and more focused.

 

These weren't tech companies. They were organizations exactly like the ones most of us work in or lead. And they weren't doing anything radical; they were just using AI in small, practical ways to get some breathing room back.

 

That's when I started to change my mind.

 

The problem was never the technology. It was the framing.

 

Here's what I think went wrong with how AI got introduced to the nonprofit sector: it was framed as a revolution. Something that would transform everything, disrupt everything, make human jobs obsolete.

 

That framing left a lot of nonprofit leaders, myself included, unsure for a while what to do with it.

 

Because here's the thing about mission-driven work: it's relational. It runs on trust, community, and human connection. The idea of automating that felt not just impractical but wrong.

 

But that's not actually what AI does. Not the way nonprofits are using it.

 

What AI does, in the hands of a thoughtful nonprofit team, is handle the parts of the work that drain energy without delivering impact. The first draft of the grant narrative that nobody wants to stare at. The summary of a 50-page funder report that needs to be read before Friday. The donor thank-you letter that somehow has to sound personal even though you're writing 200 of them.

 

Those tasks matter. But they don't have to eat your best hours.

 

A few things I've seen work

 

I want to be specific here, because vague AI enthusiasm isn't useful to anyone.

 

The use cases I've seen resonate most with nonprofit teams are unglamorous. They're not the things you'd put in a press release. But they're the things that, when they're off your plate, make your whole week feel different.

 

Communications drafting. When your team uses AI to write the first version of a newsletter, event announcement, or donor update, they're not outsourcing their voice. They're eliminating the blank-page problem. The ideas and the heart still come from your team. AI just builds the scaffolding.

 

Grant narrative support. This is the number one thing nonprofit leaders tell me they want help with. Sixty percent of nonprofits say AI for grant writing is their top priority. And it makes sense. Grant writing is time-intensive, high-stakes, and formulaic enough that AI can genuinely help without replacing the expertise and relationships that win grants.

 

Meeting documentation. Tools like Otter.ai can transcribe and summarize a board meeting or team check-in in real time, so someone doesn't have to spend an hour turning scattered notes into minutes. It sounds small. It isn't.

 

What I've learned about where it breaks down

 

I want to be honest about this part too, because I think it's important.

 

AI breaks down when it's used as a shortcut for things that require genuine human judgment. When someone pastes an AI-generated donor appeal into an email without reading it, and it goes out sounding nothing like your organization. When a program narrative gets submitted that's technically coherent but doesn't capture the real story of your community. When automation replaces a phone call that would have meant something.

 

AI is a tool. Like any tool, it works well when the person using it is paying attention and bringing their own knowledge and values to the work.

 

The best nonprofit leaders I see using AI aren't abdicating to it. They're directing it: giving it specific jobs, reviewing its outputs critically, and staying close to the work that only humans can do.

 

That distinction matters.

 

Where to start if you've been on the fence

 

If this resonates but you're not sure where to begin, here's my honest advice: don't start with a strategy. Start with one task.

 

Pick something your team does every week that feels tedious and time-consuming, and try using an AI tool for it. Just once. See what happens.

 

That's it. You don't need a budget line or a consultant. You need curiosity and fifteen minutes.

 

The nonprofit leaders I've watched move fastest with this aren't the most tech-savvy. They're the ones willing to try something, see what it produces, and decide from there.

 

A note on why I'm writing about this

 

I started talking publicly about AI for nonprofits because I kept seeing the same thing: leaders who were genuinely open to it, but overwhelmed by where to start. There's a lot of AI content out there, and most of it isn't written for someone running a $400,000 food bank or a two-person advocacy shop.

 

I want to be a resource for those people. Not because AI is going to save the nonprofit sector, but because I think it can give you back some time and energy to do the work you actually came here to do.

 

That's worth talking about.

 

If you're ready to take a first step, I've put together the AI Literacy Starter Kit for Nonprofits: a practical, jargon-free guide designed for nonprofit leaders and their teams. It covers the essentials, the tools worth knowing, and how to start building an AI-confident culture without the overwhelm.

 

Download the AI Literacy Starter Kit →