Let's Talk Fundraising
Welcome to "Let's Talk Fundraising" with Keith Greer, CFRE! This podcast is your go-to resource for mastering the essentials of fundraising while discovering how innovative tools and technology can supercharge your efforts. Whether you're a new fundraiser looking to level up your skills or a seasoned professional seeking timely reminders and fresh insights, each episode is packed with practical advice, creative ideas, and inspiring stories.
Join Keith as he explores the core principles that drive successful fundraising and uncovers the latest strategies to make your job easier, more enjoyable, and incredibly impactful. From relationship-building and storytelling to leveraging the newest tech, "Let's Talk Fundraising" is here to help you transform your approach and achieve remarkable results for your organization.
Subscribe now and be part of a community dedicated to elevating the art and science of fundraising. Together, we'll make a bigger impact, one episode at a time.
When Competence Becomes Common, Taste Becomes Strategy
Remember when donor newsletters read like dissertations and first-time givers needed a decoder ring? We take a clear-eyed look at how AI has quietly raised the baseline for clarity in fundraising and communications—and why that shift changes expectations, not the essence of our work. The conversation moves past hype and eye-rolls to focus on what’s actually changing: standards, roles, and the line between acceptable and excellent.
We unpack the two most common reactions to AI—quiet fear about speed and scale, and easy dismissal of flat outputs—and show how both point to the same underlying movement. With faster orientation, basic competence becomes more common across writing, program design, data conversations, and leadership. That makes sloppy or confusing work stand out more, not less. The payoff for teams is a new starting point; the differentiators become human judgment, editorial taste, and ethical responsibility. You’ll hear the cautionary intern story of a 500-piece appeal that yielded a single $45 gift, and how today’s tools could prevent that kind of collapse without replacing the growth that builds wisdom.
We also tackle risks head-on: hallucinated facts, claims that outrun evidence, and the temptation to accept clean prose as finished thinking. You’ll get a practical workflow for using AI to reach baseline faster—then slowing down for rigorous human review, verification, and audience-centered refinement. Speed should buy time to think, not replace it. If you care about donor trust, clear calls to action, and work you can stand behind, this is a guide to working well at a higher floor where competence is common and excellence still belongs to people.
If this conversation resonated, follow the show, share it with a colleague who writes appeals, and leave a quick review to help more fundraisers find it. Your feedback shapes what we explore next.
Enter to win a signed copy of Signs of a Great Resume by Scott Vedder. No cost to enter.
💡 Want to take the next small step?
→ Free Download: 12 Fundraising Prompts You'll Actually Use
→ Course: The Fundraiser's AI Starter Suite
Do you remember back before AI, and I know it was what, like three whole years ago, when our sector was collectively losing its mind over bad newsletters and poorly written appeals? I remember it vividly. There were listserv threads, conference sessions, social posts, entire conversations devoted to how confusing our communications had become. We weren't frustrated because things weren't poetic. We were frustrated because donors didn't know what we were trying to say or what we were asking them to do. It drove us nuts.

I remember opening newsletters and feeling like I'd missed a prerequisite college course: appeals that assumed the reader already understood the organization's theory of change, funding model, and strategic priorities. Writing that made perfect sense if you'd been inside the organization for five years, sat through every board meeting, and lived with the work day in and day out. But most readers, our donors, hadn't. Someone would make their very first gift, maybe to help feed a person experiencing homelessness for a week. And the follow-up communication they received, well, it read like a doctoral dissertation. They weren't met with a warm welcome and thanks. They were met with dense academic explanations of how the organization was addressing systemic food insecurity by intervening in multilayered food distribution ecosystems shaped by historical patterns of exclusion, economic marginalization, and structural inequities that had produced a highly stratified urban environment with disparate access to resources.

We were so immersed in the work that we forgot what it was like to encounter it for the first time. We wrote as if shared understanding was a given rather than something we needed to earn. And because budgets were tight, we'd sometimes rely on fresh-faced interns to do our communications for us. I've supervised a lot of early-career fundraisers over the years, and almost without exception, their first drafts were rough. Grammar issues, punctuation mistakes, awkward phrasing, writing that hadn't quite found its voice yet. And that wasn't surprising. Writing is a skill you develop by doing it badly for a while.

Spellcheck helped with some of that. It caught the obvious things, cleaned up a lot of surface-level mistakes, and made drafts readable much faster than they otherwise would have been. And still, every once in a while something slipped out the door. A comma in the wrong place, a misspelled word, a sentence that technically passed review but didn't quite land. And I remember getting calls from donors, well-intentioned, sometimes amused, sometimes irritated, letting us know we'd made a mistake. Not about the mission and not about the strategy, but about a typo. Ugh, those moments were humbling. Not because they mattered enormously on their own, but because they reminded us how closely some people were paying attention.

Spellcheck never addressed the deeper issue, though. It didn't help an intern understand how much context a reader actually had. It didn't tell them when a paragraph assumed too much familiarity with the work, or when an explanation had skipped a step that really mattered. So we spent a lot of time translating, teaching people how to back up, how to introduce ideas more carefully, how to notice when they were writing for themselves instead of for the person on the other side of that page. The learning curve took time, often years, which is why the shift we're living through now feels so different to me.
It's become much easier to catch both surface-level issues and early signs of confusion before something ever reaches a donor. I can test language quickly, and I can sense-check whether an explanation makes sense to someone who doesn't live inside the work. I can see where I'm assuming knowledge that hasn't been earned by the writing yet. The tool doesn't hold our mission or our relationships. It gives us an earlier mirror. And when that mirror is easier to access, expectations change. The kinds of mistakes we once tolerated start to feel avoidable, partly because the path to clarity has shortened.

And I think that's why reactions to AI are so mixed. Some people feel unsettled by how much faster the learning curve seems to move now. Others feel unimpressed because the output still lacks judgment, taste, or lived experience. And both reactions feel familiar to me. What I want to do in this episode is sit with what's actually changing underneath those reactions, not in terms of hype or fear, but in terms of standards and what that means for how we show up in our work. So let's talk fundraising.

When I talk with people about AI, I notice two reactions come up again and again. Sometimes they show up in different people, and sometimes they show up in the same person depending on the day or the part of the conversation that we're in. One reaction is fear, and it's not loud, dramatic fear like there's a shooter in the building. It's a quiet sense of unease, the feeling that something fundamental is changing faster than any of us expected. I hear it in questions about job security, our own relevance, and whether years of hard-earned skill are about to be compressed into a prompt and an output box. It often sounds like, what happens to me if this keeps getting better?

And the other reaction is dismissal. People look at what AI produces and feel underwhelmed. The writing feels flat, the thinking feels shallow, the output doesn't reflect the nuance or the care they know the work requires. So they decide it isn't useful. What's interesting is that both reactions are responding to the same underlying shift, just from different angles. Fear tends to focus on speed and scale, and dismissal tends to focus on quality and depth. Neither one tells the whole story by itself, though. In my experience, these are early-stage reactions to a tool that's still being figured out, not just technologically, but culturally and professionally. Over time, the more useful questions usually land somewhere else: standards, roles, and expectations.

The reality is that AI is already reshaping our work, especially in the for-profit sector. And some people are paying real costs for that. At the same time, there's another shift happening, and it's quieter, but it affects how we evaluate work and how we support people who are learning. As the baseline for clarity and competency moves, it changes what feels reasonable to expect, not just from ourselves, but from our teams and from the work we put out into the world.

One of the things I keep coming back to is where people begin. Every field has a baseline, an invisible threshold you have to cross before you can really participate. In our work, that baseline has always depended on time and access and familiarity with the language of the sector. You didn't just need interest, you needed context. And getting that context often took a while. If you didn't already know how things worked, the first step was usually research.
You'd open a web browser, search for a term, click through article after article, and try to piece together an understanding from whatever sources you could find. Even reaching a basic level of orientation required persistence and enough knowledge to know what to look for. Remember when we would just tell our teams to Google it, as if that were a neutral instruction? What it usually meant was hours of sorting and skimming and guessing which information was reliable enough to trust.

What's different now is how quickly that initial orientation can happen. Someone can ask a question and get a reasonably coherent explanation almost immediately. It's not going to be a final answer, and it's not deep expertise, but they can get enough context to understand what they're looking at and how ideas relate to one another. That shift in starting point is what I mean when I talk about AI raising the floor of our work. And once I started noticing that, it reframed a lot of my own early experiences in the field, especially one I keep coming back to.

Early in my career, I was a college intern working in a development office. The director who had hired me left their position the week before I started, and for a period of time, there was no one else in the department who really knew fundraising. At some point, someone looked at me and said, "We need to get an appeal out." And I had never written one before. There was no training that I received. There was no review process for the work that I put together. And there was no example for me to work from. I cared deeply about the mission, but I didn't yet understand how to translate that care into language that made sense to someone outside of our organization. So I did what I could. I wrote something and we sent it to about 500 people. We received one gift. It was $45. Looking back now, I can kind of laugh about it, but at the time it felt devastating.

And I also have a lot of compassion for that version of me. I wasn't careless. I was inexperienced. I didn't yet have the baseline I needed, and I didn't have anyone helping me build it. And that's the floor I'm talking about. Not ignorance as a moral failure, but ignorance as a starting condition.

If I were that intern today, still inexperienced, still caring deeply, but still lacking judgment, I could at least begin with a rough framework. I could pop into an AI like ChatGPT or Google Gemini and ask what an annual appeal is supposed to accomplish. I could look at examples. I could test language and notice where it breaks down. I could ask why certain structures tend to work better than others. With a basic prompt, the AI tool could even produce something dramatically better than what I sent out back then. It's not going to be award-winning, and it's not going to be deeply human, but it could produce something competent and coherent enough that it wouldn't collapse under its own confusion. And that matters, because competence is becoming more common. The barrier to producing something that makes basic sense has dropped. That doesn't make experience irrelevant. It changes what we treat as a normal starting point.

Once you start looking for it, the raised floor shows up in far more places than writing, too. Communications are often where we notice it first, because the output is public and it's immediate. But the shift is happening across the entire sector: in programs and operations, data, strategy, and leadership.
Anywhere there used to be a long gap between not knowing where to start and being able to participate meaningfully, that gap is narrowing. As it becomes easier to get oriented, organizations start to expect orientation, not expertise on day one, but a basic grasp of what's going on and why it matters. And this is why incoherent writing, poorly structured content, and obvious confusion stand out more sharply than they used to. The same thing shows up in program design, data conversations, and leadership decisions. The introductory understanding that used to take hours to assemble is easier to surface, which changes what feels acceptable to send out into the world.

That shift doesn't flatten expertise. It makes expertise easier to recognize. When basic competence becomes more common, the differentiators move upstream: taste, judgment, editorial skill, the ability to decide what belongs, what doesn't, and why. AI can close the gap between terrible and okay in a lot of areas. The distance between good and great still belongs to people. It lives in choices. What you emphasize, what you leave out, what you verify, what you take responsibility for.

It's worth pausing here to talk about the risks that come with using AI, because pretending they don't exist doesn't make anyone more prepared to deal with them. If you've spent real time with these tools, you've seen how convincing they can be. The language flows, the structure holds together, the answers often feel complete, and still there are moments when something is simply wrong. Hallucinations are a part of this. A model can invent details, sources, or explanations that fit the shape of an answer without being grounded in reality. In other fields, we've already seen what that looks like in practice. Lawyers have been sanctioned for filing briefs that include AI-generated citations to cases that don't exist. In fundraising, the equivalent risk is quieter, but just as real: a statistic that sounds right but isn't, an impact claim that drifts beyond what you can prove, a confident explanation that skips an important nuance donors deserve to understand. The more fluent the output becomes, the easier it is to miss those problems if you aren't looking for them.

There's also a slower risk that shows up over time. When something reads cleanly and looks finished, it's easy to move on too quickly, to accept the output as good enough without applying the scrutiny you would if you'd written it yourself from scratch. In work that involves trust, persuasion, and ethical responsibility, that shortcut matters. Words shape decisions. Decisions shape relationships. And unexamined output can quietly undermine both.

At the same time, it helps to keep perspective on where we are in the broader arc of this technology. The systems we're using today are evolving rapidly. They're shaped by constant interaction, correction, and feedback, and the limitations we're seeing now are part of an ongoing process of refinement. That doesn't make the risks disappear; it means they change as the tools improve. As outputs become more fluent and more polished, the responsibility to slow down and think carefully becomes even more important. And this is where tool literacy starts to matter in a deeper way. Literacy isn't just knowing which prompt to use. It's developing the habit of questioning what you're seeing, checking sources, testing assumptions, and noticing when an answer doesn't fit the context you're working in.
And as these tools continue to evolve, that practice becomes more central to the work. The people who navigate this moment well will be the ones who stay engaged with their own thinking, even as the tools around them become more capable.

Once you accept that the floor has moved, the question becomes how to work from that new starting point without losing what actually makes your work meaningful. For me, the first shift is using AI to get oriented faster. When I'm facing something unfamiliar or underdeveloped, I let the tool help me reach baseline competence more quickly: an overview, a few common frameworks, a rough structure that gives me something to work from. Then I slow down. Anything that comes out of an AI tool gets read as if it were written by someone else, because functionally it was. That's where human review becomes non-negotiable.

I also treat AI output as a first acceptable draft, coherent and usable, but still unfinished. From there, the real work begins. Refining language, tightening structure, stress-testing assumptions, and asking whether the piece actually serves the person reading it. This is where taste comes in. Taste is built over time through exposure, reflection, and feedback. It's the sense you develop for what belongs in a moment, what needs more care, and what you can actually stand behind.

This is also where the editor role from the last episode becomes essential. Editors don't just clean things up. They decide what stays. They decide what needs more work. They take responsibility for the final result. Working well at the new floor means letting speed buy you time for thinking rather than letting speed replace your thinking.

As we wrap up, I keep thinking about that earlier version of me, the intern trying to write an appeal without any real sense of how the work actually functioned. I don't feel embarrassed about that story anymore. I feel compassion for it. I was doing the best I could with the tools and the understanding I had at the time. And most of us learned this work by stumbling through it, making mistakes in public, and slowly figuring things out as we went.

The opportunity in front of us isn't about skipping growth. It's about starting with a little more strength than we used to have. Fewer avoidable missteps, less flailing at the beginning, more room to focus on the decisions that actually shape the work. AI doesn't replace the process of becoming good at what you do. Judgment, taste, and wisdom still come from experience, reflection, and paying attention over time. What AI can do is help you enter the conversation sooner, oriented enough to start making better decisions from the get-go. The floor has risen. What you build on top of it is still up to you. I'm glad you spent this time with me. I'm looking forward to continuing the conversation in the next episode. Take care, my friends.