Let's Talk Fundraising
Welcome to "Let's Talk Fundraising" with Keith Greer, CFRE! This podcast is your go-to resource for mastering the essentials of fundraising while discovering how innovative tools and technology can supercharge your efforts. Whether you're a new fundraiser looking to level up your skills or a seasoned professional seeking timely reminders and fresh insights, each episode is packed with practical advice, creative ideas, and inspiring stories.
Join Keith as he explores the core principles that drive successful fundraising and uncovers the latest strategies to make your job easier, more enjoyable, and incredibly impactful. From relationship-building and storytelling to leveraging the newest tech, "Let's Talk Fundraising" is here to help you transform your approach and achieve remarkable results for your organization.
Subscribe now and be part of a community dedicated to elevating the art and science of fundraising. Together, we'll make a bigger impact, one episode at a time.
Stop Making AI Slop, and Actually Leverage What It's Good For
Download the Big Goals Worksheet
The problem with so much “polished” AI content isn’t grammar, it’s judgment. We unpack why clean sentences can still feel hollow, and how fundraisers can use AI without outsourcing the thinking that protects the quality of their work. Instead of arguing for or against tools, we reframe the choice: Which role do you assign to AI, and which role do you keep for yourself?
We walk through four distinct modes of AI use—retrieval and generation, editing and refinement, sense-making and synthesis, and critical reflection and adversarial reasoning—and explain why most people get stuck in the shallow, fast lane. The real gains live in the slower modes that make you engage: connecting ideas, testing assumptions, inviting objections, and making the hard calls. You’ll learn the difference between scaffolding and replacement, how automation bias erodes confidence, and why fluent prose is not the same as understanding.
Then we shift to practice. Think like an editor: let AI draft if you want, but you decide if the argument holds, where tension is missing, and what isn’t ready. We share a concrete coaching example from a goal-setting session that raised the quality of decisions without surrendering authorship. You’ll get simple prompts that start with your own thinking and invite critique—questions that keep you present, clarify your voice, and safeguard relationships. If you wouldn’t outsource the relationship, don’t outsource the thinking.
Want support trying this approach? Grab the goal-setting worksheets and access to the AI coach at my Resources Page. If this conversation helped you sharpen your standards, follow the show, share it with a colleague, and leave a review so others who care about craft and trust can find us.
💡 Want to take the next small step?
→ Free Download: 12 Fundraising Prompts You'll Actually Use
→ Course: The Fundraiser's AI Starter Suite
There's a certain kind of content you see a lot right now. It looks finished, polished, confident. The sentences are clean, the structure is tight, and nothing is technically wrong. And yet, when you sit with it for more than a few seconds, something feels off. There's no depth, there's no tension between ideas, no sense that anyone actually wrestled with the questions they're answering. It feels like it was written by someone who knows how to write, but didn't really think. That's what people are calling AI slop.

And here's the thing: it's easy to point at other people's work and judge it. But if you're honest, you've probably felt this in your own drafts too. You ask for something reasonable. It gives you something fast. And at first glance, it looks fine. But then you start noticing the gaps. The idea you meant to develop isn't really there. Related points never quite connect. Conflicting perspectives are smoothed over instead of examined. You keep thinking, yes, but that's not what I mean. Or, there's something missing here, and I can't quite name it. So you tweak a sentence, then another, then another, and eventually you're not sure whether the problem is the tool or whether the problem is how the tool is being used.

Some people look at that moment and decide, see, this is why AI is bad. And others push through it and think, I just need a better prompt. But neither of those reactions really gets to the heart of what's happening. Because what's missing in that hollow, shiny content isn't better wording. It's judgment, it's synthesis, it's someone taking responsibility for the thinking. And that hollowness isn't a writing problem, it's a thinking problem. So let's talk fundraising.

At this point, I've noticed there are usually two very different starting places when it comes to AI. And depending on which one you're in, the conversation can feel either exciting or exhausting. The first group sounds something like this: I'm using AI, it's helpful, I'm saving time, but something feels off. You might not even be able to articulate it at first. You're producing more, things are moving faster, your inbox is a little lighter. And yet your confidence hasn't grown with your efficiency. And sometimes it's the opposite. You look at something AI helped you create and think, I don't know if this is actually good. Or, if someone asked me why this works, I'm not sure I could explain it. You start second-guessing yourself, not because you're doing something wrong, but because the process feels oddly disconnected from your own thinking. If that's you, there's nothing broken about how you're wired. That discomfort is information.

And then there's a second group. And this group often sounds more certain on the surface. They say things like, have you seen what people are putting out there? Everything sounds the same. It's shallow, it's lazy. So they draw a line: I don't want anything to do with that. And honestly, I understand that instinct too. When you care deeply about your work, about your donors, about trust, about integrity, watching the flood of polished but empty content can feel like an ethical red flag. So opting out feels like the safer and more principled choice.

Here's what matters though. Both of these starting points are reasonable. Neither one means you're behind, neither one means that you're naive, neither one means you're resisting progress or chasing it blindly.
They're just two different reactions to the same underlying problem, because the real issue isn't that AI exists, and it isn't even that people are using it badly. The issue is that we've been handed an incredibly powerful tool without much guidance on how to think with it. So people default to the extremes: either let it do the work for me, or I want no part of this. But those aren't the only options, and that's the frame I want to set for this conversation.

This episode is not about convincing you that you must use AI, and it's not about telling you that avoiding it makes you wrong or outdated. It's about something more specific and much more important. It's about how AI is used, because there's a meaningful difference between using AI to replace your thinking and using AI to strengthen it. Between outsourcing judgment and sharpening judgment, between generating more content and taking responsibility for what you put out into the world. Once you see that distinction clearly, the conversation changes. AI stops being something you either tolerate or reject. It becomes something you can place intentionally, with boundaries, with purpose, with you still firmly in the seat of judgment.

So if you've been feeling uneasy while using AI, you're not imagining things. And if you've been watching from the sidelines thinking, I don't want my work to turn into that, you're not wrong either. The good news is this: you don't have to choose between speed and depth, or between ethics and usefulness, or between staying human and using powerful tools. The question isn't whether AI belongs in your work. The question is who's thinking when it is involved. And that's where things get interesting.

One of the most helpful shifts I've seen in the research, and honestly, in my own experience, is this idea that not all AI use is equal. We tend to talk about AI as if it's one thing. You either use it or you don't. But that flattens what's actually happening. Researchers who study how people work with large language models have noticed something important. People use AI in very different modes, and with those modes come very different payoffs.

On one end of the spectrum is the most common use: retrieval and generation. This is where AI acts like a fast search engine or a writing machine. You ask a question, it gives you an answer. You ask for a draft, it produces text. It's quick, it's convenient, and it absolutely saves time. But cognitively, it's pretty shallow. You don't have to engage very much. You don't have to evaluate it very deeply. You can accept what's given and move on. And to be clear, there's nothing wrong with this mode. The problem is when this becomes the only mode. Because as you move further along the spectrum, something changes.

The next step is editing and refinement. Here you bring something to the table, a draft, an idea, a rough outline, and you ask AI to help you improve it. Now you're at least involved. You're reading, you're comparing, you're deciding what stays and what goes. There's more engagement, more ownership.

Then there's sense-making and synthesis. This is where AI stops being a writing tool and starts acting like a thinking aid. You ask it to connect ideas, to surface patterns, to explain trade-offs, to help you see how pieces fit together. This requires more from you. You have to follow the reasoning. You have to decide whether it actually makes sense. You have to notice what feels incomplete or maybe too neat.
And then there's the deepest mode, the one very few people use consistently: critical reflection and adversarial reasoning. This is where you invite AI to challenge you, to poke holes, to raise objections, to surface blind spots. You're not asking it to be right. You're asking it to make your thinking better.

And here's the key research finding that matters for this conversation. Most people never get past the first mode. They stay in retrieval and generation, not because they're lazy, not because they're careless, but because that's the easiest way to use the tool and it delivers immediate results. The issue is that the real payoff (better judgment, deeper insight, clearer thinking) shows up in the later modes, the slower ones, the more reflective ones, the ones that require you to stay actively involved. This is why we're seeing so much output that looks fine but doesn't actually say much, because efficiency is not the same thing as clarity. And using AI efficiently is very different from using AI well.

Once you see these modes clearly, it becomes obvious that the question isn't whether AI is good or bad. The question is which role you're assigning it and which role you're keeping for yourself. And that distinction changes everything.

There's a risk with AI that almost never gets talked about clearly. Not because it's dramatic and not because it makes a good headline, but because it's quiet. It doesn't show up as a mistake, it doesn't show up as a scandal, it doesn't even show up as bad work. In fact, on the surface, everything can look fine. The research uses a helpful distinction to explain this, and I want to translate it into plain language. It's the difference between scaffolding and replacement. Scaffolding is when a tool supports your thinking while you're still doing the work. Replacement is when the tool quietly starts doing the thinking for you.

When AI is used as scaffolding, it helps you get unstuck. It helps you see options that you hadn't considered. It helps you organize complexity. But you're still engaged. You're still deciding, you're still evaluating, you're still responsible for what comes out the other side. Replacement looks very different. Replacement is when you stop asking, do I agree with this? and start assuming this must be right. It's when fluent language begins to stand in for understanding. And that's where the risk lives.

Because when people use AI primarily in shallow ways, fast retrieval, one-shot generation, researchers see a consistent pattern. Cognitive engagement drops. Not all at once and not dramatically, just gradually. People spend less time reasoning through ideas, less time weighing the trade-offs, less time sitting with uncertainty. And because AI outputs sound confident and polished, another pattern shows up: automation bias. That's the tendency to trust a system because it sounds authoritative. Even when something feels slightly off, the polish can override your instinct to question it. So you move on.

And over time, something subtle happens. Your confidence erodes. Not because you're less capable, but because you're less practiced. You start to feel less certain explaining your own work. You rely more on the tool to justify decisions. You hesitate when someone asks, why did you choose this? Ownership fades, judgment gets a little rusty, and none of this is because AI is bad. And this is important. AI is doing exactly what it's designed to do: generate fluent, plausible output quickly.
The problem isn't the tool. The problem is who's doing the thinking. When the machine does the thinking, your confidence shrinks, even if the output looks great. That line matters, so I'm gonna say it again. When the machine does the thinking, your confidence shrinks, even if the output looks great.

This isn't about fear, and it's not about banning tools or drawing moral lines. It's about noticing a pattern early, before it becomes a habit. Because the same research also shows something hopeful. When people stay actively involved, when they question, challenge, and reflect on what AI produces, those negative effects don't show up in the same way. Judgment stays sharp, understanding deepens, confidence grows instead of shrinking. So this isn't a story about AI replacing humans. It's a story about humans slowly stepping out of the thinking role without even realizing it. And once you see that clearly, the path forward gets much simpler. AI doesn't need to be rejected. It needs to be placed with intention, with boundaries, with you still firmly responsible for the thinking piece of it.

I want to ground this in a real experience, not a hypothetical, not a use case I could imagine, but something that actually changed how I think about working with AI. At the end of 2025, I was doing some goal setting for the year ahead. Nothing flashy, no big reveal. I was working through a set of goal-setting worksheets, the kind of thing I've done many times before. And to be clear, I could have completed them on my own. I've set goals, I know the questions, I know the frameworks. There was nothing about the worksheets themselves that required AI. But this time, instead of treating AI like a writing assistant, I used it as a coach. Not to generate goals for me, not to tell me what I should want, and not to turn my answers into something prettier. I stayed in the driver's seat the entire time.

What changed was the quality of the thinking. Instead of just filling in boxes, I used AI to push back on me. I'd share a rough thought, and instead of saying, great, here's a refined version, it would ask questions. Questions like: What's underneath that? What assumption are you making here? What would change if you were wrong about this part? How does this connect to the other thing you named earlier?

And that's where the shift happened. It wasn't giving me answers. It was slowing me down in the right places. It surfaced connections I hadn't fully named yet, patterns across different areas that I'd felt intuitively but hadn't articulated. It challenged places where my thinking was vague or overly safe. And every time that happened, I had to respond. I had to decide: does this resonate? Is this actually true? Do I want to keep this, revise it, or discard it entirely? Nothing moved forward without my judgment.

And that's an important detail. The goals that came out of that process are mine, not because I typed them instead of AI, but because I decided what stayed. I owned the trade-offs, I owned the priorities, I owned the final language. AI didn't take responsibility away from me. It actually increased it. And that's the difference between using AI as a replacement and using it as a partner. If I had just asked, write my goals for the year, I would have gotten something clean, reasonable, and completely disconnected from my actual thinking. Instead, I used AI to interrogate my thinking, to test it, to stretch it, to deepen it.
And here's the line that keeps coming back to me from that experience. AI didn't give me better answers, it helped me ask better questions. And that's the shift. That's what cognitive partnership actually looks like.

If you're curious, I've made the worksheets I used, along with access to the AI coach, available for you. You can find the links in the show notes or head over to my website at letstalkfundraising.com/resources. There's no pressure to use them. I'm sharing them because they reflect the same principles we're talking about here. AI isn't most powerful when it fills in the blanks for you. It's most powerful when it helps you think more clearly about what you want to put on the page.

And this is where I want to slow the conversation down and make a turn, because once you see this clearly, a lot of the confusion around AI just settles. I want you to think about a magazine editor. Not a freelance writer, not a copy assistant, but a magazine editor. Editors don't write every article in the magazine. In many cases, they don't write any of them. But they are absolutely responsible for what goes to print. They decide what belongs, what doesn't, what needs more work, and what's not ready yet. They're the ones who catch errors. They're the ones who push a piece back and say, this isn't clear enough. This argument doesn't hold. This feels thin. And maybe most importantly, they're the ones whose taste determines whether something is publishable. The editor's job isn't speed, it's judgment.

Now take that role and place it next to AI, because this is where I think a lot of people get stuck. They assume the only two options are AI writes everything or AI writes nothing. But there's a third role that's far more powerful. AI can draft, and you can be the editor. AI can generate language; you decide whether it actually says something. AI can organize ideas, and you decide whether the connections are honest and complete. AI can offer confident prose; you decide whether the thinking underneath it holds up.

That means you don't just accept what AI gives you. You curate it, you challenge it, you refine it. And yes, you fact-check it ruthlessly. You introduce tension where it smoothed things over. You ask for opposing viewpoints when it went too neat. You demand depth where it stayed safe. This is a very different posture than simply using AI to create more content. It's a shift from content production to editorial judgment, and that shift matters. Because when you operate as an editor, you stay present, you stay accountable, you stay connected to the work. You're not outsourcing responsibility, you're amplifying your standards.

And this is where a lot of the fear around AI starts to dissolve, because the real risk was never that AI would write better than you. The real risk was that you'd stop exercising your judgment. But editors don't disappear when tools get better. If anything, their role becomes more important. When more content is easy to create, taste matters more. When speed increases, discernment becomes the differentiator. When anyone can produce something that looks finished, the editor is the one who decides whether it's actually ready. That's the role you're stepping into when you use AI well.

And I want to name this clearly because it's the heart of the episode. AI should make you more present, not less responsible. And I'll say that again. AI should make you more present, not less responsible.
If using AI pulls you out of the work, dulls your instincts, or makes you less confident in what you're putting into the world, something is off. Because when you use AI as an editor would, questioning, shaping, rejecting, refining, your thinking gets sharper, not weaker. Your voice gets clearer instead of flatter. And the work starts to feel like yours again.

And I want to leave you with one practice you can come back to, not because it's clever, but because it's steady. It's something you can use on a full day, on a thin day, or on a day when you just want to get something done without losing yourself in the process. And the shift is simple. Instead of opening AI with a request for output, open it with your thinking. You don't have to make it elegant. You can start with a paragraph that's half-formed, a list that's out of order, a thought you haven't quite named yet, and then invite AI into that.

You might say: Here's how I'm thinking about this. Where is it weak? Or, what's missing that I'm not seeing yet? Or, if someone disagreed with this, what would they push back on? Or maybe, what trade-offs am I glossing over? Or, help me connect these ideas more tightly. What matters isn't the exact wording. What matters is the posture. You're not asking AI to take over, you're asking it to sit with you while you think.

When you work this way, something subtle but important happens. You stay oriented, you read more slowly, you notice when something feels too neat, you get better at naming what you actually mean, not just what sounds good. Over time, this becomes a rhythm. You bring the raw material, AI helps you examine it, and you decide what survives the process. That's a sustainable way to work. It doesn't require perfect prompts and it doesn't require more discipline. It doesn't even require more time. It just asks you to begin in the right place, with your thinking on the table.

If you try this once this week, just once, notice how it feels. Notice whether you feel more present in the work, more confident in explaining it, more willing to stand behind what you're sending out. That's the signal you're looking for. Not speed and not polish, but clarity that you can own. And that's a rhythm worth keeping.

And I want to slow us down here for just a moment, because underneath all the tactics and frameworks, there's something quieter going on for a lot of people. It's not a technical concern, it's an identity concern. When I talk with fundraisers about AI, the questions I hear most often aren't really about prompts or tools. They're about voice: Will this still sound like me? They're about authorship: Can I honestly say that I wrote this? And they're about trust: What does this mean for my relationship with donors? Those are serious questions, and they deserve to be treated that way.

Fundraising is built on trust, not just institutional trust, but personal trust. People give because they believe in the mission, yes, absolutely. But they also give because they trust the judgment, the care, and the integrity of the people who are asking. That's why this conversation can't just be about productivity. It has to be about responsibility. Judgment is the job. Not the typing, not the formatting, not producing more words, but judgment. Deciding what to say and what not to say, what needs more care, what doesn't belong yet. And that responsibility doesn't disappear just because a tool can generate language quickly. It actually becomes more important.
Because when it's easy to produce something that looks finished, someone still has to decide whether it's true, appropriate, and worthy of the relationship it's entering. That's not something you can hand off, especially in fundraising. You can ask AI to help you think. You can ask AI to help you draft. You can even ask AI to help you see blind spots. But responsibility cannot be delegated. If a donor is moved, confused, misled, or hurt by something you send, that accountability doesn't belong to the tool. It belongs to you.

And that's not a burden. It's a form of leadership. When you stay connected to your judgment, your voice doesn't disappear. It clarifies. Your authorship doesn't weaken, it becomes more intentional. And trust doesn't erode, it's protected, because you're still present in the work.

There's a line I come back to often, especially when people are trying to sort through the ethical side of this. If you wouldn't outsource the relationship, don't outsource the thinking. That's not anti-technology, it's pro-responsibility. AI can support your work, it can expand your capacity, it can even sharpen your insight. But the role you hold, the one that actually matters, stays the same. You are the one who decides what goes out into the world. You are the one who stands behind it. And when you work that way, using AI doesn't dilute your integrity, it reinforces it, because you're not disappearing from the process. You're showing up more deliberately. And that's what your donors, and you, ultimately need.

Before we wrap up, I want to bring this back to something simple. AI isn't the problem. And the answer isn't to avoid it, fear it, or chase every new feature either. The real issue we've been circling today is judgment. When judgment quietly slips out of the process, things start to feel hollow, disconnected, hard to stand behind. But when judgment stays centered, AI becomes what it was always meant to be: a support, not a substitute.

And the good news is this: you get to choose the role AI plays in your work. Every time you open it, you can ask it to generate, you can ask it to refine, or you can ask it to challenge you, to help you think more clearly about what you're already bringing to the table. That last role is where the real value lives.

So here's one small thing you can try this week. Just one. The next time you're tempted to ask AI to polish something, pause for a moment. Put your thinking down first, even if it's rough, and then ask a question that invites challenge instead of polish. Where is this weak? What's missing? What would someone who disagrees push on? Notice how that changes the way you read the output. Notice how it changes how you feel about what you're sending. That's not about being more productive. It's about being more present in your work.

If you want some support practicing this, I've shared the goal-setting worksheet I mentioned earlier, along with access to an AI coach designed to work this way. You can find both of them in the show notes or at letstalkfundraising.com/resources. They're there if they're helpful. No rush and no pressure.

And I'll leave you with the line that grounds all of this for me. AI can draft it. You decide if it's ready. One last thought before we close. AI hasn't raised the ceiling of our work, but it has raised the floor. And that opens up some interesting questions about craft, judgment, and what it really means to do our best work in this moment.
We'll explore that together in a future episode, but for now I want to say this. If you're thinking carefully about how you use AI, if you're asking good questions, staying present in the work, and protecting the relationships that matter, you're doing this really well. There's no perfect way to approach these tools; there's only a thoughtful one. And you're clearly taking that path. So keep experimenting, keep your judgment close, keep your standards intact. I'm glad you're here, and I'm grateful for the care you bring to your work. I'll see you next time, my friend.