
AI in Donor Relations: From Curiosity to Responsible Use


Artificial intelligence is no longer just theoretical. It’s here. It’s being used. And in many cases, it’s being used without policy.


Recent sector conversations and institutional case studies show a clear trend: while many nonprofit and higher education organizations say they are “not officially using generative AI,” staff members are already experimenting on their own. As one higher education leader described it, when asked whether institutions are using generative AI, most say no—but when asked how staff are using it personally, “they have many examples. It’s the Wild West.”


That gap, between experimentation and governance, is the real risk. (EDUCAUSE AI Landscape Study, 2024; Salesforce Education Playbook, 2025).


So, before we rush into what AI can do, we need to address what it must not do.


What AI Should Not Do in Donor Relations

At DRG, we believe in behavior-based donor relations grounded in trust, transparency, and authentic human connection.


AI should never:

  • Replace genuine gratitude.

  • Fabricate emotion.

  • Generate final donor-facing content without human review.

  • Operate without clear data governance.

Generative AI can draft. It can outline. It can summarize. However, it cannot feel gratitude. It cannot understand the nuance of a donor’s lifetime relationship. And it cannot replace the relational capital humans build over years of stewardship. Industry guidance consistently reinforces that AI should augment, not replace, human relationship-building (Chronicle of Philanthropy, 2025; AFP, 2026).

The institutions succeeding with AI are not letting it “run” their donor communications. They are using it as a copilot, accelerating preparation so humans can focus on meaning.


In donor relations, authenticity is a strategy. AI is infrastructure.

 

What AI Can Do Well in Philanthropy

When used responsibly, AI can strengthen donor engagement in meaningful ways. The most effective cases we’re seeing in higher education and nonprofits fall into three categories:

 

1. Predictive Insight (Informs Strategy)

Predictive AI can:

  • Identify donors likely to lapse.

  • Flag alumni engagement patterns.

  • Surface affinity signals from behavior.

  • Help prioritize outreach based on data.

 

As outlined in Salesforce’s higher education framework, predictive AI “makes predictions based on what it sees in existing data,” helping teams act before issues grow. (Salesforce Connected Education Report, 2025)

This supports retention strategy, a major sector need given declining donor retention rates nationally.
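To make the idea concrete, here is a deliberately simplified, hypothetical sketch of a recency-and-loyalty lapse score in Python. The field names, weights, and thresholds are illustrative only; real predictive models are trained on far richer CRM data than two fields.

```python
from datetime import date

def lapse_risk(last_gift: date, gift_count: int, today: date) -> float:
    """Toy lapse-risk score: more months since the last gift raises risk;
    a longer giving history lowers it. Weights here are illustrative."""
    months_since = (today.year - last_gift.year) * 12 + (today.month - last_gift.month)
    risk = min(months_since / 24, 1.0)           # 2+ years silent -> max recency risk
    loyalty = min(gift_count / 10, 1.0)          # 10+ gifts -> max loyalty credit
    return round(risk * (1 - 0.5 * loyalty), 2)  # loyalty halves risk at most

# Hypothetical donor records: (date of last gift, lifetime gift count)
donors = {
    "A101": (date(2023, 1, 15), 12),  # long-loyal donor, silent for years
    "B202": (date(2025, 9, 1), 2),    # newer donor who gave recently
}
scores = {d: lapse_risk(gift, count, date(2026, 1, 1)) for d, (gift, count) in donors.items()}
at_risk = sorted(scores, key=scores.get, reverse=True)  # outreach priority order
```

In practice, a team would use a ranked list like `at_risk` only to prioritize which relationships a human reaches out to first, never to automate the outreach itself.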

 

2. Generative Support (Accelerates Workflow)

Generative AI assists by:

  • Drafting first versions of stewardship emails.

  • Summarizing meeting notes.

  • Creating impact report outlines.

  • Producing segmented message variations.

  • Generating campaign language based on donor interests.

 

These tools deliver “pre-packaged actions to drive value faster,” but only when fueled by trusted, unified data (BWF, 2023; CCS Fundraising, 2023).

Notice the emphasis: draft, summarize, assist.

Not publish. Not replace.

 

3. Agent-Based Efficiency (Scales Responsiveness)

AI-powered virtual engagement officers and chatbots have successfully engaged donors while maintaining transparency and escalation to human staff (Chronicle of Philanthropy, 2025; Illinois Tech News, 2024).

AI agents and chatbots can:

  • Answer routine donor FAQs.

  • Provide 24/7 basic assistance.

  • Route inquiries appropriately.

  • Assist with campaign segmentation.

  • Support alumni engagement outreach.


But even in advanced “virtual engagement officer” pilots, transparency and human oversight remain non-negotiable. The most successful pilots clearly inform donors when they are interacting with artificial intelligence and offer a clear path to a human.


That’s donor engagement integrity.
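To illustrate the pattern (not any vendor’s actual product), a minimal sketch of disclosure-plus-escalation might look like this; the FAQ topics and wording are hypothetical:

```python
# Hypothetical routine-FAQ answers a chatbot is allowed to handle on its own.
FAQ = {
    "tax receipt": "Receipts are emailed within 48 hours of each gift.",
    "matching gift": "Search for your employer in our matching-gift portal.",
}

# Every reply discloses that the donor is talking to software, not a person.
DISCLOSURE = "You're chatting with an automated assistant."

def respond(message: str) -> str:
    """Answer routine FAQs; escalate anything unmatched to a human."""
    text = message.lower()
    for topic, answer in FAQ.items():
        if topic in text:
            return f"{DISCLOSURE} {answer}"
    # Anything outside the approved FAQ list routes to staff.
    return f"{DISCLOSURE} Let me connect you with a member of our staff."
```

The design choice worth noting is the default: when the bot is not certain the question is routine, it hands the conversation to a person rather than improvising.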

 


The Foundation: Trusted, Unified Data


AI is only as strong as the data infrastructure behind it. Data harmonization, single-source-of-truth CRM systems, and privacy protections are foundational (Salesforce Connected Education Report, 2025). Institutions must ensure compliance with privacy regulations and mitigate bias before deploying AI tools in donor-facing environments (SAP Nonprofit AI Report, 2023; EDUCAUSE AI Landscape Study, 2024).

One of the most critical themes across these reports is this:

AI is only as good as the data it receives.


Before donor relations teams experiment with generative AI tools, institutions must address:

  • Data harmonization across systems.

  • Single source of truth for donor records.

  • Clear governance policies.

  • Defined review workflows.

  • Bias mitigation safeguards.


Without unified data, AI creates risk. With unified data, AI creates leverage.

In donor relations, poor data weakens trust. AI simply accelerates whatever data condition already exists, good or bad.


 

How to Help Your Team Use AI Responsibly


Experts emphasize that AI performs best when used as a copilot, allowing staff to concentrate on strategic oversight instead of repetitive tasks. (AGB Trusteeship, 2023; Salesforce Connected Education Report, 2025).


Assume your team is already using AI. If your organization does not yet have an AI policy in place, now is the time to create one.


We know it can be intimidating to create guidelines when new tools and technology appear on the scene. Let us help you get started with a DRG-aligned framework:


1. Establish Clear Guardrails

  • AI may draft; humans must finalize.

  • No personally identifiable donor data is entered into public AI tools.

  • All donor-facing communications require review.

  • Transparency guidelines are defined and easily accessible to all staff members.

 

One question Advancement leaders are beginning to ask is whether donors must consent to the use of artificial intelligence when their information is analyzed or when AI tools support engagement. At present, most privacy and fundraising ethics frameworks focus less on the technology used and more on how donor data is protected and stewarded.


The Association of Fundraising Professionals’ Code of Ethical Standards requires organizations to safeguard donor information and use it only in ways donors would reasonably expect. Similarly, emerging guidance from CASE and national AI governance frameworks emphasizes transparency, human oversight, and strong data protections when AI is used in Advancement operations. In practice, this means institutions typically do not need separate AI consent policies today, but they do need clear governance structures, responsible data-use policies, and transparency about how donor information is handled.


Ultimately, the standard should remain unchanged: donor trust. Advancement has always operated on the premise that donor information is entrusted to institutions rather than owned by them. Whether data is analyzed by a staff member or an AI-enabled tool, the responsibility remains the same: protect donor information, use it only in ways that support mission and relationships, and ensure transparency about how data informs engagement.

 

2. Define Approved Use Cases

Start with internal efficiencies:

  • Meeting summaries

  • Report outlines

  • Segmentation brainstorming

  • Process documentation

  • Training materials

Avoid starting with:

  • Direct donor appeals

  • Major gift communications

  • Principal gift proposals

 

3. Prioritize Data Governance

AI maturity begins with data harmonization and reliability. If your CRM data is incomplete, inconsistent, or siloed, fix that first.

 

4. Train Your Team

AI literacy should include:

  • Prompt writing skills.

  • Bias awareness.

  • Ethical considerations.

  • Review standards.

  • Confidentiality best practices.
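One confidentiality habit worth teaching alongside these skills is stripping identifiers before pasting anything into a public AI tool. A minimal sketch follows; the patterns and example text are illustrative only, and a real policy would cover far more identifier types:

```python
import re

# Illustrative patterns only; real redaction policies cover many more identifiers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing text
    with a public AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# Hypothetical meeting note a staff member wants summarized.
note = "Call Dana at 555-867-5309 or email dana@example.org about her pledge."
safe = redact(note)
```

Pairing a simple habit like this with the review standards above keeps experimentation inside the guardrails rather than ahead of them.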

 

5. Lead with Donor Engagement Values

Ask:

  • Does this enhance donor trust?

  • Does this improve personalization meaningfully?

  • Does this protect the dignity of our donors?

If the answer is no, stop.

 


AI and the Future of Donor Relations


The institutions moving thoughtfully into AI are not trying to replace donor relations. They are strengthening it.


They are:

  • Reducing repetitive workload.

  • Increasing personalization.

  • Improving response times.

  • Freeing staff to focus on strategic stewardship.

 

AI does not and cannot replace human connection. Instead, it reduces friction so advancement professionals can strengthen it. Institutions that approach AI with deliberate governance, ethical clarity, and a donor engagement mindset will be better positioned to build donor trust rather than diminish it. And in a climate of declining retention and increasing donor expectations, that matters. (Chronicle of Philanthropy, 2025; AFP, 2026).

 


Don’t Stay in the Wild West


With nearly half of nonprofits still without formal AI use policies and many staff experimenting independently, governance needs to keep pace with curiosity. This isn’t about resisting innovation; it’s about managing it responsibly.

 

At DRG, we believe AI should support behavior-based donor relations, not undermine it.

 

If you’re ready to move from experimentation to intentional implementation, stay tuned.

Our upcoming AI Lab for Donor Relations Professionals will go deeper and include:

  • Ethical frameworks

  • Practical workflows

  • Real stewardship examples

  • Prompt libraries

  • Governance templates

 

This blog is simply a start to the conversation.


The strategy comes next.


Written by Jenny Jones

 


 


 

 
 