How Governance Technology Can Help You Create Your Nonprofit’s AI Framework

What charity boards need to know about AI

 

Charity leaders and boards recognise that AI's uses will keep expanding, and that new AI applications beyond ChatGPT will continue to emerge at full speed. To ensure AI is used responsibly within their organisations, leaders must take steps to understand the technology and institute governance policies that minimise its risks.

AI, once thought to be a trendy technology — perhaps even short-lived — is clearly here to stay. Not a day goes by without a news story or article about machine learning’s impact on people, companies, governments and mission-driven organisations.

While the advantages of machine learning and deep learning are numerous and well-documented, they are not without their sceptics. Is ChatGPT biased? Are digital assistants retaining access to personal information? Do chatbots actually enhance customer service? Will AI really assist with an organisation’s fundraising campaign?

What you’ll learn

  • How to adopt AI responsibly
  • Latest stats on AI and charities
  • The risks of not putting AI governance in place

Our latest infographic looks at the role of AI in charities and nonprofits, how it can be adopted responsibly, and the risks of not creating policies to oversee and govern its use.

Download our infographic in shareable formats here: PDF, JPEG and PPT.

“Advances in AI will transform how social sector organisations fundraise, engage supporters, and inspire generosity. Responsible adoption means building sector capacity to understand both the potentials and risks of AI while keeping pace with this emerging technology.” – Woodrow Rosenbaum, Chief Data Officer, Giving Tuesday

8 steps to responsible AI adoption

The Stanford Center on Philanthropy and Civil Society explores eight key steps to adopting AI skillfully and with care. These steps provide a useful framework for nonprofit leaders and board members who see the potential value of AI, but are hesitant to give the full go-ahead.

  1. Be knowledgeable and informed. Take time to understand what AI can and can’t do. Learn how your staff plans to use tools like ChatGPT or chatbots. Anticipate outcomes and risks before launching full-scale.
  2. Address anxieties and fears. A common fear is that AI will eliminate jobs. If ChatGPT can write a 600-word blog, why retain a copywriter? Leaders can calm employee anxieties with straightforward and honest discussions. Jobs may change in the future but, according to the World Economic Forum, AI is not likely to eliminate them.
  3. Stay human-centered. Assure your team members that humans will always oversee technology and make the final call on critical decisions.
  4. Use data safely. This may seem obvious, but privacy and permission are especially important in nonprofits. Do we have permission to use written materials from authors discovered in AI searches?
  5. Mitigate risks and biases. Design and implement a methodology to ensure that AI tools are safe and unbiased. Anticipate worst-case scenarios before they occur.
  6. Identify the right use cases. Nonprofit leaders know how time-consuming certain tasks can be, particularly during fundraising seasons. Think about the initiatives that are not getting done because of necessary (but repetitive) jobs. AI automation can help by freeing up time for more essential activities.
  7. Pilot the use of AI. Test and retest. Before deploying a full-blown AI application, create a small, short-term test and let your team members evaluate the results. Were they accurate? What impact did the AI application have on their jobs and time?
  8. Redesign jobs and invest in workplace learning. Be prepared to refresh job descriptions and provide skills training as AI becomes the norm. Produce and circulate an AI handbook with best practices and guidelines unique to your nonprofit.

“Try [AI] and try it on something personal. Treat it as your junior assistant. Give it the information it needs. If you have something complicated, ask it to summarise.” – Ari Ioannides, Board Member, Park City Institute

AI in charities

Charities and nonprofits are seeing the upsides of using AI tools in their organisations. Yet various reports and surveys suggest that while the interest is growing rapidly, the knowledge gap is great. Constant Contact’s Small Business Report reveals that 78% of nonprofit organisations report they are interested in using AI or automation technology.

However, Nathan Chappell of Donor Search states that fewer than 30% of nonprofits have started using or exploring AI. The Charity Digital Skills Report 2023 found that of the 100 respondents in a flash poll about AI, a majority (78%) think AI is relevant to their charity, but 73% feel unprepared to respond to the opportunities and challenges AI brings.

Taking a broader look at AI governance policies across industries and in both public and private sectors, Babl AI Inc., using a three-pronged methodology (literature review, surveys and interviews), found that 64% of organisations do not have an AI governance program in place.

“Judgment is key. The governance team needs to have a process in place and a policy for how to use AI.” – Dominique Shelton Leipzig, Partner, Mayer Brown

The risks of not putting AI policies in place

It is easy to be enticed by the advantages of AI. Who wouldn’t want to use a lightning-speed application to research giving trends from major donors? Or launch a fundraising newsletter in hours rather than days? While the benefits are many, so too are the risks without organisational policies and procedures to govern AI use. Here are some of the key risks of not putting governance in place:

  • Ethical concerns over biased algorithms. AI algorithms may be biased, and when they are, they can reinforce prejudice and inequality.
  • Privacy issues. Protecting personal and institutional data is fundamental, especially amid the proliferation of deepfakes, disinformation and hate speech.
  • Loss of donor trust. When donors lose trust in organisations for any reason, including the misuse of AI, regaining that trust and financial support can be impossible.
  • Security vulnerabilities. Cybersecurity incidents can derail nonprofits, wreaking havoc on operations, call centres and data management.
  • Misalignment with mission. Mission-driven organisations must do everything possible to safeguard their reason for being. Unchecked AI that produces content at odds with the organisation’s values or goals can undermine this.
  • Legal and regulatory compliance issues. Charities and not-for-profits are bound by legal and regulatory compliance requirements. As AI continues to permeate every aspect of society, more regulations will emerge requiring organisations to comply.
  • Lack of oversight and accountability. Without accountability and oversight, anything from data security breaches to flawed decisions based on unverified AI output can ruin a nonprofit.

AI and risk indicators for charities

The Charities Excellence Framework suggests considering these 6 indicators to help determine if your charity is ready for AI, or at risk of AI disruption:

  1. Stagnation — your charity largely operates exactly as it did 5 years ago
  2. Ignoring AI — your charity does not believe AI will affect the way you work
  3. Tech avoidance — your charity is slow to adopt new technologies and ways of working
  4. Uniqueness — your charity delivers services or information easily obtainable elsewhere
  5. Paying members — people who pay for your services may find AI helps them save on those costs
  6. Face-to-face interaction — AI use, or a blended AI/human experience, may provide more effective service
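The six indicators above can double as a quick self-assessment. As an illustrative sketch only (the scoring thresholds below are an assumption, not part of the framework itself), a simple yes/no tally might look like this in Python:

```python
# Hypothetical self-assessment based on the six risk indicators above.
# The risk thresholds are illustrative assumptions, not official guidance.

INDICATORS = [
    "Stagnation: operates largely as it did 5 years ago",
    "Ignoring AI: does not believe AI will affect its work",
    "Tech avoidance: slow to adopt new technologies",
    "Uniqueness: services or information easily obtainable elsewhere",
    "Paying members: AI may help members save on paid services",
    "Face-to-face: AI or a blended AI/human experience may serve better",
]

def assess(answers):
    """Count 'yes' answers; more yeses suggest greater risk of AI disruption."""
    score = sum(answers)
    if score >= 4:
        level = "high"
    elif score >= 2:
        level = "moderate"
    else:
        level = "low"
    return score, level

# Example: a charity answering 'yes' to indicators 1, 3 and 6.
score, level = assess([True, False, True, False, False, True])
print(f"{score}/6 indicators present - {level} risk of AI disruption")
# → 3/6 indicators present - moderate risk of AI disruption
```

A board could run through the checklist together and use the tally as a conversation starter rather than a verdict.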

Using technology to help oversee AI governance

Governance technology tailored to the distinct needs of mission-driven organisations, including charities and those focused on social and environmental causes, can support a systematic and responsible approach to AI governance. As we have noted, while 78% of nonprofit organisations are interested in using AI, fewer than 30% have started to explore or use it.

BoardEffect, a Diligent brand, supports mission-driven organisations. Our board management software streamlines board processes, enhances communication and promotes accountability. This helps volunteer boards support their organisation’s leaders as they develop points of view about AI and governance. BoardEffect provides a foundation for leaders and boards to promote a positive and ethical AI environment in key areas:

  • Policy development and oversight
  • Risk assessment
  • Stakeholder engagement
  • Training and awareness programs
  • Continuous improvement

Whether it’s implementing, reviewing or updating AI policies or training board members on the value and risks of AI use, BoardEffect allows charities and nonprofits to do so in a secure platform with optimal control.

See how BoardEffect, a Diligent brand, can help you implement AI governance in your organisation. Request a demo today.

Ed Rees

Ed is a seasoned professional with over 12 years of experience in the governance space, where he has collaborated with a diverse range of organisations. His passion lies in empowering these entities to optimise their operations through the strategic integration of technology, particularly in the realms of Governance, Risk, and Compliance (GRC).
