As generative AI becomes more widely accepted, many leaders in the nonprofit and charity sector are no longer asking whether they should use it, but rather where else they can use it. After all, as a transformative technology that can be deployed inexpensively, artificial intelligence holds enormous appeal for leaders facing limits on budget as well as on staff and volunteer time.
But identifying potential uses is just one element of responsible AI governance. Ethical questions remain: How does AI align with the mission and values of your organization? How can leaders ensure the technology is truly being used for the greater good?
To find answers, we asked experts from multiple fields to share their thoughts on how mission-driven organizations can responsibly govern the use of artificial intelligence.
Expert thoughts on AI in mission-driven organizations
AI absolutely can advance your organization's mission, but how? Consider these insights and tips for bringing AI into your organization mindfully.
Understand where and how AI will be used. “It’s about understanding the use cases in your organization and how are you going to have that oversight.” — Nonie Dalton, Vice President of Product Management, Diligent
Find inspiration in how other organizations strategically incorporate AI into their work, including fundraising, communication, customer service and data analysis.
AI requires supervision. “Think of AI as a junior assistant. You would never let someone only two years out of college loose on your organization without oversight.” — Ari Ioannides, board member at Park City Institute and founder of BoardDocs
Generative AI presents a risk of repeating the same errors leaders made in the early days of social media: either ignoring the opportunity or leaving it in the hands of less-seasoned staff. Precedents for organizational use can be established at the grassroots level, but always with C-suite awareness and approval.
AI’s potential and ethical risks cross industries, including healthcare… “Artificial intelligence in healthcare marks a paradigm shift, bringing forth groundbreaking possibilities in patient care and disease management. Its potential to transform healthcare into a proactive, patient-focused system is immense. However, these advancements carry profound ethical implications, necessitating thoughtful consideration and guidance. Our objective as healthcare professionals is not to restrict AI’s revolutionary potential but to steer its course responsibly, in a way that honors and upholds the core values of medical practice. As AI becomes increasingly integral to healthcare, it is imperative to ensure its use aligns with Hippocratic principles and responds adaptively to the evolving landscape of patient care.” — Hon. Prof. Tom Chittenden, Ph.D., Digital Environment Research Institute, Queen Mary University of London
…and education. “AI has the potential to drive learning, instruction and administrative change in education. Education has been rich in data for decades; the power of AI can transform how students learn with adaptive instruction, offer insights to teachers about their students’ learning and understanding, and enable them to make more accurate decisions for programmatic improvement, funding and connecting families to additional services servicing the whole child. As with any technology, we must protect our students’, staff’s and teachers’ personal information. A solid governance framework built around three pillars — data, IT and privacy and security — proves paramount with the use of AI in education. Leaders in schools, LEAs and SEAs must be the voice to craft and coordinate the design and implementation of sound practices and policies for all stakeholders in education, from service providers to teachers to technology staff.” — Jill Abbott, CEO, Abbott Advisor Group
The National School Boards Association (NSBA) adds, “Education authorities, school leaders and teachers need thoughtful guidance to help their communities realize the potential benefits of incorporating artificial intelligence in primary and secondary education while understanding and mitigating the potential risks.”
Issues around generative AI are rife in every industry, but there are enough parallels to see trends in both opportunities and cautionary tales. No matter your organization’s area of service, there are best practices for approaching the technology wisely.
Start with an AI framework. “If your board/leadership does not have an AI framework in place, they should. This should include strategy as well as policies. Get some training, get some help. Take a look at the NIST (National Institute of Standards and Technology) framework — it has a seven-page playbook with an AI framework you can use as a starting point.” — Richard Barber, CEO and board director, Mind Tech Group
A solid AI framework will provide guidance on how to implement the opportunities and avoid the risks of the technology, and you won’t need to start from scratch. Another framework to consider is Diligent’s 7 Steps to AI Governance for Mission-Driven Organizations, which offers a shortcut to reasonable adoption of generative technology.
The time to establish policies is now. “Judgment is key. The governance team needs to have a process in place and a policy for how to use AI. You want to make sure that there’s a policy in place and there is a procedure for how to treat these tools, because it’s not intuitive.” — Dominique Shelton Leipzig, Partner, Mayer Brown
“A lot of times I think we’re scared to put a policy in place because we don’t want to get it wrong. But having a policy in place that you can revise provides guidance, and that’s better than a vacuum.” — Matt Miller, educational speaker and author
Today, new technology races from introduction to adoption to saturation, so many boards already recognize the need to establish policies for these tools quickly. Ensure your board’s policy process is efficient and reflects both current and future needs. A robust board management platform can help simplify policy development and approval.
Expert help is available. “Making sound, ethical decisions on artificial intelligence for your organization is imperative. Our certification program helps board members and executives like you navigate the ethical and technological issues inherent in AI, so you can steer your organization toward sustainable, trustworthy practices.” — Dottie Schindlinger, Executive Director at Diligent Institute and founding team member of BoardEffect
As with any new technology, engaging a partner can be the best course of action for an organization. Look for a partner with extensive experience serving organizational needs that offers secure software supporting collaboration — like Diligent.
Embrace the possibilities. “Because of artificial intelligence we are living in a science fiction book right now. We have the opportunity to define what that science fiction book becomes … It’s a time when we can’t be passive.” — Sal Khan, educator and founder, Khan Academy
So, when is the right time to start building your organization’s governance for generative AI?
The benefits of proactive AI governance for nonprofits
It’s likely AI has already touched some aspect of your organization’s work. But not only is it not too late to develop guidance around the technology, it’s imperative. The use of generative AI carries significant ethical implications, from the risk of data misuse to potentially running afoul of global privacy regulations.
By establishing a framework and thoughtful policies around use now, nonprofit leaders can ensure AI aligns with their missions, and their organizations can maintain the trust of members, donors, volunteers, staff and other stakeholders.
We at Diligent have been exploring the possibilities of AI from day one and are excited for the opportunities it presents mission-driven organizations. We’ve developed BoardEffect with the needs of volunteer boards in mind as they collaborate strategically and thoughtfully around this new technology.