AI regulations: what nonprofit boards need to know
In all likelihood, your board and staff are already using AI tools to automate routine tasks like data entry, compile information for grant reports and summarize long documents. These tools are helping them be more efficient, engaged and effective in their work.
But how well is your organization managing AI’s risks, like inaccuracies and “hallucinations”? How well is it assessing AI’s impact and protecting sensitive data, like donor and beneficiary information?
As nonprofit organizations embed AI ever more deeply into their operations, their boards must stay informed about a growing web of AI-related regulations. Ignoring them exposes organizations to serious risks: regulatory violations, fines, reputational damage and erosion of community trust.
On the flip side, being informed about AI regulations enables nonprofit administrators and board members to make strategic decisions about AI adoption, ensure compliance with relevant laws and guidelines and, ultimately, maintain the trust of their stakeholders.
Nonprofit boards have a responsibility to ensure their AI use is ethical, compliant and aligned with their mission.
To help leaders maintain regulatory compliance and strengthen AI governance overall, we’ve rounded up regulations and official guidance from around the world and compiled them into a user-friendly guide that offers:
- A region-by-region overview of laws and frameworks to know, plus agencies and issues to watch, in the AI regulatory landscape.
- Practical tips and guidance from peers navigating the same journey.
- Ways modern governance software — including tools that use AI itself — can help.
AI regulations by region
Data privacy, intellectual property, disclosure, discrimination — AI law covers a wide and ever-expanding range of areas.
When navigating AI governance for your own organization, we recommend consulting with your legal team and doing in-depth research on specific agencies, laws and issues. But this high-level overview of select countries — the agencies involved, what they’re focused on, who they’re partnering with and specific policies (both enacted and proposed) — is a powerful start to understanding the landscape.
The Americas
United States
Since the White House released its Blueprint for an AI Bill of Rights through the Office of Science and Technology Policy in 2022 and issued an Executive Order on AI one year later, AI regulation in the United States has evolved through a patchwork of federal, state and industry frameworks.
At the state level, Colorado became the first US state to enact comprehensive AI legislation in May 2024, with a focus on algorithmic discrimination and systems used in essential areas like housing, healthcare, education and employment. California has also been a forerunner in AI regulation, addressing areas like business accountability, combating discrimination and regulating how businesses use data.
Consumer data privacy laws have since sprouted up at the state level nationwide, as have dozens of enacted and proposed regulations in areas such as:
- Notifying people that they’re interacting with AI systems or AI-generated content.
- Using algorithms to determine employment, services and housing.
- Offering ways to opt out of data collection and profiling.
- Testing AI systems for discrimination and bias.
- Monitoring, mitigating and disclosing the potential risks and impact of AI applications such as automated decision tools and bots used for purposes like mental health services.
Canada
At the federal level, the proposed Artificial Intelligence and Data Act (AIDA) was Canada’s first attempt at comprehensive AI legislation. Although AIDA was paused in early 2025 due to parliamentary changes, its principles — including risk-based governance, transparency, and accountability — continue to shape voluntary codes and sector-specific guidance.
In the absence of binding federal law, the Department of Innovation, Science and Economic Development (ISED) has introduced a Voluntary Code of Conduct for Generative AI, which encourages organizations to uphold standards around fairness, safety, human oversight, and transparency when deploying advanced AI systems.
Québec’s Law 25 imposes strict data privacy and automated decision-making disclosure requirements, including mandatory notification when decisions are made solely by AI.
Ontario’s Bill 194 requires public sector entities to disclose AI use, manage associated risks, and implement accountability frameworks.
For nonprofit organizations specifically, Canada launched the Responsible AI Adoption for Social Impact (RAISE) initiative in 2025, a national program supporting nonprofits in adopting AI responsibly, with a focus on ethics, equity and long-term capacity building. Led by the Human Feedback Foundation, The Dais at Toronto Metropolitan University, and Creative Destruction Lab, RAISE includes:
- A governance framework tailored to nonprofit missions
- AI literacy and training for 500 nonprofit professionals
- An accelerator program for large nonprofits like CAMH Foundation and CanadaHelps
Europe
UK
In the UK, AI oversight happens through the lens of five principles (safety, security, transparency, accountability and fairness) and is handled by existing regulatory agencies like the Financial Conduct Authority, which published updated guidance related to these principles in September 2025.
The UK government is also exploring AI’s impact in areas like copyright law and permissions for online content, and it updated its Charity Digital Code of Practice in 2025 to weave AI considerations throughout.
Ireland/EU
Under the EU AI Act, the European Union has united its member nations under a common set of overarching regulations. The Act outlines four levels of AI risk, rigorous transparency and data governance obligations for AI providers, and detailed compliance and monitoring protocols for those deploying AI systems. (Our EU AI Act Cheat Sheet is a good place to start for getting familiar with it all.)
Guidance under the Act is continually evolving. In September 2025, for example, the European Commission opened a consultation on the development of transparency-focused guidelines, including requirements to notify users when they are interacting with an AI system and to label AI-generated content.
Implementation of the Act happens at the member-state level. In September 2025, for example, Ireland announced the designation of 15 National Competent Authorities for this purpose, along with plans for a National AI Office by August 2026.
Switzerland
While Switzerland does not fall within the EU AI Act’s jurisdiction, the Swiss Federal Council signed the Council of Europe’s Convention on Artificial Intelligence in March 2025, which “might require Switzerland to adopt provisions equivalent to those regarding high-risk AI systems under the EU AI Act,” according to analysis by Chambers and Partners.
As in Ireland, on-the-ground implementation is a priority. For example, the Federal Council’s 2026 plans include the potential expansion of a federal coordination office to establish a unified strategic framework and enhance coordination across federal agencies in the use of AI systems.
Middle East
United Arab Emirates
The UAE governs AI through its National Strategy for Artificial Intelligence 2031 and a national charter that articulates 12 key principles for responsible AI development and deployment. The Dubai International Financial Centre has integrated AI into its data protection regulations, and the Abu Dhabi Global Market has adopted rules and standards that apply to AI systems and the use of AI in data processing.
Recent developments include a Regulatory Intelligence Office that will use AI to analyze government data, track the impact of regulatory measures and generate recommendations for legislative updates.
Saudi Arabia
Organizations doing work with donors or beneficiaries in this nation should be aware of the Saudi Data and Artificial Intelligence Authority (SDAIA) and the Kingdom of Saudi Arabia’s Personal Data Protection Law (PDPL), which “applies to all entities processing personal data of individuals residing in the KSA regardless of the physical location of the data processing activities,” according to Morgan Lewis.
Qatar
Qatar governs AI development through its National AI Strategy, launched in 2019, and frameworks like the Guidelines for the Secure Adoption and Use of Artificial Intelligence, released by the National Cybersecurity Agency in 2024.
Bahrain
In July 2025, Bahrain announced its National Policy for the Use of Artificial Intelligence, which aligns with national laws and frameworks including its Personal Data Protection Law, its Open Data Policy, and the GCC Guiding Manual on the Ethical Use of AI.
Africa
South Africa
In South Africa, regulatory guidance comes primarily from a National AI Plan released in 2024, followed by a National AI Policy Framework in 2025. Both are the product of the nation’s Department of Communications and Digital Technologies (DCDT).
Nigeria
As Nigeria drafts its National AI Policy, oversight of the development and use of AI currently takes place through a variety of other regulations, including the Nigeria Data Protection Act, the Copyright Act and the Nigerian Communications Commission Act.
International partnership is also a priority. Nigeria is a signatory to the Bletchley Declaration and was among the 95 countries participating in the 2025 Global Privacy Assembly, which tackled AI and data protection.
Ghana
AI oversight here, along with governance of digital technology overall, takes place through a variety of agencies, including the Ministry of Communication, Digital Technology and Innovations, the National Communications Authority and the Cyber Security Authority. In May 2025, the Minister for Communication, Digital Technology and Innovation announced the development of a National Digital Transformation and Emerging Technology Strategy, with a strong focus on AI.
Tanzania
Tanzania worked with UNESCO to conduct a National AI Readiness Assessment. Released in 2025, it goes into great detail about regulatory frameworks, data protection and privacy laws, and public engagement and trust, laying the groundwork for a national AI strategy.
Botswana
Botswana was also among the nations that consulted with UNESCO on AI development and readiness. “In May 2025, these consultations gained real traction,” UNESCO reported. “The Ministry of Communication and Innovation opened the floor to dynamic exchanges on the role of policy, law and infrastructure in AI development.”
Lesotho
Lesotho has released a draft strategy for AI governance and is working with nations like Ghana to align efforts and accelerate progress.
Zimbabwe
Zimbabwe has completed its national AI policy framework, focused on secure data storage and data sovereignty, and plans to launch it in October 2025 at its new parliament building.
Namibia
Here, multiple pieces of legislation under development — a data protection bill, a cybercrime bill and an AI bill — are expected to form the foundation of a broader digital technology strategy.
Uganda
Uganda is in the process of drafting its first national AI policy, with a focus on responsible AI use, data privacy and equipping the country to benefit from global advancements in AI technology.
Malawi
Malawi has been another participant in UNESCO’s AI initiatives. The nation’s Ministry of Education, Science and Technology met with UNESCO to talk about shaping an ethical AI ecosystem, looking at areas like regulatory frameworks, digital infrastructure and cybersecurity readiness.
Asia Pacific
Singapore
Singapore is a front-runner in national AI oversight: in 2019, it became the first nation in the world to launch a Model AI Governance Framework. Recent efforts focus on data protection, with the Singapore Personal Data Protection Commission launching three new initiatives: the Singapore Standard for Data Protection, a “sandbox” for AI assurance and the PET Guide for adopting privacy-enhancing technologies.
Australia
In Australia’s current environment of voluntary AI guardrails, nonprofit leaders would be wise to keep a few specific areas on their radar. These include intellectual property issues (like the impact of copyright law on AI model training) and data protection. Privacy legislation passed in late 2024 includes AI in its scope, including transparency requirements for automated decision tools.
New Zealand
New Zealand was among the last OECD countries to publish a national AI strategy. The nation takes a “light-touch, proportionate, and risk-based approach to AI regulation,” using existing laws as safeguards and introducing new laws only when necessary.
Practical tips and resources for regulatory compliance
Knowing the latest AI and data protection laws is only half the battle. Now you need to put that knowledge to work by figuring out how your organization will comply with them.
This is where an AI governance framework comes in, setting guardrails, guidelines and expectations, as well as outlining what’s acceptable and what’s prohibited for board members and staff using AI tools. Such a framework will help your organization maximize AI’s benefits while minimizing exposure to risk. (You can download your free guide to AI governance here.)
Consider the nuts and bolts of how your organization will:
- Oversee the use of AI tools, along with their development and model training if applicable.
- Monitor and evaluate their performance.
- Mitigate key risks, such as potential biases.
- Ensure that AI is used responsibly and ethically, in addition to “ticking the right regulatory boxes”.
When creating and implementing an AI governance framework:
- Ensure that AI policy aligns with your mission and core values and supports your ability to reach institutional goals.
- Define the roles and responsibilities of individuals, board committees and staff members responsible for oversight and implementation.
- Include training for the board and staff in AI ethics, responsible AI practices and your organization’s AI policy.
- Have a plan for how the organization will address concerns about AI initiatives and engage with your community, including donors, volunteers, funders and vendors.
“You want to make sure that there’s a policy in place and there is a procedure for how to treat these tools, because it’s not intuitive,” Dominique Shelton Leipzig, a partner with Mayer Brown, advises.
Discover more expert thoughts on AI governance. Read our full AI Governance Checklist and our detailed blog on generative AI and governance frameworks. And dive even deeper into the subject with the AI Ethics & Board Oversight Certification by the Diligent Institute.
“Making sound, ethical decisions on artificial intelligence for your organization is imperative. Our certification program helps board members and executives like you navigate the ethical and technological issues inherent in AI, so you can steer your organization toward sustainable, trustworthy practices.” - Dottie Schindlinger, Executive Director at Diligent Institute and founding team member of BoardEffect
BoardEffect: Bringing efficiency, effectiveness and ease to AI governance
Just like the technology itself, AI regulations are evolving fast — as are the related opportunities and risks. BoardEffect helps busy nonprofit boards keep up by:
- Putting reports, policies and educational resources in one secure, easily accessible, searchable and updatable online repository, along with the latest news and action items.
- Centralizing and streamlining AI-related discussions via a platform board members and support teams can access anytime, anywhere.
- Empowering AI committees and work groups with their own digital workrooms.
- Enabling even more effective collaboration with annotation tools for board materials, like “sticky notes,” drawing and highlighting.
- Equipping administrators with built-in surveys, schedulers and polls so they can easily gather opinions and get everyone on the same page.
What’s more, BoardEffect features like AI Smart Summary and AI Smart Minutes offer hands-on experience with AI-powered tools, so nonprofit boards can see the technology’s power in action in important board work while building familiarity and mastery.
By delivering the right data to the right people at the right place and time, BoardEffect makes AI governance more efficient, effective and engaging. Schedule a demo today.
Mark Wilson is an Account Manager at BoardEffect, a division of Diligent Corporation. In his role, Mark works with a range of organisations, from government departments, higher education institutions and healthcare organisations to schools and charities, across the UK and Ireland. Having worked in governance for over seven years, Mark understands how BoardEffect’s governance platform can be used to achieve an organisation’s strategic governance aims. He has over two decades of experience in the technology sector.
