Draft:AI Governance
Policies, standards, and oversight of AI systems
From Wikipedia, the free encyclopedia
AI governance encompasses the policies, institutional arrangements, standards, and accountability mechanisms that direct how artificial intelligence systems are developed, deployed, and overseen. A 2025 systematic literature review in AI and Ethics found that the field addresses questions of who is accountable for AI systems, what elements are governed, when governance occurs within the development lifecycle, and how it is implemented through frameworks, tools, or models.[1] No single, universally adopted definition exists; instead, the concept operates across overlapping domains of management, risk, ethics, and law.[2]
Submission declined on 15 March 2026 by SnowyRiver28 (talk): this draft appears to be generated by a large language model (such as ChatGPT).
Background
The intellectual roots of AI governance lie in earlier work on machine ethics. Mitchell Waldrop introduced the term "machine ethics" in a 1987 AI Magazine article, and in 2005 the AAAI held its first symposium dedicated to the topic.[3] A broader wave of AI ethics principles followed. Anna Jobin, Marcello Ienca, and Effy Vayena identified 84 sets of ethical guidelines published worldwide by 2019, with 88 percent released after 2016. Their analysis found convergence around five principles (transparency, justice, non-maleficence, responsibility, and privacy) but significant divergence in interpretation and implementation.[4]
Institutional activity accelerated in 2016 with the founding of the Partnership on AI, which brought together researchers from Apple, Amazon, Google, Facebook, IBM, and Microsoft.[5] The IEEE launched its Global Initiative on Ethics of Autonomous and Intelligent Systems the same year.[6] Corporate AI principles followed in 2018, when companies including Google, Microsoft, and IBM published their own responsible AI commitments.[7]
Terminological distinctions
The International Organization for Standardization (ISO) draws an explicit line between ethical AI, which it describes as rooted in societal values, and responsible AI, which it characterises as more tactical and concerned with development and deployment practices.[8] Some analysts have proposed that these terms form a sequential relationship: ethical AI sets the normative foundation, responsible AI translates those norms into engineering practice, and AI governance adds the accountability and decision authority layer.[9] However, Luciano Floridi and Simon Cowan have argued that moving from principles to enforceable procedures remains a persistent challenge, with many organisations adopting ethical language without building operational accountability.[10]
Major frameworks and standards
Intergovernmental frameworks
The Organisation for Economic Co-operation and Development (OECD) adopted its Recommendation on Artificial Intelligence in 2019, the first intergovernmental standard addressing AI governance. Updated in 2024 to cover generative AI and systems that evolve after deployment, the OECD framework centres on inclusive growth, human-centred values, transparency, robustness, and accountability. More than 40 countries have adopted its principles, and the EU AI Act uses its definition of AI systems.[11]
UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence in 2021, endorsed by its 193 member states. It provides a normative framework addressing human rights, dignity, inclusion, fairness, and environmental sustainability.[12]
Risk management
The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0) in January 2023. The framework is organised around four functions (GOVERN, MAP, MEASURE, and MANAGE), with GOVERN operating as a cross-cutting function that sets organisational culture and accountability for AI oversight across all stages of the lifecycle.[13] NIST published a supplementary Generative AI Profile (AI 600-1) in 2024.[14]
Management and governance standards
ISO/IEC 42001:2023 is the first internationally certifiable standard for an AI management system (AIMS). It covers risk assessment, control implementation, and lifecycle oversight within organisations.[15] Within ISO's own architecture, 42001 addresses management processes rather than governance authority. Governance of organisations is covered by ISO 37000:2021, which defines governance as the foundation for fulfilling organisational purpose in an effective, responsible, and ethical manner.[16] ISO/IEC 38507:2022 separately addresses the governance implications of AI use, including maintaining accountability, governance of decision-making, and governance of data.[17]
Binding regulation
The Artificial Intelligence Act (Regulation 2024/1689), adopted by the European Union in 2024, is the first comprehensive binding AI law. It classifies AI applications by risk level and imposes obligations on providers and deployers. Article 14 requires that high-risk AI systems be designed so that natural persons can effectively oversee them during use.[18]
The Framework Convention on Artificial Intelligence, opened for signature on 5 September 2024 under the Council of Europe, is the first binding international treaty addressing AI governance. It establishes principles including transparency, accountability, non-discrimination, and human rights protection. As of January 2026, 19 parties had signed the convention, including the European Union.[19]
Regional and national approaches
Singapore's Model AI Governance Framework, first published in 2019 and updated with a generative AI addendum in 2024, provides a voluntary, sector-agnostic approach to AI oversight.[20] China has introduced binding rules on algorithmic recommendation (2022), deepfakes (2023), and generative AI services (2023), making it one of the earliest jurisdictions to impose specific AI governance obligations.[5] In the United States, governance has proceeded through executive orders and sectoral regulation rather than comprehensive federal legislation. Executive Order 14110 (October 2023) directed federal agencies to manage AI risks, though it was later rescinded by Executive Order 14148 (January 2025).[21]
Human oversight
Human oversight requirements appear across multiple governance frameworks. EU guidance associated with the AI Act distinguishes three oversight models, commonly described as human-in-command, human-in-the-loop, and human-on-the-loop, each representing a different degree of human authority over system outputs.[18] The NIST framework positions oversight within its GOVERN function.[13] ISO 37000 holds governing bodies ultimately accountable for organisational actions and omissions.[16]
International cooperation
A series of international summits has addressed AI governance since 2023, beginning with the AI Safety Summit at Bletchley Park (November 2023), followed by the AI Seoul Summit (2024), the AI Action Summit in Paris (2025), and the India AI Impact Summit in New Delhi (February 2026). The India summit was the first in the series hosted by a Global South nation, and discussions included challenges of scaling governance standards in emerging economies alongside Western regulatory models.[5]
The Global Partnership on AI (GPAI), the OECD AI Policy Observatory, and the United Nations High-Level Advisory Body on Artificial Intelligence serve as multilateral coordination mechanisms.[5]
Criticisms and limitations
AI governance frameworks have faced criticism on several fronts. Implementation remains a persistent gap: many organisations adopt governance principles on paper without building operational enforcement mechanisms, a phenomenon sometimes described as "governance washing."[10] A 2019 analysis found that 88 percent of published AI ethics guidelines came from Europe and North America, raising concerns that global governance norms reflect a narrow set of cultural assumptions.[4] Emmanuel Goffi has argued that dominant AI governance narratives risk embedding Western-centric universalism without genuine engagement with diverse philosophical traditions.[22]
The relationship between voluntary principles and binding enforcement also draws scrutiny. Corporate responsible AI pledges often lack external audit mechanisms, and the gap between published principles and measurable compliance remains wide.[7] The systematic review by Batool, Zowghi, and Bano noted that existing governance solutions vary significantly in scope, operating at team, organisation, industry, national, and international levels, with limited interoperability across these layers.[1]
In the United States, the absence of comprehensive federal AI legislation has left governance dependent on executive orders, which can be rescinded by subsequent administrations, and on existing sectoral regulators (FDA, FTC, financial agencies) whose mandates were not designed for AI-specific risks.[21]
Further reading
- Batool, A., Zowghi, D., & Bano, M. (2025). "AI governance: a systematic literature review." AI and Ethics, 5, 3265–3279.
- Jobin, A., Ienca, M., & Vayena, E. (2019). "The global landscape of AI ethics guidelines." Nature Machine Intelligence, 1, 389–399.
- Stix, C. (2022). "Artificial intelligence by any other name: a brief history of the conceptualization of 'trustworthy artificial intelligence.'" Discover Artificial Intelligence, 2(26).
- Maas, M. M. (2025). Architectures of Global AI Governance: From Technological Change to Human Choice. Oxford Academic.
- Floridi, L., & Cowan, S. (2025). "Operationalizing accountability in AI governance: From principles to procedures." AI and Ethics, 5(2).