Governance Resources
A curated reference library of frameworks, standards, and regulatory guidance for professionals building responsible AI governance programs.
26 resources across 10 categories
Principles
Foundational principles guiding responsible AI development and deployment.
OECD AI Principles
Endorsed by 47 governments including the United States, the OECD Recommendation on Artificial Intelligence (adopted 2019) establishes five principles for responsible AI: inclusive growth, human-centered values and fairness, transparency and explainability, robustness and safety, and accountability. The Recommendation was revised in 2023 and 2024 to address developments including generative AI.
UNESCO Recommendation on the Ethics of Artificial Intelligence
Adopted by 193 member states in November 2021, this is the first global normative instrument on AI ethics. It addresses values, principles, and policy areas including proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, and accountability.
Governance and Risk Management Frameworks
Comprehensive frameworks for establishing and managing AI governance programs.
Standards
International and industry standards for AI management systems and processes.
ISO/IEC 42001:2023: Artificial Intelligence Management System
The international certifiable standard for AI management systems. Provides requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. Addresses AI policy, governance, risk assessment, data governance, the AI system lifecycle, supplier relationships, and stakeholder communication.
ISO/IEC 38507:2022: Governance Implications of the Use of AI
International standard providing guidance for governing bodies on the governance implications of AI. Addresses accountability for AI-assisted decisions, stakeholder involvement, adaptive AI systems, data governance, supplier relationships, regulatory compliance, and the responsibilities of governing bodies in overseeing AI.
ISO/IEC 23894:2023: Guidance on Risk Management for AI
International standard providing guidance on managing risk related to AI systems throughout the lifecycle. Builds on the ISO 31000 risk management framework and addresses AI-specific risk considerations including monitoring, review, and ongoing assessment.
ISO/IEC 22989:2022: AI Concepts and Terminology
International standard establishing common terminology and concepts for AI. Provides definitions and a shared vocabulary for organizations working to understand, develop, and govern AI systems.
Laws and Regulations
Enacted laws and binding regulations governing AI use across jurisdictions.
EU AI Act: Full Text and Navigable Reference
The European Union’s comprehensive regulation on artificial intelligence, which entered into force in 2024 with phased implementation. Establishes a risk-based classification of AI systems with corresponding obligations including transparency, human oversight, conformity assessment, and documentation requirements. Particularly relevant provisions include Articles 4 (AI literacy), 9 (risk management), 10 (data governance), 11 (technical documentation), 13 (transparency and provision of information to deployers), 14 (human oversight), 50 (transparency for certain AI systems), 85 (right to lodge a complaint), and 86 (right to explanation of individual decision-making). Annex III defines the high-risk AI categories.
EU AI Act: Article 4: AI Literacy
Specific provision of the EU AI Act imposing AI literacy obligations on providers and deployers of AI systems, requiring that staff and other persons dealing with the operation and use of AI systems have sufficient AI literacy.
Pending Legislation
Proposed legislation and regulatory initiatives under consideration.
Regulatory Guidance
Non-binding guidance and recommendations from regulatory bodies.
Intellectual Property and AI
Resources addressing copyright, patent, and IP issues in AI development.
U.S. Copyright Office: AI Reports and Guidance
The U.S. Copyright Office has issued a series of reports analyzing copyright questions raised by AI. Part 1 (Digital Replicas, 2024) addresses AI-generated representations of individuals. Part 2 (Copyrightability, 2025) addresses whether and when AI-generated content qualifies for copyright protection. Part 3 (Generative AI Training, 2025) analyzes whether the use of copyrighted works to train AI systems constitutes fair use. These reports are essential reading for assessment organizations whose copyrighted content may be used in AI training or whose operations produce AI-generated outputs.
WIPO: Generative AI: Navigating Intellectual Property
The World Intellectual Property Organization’s guidance on intellectual property considerations related to generative AI. Addresses topics including the protection of confidential information in AI prompts, understanding what data AI tools have been trained on, and reviewing AI outputs for potential IP infringement. Particularly relevant for organizations with significant copyrighted content portfolios.
Industry-Specific Resources
Sector-specific guidance for AI governance in regulated industries.
Association of Test Publishers: AI Principles
Industry-specific AI principles published by ATP in January 2022, addressing the responsible use of AI in the testing and assessment industry. Provides a framework for assessment organizations to align AI governance with professional standards and stakeholder expectations.
Association of Test Publishers
The professional association representing the testing and assessment industry. Provides resources, advocacy, and professional development opportunities for organizations that develop, deliver, and use assessments.
International Association of Privacy Professionals (IAPP): AI Governance Center
Resources at the intersection of data privacy and AI governance, including analysis of privacy implications of AI systems, regulatory developments, and practical guidance for privacy professionals managing AI-related risks.
Research and Reports
Analysis and research from leading organizations on AI governance trends.
Berkman Klein Center: Principled Artificial Intelligence
A foundational 2020 analysis by Harvard's Berkman Klein Center (full title: Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI) that examined 36 prominent AI principles documents and identified eight themes appearing in the majority of them: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Authored by Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar.
Stanford University: AI Index Report
An annual report from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) tracking AI developments across research, industry, policy, governance, and public perception. Provides data-driven analysis of AI trends relevant to governance and risk assessment.
Future of Life Institute: AI Policy and Governance Resources
Resources on AI governance, policy, and safety from a research perspective. Includes analysis of regulatory developments, AI risk research, and governance frameworks.
MIT AI Risk Repository
A comprehensive database of AI risks developed by the MIT FutureTech group. Categorizes and catalogs risks associated with AI systems, providing a structured taxonomy useful for organizations conducting risk assessments.
Congressional Research Service: AI and Copyright
The Congressional Research Service has published multiple reports analyzing the intersection of copyright law and AI, including copyright protection for AI-generated works and copyright implications of AI training data. Search for "artificial intelligence copyright" for current reports.
Toolkits and Implementation Guides
Practical tools and step-by-step guides for implementing AI governance.
OECD AI Policy Observatory
A comprehensive platform tracking AI policies, strategies, and initiatives across OECD member countries and beyond. Provides comparative analysis of national AI strategies, policy tools, and regulatory approaches.
NIST AI RMF Playbook
A companion to the NIST AI Risk Management Framework (AI RMF) providing suggested actions and references for each subcategory of the framework. The Govern function is particularly relevant to organizational governance, policy development, accountability structures, and workforce training.
Responsible AI Institute
An independent organization providing certification, tools, and resources for responsible AI. Offers practical guidance on AI governance implementation, conformity assessment, and alignment with international standards.
AI Verify Foundation
An international initiative supporting AI testing and governance tools, including open-source AI governance testing frameworks. Provides practical tools for organizations conducting AI system assessments.
Partnership on AI
A multi-stakeholder organization bringing together industry, civil society, academia, and media to advance responsible AI practices. Publishes research and practical guidance on AI governance topics including fairness, transparency, and accountability.
Assess Your AI Risk Posture
Use our AI Risk Assessment tool to evaluate your organization's governance readiness against established frameworks.
Take the AI Risk Assessment