Resources, not vendors.
93 non-commercial entries useful for AI Act compliance work: regulator portals, papers, legal whitepapers, templates, open-source tools, datasets, and adjacent directories. Curated selection; no pay-to-rank.
Other resources
Langfuse
Open-source LLM observability platform for monitoring AI performance and managing evaluation datasets.
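A minimal sketch of the kind of tracing Langfuse enables, assuming the v2 Python SDK's `@observe` decorator and API keys set in the environment; the SDK has evolved across versions, so treat this as illustrative rather than canonical:

```python
# Sketch: tracing an LLM call with Langfuse's Python SDK (v2-style API).
# Assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set in the environment;
# the @observe decorator records each call as a trace for later review.
from langfuse.decorators import observe

@observe()
def answer(question: str) -> str:
    # Stand-in for a real LLM call; Langfuse captures inputs, outputs, timing.
    return f"stub answer to: {question}"

if __name__ == "__main__":
    print(answer("Which AI Act risk tier applies to a CV-screening tool?"))
```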
ABOUT ML Reference Document
Partnership on AI's reference document on documentation practices for ML system transparency and accountability.
Adversarial Model Analysis
Open-source toolkit for adversarial testing and model interpretability.
AI Act Conformity Tool (European DIGITAL SME Alliance)
EU AI Act compliance checker for organizations mapping risk levels and obligations.
AI Alliance affiliated project
Framework for responsible prompt engineering and LLM governance.
AI Snake Oil
Book and newsletter by Arvind Narayanan and Sayash Kapoor separating AI hype from substance, with practical guidance for responsible deployment.
AI Vulnerability Database
Open database and tools for identifying and managing AI vulnerabilities and risks.
ALEPlot
R package for Accumulated Local Effects plots to interpret ML model predictions.
Atlas of AI Risks
Structured risk taxonomy and mapping for AI system governance.
Auditing Guidelines for Artificial Intelligence
Guidelines for auditing AI systems and governance controls.
C2PA: Coalition for Content Provenance and Authenticity
Open technical standard for certifying the provenance and authenticity of media content, including AI-generated media.
CEN-CENELEC JTC 21
Joint European standardization committee developing the harmonized standards that support EU AI Act conformity.
Distill
Journal of interactive, visual explanations of machine learning research (publication paused since 2021).
FATML Principles and Best Practices
Community-driven principles and practices for fair, transparent, accountable ML.
Getting a Window into your Black Box Model
Demonstrates reason codes for interpreting black-box models, using an NFL prediction model as the worked example.
Interpretable Machine Learning using Counterfactuals
Explainable AI through counterfactual examples for model transparency.
Interpreting Machine Learning Models with the iml Package
R package for interpreting and explaining machine learning model predictions.
Llama 2 Responsible Use Guide
Meta's guide to responsible fine-tuning and deployment of Llama 2 models.
OECD.AI Policy Observatory
OECD repository of national AI policies, governance data, and regulatory developments worldwide.
Sample AI Incident Response Checklist
Structured checklist for responding to and documenting AI incidents.
University of British Columbia, Resources (Generative AI)
University resource hub on responsible generative AI use in teaching, learning, and research.
AI Safety Map
Visual map of the organizations, resources, and research agendas that make up the AI safety ecosystem.
Model Transparency Ratings
Ratings system for AI model transparency and accountability across deployments.
OSD Bias Bounty
U.S. Department of Defense bias bounty program crowdsourcing the detection of bias in AI systems.
Papers & research
Neuronpedia
Open interpretability platform for exploring neural network internals, including sparse autoencoder features.
`draft-marques-asqav-compliance-receipts`
IETF Internet-Draft (not an adopted standard) proposing cryptographic compliance receipts for AI systems.
A Living and Curated Collection of Explainable AI Methods
Curated reference collection of XAI methods for model transparency and interpretability.
AI FactSheets 360 (IBM)
IBM's methodology and worked examples for documenting AI models and services through FactSheets.
AI Safety Camp
Part-time program in which participants collaborate on AI safety research projects under experienced leads.
AIMultiple
Research hub and vendor directory for AI governance and compliance decisions.
BRAID programme
UK research programme (Bridging Responsible AI Divides) connecting arts and humanities research with responsible AI practice and policy.
Extracting Training Data from ChatGPT
Research demonstrating that memorized training data can be extracted from ChatGPT, exposing privacy and compliance risks in LLMs.
MadryLab
MIT research group focused on adversarial robustness and reliable, trustworthy machine learning.
MATS
AI safety research & governance framework for responsible AI development.
Montreal AI Ethics Institute
Non-profit research institute publishing accessible AI ethics research, including its State of AI Ethics reports.
Partial Dependence Plots in R
R package for interpreting model predictions through partial dependence visualization.
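The entry above covers R; the same idea is available in Python via scikit-learn's PartialDependenceDisplay (a Python analogue for illustration, not the package the entry describes):

```python
# Sketch: partial dependence plots in Python with scikit-learn.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Plot how predictions change, on average, as 'bmi' and 'bp' vary.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```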
Tracing the thoughts of a large language model
Anthropic interpretability research that traces a language model's internal computations, a step toward auditable LLM reasoning.
What-If Tool (Google)
Interactive tool for testing and understanding ML model behavior and fairness.
Aapti Institute
Research institute studying data governance and data stewardship, with a focus on the Global South.
Apollo Research
AI safety organization building evaluations for deceptive capabilities in frontier models, alongside interpretability research.
FAR AI
Non-profit AI safety research institute that incubates and accelerates alignment research agendas.
METR
Non-profit (Model Evaluation & Threat Research) running AI safety evaluations and autonomous-capability testing of frontier models.
Redwood Research
AI safety research lab building tools for measuring and improving model alignment.
SynthID-Text
Google DeepMind's watermarking scheme for AI-generated text, supporting transparency and provenance verification.
Learning & non-profits
8 Principles of Responsible ML
Framework for building responsible ML systems with governance principles.
Center for AI and Digital Policy Reports
Research reports on AI governance, policy, and regulatory compliance frameworks.
EU AI Act 90-Day Implementation Playbook (Secure Privacy)
90-day roadmap for EU AI Act compliance and implementation.
EU AI Act Expert Explainer (Ada Lovelace Institute)
Independent expert explainer demystifying EU AI Act requirements and compliance obligations.
Fairness and Machine Learning: Limitations and Opportunities
Free online textbook by Barocas, Hardt, and Narayanan on fairness in machine learning, its limitations, and its opportunities.
ForHumanity Body of Knowledge
Open knowledge base for AI governance, risk, and compliance frameworks.
Guide to FRIAs (Danish Institute for Human Rights)
Structured guidance for conducting fundamental rights impact assessments under EU AI Act.
Implementing the European AI Act (Future of Life Institute)
Future of Life Institute guidance on EU AI Act requirements and implementation strategies.
Introduction to Responsible Machine Learning
Educational framework for building interpretable, fair, and accountable ML systems.
ML Safety Course
Course on ML safety fundamentals: robustness, monitoring, alignment, and systemic safety.
Responsible AI Institute
Non-profit providing assessments, benchmarks, and tooling for building and auditing responsible AI systems.
AI Policy Exchange
Forum for exchanging AI policy knowledge and compliance strategies across organizations.
Templates & checklists
Taskade: AI Audit PBC Request Checklist Template
Provided-by-client (PBC) request checklist template for gathering AI audit and governance documentation.
A checklist for auditing AI systems
Structured checklist framework for systematic AI system auditing and compliance assessment.
Deon (DrivenData)
Command-line tool that adds a customizable data ethics checklist to data science projects.
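A quick sketch of deon's typical use, shelling out to its CLI from Python; assumes `pip install deon` and that the default checklist and `-o` output flag behave as documented:

```python
# Sketch: generating deon's default data-ethics checklist.
# deon is primarily a CLI tool; -o writes the checklist to a file.
import subprocess

subprocess.run(["deon", "-o", "ETHICS.md"], check=True)
print(open("ETHICS.md").read()[:300])  # preview the generated checklist
```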
Foundation Model Development Cheatsheet
Quick reference guide for foundation model developers on compliance and responsible practices.
Free FRIA Template (KLA Digital)
Free template for conducting fundamental rights impact assessments (FRIAs) under the EU AI Act.
FRIA Guide (ECNL & Danish Institute)
Practical guide for conducting fundamental rights impact assessments on AI systems.
Guidelines for AI in parliaments
AI governance framework for parliamentary institutions and legislators.
TensorFlow Extended (TFX)
Production ML pipeline framework with model governance and monitoring capabilities.
Open-source tools
Debugging Machine Learning Models
Techniques for debugging ML models to understand failure modes and improve transparency.
Fairlearn
Open-source Python toolkit for detecting and mitigating bias and fairness issues in ML models.
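A minimal sketch of a disparity check with Fairlearn's MetricFrame on toy arrays; in a real audit, y_pred would come from the system under review:

```python
# Sketch: per-group selection rates and demographic parity with Fairlearn.
import numpy as np
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)  # selection rate per group
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```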
Have I Been Trained?
Search tool for checking whether your images appear in public AI training datasets such as LAION-5B.
IML
Open-source R package (iml) for model-agnostic interpretation of ML model decisions.
ResponsibleAI
Open-source toolkit for responsible AI development and model explainability.
TensorBoard Projector
Interactive TensorBoard tool for visualizing high-dimensional embeddings with PCA, t-SNE, and UMAP.
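A short sketch of feeding embeddings to the Projector using PyTorch's SummaryWriter; the run directory and labels are illustrative:

```python
# Sketch: logging embeddings for TensorBoard's Projector tab.
import torch
from torch.utils.tensorboard import SummaryWriter

embeddings = torch.randn(100, 32)            # 100 vectors of dimension 32
labels = [f"item_{i}" for i in range(100)]   # one label per vector

writer = SummaryWriter("runs/projector_demo")
writer.add_embedding(embeddings, metadata=labels)
writer.close()
# Then: tensorboard --logdir runs  ->  open the "Projector" tab.
```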
Trust-LLM-Benchmark Leaderboard
Benchmark suite evaluating LLM trustworthiness across safety, fairness, and robustness.
Vocabulary of AI Risks
Structured vocabulary for identifying and categorizing AI risks in systems.
Regulators & official portals
Advanced AI evaluations at AISI: May update
UK AI Safety Institute update on its methods and progress in evaluating advanced AI systems.
AESIA Official Guides (Spain)
Official guides from Spain's AI supervisory agency (AESIA) on EU AI Act implementation and compliance.
AI Verify Foundation
Singapore-backed open-source AI governance testing framework and software toolkit.
Algorithmic Impact Assessment tool
Government-backed questionnaire tool for assessing the impact and risk level of automated decision systems.
Guidance on AI and Data Protection (ICO)
ICO guidance on AI systems and UK GDPR compliance requirements.
RAI Toolkit
Open-source toolkit for responsible AI development and bias assessment.
Understanding Responsibilities in AI Practices
Framework for defining AI accountability roles and organizational responsibilities.
Adjacent directories
AI Badness: An open catalog of generative AI badness
Open catalog documenting generative AI failure modes and risks.
AI Governance and Regulatory Archive
Archive of AI regulatory and governance documents for compliance reference.
AIAAIC
Independent repository of AI, algorithmic, and automation incidents and controversies.
Tracking international legislation relevant to AI at work
Tracker of international legislation relevant to AI in the workplace, across jurisdictions.
Global AI Governance Tracker
Tracker comparing AI governance policies and regulations across countries.
The Ethical AI Database
Database mapping the ecosystem of ethical and responsible AI companies and frameworks.
Datasets & benchmarks
COMPAS Recidivism Risk Score Data and Analysis
ProPublica's public dataset and analysis exposing racial bias in COMPAS recidivism risk scores.
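A minimal sketch of loading the dataset with pandas, assuming ProPublica's published CSV at its usual GitHub raw URL and the standard `decile_score` and `race` columns:

```python
# Sketch: comparing high-risk rates by race in the two-year COMPAS data.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/compas-analysis/"
       "master/compas-scores-two-years.csv")
df = pd.read_csv(URL)

# Share of defendants scored medium/high risk (decile_score >= 5), by race.
print(df.assign(high_risk=df["decile_score"] >= 5)
        .groupby("race")["high_risk"].mean().round(3))
```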
ML.ENERGY Leaderboard
Leaderboard tracking the energy consumption and environmental impact of ML model inference.
RealToxicityPrompts (Allen Institute for AI)
Dataset of about 100,000 naturally occurring prompts for measuring toxic degeneration in language models.
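A short sketch of sampling the dataset via Hugging Face `datasets`, assuming the published id `allenai/real-toxicity-prompts` and records with a nested `prompt` field:

```python
# Sketch: inspecting one RealToxicityPrompts record.
from datasets import load_dataset

ds = load_dataset("allenai/real-toxicity-prompts", split="train")
example = ds[0]
# Each record nests the prompt text with its toxicity annotations.
print(example["prompt"]["text"], example["prompt"].get("toxicity"))
```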
GEM
Living benchmark for natural language generation, evaluation, and metrics.
Legal whitepapers
EU AI Act Handbook (White & Case)
Legal guidance for EU AI Act compliance and organizational AI governance.
EU AI Act Q&A (CMS)
EU AI Act Q&A reference guide for compliance interpretation and implementation.
GPAI Guidelines Analysis (DLA Piper)
Analysis of the EU AI Act's general-purpose AI (GPAI) guidelines from DLA Piper's legal team.
AI Law Center (Orrick)
Legal guidance and governance resources for AI compliance and risk management.