Resources
Reference shelf

Resources, not vendors.

94 non-commercial entries useful for AI Act compliance work: regulator portals, papers, legal whitepapers, templates, open-source tools, datasets, and adjacent directories. Curated; no pay-to-rank.

Other resources

Langfuse

Open-source LLM observability platform for monitoring AI performance and managing evaluation datasets.

ABOUT ML Reference Document

Partnership on AI's framework for documenting ML system transparency, accountability, and governance requirements.

Adversarial Model Analysis

Open-source toolkit for adversarial testing and model interpretability.

AI Act Conformity Tool (European DIGITAL SME Alliance)

EU AI Act compliance checker for organizations mapping risk levels and obligations.

AI Alliance-affiliated project

Framework for responsible prompt engineering and LLM governance.

AI Snake Oil

Book and newsletter by Arvind Narayanan and Sayash Kapoor separating AI hype from substance, with practical guidance on evaluating AI claims.

AI Vulnerability Database

Open database and tools for identifying and managing AI vulnerabilities and risks.

ALEPlot

R package for Accumulated Local Effects plots to interpret ML model predictions.
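
ALEPlot itself is R; as a reading aid, here is a minimal Python sketch of the 1-D computation the package automates. The names `model` (anything with a `.predict()` method) and the feature matrix `X` are hypothetical stand-ins, and the centering step is simplified relative to the package's count-weighted version.

```python
import numpy as np

def ale_1d(model, X, feature, n_bins=20):
    """Toy 1-D Accumulated Local Effects for one feature of a fitted model."""
    x = X[:, feature]
    # Quantile bin edges, so each bin holds roughly equal data mass.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    effects = np.zeros(n_bins)
    for k in range(n_bins):
        lo, hi = edges[k], edges[k + 1]
        in_bin = (x >= lo) & ((x <= hi) if k == n_bins - 1 else (x < hi))
        if not in_bin.any():
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature] = lo  # pin bin members to the lower edge...
        X_hi[:, feature] = hi  # ...and to the upper edge
        # Local effect: mean prediction change across the bin.
        effects[k] = (model.predict(X_hi) - model.predict(X_lo)).mean()
    ale = np.cumsum(effects)  # accumulate the local effects
    ale -= ale.mean()         # center (simplified, unweighted)
    return edges[1:], ale
```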

Atlas of AI Risks

Structured risk taxonomy and mapping for AI system governance.

Auditing Guidelines for Artificial Intelligence

Guidelines for auditing AI systems and governance controls.

C2PA: Coalition for Content Provenance and Authenticity

Open technical standard for certifying the provenance and authenticity of digital media, including AI-generated content.

CEN-CENELEC JTC 21

Joint technical committee of the European standards bodies CEN and CENELEC, developing the harmonised standards intended to underpin EU AI Act conformity.

Distill

Journal of interactive, visual explanations of machine learning research; publication is paused, but the archive remains a strong interpretability reference.

FATML Principles and Best Practices

Community-driven principles and practices for fair, transparent, accountable ML.

Getting a Window into your Black Box Model

Tutorial on generating reason codes (per-prediction explanations) for black-box models, using an NFL model as the worked example.
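
Setting the article's NFL specifics aside, the core mechanic of a reason code is easy to sketch: rank features by how much the prediction moves when each one is neutralized. Everything below (`model`, `X_train`, `row`) is an illustrative assumption, not the article's code, and mean-substitution is only one possible baseline.

```python
import numpy as np

def reason_codes(model, X_train, row, feature_names):
    """Rank features by the prediction shift when each is reset to its mean."""
    baseline = X_train.mean(axis=0)
    pred = model.predict(row.reshape(1, -1))[0]
    contribs = {}
    for j, name in enumerate(feature_names):
        perturbed = row.copy()
        perturbed[j] = baseline[j]  # neutralize feature j
        contribs[name] = pred - model.predict(perturbed.reshape(1, -1))[0]
    # The largest absolute shifts become the top reason codes.
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))
```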

Interpretable Machine Learning using Counterfactuals

Explainable AI through counterfactual examples for model transparency.
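
A hedged sketch of the underlying idea: search for a nearby input that flips the model's decision. This greedy coordinate search assumes a binary scikit-learn-style classifier with `predict` and `predict_proba` (hypothetical names); production counterfactual methods add distance penalties and plausibility constraints.

```python
import numpy as np

def counterfactual(model, x, step=0.1, max_iter=200):
    """Greedily nudge one feature at a time until the predicted class flips."""
    x_cf = x.astype(float).copy()
    target = 1 - model.predict(x_cf.reshape(1, -1))[0]  # the flipped label
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf  # counterfactual found
        p0 = model.predict_proba(x_cf.reshape(1, -1))[0, target]
        best_move, best_gain = None, -np.inf
        # Try +/- step on every feature; keep the move that most raises
        # the target-class probability.
        for j in range(len(x_cf)):
            for d in (step, -step):
                trial = x_cf.copy()
                trial[j] += d
                gain = model.predict_proba(trial.reshape(1, -1))[0, target] - p0
                if gain > best_gain:
                    best_gain, best_move = gain, (j, d)
        j, d = best_move
        x_cf[j] += d
    return None  # no counterfactual within the search budget
```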

Interpreting Machine Learning Models with the iml Package

R package for interpreting and explaining machine learning model predictions.

Llama 2 Responsible Use Guide

Meta's framework for responsible deployment and use of Llama 2 models.

OECD.AI Policy Observatory

OECD hub for data and analysis on AI policy, governance, and regulatory implementation across jurisdictions.

Sample AI Incident Response Checklist

Structured checklist for responding to and documenting AI incidents.

University of British Columbia, Resources (Generative AI)

The University of British Columbia's open hub of generative AI guidance and responsible-use resources.

AI Safety Map

Visual landscape mapping tool for AI safety governance and compliance navigation.

Model Transparency Ratings

Ratings system for AI model transparency and accountability across deployments.

OSD Bias Bounty

Crowdsourced bias detection for AI systems through structured bounty programs.

Papers & research

Neuronpedia

Interpretability platform for understanding neural network behavior and safety.

`draft-marques-asqav-compliance-receipts`

IETF Internet-Draft proposing cryptographic compliance receipts for AI systems.

A Living and Curated Collection of Explainable AI Methods

Curated reference collection of XAI methods for model transparency and interpretability.

AI FactSheets 360 (IBM)

IBM's methodology and worked examples for FactSheets: standardized documentation of an AI service's purpose, data, performance, and governance.

AI Safety Camp

Community-driven AI safety education and governance resources.

AIMultiple

Research hub and vendor directory for AI governance and compliance decisions.

BRAID programme

UK research programme (Bridging Responsible AI Divides) funding interdisciplinary work on responsible AI.

Extracting Training Data from ChatGPT

Research demonstrating extraction of memorized training data from production LLMs; a concrete privacy and data-governance risk.

MadryLab

MIT research lab advancing adversarial robustness, AI security, and trustworthy machine learning.

MATS

ML Alignment & Theory Scholars: a mentorship program training researchers in AI safety and governance.

Montreal AI Ethics Institute

AI ethics research institute providing governance frameworks and compliance guidance.

Partial Dependence Plots in R

R package for interpreting model predictions through partial dependence visualization.
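
The entry above is R-focused, but the same technique ships in scikit-learn for Python readers. A self-contained example on synthetic data (every name below is illustrative, not from the entry):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

# Fit any model on synthetic data, then plot the average predicted
# response as features 0 and 2 vary, marginalizing over the rest.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
PartialDependenceDisplay.from_estimator(model, X, features=[0, 2])
plt.show()
```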

Tracing the thoughts of a large language model

Anthropic interpretability research on tracing the internal computations behind LLM outputs.

What-If Tool (Google)

Interactive tool for testing and understanding ML model behavior and fairness.

Aapti Institute

India-based research institute working on data stewardship and responsible technology governance.

Apollo Research

AI safety research organization focused on model evaluations (including for deceptive behavior) and interpretability.

FAR AI

AI safety research nonprofit running alignment, robustness, and evaluation research.

METR

Evaluations of frontier models' autonomous capabilities, producing evidence for safety cases and governance.

Redwood Research

AI safety research lab building tools for measuring and improving model alignment.

SynthID-Text

Google DeepMind's watermarking scheme for AI-generated text, supporting transparency and provenance verification.
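
SynthID-Text's actual mechanism (tournament sampling) is more involved than any sketch here; the toy below shows only the general statistical idea behind text-watermark detection in "green-list" schemes: if generation secretly favored a keyed subset of the vocabulary, watermarked text shows a green-token fraction far above chance. All names are hypothetical.

```python
import hashlib
import math

def is_green(token_id: int, key: str, gamma: float = 0.5) -> bool:
    """Keyed pseudo-random split of the vocabulary into a gamma-sized green list."""
    digest = hashlib.sha256(f"{key}:{token_id}".encode()).digest()
    return digest[0] / 256.0 < gamma

def detect(token_ids, key, gamma=0.5, z_threshold=4.0):
    """z-score of the observed green count against a binomial(t, gamma) null."""
    t = len(token_ids)
    greens = sum(is_green(tok, key, gamma) for tok in token_ids)
    z = (greens - gamma * t) / math.sqrt(t * gamma * (1 - gamma))
    return z, z > z_threshold
```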

Learning & non-profits

8 Principles of Responsible ML

Framework for building responsible ML systems with governance principles.

Center for AI and Digital Policy Reports

Research reports on AI governance, policy, and regulatory compliance frameworks.

EU AI Act 90-Day Implementation Playbook (Secure Privacy)

90-day roadmap for EU AI Act compliance and implementation.

EU AI Act Expert Explainer (Ada Lovelace Institute)

Expert explainer demystifying EU AI Act requirements and compliance obligations.

Fairness and Machine Learning: Limitations and Opportunities

Free textbook by Barocas, Hardt, and Narayanan on fairness in machine learning, its limitations, and its opportunities.

ForHumanity Body of Knowledge

Open knowledge base for AI governance, risk, and compliance frameworks.

Guide to FRIAs (Danish Institute for Human Rights)

Structured guidance for conducting fundamental rights impact assessments under the EU AI Act.

Implementing the European AI Act (Future of Life Institute)

Guidance on EU AI Act requirements and implementation strategies.

Introduction to Responsible Machine Learning

Educational framework for building interpretable, fair, and accountable ML systems.

ML Safety Course

Educational resource for ML safety and responsible AI practices.

Responsible AI Institute

Non-profit providing assessments, tooling, and guidance for building and auditing responsible AI systems.

AI Policy Exchange

Forum for exchanging AI policy knowledge and compliance strategies across organizations.

Templates & checklists

Open-source tools

Regulators & official portals

Adjacent directories

Datasets & benchmarks

Incident databases