EU AI Act Compliance Tools
183 vendors and resources with explicit EU AI Act coverage. Curated, no pay-to-rank.
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Atlan
Data lineage and governance for AI systems with policy enforcement.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.
Collibra
Enterprise data and AI governance platform for compliance and risk management.
Credo AI
Map AI initiatives to regulatory frameworks with compliance scoring.
Dataiku EU AI Act Readiness
Platform helping organizations assess and manage EU AI Act compliance risks.
DataRobot
Real-time AI governance, monitoring and compliance platform for enterprises.
Drata
GRC automation for SOC 2, ISO 27001, and EU AI Act compliance.
Earthian AI
Enterprise risk management platform purpose-built for AI systems.
Ethyca
Privacy and AI governance platform with regulatory enforcement coverage.
Hyperproof
Compliance automation platform for AI governance and regulatory auditing.
IBM watsonx.governance
Unified AI governance platform for model lifecycle management and compliance tracking.
Lakera
LLM security and guardrails for enterprise AI deployment risk management.
LogicGate
No-code platform automating governance, risk, and compliance workflows across organizations.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
MetricStream
Enterprise GRC platform with dedicated AI risk and compliance modules.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
OneTrust AI Governance
AI governance platform for discovery, inventory, and compliance automation.
Proliance
AI governance platform helping organizations manage compliance and risk.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Sardine
AI risk management for fraud detection with governance oversight.
Secureframe
GRC automation platform with dedicated AI governance and compliance frameworks.
SmartSuite
GRC platform with CRI AI Risk Management Framework for financial institutions.
Sprinto
Compliance automation platform with AI governance and continuous regulatory oversight.
Teramind
Endpoint AI monitoring for insider threat detection and shadow AI governance.
Theta Lake
Monitor AI risks in enterprise collaboration platforms with compliance automation.
TrustArc
Privacy and AI governance compliance platform for EU AI Act readiness.
Usercentrics
Consent and AI governance platform built for GDPR and EU AI Act compliance.
ValidMind
AI validation platform for model governance, risk assessment, and compliance documentation.
Vanta EU AI Act
Multi-framework compliance platform spanning SOC 2, ISO 27001, GDPR, and EU AI Act.
ZenGRC
GRC platform automating AI compliance workflows and governance processes.
Azure AI Content Safety
Content moderation API detecting harmful AI outputs in real-time.
Arize AI
Monitor LLM and ML model performance, detect drift, and debug issues in production.
Evidently AI
ML monitoring and testing platform for model performance, bias, and data drift.
Langfuse Documentation
Open-source LLM observability platform for monitoring AI performance and managing evaluation datasets.
LangSmith
Trace, debug, and monitor LLM applications for transparency and risk control.
NannyML
Post-deployment ML monitoring for data drift, performance degradation, and model behavior.
Neuronpedia
Interpretability platform for understanding neural network behavior and safety.
Oso
Policy-as-code framework for defining and enforcing AI authorization rules.
Pangea
API-first security guardrails for AI applications and compliance.
Taskade: AI Audit PBC Request Checklist Template
Audit checklist template for AI governance and compliance documentation.
Weights & Biases
ML experiment tracking and model monitoring for governance and compliance.
WhyLabs
Monitor AI models in production for data drift, quality issues, and performance degradation.
8 Principles of Responsible ML
Framework for building responsible ML systems with governance principles.
`draft-marques-asqav-compliance-receipts`
IETF Internet-Draft proposing cryptographic compliance receipts for AI systems.
A checklist for auditing AI systems
Structured checklist framework for systematic AI system auditing and compliance assessment.
A Living and Curated Collection of Explainable AI Methods
Curated reference collection of XAI methods for model transparency and interpretability.
ABOUT ML Reference Document
Framework for documenting ML system transparency, accountability, and governance requirements.
Advanced AI evaluations at AISI: May update
Advanced AI evaluation framework for systematic risk assessment and compliance testing.
Adversarial Model Analysis
Open-source toolkit for adversarial testing and model interpretability.
AESIA Official Guides (Spain)
Spanish official EU AI Act implementation guides and compliance resources.
AI Act Conformity Tool (European DIGITAL SME Alliance)
EU AI Act compliance checker for organizations mapping risk levels and obligations.
AI Alliance affiliated project
Framework for responsible prompt engineering and LLM governance.
AI Badness: An open catalog of generative AI badness
Open catalog documenting generative AI failure modes and risks.
AI FactSheets 360 (IBM)
IBM methodology and examples for documenting AI model facts to support transparency and governance.
AI Governance and Regulatory Archive
Archive and governance tool for AI regulatory compliance documentation.
AI Safety Camp
Community-driven AI safety education and governance resources.
AI Snake Oil
Exposes AI hype and provides practical guidance for responsible AI deployment.
AI Verify Foundation
Open-source AI governance testing toolkit for compliance and responsible AI.
AI Vulnerability Database
Open database and tools for identifying and managing AI vulnerabilities and risks.
AIAAIC
AI incident tracking and governance resource hub for organizations.
AIMultiple
Research hub and vendor directory for AI governance and compliance decisions.
Algorithmic Impact Assessment tool
Government-backed framework for assessing algorithmic systems' impact and risks.
Atlas of AI Risks
Structured risk taxonomy and mapping for AI system governance.
Auditing Guidelines for Artificial Intelligence
Guidelines for auditing AI systems and governance controls.
BRAID programme
UK-based AI governance framework for responsible AI implementation and compliance.
C2PA: Coalition for Content Provenance and Authenticity
Content provenance and authenticity for AI-generated media governance.
CEN-CENELEC JTC 21
Joint CEN and CENELEC technical committee developing European AI standards in support of the EU AI Act.
Center for AI and Digital Policy Reports
Research reports on AI governance, policy, and regulatory compliance frameworks.
COMPAS Recidivism Risk Score Data and Analysis
Public dataset exposing bias in criminal risk assessment AI systems.
Data Use Policy
Framework for organizations to define and implement responsible data use policies.
Debugging Machine Learning Models
Debug ML models to understand failures and improve transparency.
Deon (DrivenData)
Checklist-driven framework for building ethical AI systems with governance.
Distill
Interactive visualizations for understanding and debugging machine learning models.
EU AI Act 90-Day Implementation Playbook (Secure Privacy)
90-day roadmap for EU AI Act compliance and implementation.
EU AI Act Expert Explainer (Ada Lovelace Institute)
Open-source guide demystifying EU AI Act requirements and compliance obligations.
EU AI Act Handbook (White & Case)
Legal guidance for EU AI Act compliance and organizational AI governance.
EU AI Act Q&A (CMS)
EU AI Act Q&A reference guide for compliance interpretation and implementation.
Extracting Training Data from ChatGPT
Research tool demonstrating training data extraction risks in LLMs.
FairLearn
Open-source toolkit for detecting and mitigating AI model bias and fairness issues.
Fairness and Machine Learning: Limitations and Opportunities
Free textbook and resource guide on fairness limitations in machine learning systems.
FATML Principles and Best Practices
Community-driven principles and practices for fair, transparent, accountable ML.
ForHumanity Body of Knowledge
Open knowledge base for AI governance, risk, and compliance frameworks.
Foundation Model Development Cheatsheet
Quick reference guide for foundation model developers on compliance and responsible practices.
Free FRIA Template (KLA Digital)
Free template for conducting fundamental rights impact assessments (FRIAs) under the EU AI Act.
FRIA Guide (ECNL & Danish Institute)
Practical guide for conducting fundamental rights impact assessments on AI systems.
Getting a Window into your Black Box Model
Reason codes for NFL models: interpretability for black-box AI systems.
GPAI Guidelines Analysis (DLA Piper)
EU AI Act GPAI compliance guidance from DLA Piper legal experts.
Guide to FRIAs (Danish Institute for Human Rights)
Structured guidance for conducting fundamental rights impact assessments under EU AI Act.
Guidelines for AI in parliaments
AI governance framework for parliamentary institutions and legislators.
Have I Been Trained?
Check if your training data was used to train AI models without consent.
IML
Open-source ML interpretability library for understanding model decisions.
Implementing the European AI Act (Future of Life Institute)
Open-source guidance on EU AI Act requirements and implementation strategies.
Interpretable Machine Learning using Counterfactuals
Explainable AI through counterfactual examples for model transparency.
Interpreting Machine Learning Models with the iml Package
R package for interpreting and explaining machine learning model predictions.
Introduction to Responsible Machine Learning
Educational framework for building interpretable, fair, and accountable ML systems.
Llama 2 Responsible Use Guide
Meta's framework for responsible deployment and use of Llama 2 models.
MadryLab
Adversarial robustness research lab advancing AI security and trustworthiness.
MATS
AI safety research mentorship program (ML Alignment & Theory Scholars) for responsible AI development.
ML Safety Course
Educational resource for ML safety and responsible AI practices.
Montreal AI Ethics Institute
AI ethics research institute providing governance frameworks and compliance guidance.
OECD.AI Policy Observatory
OECD intelligence on AI policy, governance, and regulation implementation.
OWASP AI Testing Guide
Open-source testing methodology for AI security, bias, and compliance risks.
Partial Dependence Plots in R
R package for interpreting model predictions through partial dependence visualization.
AI Incident Database (production website)
Document and analyze AI incidents for governance and risk mitigation.
RAI Toolkit
Open-source toolkit for responsible AI development and bias assessment.
Real Toxicity Prompts - Allen Institute for AI
Dataset for testing language models against toxic outputs and unsafe behavior.
Resemble.AI Deepfake Incident Database
Deepfake incident tracking database for AI risk monitoring and governance.
Responsible AI Institute
Open-source framework for building and auditing responsible AI systems.
ResponsibleAI
Open-source toolkit for responsible AI development and model explainability.
Sample AI Incident Response Checklist
Structured checklist for responding to and documenting AI incidents.
TensorFlow Extended (TFX)
Production ML pipeline framework with model governance and monitoring capabilities.
Tracing the thoughts of a large language model
Interpretability research enabling auditable LLM decision tracing.
Tracking international legislation relevant to AI at work
Track AI legislation globally to stay compliant across jurisdictions.
Trust-LLM-Benchmark Leaderboard
Benchmark suite evaluating LLM trustworthiness across safety, fairness, and robustness.
Understanding Responsibilities in AI Practices
Framework for defining AI accountability roles and organizational responsibilities.
University of British Columbia, Resources (Generative AI)
Open resource hub for responsible AI governance and compliance practices.
Verica Open Incident Database
Open database of AI incidents for learning from real-world failures.
Vocabulary of AI Risks
Structured vocabulary for identifying and categorizing AI risks in systems.
What-If Tool (Google)
Interactive tool for testing and understanding ML model behavior and fairness.
Guide to Personal Data Processing for Generative AI Development and Use (Draft) [생성형 인공지능(AI) 개발·활용을 위한 개인정보 처리 안내서(안)]
Korean guide (draft) on personal data handling in generative AI development and deployment.
Aapti Institute
AI governance and responsible AI framework for organizations in emerging markets.
AccuKnox
AI risk management platform securing and governing AI systems in production.
AI Act Trained Professional / AIActTPro (Cyber Risk GmbH)
EU AI Act compliance training and certification for governance professionals.
AI Disclosure Kit
Toolkit for documenting and disclosing AI systems to meet regulatory requirements.
AI Ethics Lab
AI ethics framework and governance tools for responsible AI deployment.
AI Law Center (Orrick)
Legal guidance and governance resources for AI compliance and risk management.
AI Policy Exchange
Exchange AI policy knowledge and compliance strategies across organizations.
AI Risk Database
Centralized database for tracking and managing AI risks across organizations.
AI Safety Map
Visual landscape mapping tool for AI safety governance and compliance navigation.
AI Transparency Institute
Transparency and accountability tools for AI systems governance and compliance.
Aiceberg
AI workflow management platform with integrated governance controls.
AIM Security
AI security posture management and risk tracking for enterprises.
Airia
AI deployment platform with built-in governance and compliance controls.
Apollo Research
AI safety research platform for interpretability and risk assessment.
Asenion
End-to-end AI model governance and development lifecycle management platform.
AuditOne
EU AI Act compliance assessment and audit tool for organizations.
Bigeye
Data observability platform ensuring AI model reliability and data quality.
caralegal
Lawyer-led AI governance platform for DACH enterprises navigating EU AI Act.
ComplyACT AI
EU AI Act compliance management platform for organizations navigating regulatory requirements.
ComplyCloud
Automate AI risk assessment, asset mapping, and compliance documentation for EU AI Act.
Cranium
Security and compliance posture management for AI/ML environments.
Daiki
Centralized EU AI Act compliance, risk management, and monitoring platform.
DataGuard
Automate EU AI Act compliance workflows with pre-built frameworks.
Difinity
AI governance platform helping enterprises compare and select compliant tools.
Dynamo AI
Privacy-first platform for secure AI development, deployment, and real-time monitoring.
Enactia
Automate EU AI Act compliance workflows and governance frameworks.
FairNow
Continuous fairness monitoring and bias remediation for high-stakes AI systems.
FAR AI
Nonprofit AI safety research organization advancing trustworthy and robust AI systems.
Fiddler AI
Monitor model drift, detect bias, and explain ML/LLM decisions in production.
GEM
Benchmark suite for evaluating AI model risks and bias across governance frameworks.
Global AI Governance Tracker
Track and benchmark AI governance policies across global regulations.
Harmonic Security
Identifies and secures sensitive data exposure in GenAI applications.
HiddenLayer
Adversarial attack detection and ML model security for compliance-required risk management.
Holistic AI
AI governance platform auditing systems against regulatory frameworks.
Keyed
AI governance platform built for German/DACH legal compliance.
Knostic
AI governance platform with bias detection and compliance dashboards.
Kobalt Labs
Automate AI compliance workflows for regulated industries.
Kovrr
Automated evidence collection and Article mapping for EU AI Act compliance.
Lasso Security
Runtime security and guardrails for LLM applications in production.
Maxim AI
AI evaluation and observability platform for governance in production.
METR
AI safety evaluations and autonomous agent testing for governance.
MineOS
Automate data privacy and compliance workflows for AI systems.
Model Transparency Ratings
Ratings system for AI model transparency and accountability across deployments.
Modulos
EU AI Act compliance toolkit: risk classification, governance workflows, readiness assessment.
Monitaur
Model monitoring with audit trails for regulated AI systems.
Optro
GRC platform automating AI audit, risk and compliance workflows.
OSD Bias Bounty
Crowdsourced bias detection for AI systems through structured bounty programs.
Prompt Security
Detects and prevents prompt injection attacks and data leakage in AI applications.
Redwood Research
AI safety research lab building tools for measuring and improving model alignment.
Relyance AI
Automated compliance management for data privacy and AI governance.
Rizkly
Automate EU AI Act compliance with pre-built controls library.
SolasAI
Detect algorithmic bias and ensure fairness compliance in AI decisions.
SynthID-Text
Watermark AI-generated text for transparency and provenance verification.
The Ethical AI Database
Centralized database for AI governance policies, risk frameworks, and compliance auditing.
trail-ml
EU AI Act compliance and full lifecycle AI governance platform.
TrustWorks
Privacy and AI risk identification with automated mitigation workflows.
Truyo
Privacy-first AI governance with integrated risk assessment for compliance.
VerifyWise
AI governance platform enabling safe, compliant business AI deployment.
WitnessAI
Control shadow AI, runtime governance, and agentic systems across your organization.