Responsible AI Institute
Open-source framework for building and auditing responsible AI systems.
Responsible AI Institute provides open-source tools and frameworks that enable organizations to implement responsible AI practices across model development, testing, and deployment. The framework supports bias detection, fairness assessment, and governance workflows. It is used by data science teams, compliance officers, and AI practitioners seeking practical responsible AI implementation aligned with regulatory expectations.
Adjacent tooling.
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's Trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Atlan
Data lineage and governance for AI systems with policy enforcement.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.