RAI Toolkit
Open-source toolkit for responsible AI development and bias assessment.
RAI Toolkit provides practical tools for implementing responsible AI practices throughout model development. Organizations use it to assess model fairness, detect bias, and document governance decisions. The toolkit emphasizes open-source accessibility and integration with existing ML workflows, making responsible AI practices actionable for data science teams.
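Bias assessment of the kind described above typically starts with group fairness metrics. As an illustration only (this is plain Python, not the RAI Toolkit's actual API), a minimal sketch of one common metric, the demographic parity difference, which compares positive-prediction rates across groups:

```python
def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction (selection) rate between groups.

    y_pred: iterable of 0/1 model predictions
    group:  iterable of group labels, aligned with y_pred
    A value of 0 means every group receives positive predictions
    at the same rate; larger values indicate greater disparity.
    """
    rates = {}
    for g in set(group):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(preds) / len(preds)  # selection rate for group g
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "a" is selected at 0.75, group "b" at 0.25
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # → 0.5
```

In practice a team would compute several such metrics (e.g., equalized odds alongside demographic parity) and record the results as part of the governance documentation the toolkit supports.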
Adjacent tooling:
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Dataiku EU AI Act Readiness
Platform helping organizations assess and manage EU AI Act compliance risks.
DataRobot
Real-time AI governance, monitoring, and compliance platform for enterprises.
Earthian AI
Enterprise risk management platform purpose-built for AI systems.
IBM watsonx.governance
Unified AI governance platform for model lifecycle management and compliance tracking.