ResponsibleAI
Open-source toolkit for responsible AI development and model explainability.
ResponsibleAI provides tools for building interpretable and fair AI systems, focusing on model explainability, bias detection, and responsible development practices throughout the ML lifecycle. Designed for data scientists and ML engineers implementing governance standards, it enables organizations to document model behavior and fairness metrics for compliance and audit purposes.
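Fairness tooling of this kind typically reports group-comparison metrics such as demographic parity difference: the gap in positive-prediction rates between demographic groups. A minimal illustrative sketch in plain Python follows; the function names are hypothetical and do not reflect ResponsibleAI's actual API.

```python
# Illustrative sketch of a common fairness metric (demographic parity
# difference). Names are hypothetical, not ResponsibleAI's API.

def selection_rate(preds, groups, value):
    """Fraction of positive predictions within one group."""
    members = [p for p, g in zip(preds, groups) if g == value]
    return sum(members) / len(members)

def demographic_parity_difference(preds, groups):
    """Gap between the highest and lowest per-group selection rates."""
    rates = {g: selection_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" is selected at 0.75, group "b" at 0.25.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 indicates equal selection rates across groups; larger values flag a potential disparity worth documenting in an audit report.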
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Sardine
AI risk management for fraud detection with governance oversight.