Interpretable Machine Learning using Counterfactuals
Explainable AI through counterfactual examples for model transparency.
Alibi provides counterfactual explanations for machine learning models, helping organizations understand and justify AI decisions. Used by data scientists and compliance teams to interpret model behavior and identify potential biases. Generates human-readable 'what-if' scenarios explaining why models make specific predictions.
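The idea behind a counterfactual explanation is to find the smallest change to an input that flips the model's prediction, which reads directly as a 'what-if' statement (e.g. "the loan would have been approved if the debt ratio were 0.15 lower"). The sketch below illustrates that idea with a hand-rolled greedy search over a hypothetical two-feature credit model; it is a minimal black-box illustration of the concept, not Alibi's actual optimizer (Alibi's explainers use gradient-based search with distance and sparsity loss terms), and the model, feature names, and weights are invented for the example.

```python
import numpy as np

def predict_proba(x):
    """Hypothetical logistic credit model over [income_k, debt_ratio]."""
    score = 0.05 * x[0] - 2.0 * x[1] - 1.0
    return 1.0 / (1.0 + np.exp(-score))

def predict(x):
    """Hard label: 1 = approve, 0 = deny."""
    return int(predict_proba(x) >= 0.5)

def find_counterfactual(x, step=0.05, max_iter=5000):
    """Greedy black-box search: repeatedly apply the single-feature nudge
    that moves the model's probability furthest toward the opposite class,
    so the total perturbation stays small. Returns None if no flip found."""
    target = 1 - predict(x)
    cf = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        if predict(cf) == target:
            return cf
        # Candidate moves: +/- step on each feature individually.
        candidates = []
        for i in range(len(cf)):
            for delta in (step, -step):
                c = cf.copy()
                c[i] += delta
                candidates.append(c)
        # Keep the candidate closest to the target class's probability.
        if target == 1:
            cf = max(candidates, key=predict_proba)
        else:
            cf = min(candidates, key=predict_proba)
    return None

x = np.array([30.0, 0.4])        # income $30k, 40% debt ratio -> denied
cf = find_counterfactual(x)
print(predict(x), predict(cf))   # 0 1 (prediction flipped)
print(cf - x)                    # the 'what-if' change, sparse and small
```

Because only one feature moves per step, the resulting counterfactual tends to be sparse (here only the debt ratio changes), which is what makes such explanations human-readable: they name the few things that would have had to be different.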
Adjacent tooling:
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Sardine
AI risk management for fraud detection with governance oversight.