Fairlearn
Open-source toolkit for assessing and mitigating bias and fairness issues in machine learning models.
Fairlearn is an open-source Python toolkit for assessing and improving fairness in machine learning models. It provides fairness metrics, bias-detection tooling, and mitigation algorithms for classification and regression tasks. Data scientists and ML engineers use it to quantify disparities in model predictions across demographic groups and to apply fairness constraints during model development.
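To make the disparity-assessment idea concrete, here is a minimal sketch in plain Python of the kind of per-group comparison Fairlearn automates: it computes each group's selection rate (fraction of positive predictions) and the gap between groups, a standard demographic parity check. The data is hypothetical, and this hand-rolled version stands in for Fairlearn's own metrics utilities.

```python
# Hypothetical binary predictions and the demographic group of each example.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]

def selection_rates(y_pred, groups):
    """Fraction of positive (1) predictions within each group."""
    totals, positives = {}, {}
    for pred, g in zip(y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(y_pred, groups)

# Demographic parity difference: the largest gap in selection rates
# between any two groups (0 means perfectly equal selection rates).
dpd = max(rates.values()) - min(rates.values())
print(rates)
print(dpd)
```

A large gap flags a disparity worth investigating; Fairlearn packages this same comparison (along with many other metrics) behind a disaggregated-metrics interface so it can be computed for any scoring function and any sensitive feature.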
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Sardine
AI risk management for fraud detection with governance oversight.