Bias & Fairness Testing
33 vendors curated. Independent ranking, no paid placement.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
Sardine
AI risk management for fraud detection with governance oversight.
Evidently AI
ML monitoring and testing platform for model performance, bias, and data drift.
Adversarial Model Analysis
Open-source toolkit for adversarial testing and model interpretability.
AI Badness: An open catalog of generative AI badness
Open catalog documenting generative AI failure modes and risks.
AI FactSheets 360 (IBM)
Open-source toolkit for AI transparency, bias detection, and responsible model development.
AI Snake Oil
Exposes AI hype and provides practical guidance for responsible AI deployment.
AI Vulnerability Database
Open database and tools for identifying and managing AI vulnerabilities and risks.
COMPAS Recidivism Risk Score Data and Analysis
Public dataset exposing bias in criminal risk assessment AI systems.
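The headline finding of the COMPAS analysis was a gap in false positive rates between demographic groups. A minimal plain-Python sketch of that kind of check, using hypothetical records rather than the real dataset:

```python
# Sketch of a false-positive-rate disparity check, the kind of group-level
# comparison made in the COMPAS analysis. All data here is hypothetical.

def false_positive_rate(y_true, y_pred):
    """Share of true negatives (did not reoffend) flagged as high risk."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

# y_true: 1 = reoffended; y_pred: 1 = scored high risk (hypothetical labels).
records = {
    "group_a": ([0, 0, 0, 1, 1, 0], [1, 1, 0, 1, 0, 1]),
    "group_b": ([0, 0, 0, 1, 1, 0], [0, 1, 0, 1, 1, 0]),
}
for group, (y_true, y_pred) in records.items():
    print(group, round(false_positive_rate(y_true, y_pred), 2))
# group_a 0.75
# group_b 0.25
```

A large gap between the two rates is the signal such an audit looks for; equal error rates across groups is one common fairness criterion.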
FairLearn
Open-source toolkit for detecting and mitigating AI model bias and fairness issues.
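One of the core metrics toolkits like FairLearn report is the demographic parity difference. The sketch below illustrates the underlying computation in plain Python; it is not the fairlearn API itself, and the inputs are hypothetical:

```python
# Plain-Python illustration of the demographic parity difference: the largest
# gap in positive-prediction rates across sensitive groups. Hypothetical data.

def demographic_parity_difference(y_pred, sensitive):
    """Max minus min positive-prediction rate across groups (0 = parity)."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

A value near 0 means the model selects at similar rates across groups; toolkits like FairLearn also supply mitigation algorithms once such a gap is detected.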
Fairness and Machine Learning: Limitations and Opportunities
Free textbook and resource guide on fairness limitations in machine learning systems.
Have I Been Trained?
Check whether your images or creative work were used to train AI models without consent.

Interpretable Machine Learning using Counterfactuals
Explainable AI through counterfactual examples for model transparency.
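A counterfactual explanation answers "what is the smallest change that would flip this prediction?" A toy sketch of the idea, with a hypothetical one-rule credit model standing in for a real one:

```python
# Toy counterfactual search: find the smallest change to one feature that
# flips a model's decision. Model and feature values are hypothetical.

def approve(income, debt):
    """Toy credit rule: approve when income minus debt clears a threshold."""
    return income - debt >= 50

def counterfactual_income(income, debt, step=1):
    """Raise income until the decision flips; return the income needed."""
    while not approve(income, debt):
        income += step
    return income

income, debt = 60, 30                    # approve(60, 30) is False
needed = counterfactual_income(income, debt)
print(f"Approved once income reaches {needed}")  # flips at income 80
```

The explanation for the applicant is concrete and actionable ("an income of 80 would have been approved"), which is why counterfactuals are a popular transparency technique; real tools search over many features at once with a distance penalty.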
Introduction to Responsible Machine Learning
Educational framework for building interpretable, fair, and accountable ML systems.
MadryLab
Adversarial robustness research lab advancing AI security and trustworthiness.
OWASP AI Testing Guide
Open-source testing methodology for AI security, bias, and compliance risks.
RAI Toolkit
Open-source toolkit for responsible AI development and bias assessment.
RealToxicityPrompts (Allen Institute for AI)
Dataset for testing language models against toxic outputs and unsafe behavior.
Responsible AI Institute
Open-source framework for building and auditing responsible AI systems.
ResponsibleAI
Open-source toolkit for responsible AI development and model explainability.
Trust-LLM-Benchmark Leaderboard
Benchmark suite evaluating LLM trustworthiness across safety, fairness, and robustness.
What-If Tool (Google)
Interactive tool for testing and understanding ML model behavior and fairness.
AI Ethics Lab
AI ethics framework and governance tools for responsible AI deployment.
FairNow
Continuous fairness monitoring and bias remediation for high-stakes AI systems.
Fiddler AI
Monitor model drift, detect bias, and explain ML/LLM decisions in production.
GEM
Benchmark suite for evaluating AI model risks and bias across governance frameworks.
Holistic AI
AI governance platform auditing systems against regulatory frameworks.
Knostic
AI governance platform with bias detection and compliance dashboards.
OSD Bias Bounty
Crowdsourced bias detection for AI systems through structured bounty programs.
Redwood Research
AI safety research lab building tools for measuring and improving model alignment.
SolasAI
Detect algorithmic bias and ensure fairness compliance in AI decisions.
SynthID-Text
Watermark AI-generated text for transparency and provenance verification.