Adversarial Model Analysis
Open-source toolkit for adversarial testing and model interpretability.
Adversarial Model Analysis (AMA) provides automated tools for stress-testing ML models against adversarial inputs and for explaining model behavior. Data scientists and ML engineers use it to identify model vulnerabilities and to verify robustness before deployment. The toolkit emphasizes practical adversarial attack methods and model debugging for high-stakes applications.
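To make "stress-testing against adversarial inputs" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the standard attack techniques such toolkits automate. The model (a toy logistic regression) and all function names are illustrative assumptions, not part of the AMA API.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    # FGSM: nudge the input in the direction that increases the loss.
    # For logistic regression with cross-entropy loss, the gradient of
    # the loss w.r.t. the input x is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy model and a point it classifies confidently as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)
print(sigmoid(w @ x + b))      # confidence on the clean input (~0.82)
print(sigmoid(w @ x_adv + b))  # confidence on the perturbed input (drops to 0.5)
```

A bounded perturbation of the input is enough to erase the model's confidence, which is exactly the kind of vulnerability an adversarial stress-test is meant to surface.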
Adjacent tooling:
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Dataiku EU AI Act Readiness
Platform helping organizations assess and manage EU AI Act compliance risks.
DataRobot
Real-time AI governance, monitoring, and compliance platform for enterprises.
Earthian AI
Enterprise risk management platform purpose-built for AI systems.
IBM watsonx.governance
Unified AI governance platform for model lifecycle management and compliance tracking.
Lakera
LLM security and guardrails for enterprise AI deployment risk management.