Distill
Interactive visualizations for understanding and debugging machine learning models.
Distill publishes peer-reviewed research on neural-network interpretability through interactive articles and visualizations. Researchers and ML practitioners use it to understand model behavior, feature importance, and decision-making processes. Differentiator: peer-reviewed, interactive educational content rather than vendor tooling; it focuses on fundamental explainability research that also informs compliance auditing.
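The feature-importance analyses that Distill-style interpretability articles visualize can be illustrated with a toy permutation-importance sketch. Everything below (the `permutation_importance` helper, the toy model, the data) is hypothetical illustration of the general technique, not code from Distill or any listed vendor:

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Score each feature by the error increase when its column is shuffled.

    A toy version of the feature-importance analyses interpretability
    articles visualize: a feature the model relies on hurts accuracy
    when shuffled; an ignored feature changes nothing.
    """
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(rows)

    baseline = mse(X)
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the feature's relationship to the target
        shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        scores.append(mse(shuffled) - baseline)
    return scores

# Toy linear model that ignores its second input entirely.
model = lambda r: 2.0 * r[0]
X = [[float(i), float(i % 3)] for i in range(20)]
y = [2.0 * r[0] for r in X]
scores = permutation_importance(model, X, y, n_features=2)
# scores[0] is positive (feature 0 drives predictions); scores[1] is 0.0.
```

Shuffling is preferred over dropping a feature because the model never has to be retrained, which is also why production explainability tools favor it for auditing.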
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Azure AI Content Safety
Content moderation API that detects harmful content, including AI outputs, in real time.
Arize AI
Monitor LLM and ML model performance, detect drift, and debug issues in production.
LangSmith
Trace, debug, and monitor LLM applications for transparency and risk control.
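The observability tools above (LangSmith, Arize, Aporia) all center on recording call spans. None of their SDKs are shown here; as an illustration of the kind of span data such tools collect (inputs, output, latency), a minimal stdlib-only tracing decorator, with all names hypothetical:

```python
import functools
import time

TRACE_LOG = []  # in-memory stand-in for a tracing backend


def traced(fn):
    """Record inputs, output, and latency for each call.

    A sketch of the span data LLM-observability tools collect;
    real SDKs ship their own decorators and export to a hosted backend.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE_LOG.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper


@traced
def fake_llm(prompt):
    # Stand-in for a model call; real tools wrap the provider SDK.
    return f"echo: {prompt}"


fake_llm("hello")
# TRACE_LOG now holds one span with name, inputs, output, and latency.
```

Capturing spans at the call boundary, rather than inside the model, is what lets these tools stay provider-agnostic and makes the resulting traces usable for transparency and risk review.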