Extracting Training Data from ChatGPT
Research tool demonstrating training data extraction risks in LLMs.
Academic research project that identifies and demonstrates memorization vulnerabilities in large language models such as ChatGPT. It helps organizations understand data privacy risk by showing how training data can be extracted, and is useful for AI auditing, compliance assessments, and responsible AI evaluation of generative models.
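The extraction risk described above can be illustrated with a toy verbatim-memorization check: generate a continuation from the model under audit and measure its longest word-for-word overlap with a reference corpus. This is a minimal sketch, not the project's actual method; `generate` is a hypothetical stand-in for whatever LLM completion call an audit would use.

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in: a real audit would call the model under test here.
    return "the quick brown fox jumps over the lazy dog"

def longest_verbatim_overlap(completion: str, corpus: str) -> int:
    """Length, in whitespace tokens, of the longest n-gram from the
    completion that appears verbatim in the reference corpus."""
    tokens = completion.split()
    best = 0
    for i in range(len(tokens)):
        for j in range(len(tokens), i, -1):
            if j - i <= best:
                break  # no longer n-gram possible from this start
            if " ".join(tokens[i:j]) in corpus:
                best = j - i
                break
    return best

# Toy reference corpus standing in for (a searchable index of) training data.
corpus = "... the quick brown fox jumps over the lazy dog ..."
completion = generate("Complete: the quick brown")
overlap = longest_verbatim_overlap(completion, corpus)
# Long verbatim overlaps (e.g. 50+ tokens in the published attacks) suggest
# the model is emitting memorized training text rather than novel output.
print(overlap)
```

In practice such checks are run at scale over many sampled completions, with the overlap threshold tuned to separate common phrases from genuinely memorized sequences.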
Adjacent tooling.
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's Trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Atlan
Data lineage and governance for AI systems with policy enforcement.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.