
ISO 42001:2023

Purpose-built AI governance standard addressing accountability, transparency, data governance, and human oversight for AI systems. Organisations pursuing certification need an evidence trail for AI tool usage across their operations.

AI management system · Published December 2023

Clause 6.1 Actions to address risks and opportunities

Organisations must identify AI-related risks and document the controls that address them. Unmonitored use of commercial AI tools, which transfers data uncontrolled to external models, is exactly the kind of AI risk that must be identified and controlled.

An audit trail provides evidence that AI data flow risks have been identified and are under active monitoring and policy control, satisfying the risk treatment evidence requirement of Clause 6.1.

Clause 8.4 AI system impact assessment

Organisations must assess the impact of AI systems on individuals and society, including documenting data flows and processing activities associated with AI system operation.

Data flow capture and categorisation provide the evidence base for AI system impact assessments, documenting what data each AI system accessed, which categories were involved, and which governance decisions were applied.

Clause 9.1 Monitoring, measurement, analysis and evaluation

Organisations must monitor and measure AI management system performance, including the effectiveness of controls over AI data flows and policy compliance.

Dashboard metrics such as policy trigger rates, blocked call volumes, data category distributions, and MCP server activity provide the quantitative evidence base that ISO 42001 Clause 9.1 monitoring requires.
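As a minimal sketch of how such metrics can be derived from an audit log, the snippet below aggregates a few hypothetical event records. The field names (`policy_triggered`, `blocked`, `category`) are illustrative assumptions, not Svalin's actual schema.

```python
from collections import Counter

# Hypothetical audit events; field names are illustrative only.
events = [
    {"policy_triggered": True,  "blocked": True,  "category": "PII"},
    {"policy_triggered": True,  "blocked": False, "category": "source_code"},
    {"policy_triggered": False, "blocked": False, "category": "general"},
    {"policy_triggered": False, "blocked": False, "category": "general"},
]

total = len(events)
# Share of AI calls that triggered a governance policy.
trigger_rate = sum(e["policy_triggered"] for e in events) / total
# Absolute count of calls blocked before data left the organisation.
blocked_calls = sum(e["blocked"] for e in events)
# Distribution of data categories seen across AI calls.
category_distribution = Counter(e["category"] for e in events)

print(f"policy trigger rate: {trigger_rate:.0%}")
print(f"blocked calls: {blocked_calls}")
print(dict(category_distribution))
```

Aggregates like these, computed over a real audit log rather than a toy list, are the kind of quantitative record an auditor can sample against Clause 9.1.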

Build your ISO 42001 evidence library

See how Svalin produces the audit evidence ISO 42001 auditors look for.

Request a Demo