
NIST AI Risk Management Framework

Voluntary framework published by the US National Institute of Standards and Technology, structured around four functions: Govern, Map, Measure, Manage. Widely adopted internationally — particularly by organisations with US operations, US enterprise customers, or those aligning to SOC 2. Complements ISO 42001 and EU AI Act obligations rather than replacing them.

AI RMF 1.0 · Published January 2023 · US-origin, globally adopted

GOVERN 1.1 Policies and procedures for AI risk management

Organisational policies, processes, procedures, and practices are in place to address AI risk. Accountability and oversight mechanisms are defined and documented for all AI systems in use, including commercial AI tools used by employees.

Svalin's policy engine provides the technical implementation of AI governance policies — defining which data categories can flow through which AI tool connections, with a full audit trail of every policy decision and change. Demonstrates that governance policies are operationally active, not just documented.
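Svalin's actual policy format is not documented on this page, but the idea of a data-category-to-connection rule can be sketched as follows. Every name here (the `PolicyRule` class, its fields, the category labels) is a hypothetical illustration, not Svalin's API:

```python
# Hypothetical sketch of a data-flow policy rule of the kind described
# above: which data categories may flow to which AI tool connection.
# All names and the schema are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PolicyRule:
    name: str
    provider: str               # e.g. "openai"
    allowed_categories: set     # data categories permitted to flow
    action_on_violation: str = "block"

    def evaluate(self, categories: set) -> str:
        """Return 'allow', or the violation action, for a proposed transfer."""
        if categories <= self.allowed_categories:
            return "allow"
        return self.action_on_violation


rule = PolicyRule(
    name="engineering-chatgpt",
    provider="openai",
    allowed_categories={"public", "internal"},
)

print(rule.evaluate({"public"}))            # allow
print(rule.evaluate({"internal", "pii"}))   # block
```

The point of a rule shaped like this is that the governance policy becomes executable: every `evaluate` call is a policy decision that can be logged, which is what makes the policy operationally active rather than a document on a shelf.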

GOVERN 6.1 Third-party AI risk policies and procedures

Policies, processes, procedures, and practices are in place for mapping, measuring, and managing risks associated with third-party AI systems and AI supply chains, including commercial AI providers receiving organisational data.

Svalin continuously monitors what data is transferred to each third-party AI provider — Anthropic, Google, OpenAI — providing the ongoing evidence of third-party AI risk management that GOVERN 6.1 requires. Turns a static vendor register into a live operational control.

MAP 2.3 AI risk context — data flows and dependencies

Scientific findings and organisational factors are used to identify contexts in which AI risks may manifest. Includes understanding data flows into and out of AI systems, dependencies on external AI models, and the data supply chain for AI tool usage.

Svalin maps every MCP server tool call and AI provider data flow across the organisation — providing a live, queryable inventory of AI data dependencies that satisfies the contextualisation requirement of MAP 2.3. Turns an unknown risk surface into a documented and monitored one.
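A "live, queryable inventory" of this kind can be pictured as an aggregation over observed tool calls. The event shape below is an assumption for illustration, not Svalin's schema:

```python
# Illustrative only: folding observed MCP tool calls into a per-provider
# data-dependency inventory, the kind of live map MAP 2.3 contextualisation
# calls for. Field names are assumptions.
from collections import defaultdict

calls = [
    {"tool": "search_docs", "provider": "anthropic", "categories": ["internal"]},
    {"tool": "summarise",   "provider": "openai",    "categories": ["public"]},
    {"tool": "search_docs", "provider": "anthropic", "categories": ["internal", "pii"]},
]

inventory = defaultdict(set)
for call in calls:
    inventory[call["provider"]].update(call["categories"])

for provider, categories in sorted(inventory.items()):
    print(provider, sorted(categories))
```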

MEASURE 2.5 Effectiveness of AI risk controls — ongoing evaluation

The effectiveness of AI risk management controls is evaluated on a defined frequency and after significant changes. Evidence of control effectiveness is documented and available for review.

Svalin's dashboard provides continuous quantitative evidence of AI governance control effectiveness — policy trigger rates, blocked data transfers, anomalous activity patterns, data category distributions over time. Audit-exportable for periodic MEASURE 2.5 evaluations without manual evidence gathering.
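One of the metrics named above, the policy trigger rate, is simple to define precisely. The event records here are invented for illustration; the dashboard's real data model is not documented on this page:

```python
# Illustrative computation of a policy trigger rate: the share of observed
# transfers on which any governance policy fired. Event shape is assumed.
events = [
    {"policy": "no-pii-egress", "decision": "blocked"},
    {"policy": None,            "decision": "allowed"},
    {"policy": "no-pii-egress", "decision": "blocked"},
    {"policy": None,            "decision": "allowed"},
]

triggered = sum(1 for e in events if e["policy"] is not None)
rate = triggered / len(events)
print(f"policy trigger rate: {rate:.0%}")   # 50%
```

Tracked on a defined frequency, a figure like this is exactly the documented evidence of control effectiveness that MEASURE 2.5 asks to see.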

MANAGE 2.4 Risk treatment documentation and evidence

Risk treatments, including controls implemented to address identified AI risks, are documented. Evidence of their effectiveness is maintained and available for audit and accountability purposes.

Every Svalin policy decision, blocked call, and governance action is logged with full context — user, data categories, AI provider, timestamp, policy applied. Provides the complete MANAGE 2.4 evidence trail: what risk was identified, what control was applied, and proof it functioned as intended.
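The evidence fields listed above (user, data categories, provider, timestamp, policy applied) can be pictured as a single structured record. The structure and function below are hypothetical sketches, not Svalin's actual log format:

```python
# Sketch of one audit-trail record carrying the MANAGE 2.4 evidence fields
# named above. The schema is an illustrative assumption.
import json
from datetime import datetime, timezone


def audit_record(user, provider, categories, policy, decision):
    """Build a structured evidence record for one policy decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "provider": provider,
        "data_categories": sorted(categories),
        "policy_applied": policy,
        "decision": decision,   # e.g. "blocked"
    }


record = audit_record("jane@example.com", "openai", {"pii"}, "no-pii-egress", "blocked")
print(json.dumps(record, indent=2))
```

A record like this answers the three MANAGE 2.4 questions in one place: the risk identified (the data categories), the control applied (the policy), and proof it functioned (the decision and timestamp).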

Build your NIST AI RMF evidence library

See how Svalin continuously produces the Govern, Map, Measure, and Manage evidence NIST AI RMF requires.

Request a Demo