NIST AI Risk Management Framework
Voluntary framework published by the US National Institute of Standards and Technology, structured around four functions: Govern, Map, Measure, Manage. Widely adopted internationally — particularly by organisations with US operations, US enterprise customers, or those aligning to SOC 2. Complements ISO 42001 and EU AI Act obligations rather than replacing them.
Organisational policies, processes, procedures, and practices are in place to address AI risk. Accountability and oversight mechanisms are defined and documented for all AI systems in use, including commercial AI tools used by employees.
How Svalin addresses it
Svalin's policy engine provides the technical implementation of AI governance policies — defining which data categories can flow through which AI tool connections, with a full audit trail of every policy decision and change. Demonstrates that governance policies are operationally active, not just documented.
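As an illustration of the shape such a category-to-connection rule might take (the rule format, connection names, and category labels here are hypothetical assumptions for the sketch, not Svalin's actual configuration or API):

```python
from dataclasses import dataclass

# Hypothetical sketch only: Svalin's real policy engine and configuration
# format are not public. This illustrates the general idea of a rule that
# says which data categories may flow to which AI tool connection.

@dataclass(frozen=True)
class PolicyRule:
    connection: str                      # assumed AI provider connection name
    allowed_categories: frozenset[str]   # data categories permitted to flow

# Assumed example rules: chat connection may receive public and internal
# data; the API connection only public data.
RULES = {
    "openai-chat": PolicyRule("openai-chat", frozenset({"public", "internal"})),
    "anthropic-api": PolicyRule("anthropic-api", frozenset({"public"})),
}

def is_allowed(connection: str, data_categories: set[str]) -> bool:
    """Permit a transfer only if every detected data category is allowed
    for the target connection; unknown connections are denied by default."""
    rule = RULES.get(connection)
    if rule is None:
        return False
    return data_categories <= rule.allowed_categories
```

A default-deny posture like this is what lets a policy decision double as audit evidence: every transfer is either matched by a documented rule or blocked.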
Policies, processes, procedures, and practices are in place for mapping, measuring, and managing risks associated with third-party AI systems and AI supply chains, including commercial AI providers receiving organisational data.
How Svalin addresses it
Svalin continuously monitors what data is transferred to each third-party AI provider — Anthropic, Google, OpenAI — providing the ongoing evidence of third-party AI risk management that GOVERN 6.1 requires. Turns a static vendor register into a live operational control.
Scientific findings and organisational factors are used to identify contexts in which AI risks may manifest. Includes understanding data flows into and out of AI systems, dependencies on external AI models, and the data supply chain for AI tool usage.
How Svalin addresses it
Svalin maps every MCP server tool call and AI provider data flow across the organisation — providing a live, queryable inventory of AI data dependencies that satisfies the contextualisation requirement of MAP 2.3. Turns an unknown risk surface into a documented and monitored one.
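To make the idea of a "queryable inventory" concrete, here is a minimal sketch of rolling per-call events up into provider-by-category counts. The event fields and values are assumptions for illustration, not Svalin's actual telemetry schema:

```python
from collections import Counter

# Hypothetical sketch only: assumed event shape with a provider, the tool
# invoked, and the data categories observed in the call.
events = [
    {"provider": "anthropic", "tool": "search_docs", "categories": ["internal"]},
    {"provider": "openai", "tool": "summarise", "categories": ["public"]},
    {"provider": "anthropic", "tool": "search_docs", "categories": ["public"]},
]

def inventory(events: list[dict]) -> Counter:
    """Count (provider, data category) pairs across observed tool calls,
    yielding a queryable summary of where each category of data flows."""
    counts: Counter = Counter()
    for event in events:
        for category in event["categories"]:
            counts[(event["provider"], category)] += 1
    return counts
```

However the events are actually collected, an aggregation of this shape is what turns raw tool-call logs into a data-dependency inventory that can be queried per provider or per data category.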
The effectiveness of AI risk management controls is evaluated on a defined frequency and after significant changes. Evidence of control effectiveness is documented and available for review.
How Svalin addresses it
Svalin's dashboard provides continuous quantitative evidence of AI governance control effectiveness — policy trigger rates, blocked data transfers, anomalous activity patterns, data category distributions over time. Audit-exportable for periodic MEASURE 2.5 evaluations without manual evidence gathering.
Risk treatments, including controls implemented to address identified AI risks, are documented. Evidence of their effectiveness is maintained and available for audit and accountability purposes.
How Svalin addresses it
Every Svalin policy decision, blocked call, and governance action is logged with full context — user, data categories, AI provider, timestamp, policy applied. Provides the complete MANAGE 2.4 evidence trail: what risk was identified, what control was applied, and proof it functioned as intended.
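A record carrying that context might look like the following sketch. The field names are assumptions derived from the evidence listed above (user, data categories, AI provider, timestamp, policy applied), not Svalin's actual log schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch only: illustrates one policy decision serialised as an
# append-only JSON log line, with the contextual fields an auditor needs.

def audit_record(user: str, provider: str, categories: list[str],
                 policy: str, action: str) -> str:
    """Serialise a single governance decision as one JSON log line."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "ai_provider": provider,
        "data_categories": sorted(categories),
        "policy_applied": policy,
        "action": action,  # e.g. "allowed" or "blocked"
    })
```

Structured entries of this kind are what allow the trail to answer the three MANAGE 2.4 questions directly: which risk was identified (data categories and provider), which control applied (policy), and how it functioned (action).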
Build your NIST AI RMF evidence library
See how Svalin continuously produces the Govern, Map, Measure, and Manage evidence NIST AI RMF requires.
Request a Demo