TL;DR: The NIST AI Risk Management Framework provides a structured approach to AI governance through four functions: Govern, Map, Measure, and Manage. Here's what each means in practice.
The National Institute of Standards and Technology released the AI RMF in January 2023. Unlike prescriptive regulations, it's a voluntary framework—but it's rapidly becoming the baseline that regulators, customers, and partners expect.
The Four Functions
```mermaid
flowchart TB
    subgraph GOVERN["GOVERN"]
        G1[Policies]
        G2[Roles]
        G3[Culture]
    end
    subgraph MAP["MAP"]
        M1[Context]
        M2[Risks]
        M3[Impacts]
    end
    subgraph MEASURE["MEASURE"]
        ME1[Metrics]
        ME2[Testing]
        ME3[Monitoring]
    end
    subgraph MANAGE["MANAGE"]
        MA1[Mitigate]
        MA2[Document]
        MA3[Communicate]
    end
    GOVERN --> MAP --> MEASURE --> MANAGE --> GOVERN
    style GOVERN fill:#10b98115,stroke:#10b981
    style MAP fill:#3b82f615,stroke:#3b82f6
    style MEASURE fill:#a855f715,stroke:#a855f7
    style MANAGE fill:#f59e0b15,stroke:#f59e0b
```
1. GOVERN: Establish the Foundation
The Govern function creates the organizational infrastructure for AI risk management. This isn't about technology—it's about people and processes.
- Risk tolerance thresholds
- Roles and responsibilities
- Escalation procedures
- AI governance committee
- Risk owners per system
- Cross-functional review board
Key output: A written AI governance policy that defines how your organization approaches AI risk.
2. MAP: Understand Your AI Landscape
Mapping is about understanding what AI you have, where it's used, and what could go wrong. You can't manage risks you haven't identified.
The AI Inventory
| Field | Question | Example |
|---|---|---|
| Purpose | What decision or task does it support? | Customer churn prediction |
| Data | What data does it use? Where from? | CRM data, purchase history |
| Stakeholders | Who's affected by its outputs? | Sales team, customers |
| Integration | How is it integrated? | Salesforce webhook |
| Autonomy | Does it recommend, decide, or act? | Recommend (human approves) |
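One lightweight way to keep this inventory machine-readable is a simple record type. The sketch below mirrors the table's fields; the class name and example values are illustrative, not part of the framework:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI inventory, mirroring the table above."""
    name: str
    purpose: str             # What decision or task does it support?
    data_sources: list       # What data does it use? Where from?
    stakeholders: list       # Who's affected by its outputs?
    integration: str         # How is it integrated?
    autonomy: str            # "recommend", "decide", or "act"

churn_model = AISystemRecord(
    name="churn-predictor-v2",
    purpose="Customer churn prediction",
    data_sources=["CRM data", "purchase history"],
    stakeholders=["Sales team", "customers"],
    integration="Salesforce webhook",
    autonomy="recommend",    # human approves before any action
)
```

Even a flat list of these records answers the most common audit question: "what AI do you run, and who does it affect?"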
Risk Identification
For each inventoried system, work through what could go wrong and who bears the impact: inaccurate or biased outputs, data exposure, or over-reliance on automation. Rate likelihood and severity so you can prioritize.
Key output: An AI system inventory with risk assessments for each system.
3. MEASURE: Quantify the Risks
Measurement is where most organizations struggle. NIST emphasizes both pre-deployment testing and ongoing monitoring.
Pre-Deployment Testing
Before any AI system goes live:
- Bias testing across protected classes
- Performance testing on edge cases
- Adversarial robustness testing
- Explainability verification
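As an illustration of the first item, bias testing can start as simply as comparing positive-outcome rates across groups, a demographic parity check. This is one common metric among several, and any acceptable gap is a policy choice, not a NIST-prescribed value:

```python
from collections import defaultdict

def positive_rate_by_group(predictions):
    """predictions: list of (group_label, predicted_positive: bool)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, positive in predictions:
        counts[group][0] += int(positive)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rate_by_group(predictions)
    return max(rates.values()) - min(rates.values())

# Toy example: group A approved 50% of the time, group B only 25%.
preds = [("A", True), ("A", True), ("A", False), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
gap = parity_gap(preds)  # 0.50 - 0.25 = 0.25
```

A gap like this doesn't prove discrimination, but it is exactly the kind of pre-deployment signal that should trigger review before go-live.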
Ongoing Monitoring
Once deployed:
- Model drift detection
- Output distribution monitoring
- Feedback loop analysis
- Incident tracking
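For the first two items, one widely used drift metric is the Population Stability Index (PSI), which compares the binned distribution of model inputs or scores at training time against live traffic. A minimal sketch (the bin counts and thresholds are illustrative):

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected_counts: per-bin counts from the training baseline.
    actual_counts:   per-bin counts from live traffic.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drifted.
    """
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)  # guard against empty bins
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [100, 100, 100, 100]  # training-time score distribution
live     = [100, 100, 100, 100]  # identical distribution -> PSI of 0.0
drifted  = [250, 50, 50, 50]     # mass shifted into the first bin
```

Running `psi(baseline, drifted)` lands well above 0.25, the kind of signal that should feed the incident tracking and escalation procedures defined under Govern.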
Key output: Metrics dashboards and testing reports for each AI system.
4. MANAGE: Act on What You Learn
Management is the action phase—taking what you've learned from measurement and doing something about it.
Risk Mitigation Ladder
- **Immediate response:** Can you adjust thresholds or add guardrails without retraining?
- **Short-term fix:** Can you retrain or fine-tune the model with better data?
- **Long-term solution:** Should you redesign the system architecture entirely?
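The first rung can often be implemented as a thin wrapper around the existing model: no retraining, just a confidence gate that routes uncertain cases to human review. The threshold, function names, and model stub below are illustrative:

```python
def with_confidence_gate(predict, threshold=0.8):
    """Wrap a model's predict(x) -> (label, confidence) so that
    low-confidence outputs are escalated instead of acted on."""
    def gated(x):
        label, confidence = predict(x)
        if confidence < threshold:
            return {"decision": "escalate_to_human", "confidence": confidence}
        return {"decision": label, "confidence": confidence}
    return gated

# Hypothetical model stub, for illustration only:
def toy_model(x):
    return ("churn_risk", 0.65 if x == "ambiguous" else 0.95)

gated = with_confidence_gate(toy_model)
clear_case = gated("clear")          # confident -> acted on automatically
edge_case  = gated("ambiguous")      # 0.65 < 0.8 -> escalated to a human
```

The appeal of this rung is speed: a guardrail like this can ship in hours while a retrain or redesign is still being scoped.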
Documentation Requirements
Every risk decision should be documented:
| Field | Purpose |
|---|---|
| Risk identified | What triggered this action? |
| Action taken | What did you do about it? |
| Approver | Who authorized this response? |
| Outcome | Did it work? What was the impact? |
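In practice, the table above can live as an append-only log. A minimal sketch, with field names mirroring the table (the helper and JSON-lines serialization are one convenient choice, not a mandated format):

```python
import json
from datetime import datetime, timezone

def log_risk_decision(register, risk_identified, action_taken,
                      approver, outcome=None):
    """Append one documented risk decision to an in-memory register.
    In production this would be an append-only store, not a list."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "risk_identified": risk_identified,  # What triggered this action?
        "action_taken": action_taken,        # What did you do about it?
        "approver": approver,                # Who authorized this response?
        "outcome": outcome,                  # Did it work? (filled in later)
    }
    register.append(entry)
    return json.dumps(entry)  # one JSON line, ready for a log file

register = []
log_risk_decision(
    register,
    risk_identified="Drift alert on churn model",
    action_taken="Raised confidence threshold; scheduled retrain",
    approver="AI governance committee",
)
```

Leaving `outcome` empty at write time and filling it in later keeps the register honest: it records what you knew when you acted, then what actually happened.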
Key output: Risk registers, incident reports, and stakeholder communications.
Implementation Roadmap
**Week 1: Draft an AI governance policy.** Define your organization's AI risk tolerance and escalation procedures.
**Month 1: Inventory your AI systems.** Catalog every AI system with purpose, data, stakeholders, and autonomy level.
**Quarter 1: Implement monitoring for high-risk systems.** Deploy observability infrastructure for your highest-risk AI systems.
**Year 1: Full RMF implementation.** Extend governance, mapping, measurement, and management across all AI systems.
Why This Framework Works
NIST AI RMF is a framework, not a checklist. The goal isn't compliance—it's building a culture and infrastructure that manages AI risk continuously. Start with Govern, then Map, then Measure, then Manage. Repeat.
Empress implements the MEASURE function automatically. Every AI decision is logged, analyzed, and tracked—giving you the continuous measurement infrastructure the NIST framework calls for, without building it yourself.