The 4 Functions That Structure AI Governance
The NIST AI Risk Management Framework (AI RMF 1.0) organizes AI governance around **four core functions**: Govern, Map, Measure, and Manage. Together, they form a comprehensive lifecycle approach to AI risk — from policy creation through continuous monitoring.
For Colorado businesses, these aren't academic categories. SB 24-205 grants an affirmative defense to businesses that can demonstrate alignment with the NIST AI RMF. Each function you implement is a layer of legal protection. Skip one, and you have a gap in your defense.
Function 1: GOVERN — Establish the Foundation
**Purpose:** Create the organizational structures, policies, and culture that enable responsible AI.
GOVERN is the only function that cuts across the other three — it sets the rules of the game. Without governance, the other functions have no authority or accountability.
**Key activities:**
- **Policies and procedures:** Document your AI risk management policy (SB 24-205 requires this to be publicly available on your website)
- **Roles and responsibilities:** Designate who owns AI risk in your organization — a compliance officer, CTO, or cross-functional committee
- **Risk tolerance:** Define how much AI risk your organization will accept and where the red lines are
- **Compliance mapping:** Track which regulations apply to which AI systems (SB 24-205, NIST, ISO 42001, industry-specific rules)
- **Training and awareness:** Ensure teams understand their AI governance obligations
- **Third-party management:** Govern AI risk from vendors and partners, not just internal systems
**SB 24-205 connection:** Your published AI risk management policy, AG notification procedures, and consumer disclosure processes all live in GOVERN.
Function 2: MAP — Contextualize the Risks
**Purpose:** Identify, categorize, and contextualize AI systems and their associated risks.
MAP is your reconnaissance function. You can't manage risks you haven't identified, and you can't assess impact without understanding context.
**Key activities:**
- **AI system inventory:** Catalog every AI system in your organization, including "hidden" AI in SaaS tools
- **Intended purpose:** Document what each AI system is designed to do
- **Stakeholder mapping:** Identify who is affected by each AI system's decisions
- **Risk classification:** Determine which systems make "consequential decisions" (SB 24-205's trigger)
- **Data provenance:** Track where your training data comes from and what biases it may contain
- **Foreseeable misuse:** Document how each system could be misused or produce unintended harm
**SB 24-205 connection:** Your AI system inventory, impact assessments, and consumer disclosure triggers all come from MAP.
Function 3: MEASURE — Quantify and Test
**Purpose:** Assess, analyze, and track AI risks using quantitative and qualitative methods.
MEASURE is where bias audits, performance testing, and fairness metrics live. This is the function most directly tied to SB 24-205's audit requirements.
**Key activities:**
- **Bias audits:** Test AI decisions across protected classes using disparate impact analysis (four-fifths rule) and statistical significance testing
- **Performance metrics:** Track accuracy, precision, recall, and fairness metrics over time
- **Drift monitoring:** Detect when AI performance degrades or bias levels change as new data flows in
- **Red-teaming:** Adversarial testing to expose edge cases and failure modes
- **Third-party validation:** Independent testing to verify internal audit results
- **Benchmarking:** Compare your AI's performance against industry standards
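The four-fifths rule mentioned above reduces to simple arithmetic: divide each group's selection rate by the highest group's rate, and flag any ratio below 0.8 as potential disparate impact. A minimal sketch in Python — the group names and approval counts here are hypothetical, and a real audit would pair this with statistical significance testing:

```python
# Minimal four-fifths (80%) rule check. Assumes you already have
# favorable-outcome counts per group; names and numbers are hypothetical.

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict:
    """Compare each group's selection rate to the highest-rate group.

    groups maps group name -> (selected, total applicants).
    Returns each group's rate, impact ratio, and pass/fail vs. 0.8.
    """
    rates = {g: selected / total for g, (selected, total) in groups.items()}
    reference = max(rates.values())  # highest selection rate is the baseline
    return {
        g: {
            "rate": round(r, 3),
            "impact_ratio": round(r / reference, 3),
            "passes": r / reference >= 0.8,
        }
        for g, r in rates.items()
    }

# Hypothetical loan-approval counts: (approved, applications)
result = four_fifths_check({
    "group_a": (80, 100),  # 80% approval rate (reference group)
    "group_b": (60, 100),  # 60% rate -> impact ratio 0.75, fails threshold
})
print(result)
```

A failing ratio doesn't prove discrimination on its own — small samples can swing the ratio — which is why the list above pairs disparate impact analysis with statistical significance testing before triggering remediation.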
**SB 24-205 connection:** Bias audit results, four-fifths rule calculations, and remediation triggers all come from MEASURE. This is the function that generates the evidence you need to claim an affirmative defense.
Function 4: MANAGE — Respond and Improve
**Purpose:** Allocate resources, implement controls, and continuously improve AI risk posture.
MANAGE is the action function. Once MEASURE identifies risks, MANAGE handles them.
**Key activities:**
- **Risk mitigation:** Implement controls to reduce identified risks (retraining models, adding human review, adjusting thresholds)
- **Incident response:** Respond to algorithmic discrimination incidents within the 90-day AG notification window
- **Remediation tracking:** Document what you did to fix identified bias and the results of re-testing
- **Resource allocation:** Assign budget, people, and tools to AI risk management
- **Documentation and evidence:** Maintain the compliance evidence trail (SB 24-205 requires 3+ years)
- **Continuous improvement:** Feed lessons learned back into GOVERN, MAP, and MEASURE
**SB 24-205 connection:** AG notifications, remediation documentation, evidence bundles, and your 3-year audit trail all live in MANAGE.
How the 4 Functions Work Together
The four functions aren't sequential — they're interconnected and cyclical:
**GOVERN** sets the rules → **MAP** identifies what's at risk → **MEASURE** quantifies the risk → **MANAGE** responds to findings → insights feed back to **GOVERN** to update policies.
This cycle runs continuously. As your AI systems change, new risks emerge, and regulations evolve, the cycle repeats. The businesses with the strongest legal defense are those that can demonstrate this cycle is ongoing — not a one-time exercise.
CO-AIMS maps every compliance activity to the appropriate AI RMF function automatically. When you run a bias audit, it's tagged as MEASURE. When you generate a consumer disclosure, it's tagged as MAP + GOVERN. Your evidence bundle shows exactly where you stand in each function — a complete picture of AI RMF alignment that regulators and courts expect.
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.