Beyond the 4 Functions: AI Trustworthiness Characteristics
While the NIST AI RMF's 4 core functions (Govern, Map, Measure, Manage) describe *what to do*, the framework also defines **trustworthiness characteristics** that describe *what to aim for*. These are the qualities that a well-governed AI system should demonstrate.
The AI RMF identifies **7 key characteristics** of trustworthy AI. Each one addresses a different dimension of risk — and each maps to specific SB 24-205 requirements.
1. Valid and Reliable
**What it means:** The AI system performs accurately and consistently for its intended purpose, across different conditions and over time.
**Key considerations:**
- Does the AI produce accurate results for all user groups?
- Is performance consistent across operating conditions?
- How does the system handle edge cases and out-of-distribution inputs?
- Is performance monitored for drift over time?
**SB 24-205 connection:** Validity underpins everything. An AI system that isn't reliably accurate can't be fairly evaluated for bias. Ongoing performance monitoring is a compliance requirement — not optional.
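Monitoring for drift, as the checklist above asks, can start very simply: compare the distribution of live model scores against a deployment-time baseline. A minimal sketch using the Population Stability Index (the function name, bin count, and 0.25 alarm threshold are common conventions, not part of the AI RMF itself):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; PSI above ~0.25 is a common drift alarm."""
    # Bin both samples on the same edges, derived from the baseline sample.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # scores captured at deployment
shifted  = rng.normal(0.5, 1.0, 5000)   # scores observed months later
psi = population_stability_index(baseline, shifted)
```

Running this on a schedule and alerting when the index crosses your documented threshold turns "monitor for drift" from an aspiration into evidence.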
2. Safe
**What it means:** The AI system does not pose unreasonable risk to human life, health, property, or the environment.
**Key considerations:**
- What is the worst-case scenario if the AI fails?
- Are there safeguards (fallbacks, human override) for high-stakes decisions?
- Has the system been tested for failure modes?
- Are safety-critical decisions documented and auditable?
**SB 24-205 connection:** Safety is especially critical for healthcare AI, autonomous systems, and any AI where failure affects physical wellbeing. High-risk AI systems demand stronger governance documentation.
3. Secure and Resilient
**What it means:** The AI system is protected against adversarial attacks, data poisoning, model theft, and other security threats — and can recover from disruptions.
**Key considerations:**
- Is the AI model protected from adversarial inputs designed to manipulate outputs?
- Is training data protected from tampering (data poisoning)?
- Are there access controls on model parameters and training pipelines?
- Can the system recover from failures without producing harmful outputs?
**SB 24-205 connection:** While SB 24-205 focuses on bias and discrimination, a compromised AI system can produce biased outputs through adversarial manipulation. Security is foundational to compliance integrity.
4. Accountable and Transparent
**What it means:** The AI system's decision-making process can be understood, explained, and attributed to responsible parties.
**Key considerations:**
- Can you explain why the AI made a specific decision?
- Is there a clear chain of accountability (who built it, who deployed it, who maintains it)?
- Are AI-driven decisions logged and auditable?
- Can affected individuals understand how the AI influenced their outcome?
**SB 24-205 connection:** This maps directly to the consumer disclosure requirement. When AI makes a consequential decision, the affected person must be told (1) that AI was involved, (2) what type of information was used, and (3) how to contest the decision. Transparency isn't optional — it's the law.
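The logging and accountability points above can be sketched as an append-only decision record, where each entry carries a hash of the previous one so tampering is detectable. The schema and field names here are illustrative assumptions, not statutory terms:

```python
import json, hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry per consequential AI decision (illustrative schema)."""
    system_id: str     # which model and version made the decision
    subject_ref: str   # pseudonymous reference to the affected person
    decision: str      # outcome communicated to the consumer
    inputs_used: list  # categories of data considered, per the disclosure duty
    timestamp: str
    prev_hash: str     # hash chain links each record to the one before it

    def digest(self) -> str:
        return hashlib.sha256(
            json.dumps(asdict(self), sort_keys=True).encode()
        ).hexdigest()

log = []
prev = "0" * 64  # genesis value for the first record
for outcome in ("approved", "denied"):
    rec = DecisionRecord(
        system_id="credit-model-v3",
        subject_ref="subj-01",
        decision=outcome,
        inputs_used=["income", "payment_history"],
        timestamp=datetime.now(timezone.utc).isoformat(),
        prev_hash=prev,
    )
    log.append(rec)
    prev = rec.digest()
```

Because each record embeds the previous record's digest, altering any past entry breaks the chain, which is exactly the property an auditor wants.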
5. Explainable and Interpretable
**What it means:** The AI system's behavior can be understood by humans at the appropriate level of detail.
**Key considerations:**
- Can the model's logic be described in human-understandable terms?
- Are feature importances available for individual decisions?
- Can you identify which inputs most influenced a specific outcome?
- Is there a difference between global interpretability (how the model works overall) and local interpretability (why this specific decision was made)?
**SB 24-205 connection:** When a consumer disputes an AI-driven decision, you need to explain what happened. "The algorithm decided" is not a defensible answer. Explainability tools and documentation are essential for incident response and AG notification.
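One simple way to answer "which inputs most influenced this outcome" is a perturbation test: remove each feature in turn and measure how far the score moves. The toy linear scorer below stands in for any model; this is a sketch of the idea, not a production explainer like SHAP or LIME, and the weights are made up:

```python
def score(features: dict) -> float:
    # Stand-in model: a transparent linear scorer (illustrative weights).
    weights = {"income": 0.5, "debt_ratio": -0.8, "tenure_years": 0.2}
    return sum(weights[k] * v for k, v in features.items())

def local_attribution(features: dict) -> dict:
    """Influence of each feature on THIS decision: score change when it is zeroed."""
    base = score(features)
    return {k: base - score({**features, k: 0.0}) for k in features}

applicant = {"income": 1.2, "debt_ratio": 0.9, "tenure_years": 3.0}
influence = local_attribution(applicant)
top = max(influence, key=lambda k: abs(influence[k]))  # most influential input
```

This is local interpretability in the sense used above: it explains one decision, not the model overall, which is precisely what a consumer dispute requires.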
6. Privacy-Enhanced
**What it means:** The AI system respects individual privacy and protects personal data throughout the AI lifecycle — from training data collection through deployment.
**Key considerations:**
- Does the training data contain personally identifiable information (PII)?
- Are privacy-preserving techniques used (differential privacy, federated learning, data anonymization)?
- Does the AI system infer sensitive information that wasn't explicitly provided?
- How long is personal data retained, and who has access?
**SB 24-205 connection:** AI systems often process sensitive personal data to make consequential decisions. Privacy protections overlap with bias prevention — if demographic data is collected for bias auditing, it must be protected. CO-AIMS handles this balance automatically.
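Of the privacy-preserving techniques listed, differential privacy is the easiest to sketch: add Laplace noise, calibrated to the query's sensitivity and a privacy budget epsilon, to an aggregate before releasing it. The epsilon value and dataset below are illustrative:

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Sensitivity of a counting query is 1: adding or removing one person
    changes the true count by at most 1, so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    rng = np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 41, 29, 55, 62, 38, 47]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)  # true count is 4
```

Smaller epsilon means stronger privacy and noisier answers; that knob is itself a tradeoff worth documenting.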
7. Fair — With Harmful Bias Managed
**What it means:** The AI system is designed to promote fairness and not perpetuate harmful bias against individuals or groups.
**Key considerations:**
- Has the AI been tested for bias across all protected classes?
- Are disparate impact levels within acceptable thresholds (four-fifths rule)?
- Is there ongoing monitoring for emerging bias patterns?
- Are remediation processes in place when bias is detected?
- Is fairness defined and documented for each AI system?
**SB 24-205 connection:** This is the characteristic most directly addressed by Colorado law. SB 24-205's entire enforcement mechanism — bias audits, AG notification, consumer disclosures, affirmative defense — exists to ensure AI fairness. Every other characteristic supports this one.
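The four-fifths rule from the checklist above is straightforward to compute: each group's selection rate, divided by the highest group's rate, should stay at or above 0.8. The group labels and counts below are illustrative:

```python
def adverse_impact_ratios(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate / best group's rate}.

    A ratio below 0.8 flags potential disparate impact under the
    four-fifths rule; it is a screening signal, not a legal conclusion.
    """
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

results = adverse_impact_ratios({
    "group_a": (60, 100),   # 60% selected
    "group_b": (42, 100),   # 42% selected
})
flagged = [g for g, r in results.items() if r < 0.8]  # groups below four-fifths
```

Here group_b's ratio is 0.42 / 0.60 = 0.70, below the 0.8 threshold, so it would be flagged for investigation and documented remediation.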
How These Characteristics Connect
The 7 characteristics aren't independent — they interact, and are sometimes in tension with each other:
- **Fairness vs. Privacy:** Bias testing often requires demographic data, which creates privacy concerns
- **Explainability vs. Accuracy:** Simpler, more explainable models may sacrifice some predictive power
- **Security vs. Transparency:** Full transparency about model architecture can create attack vectors
- **Safety vs. Innovation:** Safety constraints may slow deployment of beneficial AI
The NIST AI RMF acknowledges these tradeoffs and asks organizations to document their decisions. There's no single "right" balance — but there must be a *documented* balance.
CO-AIMS helps you navigate these tradeoffs by automating the documentation, tracking your decisions, and generating evidence that demonstrates considered, reasonable governance across all 7 characteristics.
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.