How to Use This Checklist
This is the most comprehensive SB 24-205 compliance checklist available. It covers every requirement in the law, organized into 6 phases from discovery to ongoing operations.
Each item includes the relevant SB 24-205 section, priority level (Critical/High/Medium), and estimated time. Work through the phases in order — later phases depend on earlier ones.
CO-AIMS automates 80% of these items. For each checklist section, we note which items CO-AIMS handles and which require your direct action.
Related: complete SB 24-205 compliance guide · 90-day compliance buildout · risk management policy template
Phase 1: Discovery and Inventory (Week 1)
**1. [CRITICAL] Identify all AI systems in your organization** (§6-1-1701)
List every tool, platform, or system that uses AI, ML, or automated decision-making. Include SaaS tools with AI features.
Estimate: 2-4 hours | CO-AIMS: Guided inventory wizard
**2. [CRITICAL] Classify each system as "high-risk" or "standard"** (§6-1-1701)
High-risk = makes or substantially assists in consequential decisions (employment, financial, healthcare, housing, education, legal, insurance).
Estimate: 1-2 hours | CO-AIMS: Automated classification
**3. [HIGH] Document the purpose and scope of each AI system** (§6-1-1703)
For each high-risk system: what decisions does it make? What data does it use? Who does it affect? How many people per year?
Estimate: 30 min per system | CO-AIMS: System registration form
**4. [HIGH] Identify affected protected classes per system** (§6-1-1701)
For each high-risk system: which protected classes are potentially affected by its decisions?
Estimate: 15 min per system | CO-AIMS: Protected class mapping
**5. [MEDIUM] Map data sources and inputs for each AI system** (§6-1-1703)
Document where training data comes from, what features the AI uses, and whether any inputs could serve as proxies for protected characteristics.
Estimate: 30 min per system | Your action
**6. [MEDIUM] Identify all AI vendors and their developer obligations** (§6-1-1702)
For third-party AI tools: identify the vendor, request bias testing data, and confirm they meet developer duties.
Estimate: 1-2 hours | Your action (CO-AIMS provides vendor request templates)
Phase 2: Policy and Governance (Week 2)
**7. [CRITICAL] Draft AI risk management policy** (§6-1-1703)
Include: risk management objectives, governance structure, risk assessment methodology, bias testing procedures, incident response plan, consumer disclosure processes.
Estimate: 4-8 hours | CO-AIMS: Policy template generator
**8. [CRITICAL] Publish AI risk management policy on your website** (§6-1-1703)
Must be publicly accessible, not buried in legal docs.
Estimate: 30 min | Your action
**9. [HIGH] Designate an AI risk management owner** (NIST AI RMF - Govern)
Assign a specific person or team responsible for AI governance.
Estimate: 1 hour | Your action
**10. [HIGH] Define risk tolerance levels per AI system** (NIST AI RMF - Govern)
What level of bias is acceptable? What metrics will you use? What triggers remediation?
Estimate: 1-2 hours | CO-AIMS: Threshold configuration
**11. [MEDIUM] Establish a cross-functional AI governance committee** (NIST AI RMF - Govern)
Include legal, compliance, IT, HR, and business unit representatives.
Estimate: 2-4 hours | Your action
**12. [MEDIUM] Create AI vendor governance procedures** (§6-1-1702)
Define requirements for AI vendors: bias testing data sharing, documentation, incident notification.
Estimate: 2-4 hours | Your action
Phase 3: Bias Auditing (Weeks 3-4)
**13. [CRITICAL] Conduct initial bias audits on all high-risk AI systems** (§6-1-1703)
Test selection/decision rates across all protected classes. Apply the four-fifths rule and statistical significance testing.
Estimate: 2-4 hours per system | CO-AIMS: Automated auditing
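To make the audit math concrete, here is a minimal sketch of the four-fifths rule and a two-proportion z-test in plain Python. This is an illustrative example, not the CO-AIMS implementation; the group names and counts are hypothetical.

```python
import math

def selection_rate(selected, total):
    """Fraction of applicants in a group who received a favorable decision."""
    return selected / total

def four_fifths_check(rates):
    """Pass/fail per group: a group's rate must be >= 80% of the highest rate."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

def two_prop_z(sel_a, n_a, sel_b, n_b):
    """Two-sided p-value for a two-proportion z-test (pooled standard error)."""
    p_pool = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (sel_a / n_a - sel_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical audit data: (favorable decisions, total applicants) per group
groups = {"group_a": (80, 100), "group_b": (50, 100)}
rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
print(four_fifths_check(rates))      # group_b fails: 0.50 / 0.80 = 0.625 < 0.8
print(two_prop_z(80, 100, 50, 100))  # small p-value: disparity is significant
```

In practice both tests matter: the four-fifths rule catches large rate gaps, while the significance test guards against flagging noise in small samples.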
**14. [CRITICAL] Document all bias audit results** (§6-1-1703)
Record methodology, data used, statistical results, pass/fail determination, and date for each audit.
Estimate: Included in audit | CO-AIMS: Automatic documentation
**15. [HIGH] Investigate and document root causes for any detected bias** (§6-1-1703)
When bias is found: is it training data bias? Proxy variables? Feedback loops? Deployment context?
Estimate: 2-4 hours per finding | CO-AIMS: Root cause analysis guidance
**16. [HIGH] Create remediation plans for detected bias** (§6-1-1703)
Document specific steps to address each bias finding: retrain model, remove features, adjust thresholds, add human review.
Estimate: 1-2 hours per finding | CO-AIMS: Remediation plan generator
**17. [HIGH] Re-test after remediation** (§6-1-1703)
Run bias audit again after implementing fixes. Document before/after comparison.
Estimate: 1-2 hours per system | CO-AIMS: Automated re-testing
**18. [MEDIUM] Establish ongoing audit schedule** (NIST AI RMF - Measure)
Monthly automated checks for high-risk systems. Quarterly full audits. Annual comprehensive review.
Estimate: 1 hour | CO-AIMS: Schedule configuration
**19. [MEDIUM] Configure drift monitoring** (NIST AI RMF - Measure)
Set up alerts for when bias metrics change beyond acceptable thresholds between formal audits.
Estimate: 30 min | CO-AIMS: Automatic drift monitoring
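The drift check described above can be sketched as comparing the current impact ratio against the last formal audit's baseline and flagging when the change exceeds a tolerance. The function names, group labels, and the 0.05 tolerance below are illustrative assumptions, not the CO-AIMS implementation.

```python
def impact_ratio(rates):
    """Lowest group selection rate divided by the highest (adverse impact ratio)."""
    return min(rates.values()) / max(rates.values())

def drift_alert(baseline_rates, current_rates, tolerance=0.05):
    """Flag when the impact ratio moves more than `tolerance` from the audited baseline."""
    baseline = impact_ratio(baseline_rates)
    current = impact_ratio(current_rates)
    return {
        "baseline": baseline,
        "current": current,
        "drifted": abs(current - baseline) > tolerance,
    }

# Hypothetical monthly check between formal audits
baseline = {"group_a": 0.80, "group_b": 0.72}   # last audit: ratio 0.90
current = {"group_a": 0.80, "group_b": 0.60}    # this month: ratio 0.75
print(drift_alert(baseline, current))            # drifted: True
```

A drift alert like this does not replace a formal audit; it simply triggers an early re-audit (item 17) before the next scheduled review.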
Phase 4: Consumer Disclosures (Weeks 4-5)
**20. [CRITICAL] Create pre-decision consumer disclosure notices** (§6-1-1703)
For each high-risk system: plain-language notice that AI will be used, what data is processed, and right to human review.
Estimate: 1-2 hours per system | CO-AIMS: Auto-generated notices
**21. [CRITICAL] Create adverse action notices with AI disclosure** (§6-1-1703)
For each high-risk system: notice explaining AI involvement in unfavorable decisions, contributing factors, and contest process.
Estimate: 1-2 hours per system | CO-AIMS: Auto-generated notices
**22. [CRITICAL] Implement disclosure delivery mechanism** (§6-1-1703)
Integrate notices into your business processes: application forms, decision letters, email workflows, in-app modals.
Estimate: 2-8 hours depending on systems | CO-AIMS: Partial (templates provided)
**23. [HIGH] Create human review request process** (§6-1-1703)
Allow consumers to request human review of AI-driven decisions. Define timeline for response and escalation.
Estimate: 2-4 hours | Your action
**24. [HIGH] Create decision contest/appeal process** (§6-1-1703)
Allow consumers to formally dispute AI-driven decisions. Document outcomes.
Estimate: 2-4 hours | Your action
**25. [MEDIUM] Test disclosure readability** (§6-1-1703)
Ensure notices are clear and conspicuous, written in plain language, not buried in legal documents.
Estimate: 1-2 hours | Your action
Phase 5: Incident Response and AG Notification (Weeks 5-6)
**26. [CRITICAL] Create AG notification procedure** (§6-1-1703)
Document the process for notifying the Colorado AG within 90 days of discovering algorithmic discrimination.
Estimate: 2-4 hours | CO-AIMS: Template and timeline tracking
**27. [CRITICAL] Define "algorithmic discrimination discovery" trigger** (§6-1-1703)
What constitutes "discovery"? When bias audit detects adverse impact? When a consumer complains? When a vendor notifies you? Document your definition.
Estimate: 1-2 hours | Your action (CO-AIMS: guidance provided)
**28. [HIGH] Create incident response playbook** (NIST AI RMF - Manage)
Step-by-step procedure: detection → investigation → containment → notification → remediation → documentation.
Estimate: 4-8 hours | CO-AIMS: Playbook template
**29. [HIGH] Assign incident response roles** (NIST AI RMF - Manage)
Who investigates? Who authorizes AG notification? Who implements remediation? Who communicates externally?
Estimate: 1-2 hours | Your action
**30. [MEDIUM] Establish internal reporting mechanism** (NIST AI RMF - Manage)
Allow employees to report suspected AI bias or incidents confidentially.
Estimate: 2-4 hours | Your action
**31. [MEDIUM] Create consumer complaint tracking** (§6-1-1703)
Track all consumer complaints related to AI decisions. Each complaint may trigger investigation obligations.
Estimate: 2-4 hours | CO-AIMS: Complaint tracking
Phase 6: Documentation and Ongoing Operations (Week 6+)
**32. [CRITICAL] Complete impact assessments for each high-risk AI system** (§6-1-1703)
Document purpose, data inputs, stakeholders affected, risk level, bias audit results, and mitigation measures.
Estimate: 2-4 hours per system | CO-AIMS: Impact assessment generator
**33. [CRITICAL] Generate first evidence bundle** (§6-1-1705)
Package your governance policy, bias audit results, consumer disclosures, impact assessments, and NIST AI RMF mapping into a single compliance evidence package.
Estimate: 2-4 hours manually | CO-AIMS: One-click generation
**34. [CRITICAL] Establish 3-year record retention** (§6-1-1703)
All compliance records must be maintained for at least 3 years after the AI system is last deployed.
Estimate: 1-2 hours | CO-AIMS: Automatic retention
**35-41. [HIGH] Map compliance activities to NIST AI RMF functions** (§6-1-1705)
Govern: policies, roles, culture (items 7-12)
Map: inventory, classification, data provenance (items 1-6)
Measure: bias audits, metrics, monitoring (items 13-19)
Manage: remediation, incidents, documentation (items 26-34)
Estimate: 2-4 hours | CO-AIMS: Automatic mapping
**42. [HIGH] Schedule annual impact assessment updates** (§6-1-1703)
Impact assessments must be reviewed and updated at least annually.
Estimate: 1-2 hours per system annually | CO-AIMS: Renewal reminders
**43-47. [MEDIUM] Ongoing operational items:**
43. Monthly automated bias checks on all high-risk systems
44. Quarterly full audit reviews with documented results
45. Annual comprehensive governance program review
46. Ongoing employee training on AI governance responsibilities
47. Continuous vendor compliance monitoring
Estimate: 2-4 hours/month | CO-AIMS: Automates items 43-45
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.