Why We Annotated the Full Act
SB 24-205 is 20+ pages of legal language. Most businesses don't have the time or legal background to parse it. So we went through the entire Act, section by section, and translated every provision into plain English.
This isn't legal advice. It's a practical business translation. For every section, we explain: what it says, what it means for your business, and what you need to do about it.
Related: our compliance guide · penalty calculator · 7-step compliance checklist
Part 1: Definitions (§6-1-1701)
**"Algorithmic discrimination"** — When an AI system contributes to unjustified differential treatment based on protected class membership.
→ *Plain English: If your AI treats people differently because of their race, gender, age, disability, or other protected characteristic — even unintentionally — that's algorithmic discrimination under this law.*
**"Artificial intelligence system"** — A machine-based system that, for explicit or implicit objectives, infers from input data how to generate outputs such as predictions, content, recommendations, or decisions.
→ *Plain English: Any software that uses data to make predictions, recommendations, or decisions. This is extremely broad — it covers everything from ChatGPT to your CRM's lead scoring algorithm.*
**"Consequential decision"** — A decision that has a material legal or similarly significant effect on the provision or denial of: education, employment, financial services, government services, healthcare, housing, insurance, or legal services.
→ *Plain English: The Big 8 — if your AI touches any of these domains and the decision actually matters to the person affected, it's "consequential" and the full weight of the law applies.*
**"Developer"** — A person doing business in Colorado that develops, intentionally and substantially modifies, or otherwise makes available an AI system.
→ *Plain English: If you build AI tools that others use, you're a developer. Even if you don't deploy the AI directly, you have obligations.*
**"Deployer"** — A person doing business in Colorado that deploys an AI system.
→ *Plain English: If you use AI tools in your business operations for consequential decisions, you're a deployer.*
**"High-risk AI system"** — An AI system that makes or is a substantial factor in making a consequential decision.
→ *Plain English: If the AI's output materially drives a consequential decision — even if a human "reviews" it — it's high-risk. The human-in-the-loop defense only works if the human regularly overrides the AI.*
Part 2: Developer Duties (§6-1-1702)
**What developers must do:**
1. **Make available documentation** describing the AI system's high-level capabilities, known limitations, and intended uses.
→ *Action: Create a comprehensive model card or system documentation for every AI product you sell or license.*
2. **Make available results of bias testing** conducted on the AI system.
→ *Action: Run bias audits and share the results with your customers (deployers); they need your testing data for their own compliance. One common audit metric is sketched after this list.*
3. **Provide deployers with information** necessary to complete impact assessments.
→ *Action: Give your customers the data they need to understand how your AI works, what data it uses, and what risks exist.*
4. **Publish on your website** a statement describing the types of high-risk AI systems you develop and how you manage associated risks.
→ *Action: Add a public AI risk management policy to your website. This is not optional — it's a publishing requirement.*
5. **Report known or reasonably foreseeable risks** of algorithmic discrimination to deployers and the Attorney General.
→ *Action: If you discover your AI has bias issues, you must proactively notify your customers AND the Colorado AG.*
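To make item 2 concrete, here is a minimal Python sketch of one widely used audit metric: the adverse impact ratio behind the "four-fifths rule." SB 24-205 does not prescribe any particular test, and the group labels, sample data, and 0.8 threshold below are illustrative conventions, not statutory requirements.

```python
# Minimal sketch of one common bias-audit metric (the adverse impact ratio).
# The Act does not mandate a specific test; group labels and data are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected: bool) tuples -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions):
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are a conventional red flag worth investigating."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

if __name__ == "__main__":
    sample = (
        [("group_a", True)] * 60 + [("group_a", False)] * 40
        + [("group_b", True)] * 35 + [("group_b", False)] * 65
    )
    print(adverse_impact_ratios(sample))  # group_b ratio = 0.35 / 0.60 ≈ 0.58
```

A ratio below the conventional 0.8 line does not automatically mean a violation, but it is exactly the kind of result a developer would need to surface to deployers under item 2.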
Part 3: Deployer Duties (§6-1-1703)
**What deployers must do:**
1. **Implement a risk management policy and program** using reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
→ *Action: Create and publish your AI risk management policy. CO-AIMS provides templates.*
2. **Complete an impact assessment** for each high-risk AI system before deployment (and annually thereafter).
→ *Action: Document what each AI system does, who it affects, what data it uses, and what risks exist; update annually. One way to structure that record is sketched after this list.*
3. **Notify consumers** when AI is used in consequential decisions (pre-decision notice) and when adverse decisions are made (post-decision notice).
→ *Action: Implement consumer disclosure notices. CO-AIMS generates compliant notices automatically.*
4. **Provide consumers the opportunity to contest** adverse AI decisions.
→ *Action: Create a process for consumers to appeal AI-driven decisions and request human review.*
5. **Notify the Attorney General** within 90 days of discovering algorithmic discrimination.
→ *Action: Implement an incident response procedure with a 90-day clock (deadline math for this and for the item 6 retention window is sketched after this list). CO-AIMS tracks the timeline automatically.*
6. **Maintain records** for at least 3 years after the AI system is last deployed.
→ *Action: Keep all documentation, audit results, disclosure records, and incident reports for a minimum of 3 years.*
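As a rough illustration of item 2, here is a sketch of the kind of record an impact assessment might produce. The field names are our own shorthand for the topics the Act asks you to cover, not statutory language, and the sample values are invented.

```python
# Illustrative sketch of an impact-assessment record (item 2).
# Field names are shorthand for the topics to cover, not statutory terms.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                           # what consequential decision it informs
    intended_uses: list[str]
    data_categories: list[str]             # categories of input data processed
    known_discrimination_risks: list[str]  # known or reasonably foreseeable risks
    mitigations: list[str]                 # steps taken to reduce those risks
    performance_metrics: dict[str, float]  # e.g. bias-audit results by metric name
    transparency_measures: str             # how consumers are told AI is involved
    monitoring_plan: str                   # post-deployment oversight
    completed_on: date = field(default_factory=date.today)

assessment = ImpactAssessment(
    system_name="resume-screener-v2",
    purpose="Ranks applicants for interview selection (employment decision)",
    intended_uses=["initial screening of inbound applications"],
    data_categories=["resume text", "application form responses"],
    known_discrimination_risks=["proxy features correlated with age"],
    mitigations=["removed graduation-year feature", "quarterly bias audit"],
    performance_metrics={"adverse_impact_ratio_min": 0.86},
    transparency_measures="Pre-decision notice shown on the application form",
    monitoring_plan="Re-run audit quarterly; review human overrides monthly",
)
```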
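And for items 5 and 6, a small sketch of the two clocks worth automating: the 90-day Attorney General notification window and the 3-year retention floor. What counts as the "discovery" or "last deployed" date is a legal judgment; the dates below are placeholders.

```python
# Hedged sketch of the two deadlines in items 5 and 6.
# Trigger dates are placeholders; the 3-year floor is approximated as 3 * 365 days.
from datetime import date, timedelta

def ag_notification_deadline(discovery_date: date) -> date:
    """Latest date to notify the Colorado AG after discovering
    algorithmic discrimination (90-day clock)."""
    return discovery_date + timedelta(days=90)

def retention_until(last_deployed: date) -> date:
    """Minimum date through which records must be kept (at least 3 years)."""
    return last_deployed + timedelta(days=3 * 365)

print(ag_notification_deadline(date(2026, 7, 15)))  # 2026-10-13
print(retention_until(date(2027, 1, 1)))            # 2029-12-31
```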
Part 4: Affirmative Defense (§6-1-1705)
**The legal shield:**
A deployer has a **rebuttable presumption** that they used reasonable care if they:
1. Complied with the NIST AI Risk Management Framework or a substantially equivalent framework
2. Complied with the requirements of this law
→ *Plain English: If you follow the NIST AI RMF AND comply with SB 24-205's requirements, you get a legal advantage. In any enforcement action, the AG has to prove you DIDN'T follow the framework, rather than you having to prove you DID. This shifts the burden of proof in your favor.*
**Critical detail:** The presumption is *rebuttable*. It's not immunity. If the AG can show you checked boxes without substance — documented compliance without actually monitoring or remediating — the defense can be overcome.
→ *Action: Don't just generate paperwork. Actually run bias audits, actually review results, actually remediate issues. CO-AIMS creates the evidence trail that demonstrates genuine, ongoing compliance effort.*
Part 5: Enforcement (§6-1-1706)
**Who enforces:** The Colorado Attorney General.
**Enforcement mechanism:** Civil actions under the Colorado Consumer Protection Act.
**Penalties:** Up to $20,000 per violation under the Consumer Protection Act. Each affected consumer, each AI system, and each incident can count as a separate violation.
**Cure provision:** For first-time violations, deployers may have an opportunity to cure within a specified timeframe. However, this is at the AG's discretion and does not apply to willful or pattern violations.
**Private right of action:** Consumers do NOT have a direct private right of action under SB 24-205. Only the AG can bring enforcement actions. However, evidence of algorithmic discrimination discovered through SB 24-205 mechanisms can support private claims under existing anti-discrimination law.
→ *Plain English: The AG is the cop. There are no private lawsuits under this specific law, but the AG can stack penalties fast. One non-compliant AI system affecting 500 consumers could mean up to $10 million in exposure (the math is sketched below).*
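Here is the back-of-the-envelope math behind that $10 million figure, treating every affected consumer as a separate violation at the Consumer Protection Act's $20,000 cap. That is the worst-case stacking scenario, not a guaranteed outcome.

```python
# Worst-case exposure math: every consumer/system pairing counted as a
# separate violation at the CCPA's $20,000 per-violation cap.
PENALTY_PER_VIOLATION = 20_000  # dollars

def worst_case_exposure(affected_consumers: int, ai_systems: int = 1) -> int:
    """Worst-case stacked exposure if each affected consumer, for each
    AI system, counted as its own violation."""
    return affected_consumers * ai_systems * PENALTY_PER_VIOLATION

print(worst_case_exposure(500))  # 10,000,000: the $10M figure above
```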
Part 6: Effective Date and Timeline
**Original effective date:** February 1, 2026
**Amended effective date:** June 30, 2026 (delayed by approximately 5 months through amendment)
**Key milestones:**
- Now: Voluntary compliance period — implement governance, run audits, generate disclosures
- June 30, 2026: Enforcement begins — AG can bring actions for violations
- Ongoing: Annual impact assessments, continuous bias monitoring, 3-year record retention
→ *Action: You have until June 30, 2026 to be compliant. That's not "start thinking about it" — it's "have your entire governance program operational." CO-AIMS gets most businesses compliant within 2 weeks.*
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.