How Colorado's AG Will Investigate Your AI: The Enforcement Playbook
The AG's Enforcement Authority Under SB 24-205
Colorado SB 24-205 is enforced exclusively by the Colorado Attorney General under the Colorado Consumer Protection Act (CCPA), C.R.S. § 6-1-101 et seq. There is no private right of action — consumers cannot sue you directly for SB 24-205 violations. But that's less reassuring than it sounds, because the AG's office runs one of the most aggressive consumer protection enforcement programs in the country.
Between 2020 and 2025, the Colorado AG's Consumer Protection Division brought over 180 enforcement actions under the CCPA, resulting in more than $85 million in penalties and restitution. The office has explicitly signaled that AI compliance will be a priority beginning in Q3 2026, with dedicated staff allocated to technology-focused investigations.
Penalties under the CCPA run up to $20,000 per violation (C.R.S. § 6-1-112). Critically, each affected consumer constitutes a separate violation. An AI system that makes discriminatory decisions affecting 500 Colorado consumers represents $10 million in potential exposure — before injunctive relief, legal costs, and reputational damage.
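To see how quickly the per-consumer math compounds, here is a minimal sketch of the exposure calculation. It is illustrative only: it models nothing beyond the statutory cap and the per-consumer violation count described above.

```python
# A minimal sketch of statutory exposure math, assuming only the
# C.R.S. § 6-1-112 cap of $20,000 per violation and one violation per
# affected consumer. Real exposure also includes injunctive relief,
# defense costs, and restitution, which this ignores.

MAX_PENALTY_PER_VIOLATION = 20_000  # dollars, C.R.S. § 6-1-112 cap

def max_statutory_exposure(affected_consumers: int) -> int:
    """Upper-bound CCPA penalty exposure in dollars."""
    return affected_consumers * MAX_PENALTY_PER_VIOLATION

print(f"${max_statutory_exposure(500):,}")  # -> $10,000,000
```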
How Investigations Are Triggered
Based on the AG's historical CCPA enforcement patterns and public statements from the Consumer Protection Division, there are five primary triggers for an AI investigation:
1. Consumer Complaints
The most common trigger. Colorado consumers can file complaints directly through the AG's website. If a consumer is denied credit, insurance, employment, or legal services and suspects AI was involved, a complaint can initiate a preliminary review. The AG's office received over 14,000 consumer complaints in 2025 — even a small percentage involving AI will generate meaningful investigation volume.
2. Mandatory AG Notification
Under SB 24-205 § 6-1-1705(3), deployers who discover algorithmic discrimination must notify the AG within 90 days. This is the most direct trigger: your own disclosure. Organizations that detect bias through internal auditing are legally required to report it, creating a known pipeline of investigations.
3. Whistleblower and Employee Reports
Employees who observe discriminatory AI outcomes, inadequate documentation, or missing bias audits may report to the AG directly or through the media. High-turnover AI/ML teams are a particular risk vector: departing engineers often know exactly which required safeguards a system lacks.
4. Investigative Journalism and Advocacy Organizations
Organizations like the ACLU, Consumer Reports, and ProPublica have established AI accountability programs that systematically test AI systems for bias. Their published findings frequently trigger AG investigations in other consumer protection contexts.
5. Cross-Agency Referrals
The Colorado Division of Insurance, Division of Banking, and other regulatory bodies may refer AI concerns to the AG when they identify potential violations during routine oversight of regulated industries.
Inside the Investigation: What the AG Requests
Once an investigation is opened, the AG's office uses Civil Investigative Demands (CIDs) — essentially subpoenas — to compel production of documents and testimony. Based on the statutory requirements and CCPA enforcement precedent, the following categories of evidence are virtually certain to be requested:
Tier 1: Immediate Production (typically 30 days)
- Your published risk management policy (§ 6-1-1702)
- Impact assessments for the AI system(s) in question (§ 6-1-1703)
- Consumer disclosure records — proof that affected consumers were notified (§ 6-1-1704)
- Incident detection and response logs
- Your AI system inventory and classification documentation
Tier 2: Extended Production (typically 60–90 days)
- Training data documentation and data provenance records
- Bias audit results, including methodology, statistical tests applied, and findings
- Model performance metrics disaggregated by protected class
- Internal communications regarding known limitations, bias risks, or compliance gaps
- Vendor contracts and third-party AI documentation
- Board or executive committee presentations regarding AI governance
Tier 3: Depositions and Technical Review
- Depositions of your AI governance officer, CTO, or equivalent
- Technical review of model architecture, feature selection, and output distributions
- Access to the AI system for independent testing by the AG's technical consultants
If you cannot produce Tier 1 evidence within 30 days, the rebuttable presumption is functionally unavailable. The time to build this documentation is before the investigation, not during it.
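One practical way to meet that 30-day bar is a standing readiness check over the Tier 1 artifacts listed above. The sketch below is a hedged illustration: the artifact names, the one-year freshness target, and the registry structure are assumptions about how an organization might track this internally, not requirements from the statute.

```python
from datetime import date, timedelta

# Hypothetical Tier 1 evidence registry: artifact name -> date last updated.
# Names mirror the Tier 1 list above; replace with your own document system.
TIER_1_ARTIFACTS = {
    "risk_management_policy": date(2026, 1, 15),
    "impact_assessment": date(2025, 6, 1),
    "consumer_disclosure_records": date(2026, 2, 3),
    "incident_response_logs": None,  # None = never produced
    "ai_system_inventory": date(2026, 2, 3),
}

MAX_STALENESS = timedelta(days=365)  # assumed internal freshness target

def readiness_report(artifacts: dict, today: date) -> list[str]:
    """Flag artifacts that are missing or older than the freshness target."""
    gaps = []
    for name, last_updated in artifacts.items():
        if last_updated is None:
            gaps.append(f"MISSING: {name}")
        elif today - last_updated > MAX_STALENESS:
            gaps.append(f"STALE ({last_updated}): {name}")
    return gaps

for gap in readiness_report(TIER_1_ARTIFACTS, date(2026, 7, 1)):
    print(gap)
```

Run against a real registry on a schedule, a check like this turns the 30-day CID clock from a scramble into a known quantity.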
The Rebuttable Presumption in Practice
SB 24-205 § 6-1-1706 creates a rebuttable presumption that a deployer has complied with the law if they can demonstrate adherence to a recognized AI risk management framework — specifically naming the NIST AI RMF and ISO 42001. This is the most powerful defensive tool in the statute, but it's widely misunderstood.
What it does: Shifts the burden of proof. Instead of you proving compliance, the AG must prove that your framework adherence was insufficient. In practical terms, this means the AG must show that despite following NIST AI RMF, your specific implementation was inadequate to prevent the alleged violation.
What it does NOT do: The presumption is rebuttable, not absolute. It is not an immunity. If the AG can demonstrate that your NIST mapping was superficial — checkbox compliance without substantive implementation — the presumption can be overcome. The quality and specificity of your evidence matters enormously.
How to make it stick:
- Map every AI system to all four NIST AI RMF functions (Govern, Map, Measure, Manage) with specific evidence for each subcategory
- Document not just that you conducted bias audits, but which statistical tests you applied (disparate impact ratio, equalized odds, demographic parity), what thresholds you set, and what you did when results were adverse (the sketch after this list shows two of these tests in code)
- Maintain continuous evidence generation — point-in-time audits are weaker than ongoing monitoring records
- Demonstrate that your framework produced actual changes in system behavior, not just documentation
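To make the audit-documentation point concrete, here is a minimal sketch of two of the tests named above, the disparate impact ratio and the demographic parity difference, computed from raw decision counts. The group labels, counts, and the four-fifths (0.8) threshold are illustrative assumptions; the statute does not prescribe specific tests or thresholds. Equalized odds needs ground-truth outcome labels and is omitted for brevity.

```python
# Illustrative bias-audit math: selection rates by group, the disparate
# impact ratio, and the demographic parity difference. All counts, group
# labels, and thresholds here are made-up examples, not statutory values.

favorable = {"group_a": 480, "group_b": 310}  # favorable decisions per group
total = {"group_a": 800, "group_b": 700}      # total decisions per group

rates = {g: favorable[g] / total[g] for g in total}

# Disparate impact ratio: lowest selection rate divided by highest. The
# conventional "four-fifths rule" flags ratios below 0.8 as adverse impact.
di_ratio = min(rates.values()) / max(rates.values())

# Demographic parity difference: absolute gap between group selection rates.
dp_diff = max(rates.values()) - min(rates.values())

rounded = {g: round(r, 3) for g, r in rates.items()}
print(f"selection rates: {rounded}")
print(f"disparate impact ratio: {di_ratio:.3f} (flag if below 0.8)")
print(f"demographic parity difference: {dp_diff:.3f}")
```

Whatever tests you run, the evidentiary point is the same: record the test, the threshold, the result, and the remediation decision, because that chain is what makes the presumption stick.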
The comparison to data privacy enforcement is instructive. In the California AG's enforcement action against DoorDash, having a privacy policy alone was insufficient: the AG successfully argued that the policy didn't match the company's actual data practices. The same logic will apply to AI risk management policies that exist on paper but aren't operationalized.
Enforcement Timeline: What to Expect After June 30, 2026
Based on the AG's enforcement cadence in other CCPA contexts (data privacy, unfair lending, deceptive advertising), here's the likely enforcement timeline:
- Q3 2026 (July–September) — Initial "education and outreach" period. The AG's office has historically provided a brief grace period for new consumer protection regulations. Expect public guidance documents, industry webinars, and targeted outreach to high-risk sectors. Investigations may be opened but formal actions are unlikely.
- Q4 2026 (October–December) — First investigations based on consumer complaints and mandatory AG notifications. Expect the AG to prioritize sectors with high consumer impact: financial services (lending, insurance), healthcare, and employment. The first CIDs will likely be issued in this window.
- Q1–Q2 2027 — First public enforcement actions. The AG will likely choose 2–3 high-profile cases to establish precedent and signal enforcement seriousness. Expect a mix of "willful non-compliance" cases (organizations that made no effort) and "inadequate compliance" cases (organizations that tried but fell short).
- 2027 and beyond — Steady-state enforcement. Expect 5–15 AI-specific enforcement actions per year, consistent with the AG's pace in other CCPA domains.
The window between now and July 2026 is your compliance window. After that, it becomes a litigation window. Build your evidence bundles now while you have the luxury of time. Start your free trial of CO-AIMS to generate AG-ready documentation from day one.
Frequently Asked Questions
How will the Colorado AI Act be enforced?
The Colorado Attorney General has exclusive enforcement authority under the Colorado Consumer Protection Act (CCPA). There is no private right of action. Violations carry penalties of up to $20,000 per affected consumer, plus injunctive relief that can compel you to stop using non-compliant AI systems. Investigations are triggered by consumer complaints, mandatory AG notifications, whistleblowers, and advocacy organization reports.
What triggers a Colorado AG AI investigation?
The five primary triggers are: consumer complaints filed through the AG's website, mandatory deployer notifications of algorithmic discrimination under § 6-1-1705(3), employee/whistleblower reports, investigative journalism and advocacy organization findings, and cross-agency referrals from regulators like the Division of Insurance or Division of Banking.
What are the penalties for violating SB 24-205?
Penalties under CCPA enforcement are up to $20,000 per violation, and each affected consumer constitutes a separate violation. An AI system affecting 500 Colorado consumers creates $10 million in potential exposure. The AG can also seek injunctive relief ordering you to stop using the non-compliant AI system, effectively shutting down AI-dependent business processes.
Does the rebuttable presumption guarantee compliance?
No. The rebuttable presumption under § 6-1-1706 shifts the burden of proof to the AG but can be overcome if the AG demonstrates your framework adherence was superficial. You need substantive, well-documented implementation of NIST AI RMF or ISO 42001 — not just checkbox compliance. The quality of your evidence and the operational reality of your program determine whether the presumption holds.
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.