AI Compliance for Colorado Healthcare: Hospitals, Clinics, and Diagnostic AI Under SB 24-205
In This Article
- 1. Healthcare AI Under SB 24-205: Why This Sector Has the Highest Stakes
- 2. Clinical AI Use Cases That Trigger Compliance
- 3. The Regulatory Intersection: SB 24-205, FDA, HIPAA, and HB26-1139
- 4. Bias in Healthcare AI: The Evidence and the Tests
- 5. Building a Healthcare AI Compliance Program
- Q. Frequently Asked Questions
Healthcare AI Under SB 24-205: Why This Sector Has the Highest Stakes
Colorado SB 24-205 § 6-1-1701(4) defines "consequential decisions" to include those with a material effect on a consumer's access to healthcare services or health insurance. Healthcare providers operating in Colorado — hospitals, clinics, physician practices, and health systems — are deployers of high-risk AI systems if they use AI that makes, or is a substantial factor in making, clinical or administrative decisions about patients.
The stakes in healthcare are uniquely severe. Algorithmic discrimination in a hiring tool produces an unfair employment outcome; algorithmic discrimination in a diagnostic AI tool produces a missed cancer diagnosis, a delayed treatment recommendation, or a biased triage decision that routes patients of color to lower-acuity care. Published research has documented these outcomes: a widely used clinical algorithm (Optum/Change Healthcare) was shown in 2019 to systematically deprioritize Black patients for care management programs, affecting an estimated 70 million patients annually.
Colorado's healthcare AI landscape is substantial. UCHealth, SCL Health (now Intermountain), Denver Health, Centura Health, and dozens of smaller systems deploy AI across clinical and administrative functions. The compliance deadline is close — SB 24-205 takes effect June 30, 2026 — and the intersection with existing healthcare regulations (HIPAA, FDA, Colorado HB26-1139) creates a uniquely complex regulatory environment.
Related: FDA and Colorado medical device bias auditing · How to audit AI for bias · Algorithmic impact assessments guide
Clinical AI Use Cases That Trigger Compliance
Healthcare AI falls into four categories under SB 24-205, each with distinct compliance considerations:
1. Diagnostic AI
Systems that analyze medical images (radiology AI, pathology AI, dermatology screening), lab results, or patient data to detect disease. Examples: Aidoc (radiology triage), Viz.ai (stroke detection), Paige.AI (pathology). These systems directly influence clinical decisions — a false negative means a missed diagnosis. When that false-negative rate differs across racial or demographic groups, it can constitute algorithmic discrimination under SB 24-205.
2. Clinical Decision Support (CDS)
Systems embedded in Electronic Health Records (EHRs) that recommend treatments, flag drug interactions, calculate risk scores, or suggest clinical pathways. Epic's Sepsis Prediction Model, Cerner's HealtheIntent, and dozens of third-party CDS tools operate within EHR workflows. If a CDS system's recommendations substantially influence treatment decisions for Colorado patients, it's a high-risk AI system under the statute.
3. Patient Triage and Risk Stratification
AI systems that prioritize patients for care, assign acuity levels, or determine care management enrollment. These are among the most studied — and most problematic — healthcare AI applications. The Optum algorithm that used healthcare cost as a proxy for health need systematically disadvantaged Black patients who had less access to care (and therefore lower historical costs) despite equal or greater medical need.
4. Administrative AI
Prior authorization automation, claims processing, denial prediction, and scheduling optimization. While these functions may seem purely back-office, when they determine whether a patient receives timely care — or receives care at all — they cross the consequential-decision threshold. A prior authorization AI that systematically denies or delays certain categories of care, disproportionately affecting protected classes, creates SB 24-205 exposure.
The Regulatory Intersection: SB 24-205, FDA, HIPAA, and HB26-1139
Healthcare AI compliance in Colorado doesn't exist in isolation. Multiple regulatory frameworks apply simultaneously, and understanding their intersection is critical for efficient compliance.
FDA and Software as a Medical Device (SaMD)
The FDA regulates AI/ML-based Software as a Medical Device, with its regulatory approach set out in the 2021 AI/ML-Based SaMD Action Plan. FDA clearance or approval (via the 510(k), De Novo, or PMA pathways) addresses safety and effectiveness but does not address algorithmic discrimination or the specific documentation requirements of SB 24-205. An FDA-cleared diagnostic AI tool still requires a separate SB 24-205 impact assessment, bias audit, and consumer disclosure. FDA clearance is necessary but not sufficient.
Colorado HB26-1139 (2026)
Colorado's newest healthcare AI legislation (introduced February 2026) creates additional requirements specifically for clinical AI. While SB 24-205 applies broadly to all consequential-decision AI, HB26-1139 adds healthcare-specific mandates: clinical validation requirements, physician oversight obligations, and patient consent provisions for AI-assisted diagnosis. Deployers must satisfy both statutes simultaneously.
HIPAA Intersection
SB 24-205's bias auditing requirements may require analysis of patient data disaggregated by race, ethnicity, gender, and age. Under HIPAA, this data is Protected Health Information (PHI). Healthcare deployers must conduct bias audits within HIPAA-compliant environments — you cannot export patient demographics to a third-party bias testing tool without appropriate Business Associate Agreements and de-identification protocols where applicable.
| Requirement | SB 24-205 | FDA SaMD | HB26-1139 | HIPAA |
|---|---|---|---|---|
| Impact Assessment | Annual, documented | Clinical evaluation | Clinical validation | Risk analysis |
| Bias Testing | Required, documented | Not explicit | Required | Not required (constrains audit data) |
| Consumer Disclosure | Required | Labeling | Patient consent | Notice of Privacy Practices |
| Incident Reporting | AG notification (90 days) | MDR reporting | State reporting | Breach notification |
| Record Retention | 3 years | Design history file | 5 years | 6 years |
Bias in Healthcare AI: The Evidence and the Tests
Healthcare AI bias isn't hypothetical — it's extensively documented in peer-reviewed literature:
- Dermatology AI — A 2021 Lancet Digital Health study found that dermatology AI trained primarily on lighter skin tones showed significantly reduced sensitivity for melanoma detection in darker-skinned patients, with sensitivity dropping from 91% on lighter skin tones to 67% on darker ones.
- Sepsis Prediction — Epic's sepsis prediction model has been studied extensively, with findings showing variable performance across demographics. A 2022 study in JAMA Internal Medicine found the model's positive predictive value differed significantly across racial groups.
- Clinical Risk Scores — The Optum/Change Healthcare algorithm that used cost as a health proxy affected an estimated 70 million patients before the bias was identified by Obermeyer et al. (2019) in Science.
- Cardiac Risk — The HEART score and other cardiac risk calculators have been shown to underperform in women and younger patients, potentially delaying critical interventions.
For bias auditing in healthcare, a defensible audit program includes the following statistical approaches (a code sketch follows below):
- Subgroup performance analysis — Sensitivity, specificity, positive predictive value, and negative predictive value disaggregated by race, gender, age, and socioeconomic proxies
- Disparate impact testing — Four-fifths rule applied to clinical outcomes (treatment recommendations, triage levels, admission decisions)
- Calibration analysis — Ensuring predicted risk probabilities correspond to actual outcomes equally across demographic groups
- Intersectional analysis — Testing combinations of protected characteristics (e.g., elderly Black women) where bias may compound
Healthcare deployers must document which tests were applied, what thresholds were set, and what actions were taken when adverse results were found. This documentation is the core of your SB 24-205 evidence bundle.
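Below is a minimal sketch of how these tests can be implemented, assuming a de-identified predictions table with hypothetical columns `group` (demographic category), `y_true` (observed outcome, 0/1), `y_pred` (model decision, 0/1), and `risk_score` (predicted probability). The column names, bin count, and 0.8 threshold are illustrative choices, not statutory requirements:

```python
import pandas as pd

def subgroup_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Sensitivity, specificity, PPV, NPV, and selection rate per demographic group."""
    rows = []
    for group, g in df.groupby("group"):
        tp = int(((g["y_pred"] == 1) & (g["y_true"] == 1)).sum())
        fn = int(((g["y_pred"] == 0) & (g["y_true"] == 1)).sum())
        tn = int(((g["y_pred"] == 0) & (g["y_true"] == 0)).sum())
        fp = int(((g["y_pred"] == 1) & (g["y_true"] == 0)).sum())
        rows.append({
            "group": group,
            "sensitivity": tp / (tp + fn) if (tp + fn) else None,
            "specificity": tn / (tn + fp) if (tn + fp) else None,
            "ppv": tp / (tp + fp) if (tp + fp) else None,
            "npv": tn / (tn + fn) if (tn + fn) else None,
            "selection_rate": float((g["y_pred"] == 1).mean()),
        })
    return pd.DataFrame(rows)

def four_fifths_check(metrics: pd.DataFrame) -> pd.DataFrame:
    """Four-fifths rule: flag any group whose positive-decision rate falls
    below 80% of the most favorably treated group's rate."""
    out = metrics.copy()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    out["four_fifths_flag"] = out["impact_ratio"] < 0.8
    return out

def calibration_by_group(df: pd.DataFrame, bins: int = 5) -> pd.DataFrame:
    """Compare mean predicted risk to the observed outcome rate within score
    bins, per group: well-calibrated scores track outcomes equally across groups."""
    binned = df.assign(score_bin=pd.qcut(df["risk_score"], bins, duplicates="drop"))
    return binned.groupby(["group", "score_bin"], observed=True).agg(
        mean_predicted=("risk_score", "mean"),
        observed_rate=("y_true", "mean"),
    ).reset_index()
```

Intersectional analysis can reuse the same functions by first combining characteristics into a single column (e.g., race × gender × age band) and auditing on that combined `group`.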
Building a Healthcare AI Compliance Program
Healthcare providers face a compressed timeline: SB 24-205 takes effect June 30, 2026, and HB26-1139 provisions begin phasing in Q4 2026. Here's a structured approach:
Step 1: AI System Inventory with Clinical Context
Identify every AI system in your organization, including those embedded in your EHR (Epic, Cerner/Oracle Health, MEDITECH), third-party clinical decision support tools, imaging AI, and administrative automation. For each system, document: clinical workflow integration, patient population affected, data sources, and the clinical decision it influences.
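One way to keep this inventory auditable is to capture each system as a structured record. A minimal sketch, assuming a Python-based governance stack — the field names below restate this step's checklist and are hypothetical, not a statutory schema:

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalAISystem:
    """One inventory entry per AI system; all field names are illustrative."""
    name: str                       # e.g., "Radiology triage model"
    vendor: str                     # EHR vendor or third party
    clinical_workflow: str          # where in the care pathway it runs
    decision_influenced: str        # the consequential decision it affects
    patient_population: str         # who is scored, screened, or triaged
    data_sources: list[str] = field(default_factory=list)
    is_clinical: bool = True        # clinical vs. administrative function
    fda_regulated: bool = False     # SaMD clearance/approval applies
    embedded_in_ehr: bool = False   # lives inside Epic/Cerner/MEDITECH workflows

inventory: list[ClinicalAISystem] = []
```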
Step 2: Regulatory Mapping
For each AI system, determine which regulations apply: SB 24-205 (all consequential-decision AI), FDA (diagnostic and therapeutic AI), HB26-1139 (clinical AI), HIPAA (all patient data). Create a compliance matrix showing requirements by system and regulation.
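Building on the inventory record sketched above, the mapping can start as a simple rule set. The rules below restate this step's criteria and are illustrative only — actual applicability determinations belong with counsel:

```python
def applicable_regulations(system: ClinicalAISystem) -> set[str]:
    """Derive the regulation set for one inventoried system."""
    regs = {"SB 24-205"}            # all consequential-decision AI
    regs.add("HIPAA")               # assumes every system touches patient data
    if system.fda_regulated:
        regs.add("FDA SaMD")        # diagnostic and therapeutic AI
    if system.is_clinical:
        regs.add("HB26-1139")       # healthcare-specific clinical AI mandates
    return regs

# Compliance matrix: one row per system, one column list of regulations.
matrix = {s.name: sorted(applicable_regulations(s)) for s in inventory}
```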
Step 3: Bias Audit Program
Establish a clinical AI bias auditing program that operates within HIPAA constraints. This typically requires: a HIPAA-compliant analytics environment, de-identified or limited dataset protocols for bias testing, collaboration between your clinical informatics and compliance teams, and engagement with your EHR vendor for data extraction.
Step 4: Clinical Workflow Integration
Embed SB 24-205 consumer disclosure into existing clinical workflows. This means patient-facing notifications during intake, informed consent updates that reference AI-assisted clinical processes, and clear pathways for patients to request human-only decision-making where applicable.
CO-AIMS offers healthcare-specific compliance templates that address the SB 24-205/FDA/HB26-1139/HIPAA intersection, including pre-built impact assessment questionnaires calibrated for clinical AI use cases and bias audit protocols designed for HIPAA-compliant environments. See CO-AIMS Enterprise for healthcare compliance capabilities.
Frequently Asked Questions
Does SB 24-205 apply to hospitals?
Yes. Any Colorado hospital or health system that uses AI systems to influence clinical decisions — diagnostic AI, clinical decision support, patient triage, risk stratification, or administrative AI like prior authorization — is a deployer of high-risk AI under SB 24-205. Healthcare services are explicitly named as a consequential decision domain in § 6-1-1701(4).
Is clinical decision support AI regulated in Colorado?
Yes. Clinical decision support systems embedded in EHRs that recommend treatments, calculate risk scores, or flag clinical concerns are high-risk AI systems under SB 24-205 when they substantially influence patient care decisions. Additionally, Colorado HB26-1139 (2026) creates healthcare-specific obligations for clinical AI including validation requirements and patient consent provisions.
How does the Colorado AI Act affect healthcare providers?
Healthcare providers must conduct annual impact assessments for each clinical AI system, implement bias auditing programs that test for disparate outcomes across protected classes, provide patient disclosure when AI influences care decisions, establish incident response procedures for algorithmic discrimination, and report confirmed discrimination to the AG within 90 days — all while operating within HIPAA constraints for patient data handling.
Does FDA clearance satisfy SB 24-205 requirements?
No. FDA clearance addresses safety and effectiveness but does not cover SB 24-205's specific requirements for impact assessments, bias monitoring, consumer disclosure, or AG notification. An FDA-cleared diagnostic AI tool still requires separate SB 24-205 compliance documentation. The two regulatory frameworks are complementary, not interchangeable — deployers must satisfy both independently.
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.
AI Solutionist and founder of CO-AIMS. Building compliance infrastructure for Colorado's AI Act. Helping law firms, healthcare providers, and enterprises navigate SB 24-205 with automated governance.