Algorithmic Discrimination Under Colorado Law: What It Means and How to Prevent It
In This Article
- 1. The Statutory Definition of Algorithmic Discrimination
- 2. Real-World Examples by Industry
- 3. The AG Notification Requirement: Your 90-Day Clock
- 4. Detection Methods: How to Find Algorithmic Discrimination Before the AG Does
- 5. Prevention: Building Fairness Into Your AI Lifecycle
- Q. Frequently Asked Questions
The Statutory Definition of Algorithmic Discrimination
Colorado SB 24-205 § 6-1-1701(1) defines "algorithmic discrimination" as any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived:
- Age
- Color
- Disability
- Ethnicity
- Genetic information
- Limited proficiency in English
- National origin
- Race
- Religion
- Reproductive health
- Sex
- Veteran status
Two elements are critical. First, "unlawful differential treatment or impact" — this borrows from established civil rights law. It encompasses both disparate treatment (the AI intentionally treats people differently based on a protected class) and disparate impact (the AI produces different outcomes for protected classes even without explicit intent). Second, the protected classes extend beyond traditional civil rights categories to include limited English proficiency, genetic information, and reproductive health, reflecting Colorado's broader anti-discrimination framework.
This definition is deliberately broad. It doesn't require proof that the AI was designed to discriminate. It doesn't require proof that a human operator intended discrimination. If the AI system's outputs systematically disfavor a protected class in a consequential decision domain, that constitutes algorithmic discrimination — regardless of why.
Related: What is algorithmic bias? · How to audit AI for bias · The 90-day AG notification timeline
Real-World Examples by Industry
Algorithmic discrimination isn't theoretical. Documented cases span every consequential decision domain in SB 24-205:
Employment (Hiring and HR)
Amazon's resume screening AI, abandoned in 2018, systematically downgraded resumes containing the word "women's" (as in "women's chess club") and penalized candidates from all-women's colleges. HireVue's video interview AI was challenged for using facial analysis that could disadvantage candidates with darker skin or physical disabilities. Under SB 24-205, any AI hiring tool deployed in Colorado that produces disparate outcomes across protected classes triggers compliance obligations and potential enforcement.
Lending and Credit
A 2022 study in the Journal of Financial Economics found that algorithmic lending models charged Black and Hispanic borrowers 5.6–8.6 basis points more than comparable white borrowers, amounting to $765 million in excess interest annually. The Consumer Financial Protection Bureau (CFPB) has issued guidance that existing fair lending laws (ECOA, FCRA) apply fully to algorithmic lending — and SB 24-205 adds a layer of state enforcement on top.
Insurance
Insurance underwriting AI that uses proxy variables — ZIP code (correlated with race), credit score (correlated with race and income), homeownership status — can produce discriminatory premium and coverage decisions even without using race as a direct input. Colorado's Division of Insurance has flagged AI underwriting as a priority oversight area.
Healthcare
The Optum/Change Healthcare algorithm that deprioritized Black patients for care management is the most widely cited example. Diagnostic AI trained primarily on lighter-skinned populations shows reduced accuracy for darker-skinned patients. EHR-embedded risk scores that use healthcare utilization as a proxy for health need systematically disadvantage populations with historical barriers to care access.
Housing
Tenant screening AI that weighs criminal history, credit scores, and eviction records can produce disparate impact against Black and Hispanic applicants due to systemic disparities in the criminal justice and financial systems. Colorado's existing fair housing laws, combined with SB 24-205, create dual liability for landlords using automated screening.
The AG Notification Requirement: Your 90-Day Clock
SB 24-205 § 6-1-1705(3) creates a mandatory reporting obligation: upon discovering algorithmic discrimination, a deployer must notify the Colorado Attorney General within 90 days. This notification must include:
- A description of the algorithmic discrimination discovered
- The AI system involved
- The categories of consumers affected
- The steps taken or planned to remediate the discrimination
The 90-day clock starts at discovery, not at confirmation. If a bias audit produces results suggesting disparate impact, the clock is arguably already running, even if you haven't completed your investigation. This creates a tension: you want to investigate thoroughly before reporting, but delay risks letting the 90-day window expire before you file.
Best practice: Treat any bias audit finding that exceeds your pre-established thresholds (e.g., disparate impact ratio below 0.80) as a "discovery" that starts the clock. Begin your investigation immediately and prepare a preliminary notification to the AG. You can supplement the notification as your investigation progresses — a preliminary report demonstrating good faith is far better than a late report.
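In code, this best practice reduces to a simple rule: any audit result below your pre-set threshold is logged as a discovery, and the notification deadline is computed immediately. A minimal sketch in Python (the 0.80 threshold matches the four-fifths rule discussed below; the function name and return shape are illustrative, not prescribed by the statute):

```python
from datetime import date, timedelta

DI_THRESHOLD = 0.80        # pre-established disparate impact trigger
NOTIFICATION_WINDOW = 90   # days, per the deployer notification duty

def check_discovery(di_ratio, audit_date):
    """Treat a below-threshold audit result as a 'discovery' and
    compute the AG notification deadline from the audit date."""
    if di_ratio < DI_THRESHOLD:
        deadline = audit_date + timedelta(days=NOTIFICATION_WINDOW)
        return {"discovered": True, "deadline": deadline}
    return {"discovered": False, "deadline": None}
```

Tying the clock to the audit date, rather than to a later "confirmation" date, is the conservative reading: it guarantees the deadline is never computed too late.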
Failure to notify within 90 days is itself a violation. Even if the underlying algorithmic discrimination is ultimately remediated, missing the notification deadline creates independent enforcement exposure. The AG's office can enforce both the discrimination and the failure to report.
Organizations that have no bias auditing program in place have a different problem: they can't "discover" what they don't test for. But willful blindness is not a defense. The AG can argue that a deployer's failure to conduct required bias monitoring (§ 6-1-1705(1)) constitutes negligence, making any resulting discrimination a violation regardless of whether the deployer had actual knowledge.
Detection Methods: How to Find Algorithmic Discrimination Before the AG Does
A comprehensive detection program uses three complementary approaches:
Pre-Deployment Testing
Before launching any high-risk AI system, conduct baseline fairness testing against all protected classes listed in § 6-1-1701(1). Apply multiple statistical tests — no single metric captures all forms of discrimination:
- Disparate Impact Ratio — Calculate the selection/approval rate for each protected group divided by the rate for the most-favored group. Ratios below 0.80 (the EEOC four-fifths rule) warrant investigation.
- Statistical Parity Difference — The difference in positive outcome rates between groups. Differences exceeding 0.10 indicate potential discrimination.
- Equalized Odds Difference — Differences in true positive rates and false positive rates between groups. Particularly important for diagnostic and screening AI.
- Predictive Parity — Ensure positive predictive value is comparable across groups. Critical for risk scoring systems.
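All four metrics above can be computed from the same three arrays: true outcomes, model decisions, and group membership. A minimal NumPy sketch (the function name and return structure are my own; the "most-favored group" is taken as the one with the highest selection rate, per the disparate impact ratio definition above):

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Compute the four fairness metrics for each group, relative to
    the most-favored group (highest selection rate).

    y_true, y_pred: 0/1 arrays (1 = positive outcome, e.g. approval)
    group: array of group labels
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = {
            "selection_rate": yp.mean(),
            "tpr": yp[yt == 1].mean() if (yt == 1).any() else np.nan,
            "fpr": yp[yt == 0].mean() if (yt == 0).any() else np.nan,
            "ppv": yt[yp == 1].mean() if (yp == 1).any() else np.nan,
        }
    best = max(rates, key=lambda g: rates[g]["selection_rate"])
    return {
        g: {
            "disparate_impact_ratio": r["selection_rate"] / rates[best]["selection_rate"],
            "statistical_parity_diff": r["selection_rate"] - rates[best]["selection_rate"],
            "tpr_diff": r["tpr"] - rates[best]["tpr"],   # equalized odds, part 1
            "fpr_diff": r["fpr"] - rates[best]["fpr"],   # equalized odds, part 2
            "ppv_diff": r["ppv"] - rates[best]["ppv"],   # predictive parity
        }
        for g, r in rates.items()
    }
```

A disparate impact ratio below 0.80 or a statistical parity difference beyond 0.10 in this output would trigger the investigation thresholds described above. Note that under § 6-1-1701(1) these comparisons must be run for every listed protected class, not just one.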
Continuous Production Monitoring
Pre-deployment testing catches static bias, but AI systems in production can develop emergent bias as data distributions shift. Implement continuous monitoring that tracks fairness metrics on a rolling basis and alerts when thresholds are breached. Monitor both individual metric drift and intersectional effects (combinations of protected classes).
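A rolling production monitor can be as simple as a fixed-size window of recent decisions per group, re-checked on every new decision. An illustrative sketch (the class name, window size, and minimum-sample cutoff are assumptions; a real system would also persist alerts and track intersectional subgroups):

```python
from collections import deque

class FairnessMonitor:
    """Rolling-window monitor: flags any group whose selection rate,
    relative to the most-favored group, falls below the DI threshold."""

    def __init__(self, window=1000, di_threshold=0.80, min_per_group=30):
        self.window = deque(maxlen=window)  # oldest decisions fall off
        self.di_threshold = di_threshold
        self.min_per_group = min_per_group

    def record(self, group, approved):
        self.window.append((group, 1 if approved else 0))
        return self.check()

    def check(self):
        totals, positives = {}, {}
        for g, y in self.window:
            totals[g] = totals.get(g, 0) + 1
            positives[g] = positives.get(g, 0) + y
        # only compare groups with enough samples to be meaningful
        rates = {g: positives[g] / totals[g]
                 for g in totals if totals[g] >= self.min_per_group}
        if len(rates) < 2:
            return []
        best = max(rates.values())
        return [g for g, r in rates.items()
                if best > 0 and r / best < self.di_threshold]
```

Each breach returned by `check()` would feed the alerting and investigation workflow, and, under the best practice above, potentially start the 90-day clock.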
Outcome Auditing
Periodically audit actual decisions made with AI assistance — not just the AI's output, but the final decision. Human-in-the-loop systems can amplify or mitigate AI bias depending on how humans interact with recommendations. If your AI recommends denial 40% of the time for Black applicants but human reviewers override to deny 50% of the time, the combined system has a discrimination problem that output-only monitoring wouldn't catch.
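An outcome audit therefore needs both the AI's recommendation and the final human decision for each case, compared per group. A small sketch (the record schema and function name are hypothetical):

```python
def outcome_audit(records):
    """Compare AI-recommended vs. final denial rates per group.

    records: list of dicts with keys 'group', 'ai_deny', 'final_deny'
    Returns {group: (ai_denial_rate, final_denial_rate)}.
    """
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r)
    return {
        g: (sum(r["ai_deny"] for r in rs) / len(rs),
            sum(r["final_deny"] for r in rs) / len(rs))
        for g, rs in by_group.items()
    }
```

A gap between the two rates for one group but not another, like the 40%-vs-50% example above, is exactly the human-amplified bias that output-only monitoring misses.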
Document every test, every result, and every action taken. This documentation is the foundation of your defense under the rebuttable presumption.
Prevention: Building Fairness Into Your AI Lifecycle
Detection alone is insufficient. Organizations that wait to find discrimination after deployment are always playing catch-up. Prevention must be embedded throughout the AI lifecycle:
Data Layer
- Audit training data for representation bias — are all protected classes adequately represented?
- Identify proxy variables (ZIP code, name, language preference) that correlate with protected classes and evaluate whether their inclusion is justified
- Document data provenance: where did the training data come from, and what historical biases might it encode?
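A first-pass proxy screen can be as simple as correlating each candidate feature with a protected-class indicator and flagging anything above a review cutoff. A sketch (the 0.3 cutoff is an illustrative review threshold, not a legal standard; linear correlation also misses nonlinear proxies, so treat this as triage, not clearance):

```python
import numpy as np

def proxy_screen(features, protected, threshold=0.3):
    """Flag features whose correlation with a protected-class
    indicator exceeds a review threshold.

    features: dict of feature name -> numeric array
    protected: 0/1 array encoding membership in a protected class
    """
    protected = np.asarray(protected, dtype=float)
    flagged = {}
    for name, values in features.items():
        r = np.corrcoef(np.asarray(values, dtype=float), protected)[0, 1]
        if abs(r) >= threshold:
            flagged[name] = round(float(r), 3)
    return flagged
```

Flagged features aren't automatically prohibited, but each one needs the documented business-necessity justification described above.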
Model Layer
- Apply fairness constraints during model training (adversarial debiasing, reweighting, calibrated equalized odds)
- Evaluate multiple model architectures on fairness metrics — not just accuracy
- Document the fairness-accuracy tradeoff decisions and the rationale for acceptable thresholds
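Of the pre-processing techniques mentioned, reweighting is the simplest to illustrate: Kamiran-Calders reweighing assigns each training sample a weight so that, in the weighted data, the protected attribute is statistically independent of the label. A sketch (a pre-processing step only, not a full training pipeline):

```python
import numpy as np

def reweighing(group, label):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    expected frequency under independence / observed frequency."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label))
    for g in np.unique(group):
        for y in np.unique(label):
            m = (group == g) & (label == y)
            if m.any():
                expected = (group == g).mean() * (label == y).mean()
                weights[m] = expected / m.mean()
    return weights
```

Over-represented (group, label) combinations get weights below 1 and under-represented ones get weights above 1, which counteracts historical label imbalance without altering any feature values.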
Deployment Layer
- Implement human oversight for high-stakes decisions — the human-in-the-loop requirement under SB 24-205 is both an ethical and legal safeguard
- Provide consumers with clear disclosure and the ability to request human review of AI-influenced decisions (§ 6-1-1704)
- Set automated alerts on fairness metric thresholds that trigger investigation and remediation workflows
Governance Layer
- Establish a bias review board that evaluates fairness testing results and remediation plans
- Maintain a discrimination risk register with regular updates
- Conduct annual fairness reviews as part of your SB 24-205 impact assessments
CO-AIMS integrates bias detection, monitoring, and prevention documentation into a single platform. From pre-deployment fairness testing to continuous production monitoring to automated AG notification workflows, every step generates the evidence your defense requires. Start your free trial and run your first bias audit within 24 hours.
Frequently Asked Questions
What is algorithmic discrimination under Colorado law?
Under SB 24-205 § 6-1-1701(1), algorithmic discrimination is any condition where an AI system produces unlawful differential treatment or impact disfavoring individuals based on age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, or veteran status. It covers both intentional disparate treatment and unintentional disparate impact.
How is algorithmic discrimination detected?
Detection requires applying multiple statistical tests: disparate impact ratio (the four-fifths rule), statistical parity difference, equalized odds difference, and predictive parity — disaggregated across all protected classes. Effective detection combines pre-deployment baseline testing, continuous production monitoring of fairness metrics, and periodic outcome auditing that evaluates final human+AI decisions, not just AI outputs alone.
What happens if my AI discriminates in Colorado?
You must notify the Colorado Attorney General within 90 days of discovering algorithmic discrimination under § 6-1-1705(3), describing the discrimination, the AI system involved, affected consumer categories, and planned remediation. Penalties under the Colorado Consumer Protection Act are up to $20,000 per violation. The AG can also seek injunctive relief ordering you to stop using the discriminatory AI system.
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.
AI Solutionist and founder of CO-AIMS. Building compliance infrastructure for Colorado's AI Act. Helping law firms, healthcare providers, and enterprises navigate SB 24-205 with automated governance.