Is Your AI Vendor SB 24-205 Compliant? 12 Questions to Ask Before Renewal
The Deployer Liability Problem
Here's the compliance reality that most procurement teams haven't grasped: SB 24-205 doesn't care who built the AI. If your organization uses a SaaS tool with embedded AI features to make or substantially influence consequential decisions about Colorado consumers, you are a deployer under § 6-1-1701(5). You bear the full weight of deployer obligations — impact assessments, consumer disclosure, bias monitoring, AG notification, and three-year record retention.
Your vendor may have built the AI. But it's your name on the AG's complaint.
This creates a fundamental information asymmetry. You need documentation about how the AI works, what data it was trained on, how it's been tested for bias, and what its known limitations are — but your vendor may have no contractual obligation to provide any of it. SB 24-205 requires you to conduct impact assessments (§ 6-1-1703) and monitor for algorithmic discrimination (§ 6-1-1705), but you can't do either without your vendor's cooperation.
The following 12 questions are designed to close that information gap before you sign — or renew — any contract involving AI-powered functionality.
Questions 1–4: Transparency and Documentation
Question 1: "Can you provide a complete AI system card or model documentation for every AI feature in your product that affects our users?"
What you're looking for: A structured document describing the AI's purpose, training data, intended use, known limitations, and performance metrics. Good vendors produce model cards (following the Mitchell et al. framework) or equivalent documentation. Red flag: "That's proprietary" without offering any alternative disclosure mechanism.
Question 2: "Which of your AI features make or substantially influence decisions about employment, credit, insurance, healthcare, housing, education, or legal services?"
What you're looking for: A clear, product-specific mapping of which features operate in consequential decision domains. Your vendor should know which of their features trigger SB 24-205 obligations for their customers. Red flag: "None of our features make decisions — they just provide recommendations." Under SB 24-205, "substantially influencing" a decision carries the same obligations as making it.
Question 3: "Do you provide the data inputs, outputs, and logic documentation necessary for us to conduct an impact assessment under SB 24-205 § 6-1-1703?"
What you're looking for: Commitment to provide sufficient technical documentation for your annual impact assessments. This includes data categories used as inputs, output types and ranges, and the logic by which inputs are transformed into outputs. Red flag: Refusal to document anything beyond marketing-level feature descriptions.
Question 4: "Will you provide us with disaggregated performance data by protected class so we can conduct bias audits?"
What you're looking for: Agreement to provide — or support your collection of — outcome data broken down by race, gender, age, disability status, and other protected characteristics. Without this data, bias auditing is impossible. Red flag: "We don't collect demographic data" with no pathway to enable bias testing.
Questions 5–8: Bias Testing and Monitoring
Question 5: "What bias testing have you conducted on your AI models, and can you share the methodology and results?"
What you're looking for: Specific bias testing methodologies (disparate impact analysis, equalized odds testing, demographic parity evaluation), the protected classes tested, the statistical thresholds used, and the results. Good vendors conduct regular bias audits and publish summary findings. Red flag: "We tested for bias during development" with no ongoing monitoring program and no documented results.
Question 6: "Do you have an ongoing model monitoring program that detects performance drift and emergent bias?"
What you're looking for: Evidence of continuous monitoring, not just point-in-time testing. AI models drift over time as data distributions change. A responsible vendor monitors model performance continuously and alerts customers when performance degrades or bias metrics exceed thresholds. Red flag: Bias testing was a one-time event during model development.
Question 7: "If your AI produces a discriminatory outcome for one of our users, what is your incident notification and cooperation process?"
What you're looking for: A defined process for notifying deployers of potential algorithmic discrimination, with specific timelines. Under SB 24-205, you have 90 days to notify the AG after discovering discrimination — so you need your vendor to notify you fast enough to investigate and report. Red flag: No incident notification commitment or timelines measured in quarters rather than days.
Question 8: "Will you cooperate with independent third-party audits of your AI systems at our request?"
What you're looking for: Contractual commitment to allow or facilitate independent auditing. Some deployers will need to engage third-party auditors to satisfy their impact assessment obligations. Vendors who refuse third-party access leave you unable to verify their claims. Red flag: Blanket refusal citing trade secrets, with no alternative verification mechanism.
Questions 9–12: Contractual Protections and Liability
Question 9: "Will you add SB 24-205 compliance obligations to our service agreement, including data access for auditing and incident cooperation?"
What you're looking for: Willingness to incorporate specific compliance commitments into the contract — not just in marketing materials or verbal assurances. Key provisions: data access rights for auditing, incident notification timelines, documentation obligations, and cooperation requirements. Red flag: "Our standard terms already cover this" without pointing to specific clauses.
Question 10: "What is your own compliance posture as a developer under SB 24-205 § 6-1-1701(6)?"
What you're looking for: If your vendor's AI system is used by Colorado deployers, the vendor is likely a "developer" under SB 24-205 with its own statutory obligations — including providing deployers with the information necessary for compliance. A vendor that hasn't considered its own developer obligations is a vendor that can't help you meet your deployer obligations. Red flag: Unawareness that SB 24-205 creates developer obligations.
Question 11: "Do you carry professional liability or errors-and-omissions insurance that covers AI-related claims?"
What you're looking for: Insurance coverage that would respond if the vendor's AI causes discriminatory outcomes resulting in regulatory enforcement against you. While this doesn't replace compliance, it provides a financial backstop. Red flag: No AI-specific coverage and no plans to obtain it.
Question 12: "If we need to switch providers due to compliance concerns, what is the data portability and transition timeline?"
What you're looking for: A clear exit strategy. If your vendor can't or won't cooperate with compliance requirements, you need to be able to migrate without losing your historical data, audit trails, or compliance documentation. Red flag: Proprietary data formats, no export capabilities, or lock-in provisions that prevent timely transition.
Turning Answers into Action
Score each vendor response on a three-point scale: Green (substantive, documented answer), Yellow (partial answer requiring follow-up), Red (no answer, deflection, or refusal). Any vendor with more than three Red answers represents a material compliance risk.
For existing vendors approaching renewal, use these questions as contract negotiation leverage. SB 24-205 creates a legitimate business need for the documentation and cooperation you're requesting. Vendors who understand the regulatory landscape will recognize this; vendors who don't are telling you something about their maturity as an AI provider.
For new procurement, incorporate these questions into your standard vendor assessment process. Build a requirement that any SaaS tool with AI features must achieve a minimum Green/Yellow score before contract execution.
CO-AIMS includes a vendor assessment module that automates this evaluation process, tracks vendor responses, flags gaps, and generates vendor risk profiles that feed directly into your impact assessments. It turns a 12-question checklist into a continuous vendor monitoring program. See CO-AIMS Enterprise to learn how vendor assessment integrates with your broader compliance architecture.
Your vendors' compliance posture is your compliance posture. The time to ask these questions is before June 30, 2026 — not after the AG's office sends a civil investigative demand (CID) asking why your AI vendor couldn't produce bias testing results.
Frequently Asked Questions
Am I liable for my vendor's AI under Colorado law?
Yes. Under SB 24-205, if you use a third-party AI system to make or substantially influence consequential decisions about Colorado consumers, you are a "deployer" with full statutory obligations — regardless of who built the AI. You must conduct impact assessments, provide consumer disclosures, monitor for bias, and maintain records for three years. Your vendor's non-compliance doesn't excuse yours.
Do SaaS companies need to comply with SB 24-205?
SaaS companies that build AI features used by Colorado deployers are likely "developers" under SB 24-205 § 6-1-1701(6), with their own statutory obligations — including providing deployers with sufficient documentation and information to meet compliance requirements. Even if a SaaS company doesn't directly serve Colorado consumers, it has obligations as a developer if its customers do.
What should I ask my AI vendor about compliance?
Focus on four areas: (1) documentation transparency — can they provide model cards, data inputs, and logic descriptions for impact assessments; (2) bias testing — what methodologies they use, how often, and whether they share results; (3) incident cooperation — will they notify you of discriminatory outcomes quickly enough for your 90-day AG reporting deadline; and (4) contractual commitment — will they put compliance obligations in writing, not just verbal assurances.
AI Solutionist and founder of CO-AIMS. Building compliance infrastructure for Colorado's AI Act. Helping law firms, healthcare providers, and enterprises navigate SB 24-205 with automated governance.