Shadow AI in the Enterprise: The Hidden Compliance Risk Colorado Businesses Are Missing
What Shadow AI Is and Why It's Your Biggest Compliance Gap
Shadow AI is the use of artificial intelligence tools and features by employees without formal IT or compliance approval. It's the AI equivalent of shadow IT — unauthorized technology that enters the organization through individual adoption rather than institutional procurement.
The scale of the problem is staggering. According to a 2025 Salesforce survey, 55% of employees admit to using unapproved AI tools at work. A 2025 Microsoft study found that 78% of workers using AI at work brought their own tools. Gartner estimates that by 2026, more than 30% of enterprise AI usage will occur through unsanctioned channels.
Under SB 24-205, this creates a specific and serious problem: every instance of shadow AI that makes or substantially influences a consequential decision is an unregistered, unassessed, unmonitored high-risk AI system. Your compliance program doesn't know it exists. Your risk management policy doesn't cover it. Your impact assessments haven't evaluated it. Your bias auditing hasn't tested it. And if it produces a discriminatory outcome affecting a Colorado consumer, your organization is liable.
The Colorado Attorney General's office doesn't care that your compliance team didn't know about it. The statutory obligations under §§ 6-1-1702 through 6-1-1706 apply to every high-risk AI system deployed in your organization, regardless of whether it was formally adopted.
How Shadow AI Enters Your Organization: 7 Common Vectors
Shadow AI doesn't require technical sophistication. It enters through the same door employees use every day:
1. General-Purpose LLMs (ChatGPT, Claude, Gemini)
Employees use ChatGPT to draft client communications, summarize case files, analyze contracts, write performance reviews, or evaluate job candidates. Each of these can substantially influence a consequential decision. An HR manager using ChatGPT to rank resumes has created an unregistered AI hiring tool.
2. Microsoft Copilot and Google Gemini in Workspace
Copilot is now embedded in Microsoft 365 — Word, Excel, Outlook, Teams. Gemini is integrated into Google Workspace. Employees may not even recognize they're using AI when Copilot suggests edits to a lending decision memo or Gemini drafts a response to an insurance claim. These embedded AI features activate with a license toggle, often without compliance review.
3. SaaS AI Features Quietly Enabled
Salesforce Einstein, HubSpot AI, Zendesk AI, and dozens of other SaaS platforms have enabled AI features — sometimes by default — in existing subscriptions. Your CRM may now be scoring leads with AI, prioritizing customer service tickets based on predicted urgency, or automating email responses, all without anyone formally evaluating these as AI systems.
4. Grammarly, Otter.ai, and Productivity Tools
AI-powered writing assistants, transcription services, and note-taking tools process sensitive business content. When Grammarly rewrites a denial letter or Otter.ai transcribes a client meeting that influences case strategy, AI is substantively participating in business processes.
5. Browser Extensions and Plugins
Chrome extensions with AI capabilities — text summarizers, email assistants, research tools — operate within the browser with access to whatever the employee is viewing. An AI browser extension that summarizes a credit application while a loan officer reviews it is influencing a consequential decision.
6. Mobile AI Apps
Employees use AI apps on personal devices for work tasks. Document scanning apps with AI extraction, voice-to-text with AI summarization, and AI-powered scheduling tools process business data outside of any enterprise controls.
7. Custom GPTs and AI Workflows
Technically savvy employees build custom GPTs, Zapier AI automations, or Make.com workflows that process business data through AI models. These bespoke tools are invisible to IT and compliance teams but may directly influence consequential decisions.
Which Shadow AI Creates SB 24-205 Liability
Not all shadow AI is high-risk under SB 24-205. The trigger is whether the AI makes or substantially influences a consequential decision about a Colorado consumer. Here's how to classify common shadow AI scenarios:
| Shadow AI Use | Risk Level | SB 24-205 Trigger? | Why |
|---|---|---|---|
| ChatGPT to draft marketing copy | Low | No | Not a consequential decision |
| ChatGPT to evaluate resumes | High | Yes | Employment decision (§ 6-1-1701(4)) |
| Copilot to draft a denial letter | High | Yes | Credit/insurance decision output |
| Grammarly for email polish | Low | No | Stylistic, not decisional |
| Salesforce Einstein lead scoring | Medium-High | Likely | May influence service access |
| AI note-taking in client meetings | Medium | Context-dependent | If summaries influence decisions |
| Custom GPT for loan analysis | High | Yes | Credit decision (§ 6-1-1701(4)) |
| AI browser extension summarizing applications | High | Yes | Influences application decisions |
| Notion AI for project management | Low | No | Internal operations |
| AI-powered tenant screening app | High | Yes | Housing decision (§ 6-1-1701(4)) |
The critical takeaway: any shadow AI that touches hiring, credit, insurance, healthcare, housing, education, or legal services decisions requires full SB 24-205 compliance — impact assessment, consumer disclosure, bias monitoring, and record retention. An employee using an unauthorized tool doesn't reduce your obligations; it increases your risk.
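The classification logic in the table above can be sketched as a simple triage helper. This is an illustrative sketch only, not legal advice: the domain names and the `is_high_risk` function are assumptions based on the decision categories this article describes, not an official SB 24-205 test.

```python
# Hypothetical triage helper for shadow AI use cases. The domain list mirrors
# the consequential-decision categories discussed above; it is illustrative,
# not a statutory definition.

CONSEQUENTIAL_DOMAINS = {
    "employment", "credit", "insurance", "healthcare",
    "housing", "education", "legal_services",
}

def is_high_risk(influences_decision: bool, domain) -> bool:
    """A shadow AI use is high-risk when it makes or substantially
    influences a consequential decision in an enumerated domain."""
    return influences_decision and domain in CONSEQUENTIAL_DOMAINS

# Examples drawn from the table above:
assert is_high_risk(True, "employment")       # ChatGPT resume screening
assert not is_high_risk(False, None)          # drafting marketing copy
assert is_high_risk(True, "credit")           # custom GPT loan analysis
```

In practice the hard part is the first argument — deciding whether a tool "substantially influences" a decision — which is why borderline rows in the table (lead scoring, meeting notes) are marked context-dependent.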
How to Discover and Inventory Shadow AI
You can't govern what you can't see. Shadow AI discovery requires a multi-layered approach:
Technical Discovery
- Network traffic analysis — Monitor DNS queries and HTTP requests to known AI service endpoints (api.openai.com, api.anthropic.com, gemini.google.com, etc.). CASB (Cloud Access Security Broker) tools like Netskope, Zscaler, and Microsoft Defender for Cloud Apps can identify AI service usage.
- Browser extension audit — Enumerate browser extensions across managed devices. Flag extensions with AI capabilities for compliance review.
- SaaS discovery — Use SaaS management platforms (Productiv, Zylo, Torii) to identify AI features enabled in existing subscriptions. Cross-reference with your AI system inventory to identify unregistered capabilities.
- Endpoint monitoring — Monitor installed applications on managed devices for AI-powered tools. MDM solutions can detect unauthorized app installations.
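The network-traffic step above can be approximated even without a CASB. Here is a minimal sketch that flags DNS query log lines pointing at known AI service endpoints; the endpoint list and the whitespace-delimited log format are assumptions you would adapt to your resolver's actual export.

```python
# Minimal sketch: flag DNS queries to known AI service endpoints.
# Endpoint list and log format are assumptions — extend the list as new
# services appear, and adjust parsing to your resolver's export format.

AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "gemini.google.com",
    "chat.openai.com",
}

def flag_ai_queries(dns_log_lines):
    """Yield (timestamp, host) pairs for queries to known AI endpoints.
    Expects lines like: '2025-06-01T09:14:02Z ws-12 api.openai.com'."""
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_ENDPOINTS:
            yield parts[0], parts[2]

log = [
    "2025-06-01T09:14:02Z ws-12 api.openai.com",
    "2025-06-01T09:14:05Z ws-12 example.com",
]
print(list(flag_ai_queries(log)))
# → [('2025-06-01T09:14:02Z', 'api.openai.com')]
```

A script like this surfaces *that* AI services are being reached, not *what* they are used for — pairing the hits with the department surveys below is what turns detections into an inventory.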
Process Discovery
- Department surveys — Ask each department to self-report AI tool usage. Make it non-punitive; the goal is visibility, not discipline. Frame it as compliance preparation, not surveillance.
- Workflow audits — Review decision-making workflows in high-risk departments (HR, lending, claims, legal) and identify where AI tools could be or are being used informally.
- Procurement review — Examine credit card statements and expense reports for AI subscriptions purchased outside of IT procurement.
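The procurement-review step lends itself to simple automation once expense data is exported. The sketch below scans expense rows for AI vendor names; the vendor list and record fields are assumptions for the example, not a complete registry.

```python
# Illustrative sketch: find AI subscriptions purchased outside IT procurement
# by matching expense-report merchants against known AI vendor names.
# Vendor list and record format are assumptions for this example.

AI_VENDORS = ["openai", "anthropic", "grammarly", "otter.ai", "jasper"]

def find_ai_subscriptions(expense_rows):
    """Return rows whose merchant field matches a known AI vendor."""
    hits = []
    for row in expense_rows:
        merchant = row["merchant"].lower()
        if any(vendor in merchant for vendor in AI_VENDORS):
            hits.append(row)
    return hits

expenses = [
    {"employee": "jdoe", "merchant": "OpenAI, LLC", "amount": 20.00},
    {"employee": "asmith", "merchant": "Office Depot", "amount": 45.10},
]
print(find_ai_subscriptions(expenses))
# → [{'employee': 'jdoe', 'merchant': 'OpenAI, LLC', 'amount': 20.0}]
```

Substring matching will miss resellers and produce occasional false positives, so treat the output as a lead list for the human-driven process reviews, not a definitive inventory.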
Ongoing Monitoring
Discovery isn't a one-time event. New AI tools launch weekly. Existing tools add AI features quarterly. Establish a continuous discovery program that combines automated technical monitoring with periodic human-driven process reviews. Integrate shadow AI discovery into your quarterly compliance review cycle.
Building a Shadow AI Governance Framework
Discovery without governance is just documentation of your liability. Once you've identified shadow AI, you need a framework to manage it:
Tier 1: Acceptable Use Policy
Publish a clear AI acceptable use policy that defines: which AI tools are approved for use, which categories of business decisions may not involve unapproved AI tools, the process for requesting approval of new AI tools, and the consequences of violating the policy. Distribute the policy to all employees and require acknowledgment. Update it quarterly as the AI landscape evolves.
Tier 2: Rapid Assessment Process
When employees identify AI tools they want to use, make the approval process fast enough that it doesn't incentivize going around it. Create a lightweight assessment questionnaire: Does this tool influence consequential decisions? What data does it process? Where is the data stored? Can we audit its outputs? A 48-hour turnaround for low-risk tools and a 2-week process for high-risk tools strikes the right balance.
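The two-track process above can be encoded directly in an intake form. This is a minimal sketch under assumed field names: the questionnaire answers beyond the decision-influence question are recorded for the assessment file, but routing turns on that single SB 24-205 trigger.

```python
# Sketch of the rapid-assessment triage described above: route a tool request
# to the 48-hour fast track or the 2-week high-risk review. Field names and
# track labels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ToolRequest:
    name: str
    influences_consequential_decision: bool  # the key SB 24-205 trigger
    data_location: str                       # recorded for the assessment file
    outputs_auditable: bool                  # recorded for the assessment file

def review_track(req: ToolRequest) -> str:
    """High-risk requests get the 2-week deep review; everything else
    gets the 48-hour fast track."""
    if req.influences_consequential_decision:
        return "2-week high-risk review"
    return "48-hour fast track"

print(review_track(ToolRequest("Notion AI", False, "US cloud", True)))
# → 48-hour fast track
print(review_track(ToolRequest("Resume screener", True, "unknown", False)))
# → 2-week high-risk review
```

Keeping the fast track genuinely fast is the point: if the approved path is slower than just opening ChatGPT, employees will route around it.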
Tier 3: Technical Controls
For high-risk environments (lending, underwriting, HR), consider technical controls: block unapproved AI services at the network level, restrict browser extensions to approved lists, disable AI features in SaaS tools until compliance review is complete. Technical controls should complement — not replace — policy and culture.
Tier 4: Amnesty and Integration
The shadow AI you discover through this process is shadow AI your employees found valuable enough to adopt on their own. Don't just block it — evaluate whether to formally adopt, replace with a compliant alternative, or genuinely prohibit it. Shadow AI discovered through an amnesty-style inventory process often reveals genuine workflow needs that your approved tool stack doesn't meet.
CO-AIMS includes a shadow AI discovery and governance module that tracks your AI system inventory, flags unregistered tools, and provides rapid assessment workflows for new AI tool requests. Every tool in your inventory — whether formally procured or discovered through shadow AI auditing — feeds into your impact assessments and compliance evidence. See CO-AIMS Enterprise for shadow AI governance capabilities.
Frequently Asked Questions
What is shadow AI?
Shadow AI is the use of AI tools and features by employees without formal IT or compliance approval. It includes general-purpose LLMs like ChatGPT, AI features embedded in existing SaaS tools (Salesforce Einstein, HubSpot AI), browser extensions with AI capabilities, and custom AI workflows built by individual employees. Under SB 24-205, any shadow AI instance that influences a consequential decision creates compliance liability for the organization.
Is employee use of ChatGPT a compliance risk?
It depends on what they use it for. An employee using ChatGPT for brainstorming or drafting internal emails is low-risk. An employee using ChatGPT to evaluate job candidates, analyze loan applications, summarize insurance claims, or make any decision that affects a Colorado consumer's access to employment, credit, insurance, healthcare, housing, education, or legal services has created an unregistered high-risk AI system under SB 24-205 — with full deployer obligations.
How do you detect shadow AI in your organization?
Use a multi-layered approach: technical discovery (network traffic analysis to AI service endpoints, browser extension audits, SaaS feature discovery, endpoint monitoring), process discovery (department surveys, workflow audits, expense report review for AI subscriptions), and ongoing monitoring (continuous network monitoring, quarterly process reviews, integration into your compliance review cycle). CASB tools like Netskope and Zscaler can automate much of the technical detection.
How should companies create an AI acceptable use policy?
An effective AI acceptable use policy defines approved AI tools, prohibits use of unapproved AI for consequential decisions, establishes a fast approval process for new tools (48 hours for low-risk, 2 weeks for high-risk), requires employees to acknowledge the policy, and is updated quarterly. Pair the policy with technical controls in high-risk departments and an amnesty program for discovering existing shadow AI usage without punishing early disclosure.
Automate Your Colorado AI Compliance
CO-AIMS handles bias audits, impact assessments, consumer disclosures, and evidence bundles — so you can focus on your business.