Colorado Artificial Intelligence Law: Deployer and Developer Definitions

Melissa Krasnow | June 17, 2024

The Colorado artificial intelligence (AI) law ("Colorado AI law") will take effect on February 1, 2026. This article discusses the deployer and developer definitions under the Colorado AI law, as well as the deployer risk management policy and program requirements and the deployer impact assessment requirements.

Deployer and Developer Definitions

A deployer means a person doing business in Colorado that deploys a high-risk AI system. C.R.S. § 6-1-1701(6). Deploy means to use a high-risk AI system. C.R.S. § 6-1-1701(5). A high-risk AI system means any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. C.R.S. § 6-1-1701(9)(a).

A high-risk AI system does not include (i) an AI system that is intended to (a) perform a narrow procedural task, or (b) detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review, or (ii) the following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision:

(a) antifraud technology that does not use facial recognition technology;
(b) antimalware;
(c) antivirus;
(d) AI-enabled video games;
(e) calculators;
(f) cybersecurity;
(g) databases;
(h) data storage;
(i) firewall;
(j) Internet domain registration;
(k) Internet website loading;
(l) networking;
(m) spam- and robocall-filtering;
(n) spell-checking;
(o) spreadsheets;
(p) Web caching;
(q) Web hosting or any similar technology; or
(r) technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.

C.R.S. § 6-1-1701(9)(b).
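To make the structure of this definition concrete, the following minimal Python sketch encodes the high-risk determination as this article reads it. The profile attributes and function are this article's illustrative assumptions, not statutory terms, and any real classification requires legal analysis of the statute itself.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical attributes of an AI system under review (illustrative only)."""
    affects_consequential_decision: bool  # makes, or is a substantial factor in making, one
    narrow_procedural_task: bool = False  # intended purpose, C.R.S. § 6-1-1701(9)(b)(i)(a)
    pattern_detection_with_human_review: bool = False  # C.R.S. § 6-1-1701(9)(b)(i)(b)
    enumerated_technology: bool = False   # e.g., antivirus, spreadsheets, spell-checking

def is_high_risk(system: AISystemProfile) -> bool:
    """Sketch of C.R.S. § 6-1-1701(9): high-risk unless an exclusion applies."""
    # The intended-purpose exclusions apply even to systems that touch
    # consequential decisions.
    if system.narrow_procedural_task or system.pattern_detection_with_human_review:
        return False
    # The enumerated technologies are excluded only while they do not make,
    # and are not a substantial factor in making, a consequential decision.
    if system.enumerated_technology and not system.affects_consequential_decision:
        return False
    return system.affects_consequential_decision

# Example: a resume-screening tool that is a substantial factor in hiring decisions
print(is_high_risk(AISystemProfile(affects_consequential_decision=True)))  # True
```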

An AI system means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations that can influence physical or virtual environments. C.R.S. § 6-1-1701(2).

A consequential decision means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, (a) education enrollment or an education opportunity, (b) employment or an employment opportunity, (c) a financial or lending service, (d) an essential government service, (e) healthcare services, (f) housing, (g) insurance, or (h) a legal service. C.R.S. § 6-1-1701(3).

A substantial factor means a factor that (i) assists in making a consequential decision, (ii) is capable of altering the outcome of a consequential decision, and (iii) is generated by an AI system and includes any use of an AI system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision concerning the consumer. C.R.S. § 6-1-1701(11).

Consumer means an individual who is a Colorado resident. C.R.S. § 6-1-1701(4).

Healthcare services has the same meaning as provided in 42 U.S.C. § 234(d)(2). C.R.S. § 6-1-1701(8).

A developer means a person doing business in Colorado that develops or intentionally and substantially modifies an AI system. C.R.S. § 6-1-1701(7). Intentional and substantial modification means a deliberate change made to an AI system that results in any new reasonably foreseeable risk of algorithmic discrimination. C.R.S. § 6-1-1701(10)(a).

Intentional and substantial modification does not include a change made to a high-risk AI system, or the performance of a high-risk AI system, if:

(i) the high-risk AI system continues to learn after the high-risk AI system is (a) offered, sold, leased, licensed, given, or otherwise made available to a deployer, or (b) deployed;
(ii) the change is made to the high-risk AI system as a result of any learning described in C.R.S. § 6-1-1701(10)(b)(i);
(iii) the change was predetermined by the deployer, or a third party contracted by the deployer, when the deployer or third party completed an initial impact assessment of such high-risk AI system pursuant to C.R.S. § 6-1-1703(3); and
(iv) the change is included in technical documentation for the high-risk AI system. C.R.S. § 6-1-1701(10)(b).
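Because the statute joins these four conditions with "and," all four appear to be required for a change to fall outside the definition. The brief Python sketch below illustrates that conjunctive reading; the parameter names are this article's shorthand, not statutory language.

```python
def modification_is_excluded(system_continues_learning: bool,
                             change_results_from_learning: bool,
                             change_predetermined_in_initial_assessment: bool,
                             change_in_technical_documentation: bool) -> bool:
    """Sketch of C.R.S. § 6-1-1701(10)(b): all four conditions must hold
    for a change to fall outside 'intentional and substantial modification.'"""
    return (system_continues_learning
            and change_results_from_learning
            and change_predetermined_in_initial_assessment
            and change_in_technical_documentation)
```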

Algorithmic discrimination means any condition in which the use of an AI system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of Colorado or federal law. C.R.S. § 6-1-1701(1)(a). Algorithmic discrimination does not include (i) the offer, license, or use of a high-risk AI system by a developer or deployer for the sole purpose of (a) the developer's or deployer's self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law, or (b) expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination, or (ii) an act or omission by or on behalf of a private club or other establishment that is not, in fact, open to the public, as set forth in Title II of the federal "Civil Rights Act of 1964," 42 U.S.C. § 2000a(e), as amended. C.R.S. § 6-1-1701(1)(b).

Deployer Risk Management Policy and Program Requirements

On and after February 1, 2026, and except as provided in C.R.S. § 6-1-1703(6), a deployer of a high-risk AI system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk AI system, which must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. C.R.S. § 6-1-1703(2)(a). The risk management policy and program must be an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk AI system. C.R.S. § 6-1-1703(2)(a).

Such risk management policy and program must be reasonable considering:

(i) (a) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for AI systems, if the standards are substantially equivalent to or more stringent than the requirements of the Colorado AI law, or (b) any risk management framework for AI systems that the Colorado attorney general, in the attorney general's discretion, may designate;
(ii) the size and complexity of the deployer;
(iii) the nature and scope of the high-risk AI systems deployed by the deployer, including the intended uses of the high-risk AI systems; and
(iv) the sensitivity and volume of data processed in connection with the high-risk AI systems deployed by the deployer. C.R.S. § 6-1-1703(2)(a).

Such risk management policy and program may cover multiple high-risk AI systems deployed by the deployer. C.R.S. § 6-1-1703(2)(b).

Deployer Impact Assessment Requirements

Except as provided in C.R.S. § 6-1-1703(3)(d), (3)(e), and (6): (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk AI system on or after February 1, 2026, shall complete an impact assessment for the high-risk AI system, and (ii) on and after February 1, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk AI system at least annually and within 90 days after any intentional and substantial modification to the high-risk AI system is made available. C.R.S. § 6-1-1703(3)(a).
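Read together, these triggers suggest two running clocks: an annual one and a 90-day one following each intentional and substantial modification. The Python sketch below illustrates one plausible way to track the earlier of the two deadlines; the function and its approximation of "annually" as 365 days are this article's illustrative assumptions, not statutory text.

```python
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        modification_available: Optional[date] = None) -> date:
    """Illustrative reading of C.R.S. § 6-1-1703(3)(a): an impact assessment
    is due at least annually, and within 90 days after an intentional and
    substantial modification is made available."""
    annual_deadline = last_assessment + timedelta(days=365)  # "annually" approximated as 365 days
    if modification_available is not None:
        return min(annual_deadline, modification_available + timedelta(days=90))
    return annual_deadline

# Example: annual assessment completed March 1, 2026;
# modification made available June 1, 2026
print(next_assessment_due(date(2026, 3, 1), date(2026, 6, 1)))  # 2026-08-30
```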

Such impact assessment must include, at a minimum, and to the extent reasonably known by or available to the deployer:

(i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk AI system;
(ii) an analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;
(iii) a description of the categories of data the high-risk AI system processes as inputs and the outputs the high-risk AI system produces;
(iv) if the deployer used data to customize the high-risk AI system, an overview of the categories of data the deployer used to customize the high-risk AI system;
(v) any metrics used to evaluate the performance and known limitations of the high-risk AI system;
(vi) a description of any transparency measures taken concerning the high-risk AI system, including any measures taken to disclose to a consumer that the high-risk AI system is in use when the high-risk AI system is in use; and
(vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk AI system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk AI system. C.R.S. § 6-1-1703(3)(b).
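As a practical matter, a deployer's compliance team might capture these minimum contents in a structured record. The Python sketch below is one hypothetical shape for such a record; the field names are this article's shorthand for items (i) through (vii), not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessmentRecord:
    """Hypothetical record of the minimum contents of an impact assessment
    under C.R.S. § 6-1-1703(3)(b); field names are illustrative shorthand."""
    purpose_use_cases_context_benefits: str         # item (i)
    discrimination_risk_analysis: str               # item (ii)
    input_and_output_data_categories: str           # item (iii)
    customization_data_categories: str              # item (iv), if the deployer customized the system
    performance_metrics_and_limitations: str        # item (v)
    transparency_measures: str                      # item (vi)
    post_deployment_monitoring_and_safeguards: str  # item (vii)
    completed_on: str = ""  # e.g., an ISO date supporting the 3-year retention requirement
```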

In addition, an impact assessment completed following an intentional and substantial modification to a high-risk AI system on or after February 1, 2026, must include a statement disclosing the extent to which the high-risk AI system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk AI system. C.R.S. § 6-1-1703(3)(c).

A single impact assessment may address a comparable set of high-risk AI systems deployed by a deployer. C.R.S. § 6-1-1703(3)(d). If a deployer, or a third party contracted by the deployer, completes an impact assessment to comply with another applicable law or regulation, that impact assessment satisfies the requirements of the Colorado AI law if it is reasonably similar in scope and effect to the impact assessment that would otherwise be completed under the Colorado AI law. C.R.S. § 6-1-1703(3)(e).

A deployer shall maintain the most recently completed impact assessment for a high-risk AI system as required hereunder, all records concerning each impact assessment, and all prior impact assessments, if any, for at least 3 years following the final deployment of the high-risk AI system. C.R.S. § 6-1-1703(3)(f).

On or before February 1, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk AI system deployed by the deployer to ensure that the high-risk AI system is not causing algorithmic discrimination. C.R.S. § 6-1-1703(3)(g).


Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.