
Has Colorado Set the Stage for AI Regulation in America?

Regulations
James Grieco
Jun 4, 2024
7 min read

The United States has moved more slowly than Europe and much of the world in regulating the internet and tech, but one state has emerged as a must-follow on all things data privacy and protection, and it isn’t California.

California is the high-profile name that tends to jumpstart conversations, but Colorado has arguably been the most interesting state to track, thanks to its speed, forward-thinking mindset, ability to get laws passed, and suitability as a model for other states to copy.

Colorado was the third state to pass comprehensive data privacy legislation, trailing only California and Virginia. This time around, it holds the distinction of being the first state to pass a comprehensive law on artificial intelligence (a narrow Utah law notwithstanding).

That, combined with major progress on children’s online privacy and biometric data thus far in 2024, has established Colorado as a leader in tech regulation.

Colorado AI Act Background & Overview

Last summer, in response to the rapid rise of ChatGPT and other AI systems, the Future of Privacy Forum joined forces with a wide array of American lawmakers to educate them on the topic and help coordinate a multi-state approach. Representatives from Colorado and Connecticut (the fourth state to implement data privacy regulation) led that charge, resulting in comprehensive AI bills advancing in both legislatures this spring.

Connecticut’s bill died after the Governor threatened to veto it, but the Colorado AI Act was signed into law.

The law is a toned-down version of the EU AI Act, but it carries enough weight to set a meaningful precedent for other states (for comparison, Colorado’s AI Act runs 26 pages, whereas the EU AI Act initially came in at over 170).

Colorado’s law takes a similar risk-based approach, focusing squarely on high-risk systems and on requirements for both developers and deployers.

A high-risk system is defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”

A consequential decision is one that has a material legal or similarly significant effect on the provision or denial to an individual of any of the following:

  • education enrollment or an education opportunity; 
  • employment or an employment opportunity; 
  • a financial or lending service; 
  • an essential government service; 
  • healthcare services; 
  • housing; 
  • insurance; or 
  • a legal service.

That definition is extremely relevant to organizations across those industries, as algorithms have been doing this precise work in many sectors for years. It is no surprise, then, that the Colorado AI Act is also highly concerned with preventing algorithmic discrimination.
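As a rough illustration of how that scoping might be operationalized, here is a minimal Python sketch of a first-pass triage check. The domain list mirrors the statute’s enumeration, but the function and data structure are illustrative assumptions, not anything the Act prescribes, and no substitute for legal review.

```python
# Hypothetical first-pass triage: flag systems that may be "high risk" under
# the Colorado AI Act because they are a substantial factor in a consequential
# decision. The domains below mirror the statute's enumerated list; everything
# else is an illustrative assumption.
CONSEQUENTIAL_DOMAINS = {
    "education",    # education enrollment or an education opportunity
    "employment",   # employment or an employment opportunity
    "financial",    # a financial or lending service
    "government",   # an essential government service
    "healthcare",   # healthcare services
    "housing",
    "insurance",
    "legal",        # a legal service
}

def may_be_high_risk(decision_domain: str, substantial_factor: bool) -> bool:
    """True if the system warrants high-risk review under the Act's definition."""
    return substantial_factor and decision_domain in CONSEQUENTIAL_DOMAINS

# Example: a resume-screening model that materially shapes hiring decisions.
print(may_be_high_risk("employment", substantial_factor=True))  # True
print(may_be_high_risk("marketing", substantial_factor=True))   # False
```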

The bill notes that developers (and deployers) have a duty to protect consumers from algorithmic discrimination, which is defined as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”

This duty of care also means developers must provide deployers with extensive documentation covering, among other things, how the system should work and its possible risks. Developers must also report incidents to the state Attorney General and to deployers within 90 days of an adverse event occurring, while deployers are responsible for explaining adverse algorithmic decisions directly to the affected consumer.

In short, a high level of transparency is required throughout the process, from disclosing that a high-risk system played a part in the decision to explaining how it may have worked against an individual.

For consumers, much of the Colorado AI Act takes shape behind the scenes, and many will likely not feel comforted or protected when told they’ve been denied a loan or a rental application because of AI. That is what makes this regulation so noteworthy: it will serve as the ultimate litmus test of whether change can be driven at the organizational level rather than through a publicly backed, grassroots movement.

Colorado AI Act Business Obligations

Deployers must also implement mandatory AI governance programs that align with NIST’s AI Risk Management Framework or ISO/IEC 42001 (the two standards specifically called out in the law, or any future equivalent) and complete yearly impact assessments, with an additional assessment required within 90 days of any “substantial” modification to a high-risk AI system.
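To make those cadences concrete, below is a minimal sketch assuming an annual assessment cycle plus a 90-day window after any substantial modification. The class and field names are hypothetical, invented for illustration; the statute does not prescribe any particular tooling.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical deadline tracker for Colorado AI Act impact assessments:
# assumed due yearly, and within 90 days of any "substantial" modification
# to a high-risk system.
ANNUAL_CADENCE = timedelta(days=365)
POST_MODIFICATION_WINDOW = timedelta(days=90)

@dataclass
class HighRiskSystem:
    name: str
    last_assessment: date
    modifications: list[date] = field(default_factory=list)

    def next_assessment_due(self) -> date:
        """Earlier of the annual deadline and any 90-day post-modification deadline."""
        deadlines = [self.last_assessment + ANNUAL_CADENCE]
        deadlines += [
            mod + POST_MODIFICATION_WINDOW
            for mod in self.modifications
            if mod > self.last_assessment  # only changes not yet covered by an assessment
        ]
        return min(deadlines)

# Example: a lending model substantially modified after its last assessment.
model = HighRiskSystem("credit-scoring-v2", last_assessment=date(2026, 3, 1))
model.modifications.append(date(2026, 6, 15))
print(model.next_assessment_due())  # 2026-09-13, 90 days after the modification
```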

All of this is not meant to pile onto companies for deploying complex and evolving technology. Organizations are protected by a “rebuttable presumption” that they exercised reasonable care if they are found to be in compliance with the Act’s requirements.

This wording, along with Attorney General rulemaking and a long runway before the law takes effect on February 1, 2026, allows Colorado to keep tweaking the law if necessary rather than crystallizing its language around where AI technology stands today. That should give the Act a durable basis for AI governance and compliance, and it is part of the reason the law makes such a strong model for other states to borrow.

Colorado Privacy Act Amendments: Children's Privacy and Biometrics

Beyond potentially setting the model for AI regulation, Colorado also amended its data privacy law, the Colorado Privacy Act (CPA), this year to bolster protections for children’s privacy and biometric data.

Biometric data has increasingly been treated as sensitive data as more and more states pass comprehensive privacy laws, and Colorado has again taken things a step further by amending the CPA to include protections for biometric identifiers and data similar in scope to Illinois’s influential BIPA (one of the few privacy laws to strike fear into the hearts of American companies).

These biometric protections will extend to employees, making Colorado just the second state after California to regulate employee data. There is a distinction: California regulates all employee data, while Colorado will regulate only biometric employee data. Still, it is a meaningful amendment that will hopefully curtail bad corporate behavior such as White Castle requiring employees to clock in with their fingerprints (the practice at the center of a major BIPA lawsuit).

The amendment also leans into retention limits: organizations must delete biometric data within two years if it is no longer in use. That best practice aligns with the data retention limit just passed in Minnesota’s data privacy law, further evidence that progressive advancements in American data privacy are spreading across states.

The same trend shows up in the CPA’s children’s privacy amendments, which now require controllers to use “reasonable care” to avoid any “heightened risk of harm to minors.” Businesses offering services or products that fall within this category must also complete additional data protection impact assessments to account for that heightened risk.

New data controller prohibitions around children’s data include:

  • Using a system design feature to significantly increase, sustain, or extend a minor's use of the service, product, or feature; or
  • Collecting a minor's precise geolocation.

These additional requirements converge on more stringent, careful handling of minors’ data, a welcome sight and a rising floor for any federal legislation that modernizes the Children’s Online Privacy Protection Act.

For all of these data privacy victories to happen within a single year in a single state is remarkable. Colorado has cemented itself as a leader in data privacy and AI governance not only in the US but globally, a sentence that would have drawn plenty of dubious side glances if said aloud just three years ago (the CPA passed in July 2021).

The home of the Mile High City has set the bar, and now it’s time to see how the rest of the country responds.

How to Prepare for the Colorado AI Act

With the EU AI Act officially finalized as of a few weeks ago, AI governance operations will be key for privacy and compliance programs going forward. The passage of the Colorado AI Act only adds to the urgency, which means organizations need a good starting point.

What is that starting point? AI Asset Discovery and Risk Assessment.

If you can’t answer basic questions about your organization’s AI usage, you won’t be able to manage it or mitigate its risks.
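As a purely illustrative sketch (not the MineOS API, and with every name invented for the example), an inventory capable of answering those basic questions might start as small as this:

```python
# Hypothetical starting point for an AI asset inventory. The goal is simply to
# answer basic questions: what AI systems exist, who owns them, and which ones
# touch consequential decisions.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    owner: str
    vendor: str                    # e.g. "in-house", "OpenAI"
    purpose: str
    consequential_decision: bool   # feeds a decision in a covered domain?

inventory = [
    AIAsset("support-chatbot", "CX team", "OpenAI", "customer support", False),
    AIAsset("credit-scoring-v2", "Risk team", "in-house", "loan underwriting", True),
]

# A first risk cut: anything touching consequential decisions goes to review.
needs_review = [asset.name for asset in inventory if asset.consequential_decision]
print(needs_review)  # ['credit-scoring-v2']
```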

Get a personalized tour of MineOS’s new AI Asset Discovery and Risk Assessment module to put your organization ahead of the regulatory curve.