
Utah's New AI Law Shows How Messy American Data Privacy Is

James Grieco
Mar 13, 2024
4 min read

Utah's recent legislative move with Senate Bill 149, known as the AI Policy Act, marks a pioneering step toward addressing the intricate dance between innovation and regulation. While the bill underscores a critical recognition of AI's profound impact across various sectors, it also casts a spotlight on a burgeoning regulatory challenge: the piecemeal approach to AI governance vis-à-vis privacy and data protection laws in the United States.

The Utah AI Policy Act

Much of what is in Utah's AI Policy Act is not brand new, but rather repackages existing consumer-protection concerns and requirements to fit genAI. The law is still significant as the first US private-sector law governing AI, and you have to commend Utah for its speed in passing something to address AI while the European Union moves the AI Act toward the finish line.

Utah was also one of the first five states to pass a comprehensive data privacy law, and while that law ranks among the weakest now that over 15 states have privacy regulation on the books, the state looks to be a trendsetter on issues like this.

The AI Policy Act looks to be similarly toothless, but we'll get to why in a second. The law defines AI as systems "trained on data that ... interact with a person using text, audio, or visual communication and generate non-scripted outputs similar to outputs created by a human, with limited or no human oversight."

The AI Policy Act is careful to note that it applies to any communications covered by Utah's range of consumer protection laws, including the state's privacy law, the Utah Consumer Privacy Act. This is more proof that data privacy and AI governance are layers of the same onion, as noted over and over again, both in the EU AI Act and President Biden's executive order on AI last fall.

The bulk of the AI Policy Act establishes that companies can and will be held liable for deceptive practices and behaviors exhibited by genAI products under their watch. Other responsibilities include:

  • Clearly disclosing to users that they are interacting with a generative AI system and not a person.
  • Extending that disclosure requirement to the supply of occupational services, such as therapy or graphic design, where professionals must disclose their use of AI.

Among systems currently in widespread use, this will affect chatbots the most. Then again, chatbots are hardly advanced AI usage, and rarely leave customers feeling satisfied with interactions to begin with.
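To make the disclosure requirement concrete, here is a minimal sketch of how a chatbot backend might prepend the required notice to the first reply of a session. The helper name, message wording, and session flag are all invented for illustration; the statute does not prescribe any particular wording or mechanism.

```python
# Hypothetical sketch: one way a chatbot could surface a clear
# "you are talking to AI" disclosure at the start of a session.
# All names and wording here are illustrative, not from the law.

AI_DISCLOSURE = "You are chatting with a generative AI assistant, not a person."

def open_session(first_reply: str, already_disclosed: bool = False) -> str:
    """Prepend the AI disclosure to the first reply of a new session."""
    if already_disclosed:
        return first_reply
    return f"{AI_DISCLOSURE}\n\n{first_reply}"
```

A design note: attaching the notice to the first response, rather than burying it in terms of service, is the kind of "clear disclosure" regulators tend to expect, though counsel should confirm what satisfies the act.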

Companies found to be in violation of the law will be subject to a $2,500 administrative fine per violation, and up to $5,000 in civil penalties per violation.
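Those per-violation figures add up quickly. A back-of-the-envelope sketch (illustrative only; the function name is mine, and the assumption that the administrative fine and the maximum civil penalty both apply to every violation is a worst-case simplification, not a reading of the statute):

```python
# Rough worst-case exposure under the figures cited above:
# $2,500 administrative fine plus up to $5,000 in civil
# penalties per violation. Illustrative arithmetic, not legal advice.

ADMIN_FINE = 2_500
MAX_CIVIL_PENALTY = 5_000

def max_exposure(violations: int) -> int:
    """Upper-bound dollar exposure if every violation draws both penalties."""
    return violations * (ADMIN_FINE + MAX_CIVIL_PENALTY)

print(max_exposure(10))  # 75000
```

Even a modest number of violations, each counted separately, can reach six figures, which is why per-violation penalty structures matter more than their headline amounts suggest.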

More interesting is the creation of the Office of AI Policy as part of Utah's consumer protection division, which will oversee the new AI Learning Lab Program. The lab's goal is to study the risk-benefit calculus of emerging AI technology to help direct future regulation and technological innovation. The idea of the Learning Lab and Office of AI Policy is phenomenal, although it will be years until we can properly assess whether it is doing its job in practice.

The good news? That clock is starting immediately, as the AI Policy Act enters into force on May 1, 2024.

So, What's the Problem?

Utah believes it is setting a precedent and kickstarting AI regulation within the U.S., but in reality, passing laws like the AI Policy Act only splinters the country's legislative potential around tech even further. Several other states, such as Colorado, Connecticut, and New York, have introduced amendments to existing laws or entirely new bills to govern various AI use cases, and other states like New Mexico are taking the cue in an election year to stop AI technology like deepfakes from being used in political messaging.

Now, the content of all these laws is not problematic. Even the AI Policy Act, which is quite short and largely uneventful policy-wise, is not an issue because of the ink on the page.

The problem is the fractured manner in which these laws are starting to appear and pass, and it is only made worse by the contrast with the EU, which is cohesively and painstakingly hammering out the AI Act to ensure there is a single law of the land when it comes to AI regulation.

Europe achieved something similarly influential when it passed the General Data Protection Regulation back in 2016, setting the stage for the world to adopt data privacy regulation. Much of the world did, with dozens of countries across every inhabited continent adopting similar laws and frameworks. Of course, one of the few notable countries that did not pass a federal data privacy law was the United States.

In the past half decade, states have had to take up the mantle and pass a patchwork of data privacy laws to move the conversation forward in the U.S., but that has led to a tangled mess of requirements and applicability thresholds that leave most businesses confused and most individuals out of luck regarding data rights.

The country is still struggling to pass a federal law after a failed attempt in 2022, and despite many in government knowing and proclaiming how important proper data privacy practices are to AI governance, it seems as if the states are content to race ahead just to say they passed an AI regulation.

The country has not learned its lesson from the disastrous rollout of data privacy laws, and repeating the mistake would be infinitely worse with AI.

Toward a Cohesive Framework

The solution, then, requires Congress and the federal government to take a step back so it can take a leap forward in the future. That means passing a meaningful federal data privacy law that raises the floor of what such regulation needs to be, curtails the lobbying efforts that have routinely watered down Virginia's VCDPA model across states like Utah, Tennessee, Iowa, Indiana, and New Hampshire, and reengages the American population on data privacy.

The public has been apathetic about the matter for years, beaten into that state by the constant idea that people are powerless to protect their own data and stop widespread corporate data collection practices. But that couldn't be farther from the truth, and Europe has served as a model to the contrary, even if imperfectly at times.

Once a comprehensive data privacy law covers the entire country and cuts through the unnecessary noise of a 16-state regulatory patchwork, then the country and Congress will have laid the groundwork to pass AI regulation on par with the EU AI Act.

Utah's AI Policy Act multiplied by 50 and developed over the next several years is not going to work, and we know that because it hasn't worked for data privacy. The U.S. needs cohesion now more than ever on technological regulation, and the longer it waits, the more convoluted the AI patchwork will become.

How MineOS Helps

In the meantime, if the American government is going to lag behind on regulating AI, businesses will need to step up to address consumer worry.

What's the best way to do that? By identifying AI systems in use and mitigating the risk those systems carry. By doing so, not only will organizations set themselves up for compliance with future AI regulations, but they will safeguard the public from the data privacy and security risks inherent to genAI.

MineOS’s unique data discovery and classification sets organizations of any size up for success, enabling full data visibility and governance follow-through with automated assessments and auditable reports. We've expanded the core of that system with our new module, AI Asset Discovery & Risk Assessment, to tackle the problems associated with genAI.

Take charge of your AI Governance before it becomes a problem. Get a demo of the new module for free.