Think about your last online purchase, a recent job application, or even scrolling through your social media feed. Artificial intelligence (AI) was likely part of that interaction. It curates recommendations, sorts through resumes, and decides what content we see online. As helpful as that sounds, it also raises important questions about fairness, privacy, and ethics.

Take job applications, for example. An AI screening system may unintentionally favor certain candidates over others due to biases inherent in the data used to train it. 

These scenarios are already happening every day, and they impact real lives.

At CU Denver Business School, the conversation around how to use AI responsibly isn’t new. Alfred Ortiz, Senior Clinical Instructor of Information Systems, has actively brought this discussion into the classroom, encouraging students to critically examine how businesses can and should manage emerging technologies. Recently, Ortiz invited Colorado Senator Robert Rodriguez, the driving force behind a groundbreaking new law, to speak directly with his students. The discussion was timely, provocative, and eye-opening.

That law, Senate Bill 205, set to take effect in February 2026, will make Colorado the first state in the U.S. to formally regulate how businesses use high-risk AI. The bill is a pioneering step toward ensuring AI is used ethically and transparently in Colorado, and its implications will undoubtedly ripple across industries and educational institutions alike.

So, what does this mean for businesses, for the students soon to enter this changing landscape, and for everyday Coloradans? The answers aren’t simple, but the conversation is essential.

Behind the Code: Why Regulation Matters Today

Artificial intelligence already influences many aspects of American life, often without people even realizing it. Imagine submitting a resume online, only to have it instantly filtered out because the algorithm has learned biases from historical hiring data. Or think about pricing algorithms that subtly raise prices for certain users based on their online behavior, potentially reinforcing unfair economic divides.
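
To make that mechanism concrete, here is a minimal Python sketch with invented numbers and a deliberately simplistic scoring rule (no real screening product works this way): a model that scores applicants by historical hire rates will faithfully reproduce whatever pattern, fair or not, those records contain.

```python
# Hypothetical history: (attended_school_x, was_hired).
# Suppose past managers favored school X for reasons unrelated to performance.
history = ([(True, True)] * 40 + [(True, False)] * 10 +
           [(False, True)] * 10 + [(False, False)] * 40)

def hire_rate(records, school_x):
    """Fraction of past applicants in this group who were hired."""
    outcomes = [hired for sx, hired in records if sx == school_x]
    return sum(outcomes) / len(outcomes)

# A naive "model" scores new applicants by their group's historical hire rate.
print(f"Score if from school X: {hire_rate(history, True):.2f}")   # 0.80
print(f"Score otherwise:        {hire_rate(history, False):.2f}")  # 0.20

# Any screening threshold between 0.20 and 0.80 now filters out every
# applicant from outside school X: the bias in the data has become the rule.
```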

The risks go beyond unfair pricing or missed job opportunities; these patterns shape how opportunity and access are distributed. The reality is that, without clear oversight, systems intended to make decisions more fairly and efficiently can actually reinforce the very inequalities they were meant to address.

For instance, studies have found that facial recognition technology used in security or law enforcement settings can misidentify individuals, particularly people of color, with serious consequences. The National Institute of Standards and Technology (NIST) reported that some facial recognition algorithms were 10 to 100 times more likely to incorrectly identify Asian and African American faces than White faces in one-to-one matching tests, and the same study found particularly high false-positive rates for African American women in one-to-many searches. When these AI systems make decisions without transparency, people have no way to challenge outcomes or correct mistakes.
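
The arithmetic behind such findings is simple to sketch: compute the false-positive rate separately for each demographic group and compare. The counts below are invented for illustration and are not NIST’s data; the group labels and function names are hypothetical.

```python
# Each record: (group, predicted_match, true_match) from one-to-one
# verification attempts. All pairs below are true non-matches.
results = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True,  False),  # 1 false positive
    ("group_b", True,  False), ("group_b", True,  False),  # 2 false positives
    ("group_b", False, False), ("group_b", False, False),
]

def false_positive_rate(records, group):
    """Share of true non-matches the system wrongly declared a match."""
    non_matches = [pred for g, pred, truth in records
                   if g == group and not truth]
    return sum(non_matches) / len(non_matches)

for g in ("group_a", "group_b"):
    print(f"{g}: FPR = {false_positive_rate(results, g):.2f}")
# group_a: FPR = 0.25, group_b: FPR = 0.50 -- the same threshold, applied
# uniformly, produces very different error burdens across groups.
```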

Senate Bill 205 acknowledges these risks and, rather than hindering innovation, promotes transparency and fairness, particularly targeting what it identifies as “high-risk AI systems.”

Understanding these implications makes it easier to appreciate why this conversation and this legislation are so important. The law sets the stage for clearer rules and expectations, which will directly shape how businesses operate and how individuals interact with technology every day. With that context in mind, what does this groundbreaking law require?

Breaking Down the New AI Rules

Colorado’s Senate Bill 205 outlines a framework to make AI more transparent, accountable, and fair, particularly when its decisions have serious consequences for people’s lives.

First, it requires companies to conduct impact assessments for any high-risk AI systems they use. To put it simply, before a business rolls out a tool that may affect someone’s access to a job, housing, or healthcare, it must assess the potential consequences and document its efforts to mitigate bias and harm; one form such a check might take is sketched after this list.

Second, businesses must clearly inform people when an AI system is involved. This includes customer service chatbots and any other automated systems involved in decision-making. The goal is to prevent confusion and ensure that people understand when a machine is making or guiding decisions.

Third, when an AI tool significantly influences a decision that affects someone, such as rejecting a loan application or determining insurance pricing, the person impacted must be informed so they have a chance to understand what happened and, when possible, respond or request an appeal.

Lastly, developers of high-risk AI systems must publish public transparency statements that explain what the system does, how it was trained, and the steps taken to minimize harm. Such statements will make it easier for the public and regulators to understand and evaluate the technology behind the scenes.
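
Circling back to the first requirement: the bill does not prescribe any particular statistical test, but one check an impact assessment could include is the “four-fifths rule” ratio long used in U.S. employment-discrimination analysis. The data, group names, and 0.8 threshold below are illustrative assumptions, not language from the statute.

```python
# Hypothetical screening outcomes per group: (selected, total applicants).
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}

selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
reference = max(selection_rates.values())  # most-favored group's rate

for group, rate in selection_rates.items():
    ratio = rate / reference
    # Under the four-fifths rule of thumb, a ratio below 0.8 warrants review.
    status = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")
```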

While this law is intended to regulate the use of AI for the better, like any new policy, it has sparked a mix of support and concern.

Balancing Innovation and Oversight

Supporters of the bill say it lays the groundwork for building trust and will help restore fairness in systems that often feel opaque. They also see it as a step forward in protecting consumers and setting a precedent other states could follow.

For businesses, though, there are real concerns. Some worry that the definitions in the bill are too vague, making it difficult to determine which AI tools are considered “high-risk” and what must be disclosed. Others fear that the added requirements will slow down product development or innovation, especially for startups with fewer resources.

Still, many experts argue that the law opens the door for meaningful dialogue. It encourages businesses to be intentional about how they build and use AI.

“With a background in government and tech entrepreneurship, there is a balance that needs to be achieved with respect to legislating artificial intelligence,” said Ortiz. “AI is a technical disruptor in the marketplace and a competitive advantage for ecosystems/businesses that can apply AI technology.”

Ortiz noted that heavy government restrictions can erode competitive advantages, particularly for small businesses.

“Tech start-ups already operate in a very high-risk environment, so governing AI should be approached through a tiered approach, with SMEs, Medium, and Enterprise business levels considered accordingly,” Ortiz advised. “Business will, and should, move faster than government, as business seeks to achieve the goal, and ethical obligation, of profit generation for its shareholders.”

That’s why CU Denver Business School sees this as a teaching moment. For students preparing to enter the workforce, understanding both the promise and the pitfalls of AI is no longer optional. They are stepping into a business world where these questions are becoming part of everyday decisions, and the answers will require critical thinking, ethical reasoning, and adaptability.

A First Step, Not a Final Word

The rise of artificial intelligence brings huge possibilities, but also serious challenges. It’s changing how people work, connect, and make decisions. At times, it may feel overwhelming or even disruptive, but that’s part of every technological shift.

What matters is how we choose to respond. AI is a tool built by humans, which means humans have the power and the responsibility to shape it with intention. Setting clear guidelines, questioning how decisions are made, and creating safeguards can all help make this technology work better for everyone.

Eli Wood, team captain at Black Flag Design and a participant in the CU Denver classroom conversation with Senator Rodriguez, underscored the nuance behind implementation. Regulation can help mitigate bias and risk; however, if not implemented thoughtfully, it may push companies to disclose trade secrets or overwhelm smaller businesses with excessive requirements.

“As we develop AI-powered tools to track and report system changes,” Wood noted, “we have to continually ask whether these efforts are truly advancing transparency, or unintentionally stifling innovation.”

That tension doesn’t undermine the law’s importance. It shows just how high the stakes are and why critical reflection must continue as AI evolves.

Senate Bill 205 is a first attempt at that. The bill breaks new ground not only for Colorado but for the rest of the country, signaling that regulation can keep pace with innovation. The state has chosen to lead, and others will be watching.

For CU Denver Business School students and the broader business community, this is more than a case study. It’s a real-world example of how ethics, strategy, and policy are converging. It’s an invitation to think deeply about the future of business and the role we all play in shaping it.

In a world where machines make decisions, the public is left with an important question: “Who ensures those decisions are fair?”
