For Businesses in the United States, AI Governance Will Soon Become a Compliance Imperative

As the technological landscape evolves, so too do the regulatory frameworks designed to ensure ethical and responsible use of emerging technologies. The recent adoption of the EU Artificial Intelligence Act has set a global benchmark for risk-based AI regulation, fundamentally changing how businesses approach AI governance. With its phased implementation, beginning with bans on prohibited AI practices and expanding into comprehensive governance obligations by 2026, this legal framework signals that AI governance is not merely a best practice; it is poised to become a compliance necessity for businesses, particularly in the United States.

Colorado’s Landmark Law Will Have Ripple Effects

In May 2024, Colorado made history by becoming the first state to enact a sweeping AI regulation—the Colorado Artificial Intelligence Act. This landmark legislation is modeled partly on the EU framework and imposes strict obligations on the developers and deployers of high-risk AI tools, especially those involved in critical sectors such as employment, healthcare, housing, and lending. Key requirements include conducting impact assessments, establishing risk management programs, ensuring transparency, and maintaining human oversight throughout the AI decision-making process.

Although the law was initially set to take effect in February 2026, its enforcement has been postponed to June 2026 in response to industry pushback. Nevertheless, Colorado's approach has prompted a wave of similar legislative proposals in other states, such as California and Illinois, and has even inspired discussion in New Hampshire's legislative sessions.

Momentum Is Building in State Legislatures

Following Colorado's trailblazing efforts, state legislatures across the U.S. are taking significant strides toward regulating AI. California, for example, has enacted several laws aimed at increasing transparency and safety in AI development. The Transparency in Frontier Artificial Intelligence Act requires developers to disclose specific information about their advanced AI models, while laws focused on consumer protection and chatbot safety are set to take effect in 2026.

Illinois and New York City are focusing their legislative efforts on regulating AI in employment. New laws require that job applicants be notified of, or consent to, the use of AI tools in the hiring process, and mandate audits of automated employment decisions to ensure fairness. Meanwhile, broad-based privacy laws in states such as New Hampshire underscore the ethical considerations in automated decision-making across various sectors, including employment.

New Hampshire has yet to adopt a comprehensive AI governance law; instead, it has focused on narrower issues. Current laws prohibit state agencies from using AI for real-time biometric surveillance and discriminatory profiling without a warrant, in addition to restricting specific applications of generative AI, like deepfakes.

Federal Legislation and Executive Orders

On the federal level, comprehensive AI legislation remains a moving target. Instead of establishing a formal legislative framework, the current landscape is primarily shaped by executive actions. In early 2025, President Trump signed Executive Order 14179, which emphasized removing barriers to American leadership in AI, thereby repealing several safety-focused initiatives. This trend continues with a leaked draft of an executive order from November 2025, indicating a possible preemption of state AI laws. The draft expressed concern over a fragmented regulatory environment that could hurt American competitiveness, proposing the creation of a federal AI task force and tying federal funding to state compliance with national regulations. This tension symbolizes the ongoing debate between federal oversight and states’ rights, a narrative that will significantly impact AI governance in 2026 and beyond.

Businesses Should Start Now To Prepare for Compliance

Given the rapidly evolving legislative landscape surrounding AI, businesses should not wait for formal regulations to take effect before preparing for compliance. Taking proactive measures will help organizations not only fulfill upcoming legal obligations but also position themselves competitively in a landscape increasingly governed by AI.

1. Conduct an AI Use Assessment: Begin by inventorying all AI tools currently in use and evaluating additional AI technologies that could benefit the organization.

2. Establish an AI Governance Framework: Form a cross-functional AI governance team that includes representatives from various business units, along with legal and technology advisors who specialize in AI compliance. It’s crucial to develop written policies that not only align with existing regulations but also anticipate emerging standards, such as those set forth in the EU AI Act and the AI Risk Management Framework outlined by the National Institute of Standards and Technology.

3. Integrate AI into Operations: Make the transition from theory to practice by operationalizing AI use. This can involve rigorous testing, prototyping, and implementing AI systems in production environments. Ensure that due diligence is upheld and that contracts with vendors explicitly address their AI usage, safeguarding against potential liabilities.

As AI continues to permeate various industries, the urgency for compliance and ethical governance will only increase. With a multitude of AI-related bills introduced across the nation and global frameworks establishing rigorous compliance standards, American businesses must act decisively now to retain their competitive edge. Ignoring the momentum of legislative changes may result in being unprepared for a future where AI governance is not only a best practice but a legal requirement.
