The rules and regulations around AI can feel as bewildering as some of the wild hallucinations that large language models spit out.
Roughly 100 state laws and proposed rules have emerged over the past couple of years to fill the void left by the lack of a federal standard. This month, President Trump signed an executive order that he said would simplify things by giving the federal government oversight and by rolling back the patchwork of state rules.
The federal framework for AI, however, is still a work in progress. And observers say that Trump’s executive order will almost certainly face legal challenges.
For businesses seeking guidance and predictability as they craft their AI plans, 2026 seems unlikely to bring relief.
“What this means for my clients, both the AI innovators and the Fortune 500 companies trying to adopt AI, is even more uncertainty,” says Danny Tobey, chair of law firm DLA Piper’s Americas AI and data analytics practice.
The lack of clarity stems in large part from a broader tension between the federal government and state governors and legislatures over who controls how AI companies develop their technologies, which are quickly remaking work, reshaping how companies hire, and raising societal questions about privacy and consumer protections. Also looming large over any attempt to regulate AI is the technology’s growing influence on the nation’s stock market. Many of the largest companies by market capitalization, including Nvidia, Alphabet, Microsoft, Amazon, and Meta Platforms, have seen their valuations soar on investor enthusiasm as AI adoption accelerates.
Mel Walker, the data and AI practice leader for accounting firm CohnReznick, says that the federal government has moved too slowly to establish guidelines for AI, but that the hesitancy is likely in part the result of how quickly the technology is developing. The scattered regional oversight, led by states like California and Colorado, has become difficult for businesses to track.
“We have to make sure it is not too onerous or cumbersome for the business owners to be able to comply, or we’re going to stifle the innovation altogether,” says Walker.
She says there’s been a noticeable uptick in conversations among government and private sector officials in the wake of last month’s disclosure by AI startup Anthropic that it thwarted a large-scale AI cyberattack, likely carried out by a Chinese state-sponsored group. Anthropic CEO Dario Amodei has called for more regulation of AI. “I think it’s caused a lot of excitement in the area—meaning urgency—because of the nature of what happened with Anthropic,” says Walker. “We’re going to continue to see this make headlines until we in the U.S. make a decision on how we want to handle this regulation.”
States that have been more aggressive in regulating AI include New York, which requires employers to disclose AI’s role in layoffs, and California, where a law signed in September by Gov. Gavin Newsom requires some AI developers to disclose their safety protocols and offers protections for potential AI whistleblowers.
“We know California has a lot of great tech companies,” says Wende Knapp, employment and labor practice leader at law firm Woods Oviatt Gilman. “I think you can continue to see that [AI is] going to be tightly monitored. And I think other states will follow from a data privacy perspective.”
Some governors have signaled that they aren’t willing to cede their role in AI oversight to the executive branch. “An executive order doesn’t/can’t preempt state legislative action,” wrote Florida Gov. Ron DeSantis on X. Utah Gov. Spencer Cox told NPR last month that he was “very worried about any type of federal incursion into states’ abilities to regulate AI.”
Regulation is in flux not only in the U.S. but also in Europe, which is reportedly considering changes that would weaken the AI Act signed into law last year.
As business leaders wait for greater clarity from regulators, most chief information officers and chief technology officers have relied on two frameworks to guide their AI policies. The ISO 42001 standard guides international companies that want to comply with the European Union’s AI Act, while firms operating only in the U.S. tend to use the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
“All CIOs and CTOs that I talk to from these large, public companies, they are all basically using either NIST or ISO 42001 as their baseline framework,” says Bhavesh Vadhani, a partner at CohnReznick. “If they do that, chances are they’ll be able to satisfy most—if not all—requirements by the states.”
“We have a tempest going on about AI regulation, but the smart companies are nonetheless building safety, transparency, and trust by design into their AI, because there will always be ways for people to get at those harms, regardless of whether we have AI-specific legislation,” says Tobey.
Burkhard Boeckem, CTO at industrial technology company Hexagon, has advocated for stricter boundaries and regulations to oversee “physical AI,” which includes the Stockholm-based company’s own efforts to develop humanoid robots. “Physical AI must have higher standards, because you see real-world consequences if something goes wrong,” says Boeckem.
For most of the technology solutions that Hexagon develops, the approach is to meet the most stringent regulatory requirements, so that everything can easily be sold internationally. But in the case of AI, where technology is rapidly evolving, Hexagon may make exceptions and develop broader AI capabilities for the U.S. market than those allowed in Europe.
“Ultimately, I can only imagine that such a fragmented approach slows the whole industry down,” says Boeckem.