As businesses add buzzy artificial intelligence tools across their systems, experts are warning of an approaching storm of regulation that could dampen expected efficiency gains from the technology.
The Trump administration is preparing an “Artificial Intelligence Action Plan” that will lay out how the U.S. will try to “dominate” the artificial intelligence race with China through a series of new policies. The initiative is still at the early stage of requesting public comment about “concrete AI policy actions” related to a list of topics including hardware and chips, cybersecurity, privacy, and export controls.
The existing U.S. regulatory landscape is best described as a patchwork quilt riddled with holes, not a blanket policy. There are no cohesive federal regulations. Instead, a handful of states are leading the charge. Three states have AI regulations that explicitly apply to the private sector and nine others have legislation under consideration. But some of the more local rules have a potential national impact. For example, Texas has banned state workers and contractors from using DeepSeek, a buzzy open-source AI model developed by a startup based in China that has raised national security concerns.
Businesses looking to cash in on open-source AI in particular could be on the regulatory hook if they use the tools haphazardly, leaking sensitive information or taking on other supply chain risks.
Unlike the European Union’s General Data Protection Regulation (GDPR) or the Artificial Intelligence Act, the U.S. does not “have a uniform AI policy,” said Adnan Masood, chief AI architect at the technology services company UST. Without one, it can be difficult to advise companies on how best to implement AI without stepping on a regulatory landmine in areas such as privacy.
What’s more, plans by state leaders like California Gov. Gavin Newsom to limit Trump’s ambitions, largely through their own regulations, could further complicate matters for businesses. State and federal laws can often be at odds, particularly when it comes to policies about emerging technology.
“States will lead the charge, primarily in the interest of driving federal action. I fear this will lead to a more complex patchwork than we’ve seen with consumer privacy protections, in a significantly more complex domain,” said Casey Bleeker, CEO of generative AI security firm SurePath AI.
Predicting the unpredictable
It’s an open question how generative AI will impact an increasingly complex and interconnected digital society. While lauded as broadly beneficial by some, AI is also prone to frequent hallucinations and biases, which are prompting states to quickly draw up rules to prevent harm. Because of those complications, creating easy-to-understand policies will be difficult, experts note.
“Instead of broad and consistent protections, most organizations will encounter acts that are poorly technically defined,” Bleeker said. “Unlike consumer privacy protections that could easily label types of data that cannot be shared without consumer approval—AI regulation is a complex landscape and is not just privacy focused.”
Bleeker said this could lead to businesses trying to guess at how policies will be applied or penalties enforced. For C-suite leaders, “the best offense is an affirmative defense,” Bleeker said. Businesses should leave an auditable trail of how their organization is using AI, from software-as-a-service tools to any private models, so they can quickly reference it when new rules or regulations are enacted.
Otherwise, businesses risk civil damages or fines. What’s more, the risk of insider threats increases without systems in place that explicitly lay out which data and tools, including generative AI tools, employees can access.
Unapproved use of AI can also lead to a company accidentally leaking some of its intellectual property, or to employees accessing privileged information they otherwise wouldn’t be able to see, said Bleeker.
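What such an auditable trail of AI usage might look like will differ from one organization to the next. Purely as an illustration, and with all file names, fields, and values being hypothetical, a minimal sketch of an append-only usage log that records who used which tool or model, when, for what purpose, and with what categories of data might look something like this:

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("ai_usage_audit.jsonl")  # hypothetical append-only log file


def record_ai_usage(user: str, tool: str, model: str, purpose: str,
                    data_categories: list[str]) -> str:
    """Append a single AI-usage event to the audit log and return its ID.

    Each entry captures who used which tool/model, when, for what purpose,
    and what categories of data were involved -- the kind of trail that can
    be referenced later if a new rule or regulation is enacted.
    """
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,                        # e.g. a SaaS assistant or an internal model
        "model": model,
        "purpose": purpose,
        "data_categories": data_categories,  # e.g. ["customer_pii", "source_code"]
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["event_id"]


# Example: log that an analyst used a hosted chatbot to summarize a contract.
if __name__ == "__main__":
    record_ai_usage(
        user="analyst@example.com",
        tool="hosted-chat-saas",
        model="vendor-model-v1",
        purpose="summarize supplier contract",
        data_categories=["contract_text"],
    )
```

The point of such a record is less the specific format than that it exists at all: an organization that can show when and how AI touched its data is better placed to answer a regulator, or an auditor, after the fact.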
Eran Barak, co-founder and CEO of the data loss prevention firm MIND, said that discussion about federal AI legislation is driven partly by fears that a patchwork of rules will slow AI development amid the ongoing competition between the U.S. and China. In the absence of federal law, businesses must look to state laws as a guide.
“Businesses must be prepared to comply with the strictest state AI laws that are passed and expect those to come from California or New York given their history with strict privacy mandates,” Barak said.
For heavily regulated industries like finance or healthcare, businesses should focus on data lineage, or the ability to track the flow of data over time, said UST’s Masood. They should also have controls letting them set the level of access given to individual employees. In choosing language model service providers, they should seek ones that conduct regular audits of their compliance with any regulatory requirements, can respond to incidents like cyberattacks, and keep continuous data logging and monitoring tools in place.
“So if you have not vetted your model for data collection, for profiling, for what kind of information it uses, you are liable,” Masood said.
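How granular those access controls and lineage records need to be will depend on the industry and its regulators. Purely as an illustration, with every role name and data tag below being hypothetical, one way to gate which categories of data a given role may send to an external model, while recording the lineage of each request, could be sketched like this:

```python
from dataclasses import dataclass, field

# Hypothetical mapping of roles to the data categories they may send to an AI tool.
ROLE_PERMISSIONS = {
    "claims_analyst": {"claim_notes"},
    "compliance_officer": {"claim_notes", "customer_pii"},
}


@dataclass
class LineageRecord:
    """One hop in a data-lineage trail: where the data came from and where it went."""
    source: str
    destination: str
    data_category: str


@dataclass
class LineageLog:
    records: list = field(default_factory=list)

    def add(self, source: str, destination: str, data_category: str) -> None:
        self.records.append(LineageRecord(source, destination, data_category))


def check_access(role: str, data_category: str) -> bool:
    """Return True only if the role is allowed to send this category of data."""
    return data_category in ROLE_PERMISSIONS.get(role, set())


def send_to_model(role: str, data_category: str, payload: str,
                  lineage: LineageLog) -> str:
    """Gate a request to an external model behind a per-role access check,
    recording the lineage of the data if the request is allowed."""
    if not check_access(role, data_category):
        raise PermissionError(f"{role} may not send {data_category} to the model")
    lineage.add(source="internal_db", destination="external_llm",
                data_category=data_category)
    # A real implementation would call the provider's API here; this is a stub.
    return f"model response to: {payload[:30]}..."


if __name__ == "__main__":
    log = LineageLog()
    print(send_to_model("claims_analyst", "claim_notes", "Summarize claim #1234...", log))
    print(len(log.records), "lineage record(s) captured")
```

This is a sketch, not a compliance program; the access rules, data categories, and logging targets would have to reflect whatever the applicable state or sector rules actually require.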
However, even if a business is not required to follow certain regulations, misusing AI could still damage its brand or reputation. It’s similar to the early days of cybersecurity, when companies were afraid to report a cyberattack or ransomware extortion for fear of bad public relations.
And failures can be unexpectedly catastrophic, as when a faulty update from cybersecurity firm CrowdStrike bricked Microsoft Windows devices worldwide last year. The glitch took down businesses of all kinds, including airlines.
Given that risk, AI should be treated internally as a potential threat to a company’s supply chain, the same mentality businesses should adopt for any software they use.
In some cases, the AI may be low-risk, such as a recommendation engine that merely serves up bad movie suggestions. In others, it could be high-risk, such as a tool that sifts through job applications to find the best candidates but errantly relies on racial bias to decide who makes the cut.
While companies operating in Europe have relatively clear guidelines, it’s a different case in the U.S. That means having to follow a two-tier strategy, according to Masood.
“If you’re in the EU, follow the EU Act. If you’re in the U.S., do the right thing,” he said.