A first-of-its-kind AI bill moving through the Connecticut legislature went up in smoke this week after the state’s governor said he’d veto it if it reached his desk.
“I’m just not convinced that you want 50 states each doing their own thing. I’m not convinced you want Connecticut to be the first one to do it,” Gov. Ned Lamont told CT Insider, adding that the bill is too much too soon.
“I said, ‘Why don’t we take the lead, together, and work with all the other governors?’” he said. “If you don’t think the feds are going to take the lead on this, maybe the states should, but you shouldn’t have one state doing it. You should have us do this as a collective.”
Over the past few years, 18 states have passed narrow AI laws regulating automated employment decision tools, mandating that individuals be informed when they're interacting with an AI system, and taking other specific actions aimed at protecting individuals from the harmful impacts of unsafe or ineffective AI systems. The Connecticut bill—which passed in the State Senate but is unlikely to advance to the House after Lamont voiced his intention to veto it—is different because of how wide-reaching it is. The bill would introduce rules around the development and use of both general-purpose AI systems and models deemed high-risk, requiring developers to implement a risk management policy before releasing high-risk systems and to maintain technical documentation that must be kept and, in some cases, made public, along with a slew of other requirements. The bill would also prohibit the dissemination of certain synthetic images, set up an AI advisory council, provide AI training for workers, establish a confidential computing cluster, and much more.
The implosion of the Connecticut bill encapsulates many of the concerns and challenges around AI regulation. As flawed AI systems proliferate and quickly improving deepfakes threaten everything from individuals to businesses and elections, legislators are confronted every day with more reasons to act. At the same time, they’re wary of being first or being too bold. From the state level to the federal, the push and pull between moving too fast (and potentially stifling innovation) and not moving fast enough (leaving society open to foreseeable, widespread harm) is continuing to hinder AI legislation.
Indeed, while Lamont supports aspects of the bill, one of his main reasons for opposing it overall stems from concerns about hobbling businesses pursuing AI.
“I do worry if it’s too burdensome and regulatory, all the startups around AI won’t be in Connecticut. They’ll be in Georgia or Texas. And I don’t want that to happen,” he said. The state’s Department of Economic and Community Development similarly opposed the legislation during a recent public hearing, warning that some of the proposed regulations could hinder early-stage businesses.
Some industry stakeholders have voiced strong opposition, which has seemingly had an effect on the bill and its support. Officials from the Virginia-based Consumer Technology Association (CTA)—the trade association that puts on the annual CES conference in Las Vegas—submitted a three-page letter to Connecticut lawmakers last month calling the proposed legislation a threat to the industry that would put “significant new duties on developers and deployers of AI” and “would effectively mandate strict new compliance obligations that would reach far beyond Connecticut.” The organization’s VP and the letter’s author, Douglas Johnson, has since voiced further concerns about the “piecemeal approach” and called for policy at the federal level.
While tech executives’ warnings not to stifle innovation are typically made in their own self-interest, both the CTA officials and Lamont have a point. A state-by-state approach would be burdensome to companies from a compliance standpoint. And as we’ve seen with states’ piecemeal approach to data privacy legislation, state laws don’t make a lot of sense when we’re talking about technology that doesn’t start or end at state borders. The U.S.’s lack of a federal data privacy law—something that exists in the vast majority of countries around the world—has left Americans open to widespread and highly damaging data privacy violations. Countries across the globe are currently grappling with AI legislation, and the EU AI Act, the first comprehensive AI legislation to be passed, only made it past the finish line in March. So while the lack of federal legislation on AI doesn’t make the U.S. an outlier just yet, we’ve seen how this goes.
Lamont also acknowledged that these are vastly complicated issues and that the bill was moving too fast, as it would have needed to be pushed through in a matter of days after two weeks of intense rewrites. Instead, he voiced support for a smaller, pending bill that would criminalize distributing deceptive synthetic media in the 90 days before an election, as well as AI-created deepfake porn. It seems like a reasonable action all can agree on, but wouldn’t it be more effective if Congress just passed that law instead?
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
OpenAI is working on a web search feature for ChatGPT. That’s according to Bloomberg. The feature would allow users to ask ChatGPT questions and receive answers from the web with citations in response. One version also turns up images. ChatGPT currently offers information from the web to paying customers only and in a limited capacity. The new product appears to be an effort to expand on this capability—and compete with Google and unicorn AI search startup Perplexity AI, both of which are making inroads in positioning AI-powered search, rather than chatbots, as the way people access information with AI.
Google DeepMind and Isomorphic Labs unveil AI breakthrough for biology research. Google DeepMind and its spin-off Isomorphic Labs have created a new AI model they say can help predict both the structure and interaction of most molecules involved in biological processes. That includes proteins, DNA, and RNA, as well as some of the chemicals used to create new medicines—and it could present a giant leap for biological research. The companies are allowing researchers working on non-commercial projects to query the model for free. Isomorphic Labs is also already using the system internally to speed up the discovery of new drugs, which is one of the most highly anticipated use cases for generative AI. (Other research out today from BCG found that AI-discovered drugs in Phase I clinical trials have an 80-90% success rate, compared to the average 40-65% success rate of drugs discovered by humans.) You can read my colleague Jeremy Kahn’s coverage of the new model, called AlphaFold 3, here.
Stack Overflow users revolt against the company’s deal with OpenAI. That’s according to Tom’s Hardware. After the company signed a deal to let OpenAI scrape user posts to train ChatGPT, users have begun removing or editing their questions and answers to prevent them from being used for AI training. Now users who are participating in the protest are being suspended and banned en masse, according to the report. Stack Overflow is a longtime cornerstone for the developer community where users share coding knowledge and ask and answer questions. With all the interest in teaching LLMs to code, it makes sense that companies like OpenAI would want to get their hands on the data. But just like the writers, artists, musicians, and other creatives taking issue with AI companies training on their work, programmers aren’t okay with seeing their contributions used for profit without their consent. It doesn’t help that for years, Stack Overflow had a strict policy prohibiting the use of generative AI for contributing to the site.
Election officials undergo training to identify and respond to AI threats. It’s a fascinating and important read from the Washington Post, detailing an intensive, multi-day training seminar Arizona election officials recently underwent to prepare for the upcoming election. The attendees at the training in Phoenix studied AI-generated content that could be used to influence the election—from faked social media posts designed to suppress voter turnout to AI-generated voicemails supposedly from the secretary of state’s office telling them to keep polling locations open due to nonexistent court orders. Even the introductory video featuring Arizona Secretary of State Adrian Fontes that the officials watched to kick off the training, they later learned, was a very convincing deepfake. One official who completed the training called the simulations “mind-blowing” and unsettling. “By the end of the second day, you’re like: Trust no one,” they told the Post.
FORTUNE ON AI
The AI panic looks a lot like early criticism of electricity. We know how that turned out —Rachyl Jones
OpenAI is touting a new plan to protect creator works—here’s why it won’t actually resolve AI’s copyright crisis —Sharon Goldman
Is your company moving too slow, or too fast, on gen AI? —John Kell
Gen Zers could swipe millennials’ jobs if they have AI skills, LinkedIn and Microsoft data shows —Orianna Rosa Royle and Jane Thier
Data-driven tactics are great, but Liverpool FC’s real AI goal is to help fans get more kicks out of their content —Molly Flatt
Politicians and nonprofits will struggle to keep AI in check—but corporate boards can’t afford to fail —Jeffrey Savian (Commentary)
AI CALENDAR
May 14: Google I/O
May 21-23: Microsoft Build in Seattle
May 21-22: AI Seoul Summit in Seoul, South Korea
June 5: FedScoop’s FedTalks 2024 in Washington, D.C.
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
July 15-17: Fortune Brainstorm Tech in Park City, Utah (register here)
July 30-31: Fortune Brainstorm AI Singapore (register here)
Aug. 12-14: Ai4 2024 in Las Vegas
EYE ON AI NUMBERS
129%
That’s the increase in job postings for roles related to AI safety and compliance so far in 2024 compared to the same period last year, according to data ManpowerGroup shared with Eye on AI.
The workforce solutions company also found a 117% year-over-year increase in listings for prompt engineers. But overall, lead engineer and senior engineer roles in generative AI product development are the most sought-after AI positions, according to the firm. Anthropic is one of the top hirers for these roles, with a staggering 459% increase in its postings.