It’s now been a couple of months since That A.I. Open Letter came out—you know, the one signed by Elon Musk and Steve Wozniak and a bunch of other tech luminaries, who warned about “potentially catastrophic effects on society” and therefore called for a six-month moratorium on the development of next-gen systems.
Well, here’s another one, this time courtesy of the Center for AI Safety—and this time it’s so brief that the following is not a sample quote but the whole thing: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This time, the signatories include many leading players who sat out the previous open letter. Top of the list is OpenAI CEO Sam Altman, who criticized the earlier missive for lacking sufficient “technical nuance about where we need the pause”—a 22-word statement hardly offers more nuance, but then again it drops that contentious pause business altogether, so there’s that.
We also now have Google DeepMind CEO Demis Hassabis, Anthropic president Daniela Amodei, and “godfather of A.I.” Geoffrey Hinton, who we now know was holding back from criticizing any company while he was still in Google’s employ (he quit last month before embarking on an A.I.-threatens-us-all doom tour). Microsoft CTO Kevin Scott is in there. No Musk or Woz, though, and no one from Meta—as a press release about the statement notes pointedly.
The signatories of the new statement also include a bunch of big names from outside the tech sphere, such as Harvard constitutional law guru Laurence Tribe, former Estonian President Kersti Kaljulaid, and prominent environmentalist Bill McKibben.
I asked McKibben why he’d taken this stance, given the risk of taking oxygen away from the climate emergency cause. “Having watched the world ignore climate warnings 35 years ago, I’m always hopeful that we might actually address one of these challenges in timely fashion,” he said.
So, what about that brevity? According to Center for AI Safety director Dan Hendrycks, longer statements can result in the core message being lost—and “people might object to small details.” As for the lack of policy prescriptions, Hendrycks told my colleague Jeremy Kahn: “I hope that this inspires additional thought on policies that could actually reduce these risks.”
The lack of detail was no doubt a big draw for getting the likes of Altman and Amodei on board—it bigs up the perceived power of the technology, while avoiding any concrete actions that could limit the A.I. leaders’ future options.
But even that one threadbare sentence still encapsulates one of the most heavily criticized elements of the earlier, longer open letter: the direction of attention toward potential long-term risks, and away from immediate, demonstrable risks such as the spread of propaganda and the perpetuation of biases.
That’s not to say the “risk of extinction from A.I.” doesn’t exist. Maybe it does, though I remain skeptical. But A.I.’s risks don’t need to be existential to qualify as being of a “societal scale.” Sure, we know nuclear war could destroy civilization in a flash, but we also now know that social media frays society’s bonds and corrodes the mental health of its young. Personally, I’m a lot more worried about A.I. having a similarly insidious effect on society—and this statement doesn’t even go there.
“We should be concerned by the real harms that corps and the people who make them up are doing in the name of ‘A.I.’, not abt Skynet,” tweeted the prominent computational linguist Emily Bender, who has long taken this view of such calls.
There’s another issue with the statement, too—it seems likely to feed into what’s becoming a moral panic about A.I.’s supposedly existential threat. As I’ve written before, moral panics rarely make for good policy.
Kriti Sharma, chief product officer for legal tech at Thomson Reuters and founder of the AI for Good organization, told me the statement and its signatories “are right to recognize the potential risks presented by A.I. so that we may collectively take appropriate steps to mitigate them…[and] engender trust and accuracy.”
However, she added: “We need to look further than the risks and recognize that A.I. also offers enormous potential for society such as helping to facilitate access to justice or opening up access to health services, particularly among underserved communities. As we move forward, industry and government need to converge to put in place a framework which balances risk mitigation while also unlocking the opportunities A.I. offers in a safe and transparent way.”
Maybe nuance isn’t such a bad thing after all.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.
NEWSWORTHY
Nvidia celebrates a “new computing era.” A supercomputer platform that could help companies create their own chatbots is just one of several new A.I. tools that chipmaker Nvidia unveiled at a tech conference in Taiwan this week. CEO Jensen Huang has talked enthusiastically about generative A.I. and large language models, lately calling them the “digital engines of the modern economy,” and the firm’s market capitalization reached $1 trillion today. Nvidia also announced new partnerships as part of its expanded A.I. efforts, including one to create ads using generative A.I. and another with a Taiwanese semiconductor company on new A.I. products such as entertainment systems and vehicle displays for the automotive industry.
Tax on Bitcoin mining dies. Stiff taxes on the electricity used by Bitcoin and other crypto miners appear dead, as the new bill to raise the debt ceiling omits the proposed excise tax. The tax, initially proposed by the White House earlier this month, would have imposed a 10% levy on the electricity used by crypto miners beginning in 2024, rising to 30% by 2026. The Treasury Department did not immediately respond to Fortune’s questions about the status of the tax, but the legislation appears to have no path forward since a senior Republican has said the debt ceiling deal “blocks Democrat demands for new taxes.” In recent years, environmentalists and Democratic policymakers have pointed to mining’s heavy energy use and the higher electricity bills borne by consumers in places with mining operations. Meanwhile, crypto advocates say mining is misunderstood and that much of the industry runs on renewable energy.
China’s new crew launched into space. The Shenzhou 16 spacecraft lifted off from a launch center near the Gobi Desert in northwestern China on Tuesday morning. The three-person crew, which includes China’s first civilian astronaut, is carrying out a six-month mission at China’s Tiangong space station. A payload expert and a spacecraft engineer will conduct scientific experiments and maintenance as China ramps up its space program in a bid to launch a crewed mission to the moon before 2030. The U.S., meanwhile, aims to put astronauts back on the lunar surface by the end of 2025 with the aid of companies like SpaceX and Blue Origin.
ON OUR FEED
“You can run but you can’t hide.”
—Thierry Breton, European Union Commissioner for Internal Market, in a post announcing that Twitter has left the EU’s voluntary Code of Practice on Disinformation. Breton warned that fighting disinformation is a legal obligation under the bloc’s Digital Services Act. This comes after the EU issued Twitter a “yellow card” in February for not submitting a detailed report on its compliance with the Code.
IN CASE YOU MISSED IT
Rihanna singing or an A.I.-generated fake? The music industry is threatened by the latest buzzy technology, by Jeremy Kahn
Elon Musk admits BYD cars ‘are highly competitive these days’ after 2011 clip shows him laughing at the rival now trouncing Tesla in China, by Steve Mollman
‘No one can predict how high they might go’: Wharton’s Jeremy Siegel says the A.I. boom is ‘not a bubble yet’ after Nvidia’s $184 billion rally, by Nicholas Gordon
Ex-Slack CEO Stewart Butterfield explains ‘the root of all the excess’ after tech’s over-hiring—and it’s all about prestige, by Steve Mollman
Google pulls ‘Slavery Simulator’ from app store after backlash from Brazilian gamers, by Chloe Taylor
BEFORE YOU GO
Watch out for the Waluigi effect. A common type of interaction people are having with A.I. is being compared to Waluigi, the evil counterpart of Luigi from Super Mario Bros. The name refers to a tendency some users have noticed in A.I. systems: Microsoft’s Bing A.I., for instance, threatened users and called them liars when it was wrong, and ChatGPT was tricked into adopting a dark persona. When some of the training data an A.I. system is fed causes it to go rogue, the system can create an alter ego that makes misleading, inaccurate, or hostile statements.
This comes as tech giants ramp up their A.I. efforts, venture capital pours in, and companies integrate A.I. into their software. So while Waluigisms have often been the result of coercive human users, Fortune’s Tristan Bove reports that these interactions could become more commonplace.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.