Well, how about that—on the same day that China unveiled its strict new rules for artificial intelligence safety, the U.S. government moved forward with its own, more cautious push to keep A.I. accountable.
While Beijing’s rules are characteristically draconian, imposing censorship on both the inputs and outputs of generative A.I. models, the U.S. National Telecommunications and Information Administration (NTIA) has merely launched a request for comment on new rules that might be needed to ensure A.I. systems safely do what their vendors promise.
Here’s NTIA administrator Alan Davidson: “Responsible A.I. systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them. Our inquiry will inform policies to support A.I. audits, risk and safety assessments, certifications, and other tools that can create earned trust in A.I. systems.”
There are some similarities between what the NTIA is tentatively envisioning and what China’s Cyberspace Administration just dictated—though the methods seem quite different. Most notably, the Chinese rules demand that companies submit their models for official security review before they start serving the public, while the NTIA’s request for comment outlines ideas such as independent third-party audits, which could be encouraged through bounties and subsidies.
Both China and the U.S. want to battle bias in A.I. systems, but again, Beijing just orders A.I. companies not to allow their systems to be discriminatory, while the NTIA document talks about more nuanced tactics, like the use of procurement standards.
If you want to share your thoughts with the agency, you’ll find the necessary forms here. The deadline is June 10, by which point U.S. officials will also have a better idea of what Europe’s A.I. rules might end up looking like.
The EU’s A.I. Act was first proposed a couple of years back, but a lot has happened in that time—the European Commission’s original proposal didn’t think chatbots would need regulating; insert wry chuckle here—so lawmakers are now trying to bring it up to date. Two weeks from today, the European Parliament’s committees dealing with the bill will vote on the general shape of the version they’d like to see. By the time the full Parliament votes on the bill next month, more details will need to have been worked out. Then it goes to backroom “trilogue” negotiations with the Commission and representatives of the EU’s member states.
All this painstaking democratic wrangling is a far cry from China’s simple imposition of A.I. rules, but hopefully, the result will be somewhat friendlier to the companies providing such systems, and the citizens who want to get a straight answer from them.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.
NEWSWORTHY
Are Spotify and Apple enabling A.I. song ripoffs? Streaming platforms like Spotify and Apple Music are caught in the middle of an emerging battle between copyright holders and A.I. technology that analyzes music to create new, A.I.-generated songs. Universal Music Group, which controls about a third of the music market, sent a letter to Spotify and Apple in March, demanding that the streaming platforms block A.I. services from scraping their copyrighted songs. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to streaming services in emails viewed by the Financial Times.
Americans would find workarounds to a TikTok ban. From the White House to Montana, there's talk of banning TikTok. But the reality is that it's impossible to completely block the app in the U.S. since users can resort to a variety of clever technical workarounds. And attempts to outlaw the app through legislation like the RESTRICT Act are raising concerns about the impact on personal freedoms, with privacy and free speech watchdogs warning about the dangers of overly broad rules.
OpenAI’s call for bounty hunters. As part of its “commitment to secure A.I.,” OpenAI is paying people who find vulnerabilities in ChatGPT. Users can sign up for the program on Bugcrowd, which shows that 14 vulnerabilities have been identified so far, with an average payout of $1,287.50. More than 500 people have already signed up for the program, which has rules of engagement and can land them on the “hall of fame” list for successfully identifying the most pressing issues.
ON OUR FEED
“We are not putting our journalism on platforms that have demonstrated an interest in undermining our credibility and the public’s understanding of our editorial independence.”
—National Public Radio CEO John Lansing announcing that the organization will quit Twitter after the platform labeled it state-affiliated media.
IN CASE YOU MISSED IT
The making of Binance’s CZ: An exclusive look at the forces that shaped crypto’s most powerful founder, by Jeff John Roberts and Yvonne Lau
I’ve been my friends’ favorite Dungeon Master for 2 years now and I gave ChatGPT-fueled Dungeons & Dragons a try. I’m not threatened, by Brian Childs
Elon Musk paints over the ‘W’ in Twitter sign at San Francisco headquarters after apparent row with landlord, by Eleanor Pringle
FBI and FCC warn Americans over ‘juice jacking’ at public phone charging stations: ‘Don’t let a free USB charge drain your bank account’, by Chloe Taylor
Twitter’s former trio of top execs sue Elon Musk’s company for not paying their legal bills, by Christiaan Hetzner
BEFORE YOU GO
NYPD's robo-dog will patrol the streets again. High-tech policing devices like a GPS tracker for stolen cars, a cone-shaped security robot, and a robotic dog are coming to New York. The 70-pound, remote-controlled Digidog will be used for high-risk situations like hostage standoffs. The city deployed the robo-dog in 2020 but pulled back after criticism that it was dystopian. But on Tuesday, Mayor Eric Adams, a former police officer, declared that “Digidog is out of the pound.”
The canine droid, expected to be in use this summer, still has plenty of critics. The Surveillance Technology Oversight Project said the “NYPD is turning bad science fiction into terrible policing. New York deserves real safety, not a knockoff RoboCop.”
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.