AI is lighting a fire under the data privacy debate, as President Biden’s new order makes clear

President Joe Biden holds a press conference with Prime Minister of Australia Anthony Albanese in the Rose Garden at the White House on October 25, 2023 in Washington, DC.
President Joe Biden signed an executive order on AI Monday, Oct. 30, 2023.
Drew Angerer—Getty Images

President Joe Biden today released his much-awaited executive order on the subject of AI. The headlines are justifiably about the order’s safety and security aspects (we have a story up on that here), but there’s also a fair amount in there about privacy and other civil liberties.

The U.S. lacks a comprehensive federal privacy law, with existing rules relating narrowly to either children (COPPA) or health information (HIPAA). Biden clearly doesn’t like this—in his State of the Union earlier this year, he identified data privacy as a rare opportunity for bipartisan legislation, mostly with a focus on protecting under-18s, but also featuring “stricter limits on the personal data that companies collect on all of us.”

Now the president is using AI-related risks to bolster his case and slowly move towards action. From today’s White House statement: “AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”

“To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids,” the statement continued.

A plea to a broken Congress is one thing, but Biden also directed a slew of actions around “privacy-preserving” technologies and techniques. There’s now going to be more federal support for their development, and federal agencies will be encouraged to use them, with guidelines being established to evaluate their effectiveness. There will also be an evaluation of how government agencies buy personally identifiable data from commercial sources such as data brokers and new guidance about avoiding “AI risks” when using it.

Biden’s White House has previously laid out concerns about AI and data privacy—a whole section in last November’s Blueprint for an AI Bill of Rights is devoted to it—but now it’s actually starting to do something about the issue. The bar may be low, but I’ve never seen a U.S. administration be so proactive on privacy, and I’m intrigued to see whether this momentum can be maintained or, hopefully, even increased.

Biden’s AI order is also proactive on other fronts, in ways that ought to help tackle both longer-term and more immediate risks. Among other things, federal agencies will have to: develop AI safety and security standards and evaluate risks to critical infrastructure; start figuring out how to better support workers who find their jobs displaced; create resources for schools that want to use AI for things like personalized tutoring; and coordinate better on identifying and ending AI-powered civil rights violations.

There are some responsibilities here for Big AI—companies will have to share “safety test results and other critical information” with the government, and give it a heads-up when training risky new models—but, so far, industry is mostly being left to get on with it. Biden has already gotten the big players to make voluntary commitments around AI safety, and the G7 today also released a code of conduct that is, again, voluntary.

The U.K. is also hosting its AI Safety Summit this week, so let’s see what comes out of that. Incidentally, a coalition of digital rights activists and trade unionists today issued a rebuke to Prime Minister Rishi Sunak, complaining that his event is shutting them out even though Sunak has acknowledged that the technology “will fundamentally alter the way we live, work, and relate to one another.”

More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

ON OUR FEED

$2 billion

—The upper limit of a new Google investment in Anthropic, as reported by the Wall Street Journal. Google invested $550 million in the AI darling earlier this year, before Amazon stepped in with an up-to-$4 billion investment.

IN CASE YOU MISSED IT

Wall Street is obsessed with AI. From the ‘new electricity’ to the next gold rush, here’s how top analysts see the tech revolution playing out, by Will Daniel

Airbnb cofounder Joe Gebbia raises $41 million for his startup building small, pre-fabricated houses that spun out of Airbnb in 2022, by Alexei Oreskovic

OpenAI seals deal for San Francisco office space after CEO Sam Altman calls remote work ‘experiment’ one of tech industry’s worst mistakes, by Steve Mollman

Infosys founder Narayana Murthy wants young workers to have a 70-hour work week—and thinks it should be a matter of national pride, by Lionel Lim

Mark Zuckerberg’s $46.5 billion loss on the metaverse is so huge it would be a Fortune 100 company—but his net worth is up even more than that, by Paolo Confino

Manhattan restaurant ‘Thai Food Near Me’ went viral a week before it even opened: ‘It’s exactly what I search for on Google’, by Irina Ivanova

BEFORE YOU GO

Papal Python skills. When one thinks of programming, one naturally thinks of Pope Francis, so it’s no surprise to see a new global initiative called “Code with Pope,” designed by Polish entrepreneur Miron Mironiuk as a way to encourage kids to get coding in Python.

Pope Francis has endorsed the scheme, and Mironiuk believes his involvement will convince 11-to-15-year-olds in Europe, Africa, and Latin America “to spend some time and use this opportunity to learn programming for free,” the BBC reports. Details here.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.