There have been a couple of developments in the last day or two regarding what is and isn’t permissible with AI. The first comes courtesy of the U.K.’s Supreme Court, and it’s a bitter blow to those who want to use AI to invent things.
There’s an outfit called the Artificial Inventor Project that’s been trying—through a series of test cases—to get countries to recognize that AI-derived inventions can be patented. The cases hinge on inventions made by an AI called DABUS (“Device for Autonomous Bootstrapping of Unified Sentience”) that was created by Missouri-based inventor Stephen Thaler. Specifically, Thaler claims DABUS has come up with a novel kind of food container that increases rather than stifles heat transfer, and a new kind of “neural flame” light source for a flashing beacon, all by itself.
Thaler and his colleagues hit the end of the British road yesterday, following years of rejection by the U.K. Intellectual Property Office and then the courts. The Supreme Court unanimously dismissed Thaler’s appeal, essentially because DABUS is not a natural person and, legally speaking, only people can invent things. No inventor, no patent.
The Artificial Inventor Project’s efforts have not gone completely unrewarded—DABUS’s food container and beacon were granted patents in South Africa in 2021. But patent offices and courts in the U.S., Australia, and Taiwan have all definitively rejected the patent applications. Appeals are pending in Europe, Germany, Israel, Korea, Japan, and New Zealand, and the original applications are still pending in a bunch more countries (including China).
AI’s it-ain’t-a-person legal issue isn’t limited to the patent world—it’s the same reason that, as things stand, the U.S. won’t grant copyright to AI-created works. But the Artificial Inventor Project’s big concern is that, if someone uses AI to invent stuff that doesn’t go on to be patented, the specifics of the invention will likely become trade secrets rather than being publicly disclosed. Advocates for AI patent rights argue this will prove particularly harmful when it comes to AI drug discovery.
Artificial Inventor Project chief Ryan Abbott, who represented Thaler in the U.K. case, told me he found the ruling “unfortunate,” but he took heart from the fact that the Supreme Court said Parliament could fix the problem. “Hopefully lawmakers act quickly to extend protection to encourage the use of AI in research and development,” he said.
Separately, but still on the subject of AI and intellectual property rights, a Japanese government panel reckons it may be a copyright violation for companies to train their AIs on copyright-protected works. The panel’s draft report will feed into new guidelines that should clear up currently murky rules around the ability of rights holders to limit what AI companies can do with protected IP.
This is of course a live issue around the world—that big copyright lawsuit against Microsoft and OpenAI in the U.S. just gained nearly a dozen new litigants, including the Pulitzer Prize-winning coauthors of the J. Robert Oppenheimer biography American Prometheus, which was the basis for Christopher Nolan’s Oppenheimer. “The defendants are raking in billions from their unauthorized use of nonfiction books, and the authors of these books deserve fair compensation and treatment for it,” said the writers’ attorney, Rohit Nath.
These are truly precedent-setting times. More news below—and see you in a month. I’m taking vacation back home in sunny South Africa, and am leaving you in my colleagues’ ever-capable hands. Totsiens!
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
NEWSWORTHY
X outage. X suffered a one-hour outage overnight, with tens of thousands of users across many countries finding themselves unable to access posts on the former Twitter, or even the X Pro service that was once called TweetDeck, according to Reuters. The incident remains unexplained. And while on the subject of Elon Musk’s companies, Consumer Reports is the latest to claim that Tesla’s latest over-the-air software update doesn’t fully address the concerns that led U.S. regulators to issue a “recall” last week. “Drivers can still use Autopilot if they’re looking away from the road, using their phone, or otherwise distracted,” said CR’s Kelly Funkhouser.
Apple Watch ban stands. The U.S. International Trade Commission has rejected Apple’s attempt to keep selling its Watch Series 9 and Watch Ultra 2 while the company appeals a patent ruling that led to their banning. As The Verge notes, Apple will stop selling the smartwatches in its stores after Sunday, but third-party retailers can keep selling them until their stocks run out. Apple is racing to fix the issue by altering its blood-oxygen-reading algorithms.
Anthropic seeks $750 million. Hot AI player Anthropic, into which Amazon and Google have been pouring money, is reportedly in talks to raise $750 million in a Menlo Ventures-led funding round. The news was first reported by The Information, but then also by Reuters and Bloomberg, both of which say the round would come at an $18.4 billion valuation. Meanwhile, the Financial Times reports that the U.S. Federal Trade Commission may be interested in the concentration of power caused by those Big Tech investments into companies like Anthropic.
ON OUR FEED
“We had been investing and we believed in this technology. For all of that belief, it still snuck up on us. It was like, ‘Oh my god, it’s here. We’ve got to do things differently. We’ve got to change it up.’”
—Meta CTO Andrew “Boz” Bosworth describes his company’s reaction to the sudden AI explosion, in a Semafor interview.
IN CASE YOU MISSED IT
Top AI image generators are getting trained on thousands of illegal pictures of child sex abuse, Stanford Internet Observatory says, by the Associated Press
A startup tested if ChatGPT and other AI chatbots could understand SEC filings. They failed about 70% of the time and only succeeded if told exactly where to look, by Paolo Confino
A quiet cybersecurity revolution is touching every corner of the economy as U.S., allies ‘pull all the levers’ to face new threats, by Eric Noonan (Commentary)
BEFORE YOU GO
EU lawmakers vs algorithms. The EU’s new Digital Services Act isn’t even fully in force yet, but more than a dozen members of the European Parliament—from across the political spectrum—are already calling for the introduction of a measure that was considered for the blockbuster legislation but failed to make it in.
As reported by TechCrunch, they want big online platforms to be required to turn off their algorithmic recommender systems by default, so that users would automatically see feeds that are not based on profiling of them. The DSA only demands transparency measures for such systems, along with the provision of a feed option not based on profiling, but Ireland’s media regulator is going a step further by trying to push the big platforms to disable recommender algorithms by default. The EU lawmakers want the European Commission to approve the Irish measure, which is currently under consultation, and to recommend it across the EU.
“Interaction-based recommender systems, in particular hyper-personalized systems, pose a severe threat to our citizens and our society at large as they prioritize emotive and extreme content, specifically targeting individuals likely to be provoked,” they wrote. “The insidious cycle exposes users to sensationalized and dangerous content, prolonging their platform engagement to maximize ad revenue.”
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.