Some rather extraordinary news has broken regarding the European Commission’s attempt to force tech companies to scan users’ uploads and private messages for child sexual abuse material (CSAM).
According to a lengthy investigation by a group of European news outlets, the proposal followed close coordination between Home Affairs Commissioner Ylva Johansson and the U.S. company Thorn, which was founded by actors Ashton Kutcher and Demi Moore 11 years ago to fight the scourge of online CSAM. (Kutcher stepped down as Thorn’s board chairman earlier this month, following the backlash over his letter of support for former costar and convicted rapist Danny Masterson.)
“We have shared many moments on the journey to this proposal,” Johansson wrote to Thorn executive director Julie Cordua, according to the Balkan Investigative Reporting Network’s English-language report of the investigation. “Now I am looking to you to help make sure that this launch is a successful one.” Days later, in May last year, Johansson unveiled her proposal for the CSA Regulation, which is currently being scrutinized by the European Parliament.
Few oppose the objective of fighting CSAM, but cryptography experts have blasted the proposal—dubbed “chat control” by its opponents—saying you can’t alter end-to-end encryption systems to allow this sort of client-side scanning without busting people’s privacy and security. That’s also pretty much what Apple said when defending its decision to abandon CSAM scanning on iCloud. The EU’s own internal legal service warned earlier this year that the proposal could bring about “permanent surveillance of all interpersonal communications” and would probably be nixed by the courts.
Thorn is a registered nonprofit, but it sells a system called Safer that federal agencies and tech companies—including Slack, Flickr, GoDaddy, and even OpenAI—use to spot known or suspected CSAM. Known images are identified by comparing the hashes of uploaded images against a vast database of hashes of previously verified material (this is the element of Safer that OpenAI uses), while previously unseen images are flagged as suspected CSAM by a machine-learning classifier.
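To make the hash-matching half of that concrete, here is a minimal, hypothetical Python sketch—not Thorn’s actual code: it fingerprints an upload and checks that fingerprint against a set of known hashes. The names (KNOWN_HASHES, file_sha256, is_known_image) are invented for illustration, and real systems reportedly rely on perceptual hashes, which still match after an image is resized or re-encoded, rather than the exact cryptographic hash used here.

```python
# Illustrative sketch only; NOT Thorn's implementation. A cryptographic
# hash catches only byte-identical copies, whereas production systems
# reportedly use perceptual hashes that tolerate resizing/re-encoding.
import hashlib

# Hypothetical stand-in for a database of hashes of known material;
# in practice this would be a vetted, service-backed store.
KNOWN_HASHES: set[str] = set()

def file_sha256(path: str) -> str:
    """Hash a file's bytes in chunks so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_image(path: str) -> bool:
    """Flag an upload whose hash matches an entry in the known-hash set."""
    return file_sha256(path) in KNOWN_HASHES
```

The classifier that flags previously unseen images is a separate machine-learning component and isn’t sketched here.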
Despite the apparently commercial nature of this operation, which would obviously benefit from the CSA Regulation as proposed, the report notes that Thorn is “registered in the EU lobby database as a charity” and has had meetings under that classification with several EU commissioners. The organization apparently spent more than $630,000 on lobbying last year. The report also highlights Europol officials’ suggestion to the Commission that “there are other crime areas [apart from CSAM] that would benefit from detection.” Meanwhile, privacy activists—who are more than a little alarmed at the implications of such message-scanning systems—say they have struggled to get the Commission’s attention.
“The investigation published today confirms our worst fears: The most criticized European law touching on technology in the last decade is the product of the lobby of private corporations and law enforcement,” said Diego Naranjo, policy chief at digital rights organization EDRi, in a statement.
Thorn insists that Safer sales do not generate a profit, and says it remains reliant on donations to cover the rest of its costs. Regarding its influence, it said in an emailed statement: “Our technical expertise is unique. We make this expertise available to policymakers to support the EU’s legislation in this space.”
Johansson’s office did not respond to a request for comment. More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
NEWSWORTHY
EU warns Big Tech. Armed with new laws, European Commission officials have issued warnings to multiple U.S. tech giants. Reuters reports that Internal Market Commissioner Thierry Breton (the EU’s industry chief) has told Apple it must open up its closed ecosystem to satisfy the new Digital Markets Act. Meanwhile, Commission Vice President Vera Jourova has told X/Twitter that it must comply with the new Digital Services Act’s provisions on fighting disinformation, after an official report found Elon Musk’s platform was the worst offender on the fake-news front.
AI copyright fight goes to trial. A federal judge has ruled that Thomson Reuters’s complaint against Ross Intelligence, for allegedly copying its legal-research content to train a competing AI model, must go to a jury trial. As Reuters notes, this sort of behavior has sparked waves of outrage among companies that say their content has been unfairly exploited, and this may be one of the first big cases of its kind to reach the courts.
ChatGPT goes multimodal. OpenAI’s ChatGPT will start conversing with paying users, taking their spoken prompts and replying with a synthesized voice. It will also be able to process images as part of prompts. CNBC reports that OpenAI acknowledged deepfake concerns around the voice-synthesis feature by saying the voices were “created with voice actors we have directly worked with.”
ON OUR FEED
“Given the similarities between WeChat mini apps and Telegram mini apps, we believe that mini app developers from WeChat who are currently using Tencent’s cloud service will begin to build on TON.”
—Justin Hyun, head of growth at Telegram’s TON Foundation, tells TechCrunch why he thinks Telegram will make for a successful “everything app”
IN CASE YOU MISSED IT
Even OpenAI’s Sam Altman is startled by how powerful he’s become: ‘I can’t imagine that this would have happened to me’, by Eleanor Pringle
Sam Bankman-Fried is still trying to get out of jail to prepare for his trial just a week before it starts, by Leo Schwartz
Jeff Bezos’ Blue Origin spaceflight company gets a veteran Amazon executive as a new CEO amid setbacks and delays, by Bloomberg
Spotify is using AI to imitate podcast hosts’ voices after plowing $1 billion into the business and breaking up with Prince Harry and Meghan, by Paolo Confino
How climate models intended for cell towers are helping communities plan for floods, drought, and wildfires, by Charlene Lake
Generative AI could be Europe’s shot at gaining a competitive edge against the U.S., Accenture’s AI chief for Europe says, by Prarthana Prakash
BEFORE YOU GO
AR power imbalance. Remember the “glasshole” phenomenon? U.S. computer science researchers have delved into the social and emotional consequences of situations where one person is wearing augmented-reality glasses, but the person they’re interacting with is not—and as you might expect, the wearer generally feels relatively at ease while the non-wearer feels disempowered and nervous.
There may be implications for how future AR glasses should be designed. “When we think about design in [human-computer interaction], there is often a tendency to focus on the primary user and design just for them,” Cornell’s Malte Jung, a coauthor, told IEEE Spectrum. “Because these technologies are so deeply embedded in social interactions and are used with others and around others, we often forget these ‘onlookers’ and we’re not designing with them in mind.”
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.