What should the average person think about A.I.? As recent polling in the U.S. showed, the vast majority see an existential threat, no doubt because a host of big names keep telling them that A.I. imperils civilization itself. But that's an extreme position—and there are at least two other extreme positions that are also jostling to shape the narrative around the technology.
I got an education on one of them on Monday, at the re:publica festival here in Berlin, where Signal Foundation president Meredith Whittaker gave a keynote on the subject of “A.I., privacy, and the surveillance business model.”
The former Google A.I. researcher, who had an acrimonious split with that company, frames today’s A.I. push as a continuation of Big Tech’s long-running assault on digital and other rights—an orgy of exploitation (underpaying and downplaying the humans who sort and label datasets; the “indiscriminate siphoning of artistic work”) and nascent authoritarianism (“The world that A.I. companies and their boosters want is a world where robust privacy, autonomy and the ability for self-expression and self-determination are seriously impaired.”).
Like many other critics of the existential-threat brigade, Whittaker smells an attempt to divert the world’s attention from current or near-term A.I. harms—bias, disinformation, and using A.I.-powered phone-scanning to bypass messaging encryption and entrench mass surveillance—to theoretical long-term threats that may never arrive. “There is no evidence A.I. is on the brink of malevolent superintelligence or ever will be,” she scoffed.
However, when I asked Whittaker after her speech if she saw any positive use cases for today’s A.I., she was entirely dismissive: “There’s a billion hypotheticals we could float, but they would require significant structural changes to the incentives that are propelling the companies developing these—again, the Big Tech companies. We can’t pin our hopes on hypotheticals that have no basis in the structural reality of the incentives that are driving the companies.”
And then we have veteran venture capitalist Marc Andreessen, who yesterday waded into the debate with a lengthy screed on “Why A.I. will save the world,” in which “will” becomes “may” a mere 35 words later.
The way Andreessen tells it, “every child will have an A.I. tutor” that will help them “maximize their potential with the machine version of infinite love.” His essay continues in a similar vein regarding A.I.’s benefits, while also straw-manning the heck out of every call for caution—those who are worried about the automation of jobs think “all existing human labor [will] be replaced by machines,” and the “coastal elites” who fret about trust and safety want “dramatic restrictions on A.I. output…to avoid destroying society.”
Andreessen is actually partially aligned with many Big Tech critics when it comes to the existential-risk crowd, noting that “their position is non-scientific” and pointing out that the regulation they’re calling for could “form a cartel of government-blessed A.I. vendors protected from new startup and open source competition.” Like Whittaker, he connects older debates over social-media moderation with newer concerns about bias and hate speech in A.I., though he draws a very different conclusion: We should reject the imposition of “niche morality” and everyone should build A.I. “as fast and aggressively as they can.”
The ward-off-doomsday people currently command the public A.I. narrative, but Andreessen’s laissez-faire take and Whittaker’s firmly negative stance are also powerful in their own ways. For policymakers and the public, all this might be easier to parse if it were merely a polarized debate, but with at least three extremes to consider—email me if you know of more—it will be very difficult to find a nuanced middle ground that mitigates the risks of A.I. while embracing its benefits.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.
NEWSWORTHY
Salesforce’s latest ploy to get workers in the office. Cloud company Salesforce is launching Connect for Good, a fundraising initiative that will donate $10 to a local charity for each day an employee comes into the office between June 12 and June 23. Details of the initiative were laid out in a message to all Salesforce staff viewed by Fortune. The company hopes to raise more than $1 million through the plan, though donations are capped at $2.5 million. Salesforce will tally aggregate badge data by location, multiply the total by $10, and donate the sum to the charity that receives the most employee votes. The push comes as Salesforce makes a slow return to the office and tries to breathe life back into its self-proclaimed Ohana culture after mass layoffs earlier this year.
The EU weighs security risks in its 5G networks. Chinese telecoms company Huawei is at risk of being banned from building 5G infrastructure in the European Union. Last week, the EU’s internal market commissioner voiced concern that only a third of EU countries had banned Huawei from critical parts of the bloc’s 5G communications, saying it “exposes the union’s collective security.” Now, unnamed sources who spoke to the Financial Times say the EU may seek to impose a mandatory ban on companies like Huawei that are considered a security risk. Huawei denounced the possible move, saying that cutting out suppliers without proper technological evaluation would go against the EU’s own laws and regulations.
Layoffs at Reddit. Discussion website Reddit is cutting its workforce by about 5% and adjusting its hiring plans from 300 new positions to 100. In a memo sent to staff and obtained by Bloomberg, CEO Steve Huffman said the restructuring would allow the company to build on the momentum it has had in the first half of the year. Huffman also outlined goals to break even next year and fund data and API tools for the site’s moderators. With the cuts, Reddit joins other tech companies that have announced layoffs this year, totaling about 136,800 job cuts in the year through May.
ON OUR FEED
“We don’t need more digital currency. We already have digital currency. It’s called the U.S. dollar, it’s called the euro, it’s called the yen. They’re all digital right now...so what’s the real underlying value of these tokens?”
—Securities and Exchange Commission Chair Gary Gensler in an interview with CNBC yesterday. This week, the SEC filed two separate lawsuits against Binance and Coinbase, two of the largest crypto exchanges. The agency also moved to freeze the former’s assets on Tuesday to “ensure the safety of customer assets.”
IN CASE YOU MISSED IT
Apple’s new headset created a ‘sell-the-news’ event according to KeyBanc analyst who says short-term revenue gains will be ‘immaterial’, by Rachel Shin
Crypto dodges a bullet as USDC not named in SEC’s Coinbase lawsuit, by Leo Schwartz
4 tech giants accounted for more than 16% of Fortune 500 earnings—even in a down year, by Will Daniel
Is Google a bad neighbor? A fight over water use at a huge data center is exposing deeper issues in an Oregon town, by Adam Seessel
Apple CEO Tim Cook is among the millions of users of ChatGPT—and he says he is ‘excited about it’, by Prarthana Prakash
BEFORE YOU GO
A.I. dating app launches. Taking a different approach than charging for an A.I. girlfriend, chatbot maker Replika wants to use A.I. to help people connect better with their matches. The company has released an app called Blush, aimed at helping people gain the confidence to date and at strengthening bonds between existing couples. Available on iOS, with a premium version for $99 a year, the app includes features like personalized dialogue and memory recall that let people practice small talk and flirting, and that guide couples through disagreements or misunderstandings in their relationships. Replika says its models are made more effective by training on user feedback from successful conversations rather than on scripted simulations.
Upcoming features include a library of dating articles and materials, the ability to create characters to converse with, and help composing responses to users’ romantic interests. In a release, chief product officer Rita Popova said the company hopes Blush users “will feel empowered to show up more authentically in their real-world relationships and experience a deeper sense of connection with others.”
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.