OpenAI might sometimes seem like it’s on a nonstop winning streak, but a couple of reports out yesterday suggest that’s not quite true.
The first came from The Information, which reported that OpenAI has abandoned a new AI model called Arrakis, named after the desert planet in Dune. Arrakis would apparently have allowed OpenAI to run its ChatGPT chatbot more cheaply than it can using its GPT line of large language models. The key word here is “efficiency,” and Arrakis reportedly failed to meet expectations, leading OpenAI to scrap the project by the middle of this year.
Unfortunately, that appears to have disappointed “some executives” at OpenAI’s big backer, Microsoft, who were hoping to see a demonstration of OpenAI’s capacity to churn out LLMs at high speed.
Disappointment may also await those who can’t wait for OpenAI to launch the AI gadget that it’s reportedly been brainstorming with design icon Jony Ive, the guy behind the iPhone. Speaking yesterday at a Wall Street Journal tech conference, CEO Sam Altman couldn’t have sounded any more vague: “I think there is something great to do, but I don’t know what it is yet.”
Per The Verge, Altman said he has “no interest in trying to compete with the smartphone,” which…seems sensible? Whatever he’s actually talking about, any hardware efforts are “very nascent,” so there’s nothing to see here for now.
Meanwhile, The Verge also reported on new research—partly backed by Microsoft itself—which found that GPT-4 is more “trustworthy” than GPT-3.5, but also more “easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history.” That sounds like a problem, though the researchers also said they couldn’t find evidence of these vulnerabilities in the Microsoft products that currently use GPT-4, most likely because those apps try to mitigate such problems.
On a more clearly positive note, OpenAI CTO Mira Murati said at the Journal conference that OpenAI’s tool for detecting AI-generated images is “99% reliable.” It remains unclear when the tool will be publicly released.
Regarding the development of GPT-5, though, Murati reportedly said the upcoming model may still have the making-stuff-up problem that has afflicted OpenAI’s (and everyone else’s) generative AI models thus far. “We’ve made a ton of progress on the hallucination issue with GPT-4, but we’re not where we need to be,” she said.
Nobody said it wouldn’t be a bumpy road.
Separately, kudos to my colleague Kylie Robison for scoring the scoop on X’s plan to start charging some new users $1 a year if they want to do anything more than read other people’s posts. This will commence on Tuesday as a test, initially just in New Zealand and the Philippines. Subsequently confirming the story, X said the move wouldn’t be a “profit driver” and was “developed to bolster our already successful efforts to reduce spam, manipulation of our platform, and bot activity.”
However, as Kylie points out, it’s also a great way for X to get users’ payment information (and phone numbers), which will prove invaluable as the company continues its pivot to being an “everything app” with an e-commerce aspect.
I think at this point the move is worth trying, if only because Elon Musk’s previous attempts to tackle X’s bot problem haven’t done the trick. One dollar a year certainly isn’t enough to deter all miscreants, but it does add friction to the bot-creation process. And if X has a future at all under Musk and CEO Linda Yaccarino’s leadership, it lies in that “everything app” vision, because ad revenue is unlikely to prove sufficient anytime soon. Does the fee betray Twitter’s model? Sure, but it should really be clear to everyone by now that Twitter doesn’t exist anymore.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
NEWSWORTHY
Chips for China. The U.S. may be tightening its controls on high-end chip exports to China, but Reuters reports that it’s dangling a “potential lifeline” in front of Nvidia, Intel, and AMD, by soliciting their thoughts on ways they could keep selling China the kinds of chips that could be used in small- and medium-size AI systems. The new restrictions have still alarmed Nvidia’s investors, though. Arm chief Rene Haas said yesterday that the chip embargo will be tricky to get right because of the complexity of what goes onto circuit boards. Meanwhile, Western intelligence chiefs are warning Silicon Valley that China is still trying to steal its intellectual property.
Starlink for Israel. Israel is talking to SpaceX about turning on its Starlink broadband-beaming satellite network over the country. According to Bloomberg, Israel’s communications ministry said the aim is to ensure ongoing connectivity to towns on the front lines of its war with Gaza-based Hamas, with Starlink acting as a backup system. Elon Musk’s space company hasn’t commented yet.
Tired in a Tesla. Musk’s car firm is reportedly about to roll out a “Driver Drowsiness Warning” feature that involves the vehicle’s cameras monitoring the driver’s face for excessive yawns and blinks. According to Electrek, the feature has popped up in Tesla’s European owner’s manual, though not its North American counterpart. Meanwhile, TechCrunch reports that Tesla is lobbying for stricter fuel economy standards in the U.S., which would probably lead to massive noncompliance fines for legacy rivals such as GM and Ford.
ON OUR FEED
“Teamwork is a mixed blessing.”
—Dietlind Helene Cymek, lead author of a study from the Technical University of Berlin that suggests people working alongside robots tend to become more laid-back and lax in their approach to their own work—much as they would when working alongside a reliable and respected colleague. The phenomenon is apparently known as “social loafing.”
IN CASE YOU MISSED IT
The company that makes your iPhone is expanding to EVs and it’s getting Nvidia to help make an ‘AI factory,’ by Lionel Lim
AI hype sends funding for the sector’s startups soaring to $17.9 billion, defying a broader tech slump, by Bloomberg
Billionaire AI investor Vinod Khosla’s advice to college students: ‘Get as broad an education as possible,’ by Jeff John Roberts
Eric Adams’s revelation that he uses AI to speak in Mandarin stirs outcry: ‘The mayor is making deep fakes of himself,’ by the Associated Press
Federal regulators are investigating whether Cruise robotaxis are risky to pedestrians following several accidents, by Bloomberg
BEFORE YOU GO
Moderation game. Got some spare time today? Then play Trust & Safety Tycoon, a new game from Techdirt’s Mike Masnick. The game is much what the name implies—you’re the trust and safety chief at a social-media startup, and you have to navigate your way through a variety of common situations that such a person faces.
Masnick, in a post announcing the game: “You have to set policies, deal with various dilemmas, face internal and external pressures, weigh tradeoffs, determine resource allocation, and more, all while trying to keep your website from descending into a cesspit of hate, driving away users and advertisers.” It’s fun, educational as heck, and should be mandatory for anyone with strong opinions on how content moderation should work.
I got 1,816 points and won via IPO, after which I left to advise the government—there’s only so much anyone can take in that role.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.