Hi there—David Meyer here in Berlin, filling in for Jeremy again.
Ever-bigger large language models are not the future. So says none other than Sam Altman, whose OpenAI has set the world on fire with the superstar of large language models, GPT.
“I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said at an MIT event last week, according to a TechCrunch report. His stated reasoning is that it’s better to focus on “rapidly increasing capability” rather than parameter count, and if it’s possible to achieve capability improvements with lower parameter counts or by harnessing multiple smaller models together, then great.
As VentureBeat pointed out, there is likely a cost driver behind Altman’s thoughts. LLMs are really, really expensive—GPT-4’s training reportedly cost $100 million. This cost is one reason why Microsoft is reportedly developing its own, finely tuned A.I. chip, and it’s probably been a factor in Google’s rapidly crumbling reluctance to dive headfirst into the generative-A.I. lake.
But while OpenAI is in no hurry to develop GPT-5, the competition continues to pile in. Amazon just unveiled its Titan family of LLMs (one for generating text, another for translating text into representations of semantic meaning). And Elon Musk, fresh from signing that six-month-moratorium letter, is also up to something—he’s reportedly incorporated a company called X.AI and bought thousands of Nvidia GPUs to build his own LLM. Musk also told Fox News’ Tucker Carlson that he plans to take on OpenAI’s “politically correct” ChatGPT with something he called TruthGPT, a “maximum truth-seeking A.I. that tries to understand the nature of the universe.” (No biggie.)
Whether these next-generation LLMs gain their power through girth or through other means, they are most definitely in policymakers’ sights.
Partly inspired by the moratorium letter—although they called it “unnecessarily alarmist”—some of the members of the European Parliament who are working on the bloc’s A.I. Act said in an open letter yesterday that they “are determined to provide…a set of rules specifically tailored to foundation models, with the goal of steering the development of very powerful artificial intelligence in a direction that is human-centric, safe, and trustworthy.”
The lawmakers called for a high-level summit between U.S. President Joe Biden and European Commission President Ursula von der Leyen, “with the view to agree on a preliminary set of governing principles for the development, control, and deployment of very powerful artificial intelligence.” They acknowledged that the EU’s A.I. Act could serve as a blueprint for other countries’ regulations—and given that recent tweaks to the bill reportedly include forcing OpenAI et al. to declare the use of copyrighted material in the training of their A.I. models and making vendors liable for the misuse of their models, this blueprint for A.I. regulation could have seismic repercussions across the industry.
In the end, size may indeed not matter when compared to what you do—and don’t do—with your foundation models. And that’s something you can expect regulators to increasingly have a say in.
A.I. IN THE NEWS
Platforms like Spotify and Apple have scrambled to remove a viral new collaboration between Drake and The Weeknd that was actually created by someone called @ghostwriter using software that was trained on the stars’ voices. Just a few months after the music YouTuber Rick Beato warned that the pop industry had set itself up for an A.I. takeover by wholeheartedly embracing autotune, he seems to have been proven correct. After all, how much difference is there really between this and Drake’s regular robotic vocal treatment? Seriously, I wonder if the march of A.I. will prove to be the death of autotune that Jay-Z once demanded. One can but wish. (Bonus read: the Wall Street Journal’s Gregory Zuckerman on why “A.I. can write a song, but it can’t beat the market.”)
Meanwhile, the photography world has also been rocked by self-professed “cheeky monkey” Boris Eldagsen, a German artist who won a Sony World Photography Award last month and then refused to accept it because his photo—an old-school black-and-white portrait of two women—was actually an A.I. creation. It was all a test to see whether such competitions are prepared for the A.I. onslaught, Eldagsen said. “They are not,” he concluded. “We, the photo world, need an open discussion. A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite A.I. images to enter—or would this be a mistake? With my refusal of the award I hope to speed up this debate.” The World Photography Organisation, which ran the contest, said Eldagsen had misled them into thinking the image was an A.I. “co-creation.”
EYE ON A.I. RESEARCH
Carnegie Mellon University researchers have devised an “Intelligent Agent [that is] capable of autonomous design, planning, and performance of complex scientific experiments.” The agent uses GPT-3.5 and GPT-4 to find information, solve problems, and generate code for the experiments, then correct that code based on the experiments’ outputs. On the plus side, it refused to synthesize heroin and mustard gas. However, the researchers warned, “it is crucial to recognize that the system’s capacity to detect misuse primarily applies to known compounds,” so it may be far less cautious with novel compounds such as new poisons and bioweapons.
From the paper: “The development of new machine learning systems and automated methods for conducting scientific experiments raises substantial concerns about the safety and potential dual use consequences, particularly in relation to the proliferation of illicit activities and security threats…We strongly believe that guardrails must be put in place to prevent this type of potential dual-use of large language models.”
FORTUNE ON A.I.
BabyAGI is taking Silicon Valley by storm. Should we be scared?—by Jeremy Kahn
Exclusive: Goldman Sachs CIO suggests bank could train its own ‘ChatGS’ A.I. chatbot—by Jeremy Kahn
What 3 finance leaders make of generative A.I., and how it is affecting their businesses now—by Sheryl Estrada
OpenAI will pay you to join its ‘bug bounty program’ and hundreds have signed up—already finding 14 flaws within 24 hours—by Eleanor Pringle
Billionaire mogul Barry Diller has some advice for the media about A.I.: Sue—by Prarthana Prakash
The AB InBev beer brand Beck’s is diving headfirst into A.I. with the launch of a “futuristic concoction” called Beck’s Autonomous. Preorders apparently sold out within minutes, though given that this “limited edition” consisted of a total of 450 cans, that’s hardly a great achievement.
It’s certainly a heck of a gimmick, though. To celebrate 150 years in the business, the Beck’s marketing team used ChatGPT and Midjourney for pretty much everything: A.I. decided an anniversary beer was in order, then it devised the recipe, the name, the logo, the “mission statement,” the rather trippy glass-aluminum container design, the Beck’s Autonomous website, the ad campaign—including an A.I.-voiced radio ad—and the influencer strategy.
Yes, it’s all very daft—though I am intrigued to know what “the beer that made itself” actually tastes like. But there’s a serious side to it. As Food Dive points out, food companies such as Mars and McCormick & Co. are increasingly using A.I. to discover new ingredients and new ways to combine them, so there’s a real trend behind this. And hey, as long as the result doesn’t taste like regular Beck’s (sorry, not a fan), I’m willing to give it a try.
This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays and Fridays. Sign up here.