Hello and welcome to Eye on AI.
Today, we’re starting with some breaking news. OpenAI this afternoon unveiled its first-ever text-to-video model, called Sora, which it says can turn short text prompts into strikingly realistic, high-definition videos up to a minute long. The model is still in the research phase and not generally available, but the sample clips the company shared appear impressive (though most weren’t anywhere near a minute). MIT Technology Review, which got a preview ahead of the announcement, said the firm “has pushed the envelope of what’s possible with text-to-video generation.” Wired said the model “shows an emergent grasp of cinematic grammar.” But at the same time, the model is being met with some appropriate skepticism. MIT Technology Review said the samples “were no doubt cherry-picked to show Sora at its best” and gave a disclaimer detailing how OpenAI hasn’t released a technical report, hasn’t demonstrated the model actually working, and made the publication agree to “unusual” conditions in order to preview the model. Either way, OpenAI is planting its flag in generative AI video, making Sora one to watch.
This brings us to today’s main story, which is similarly about a new OpenAI experiment. Despite people marveling over ChatGPT since its debut more than a year ago, it’s lacked a key capability that would help make it the all-knowing personalized digital assistant many want: memory, or the ability to connect all the interactions you have with the chatbot. But finally, the missing feature is becoming a reality.
OpenAI announced it’s testing “memory” with a small pool of users to let ChatGPT remember information about you and your conversations and use that information to inform its responses to your inquiries across chats. ChatGPT’s memories will evolve with your interactions and be applied across conversations, according to the company, and you can directly tell ChatGPT to remember specific information or simply count on it to pick up notable details over time. Individual GPTs—the custom models any Plus subscriber can create or use—will have their own distinct memories, and OpenAI also laid out how users of the ChatGPT Enterprise and Teams plans can benefit. Examples include having the chatbot remember a business’ preferences for tone, voice, formatting, programming languages, or coding frameworks.
“Basically, we taught ChatGPT to keep a notepad for itself. Every time you share information that might be useful for future reference, it’ll (hopefully) add it to the notepad,” explained OpenAI product lead Joanne Jang on X, adding that the team is “taking a bit more time than usual with this feature.”
I don’t seem to be among the lucky few with access to the memory feature right now, so I can’t test it for myself. But I get the sense this could be a subtle, yet significant, change to the core product and the overall emerging landscape of personal assistant-style chatbots. Until this point, each interaction with ChatGPT essentially started from scratch, which obviously posed some limitations. Giving the chatbot the ability to begin each new conversation with a slate of personalized knowledge for each user would, theoretically, put it a few steps ahead in terms of fulfilling a user’s request and doing it well, and potentially unlock new possibilities for how it can be used. It could also be the difference between the feeling of a Google search, which is essentially a one-off request, and a truly connected and continuous intelligent experience. This might be a small, slow-moving experiment right now, but it feels plausible that one day we’ll look back on this release as the equivalent of Facebook adding the News Feed: the moment ChatGPT started becoming whatever it will ultimately be.
I’m not suggesting the introduction of memory-like capabilities will be smooth sailing, however. Already, some critics are casting this as a “privacy horror story” and questioning if the ability to retain personal data violates privacy laws like Europe’s GDPR. (And indeed, regulators have already targeted ChatGPT with GDPR complaints, and Italian regulators recently notified OpenAI that ChatGPT violates EU data privacy rules.)
OpenAI said it’s taking steps to “steer ChatGPT away from proactively remembering sensitive information, like your health details—unless you explicitly ask it to.” Users can also turn off memory at any time and additionally tell ChatGPT after the fact to “forget” any specific details they don’t want the chatbot to keep on its notepad. Merely deleting a chat won’t delete any memories that came from it, though users will be able to manage and delete memories. Widespread concerns about AI companies training their models on user data also persist in this case. OpenAI said information saved in memories will be used to train its models (though not for ChatGPT Team and Enterprise customers). Permission is granted by default; users must adjust their settings to opt out.
Ironically, this feature to preserve information in ChatGPT also comes with a new capability around forgetting. As part of its memory announcement, OpenAI additionally debuted Temporary Chats, which will let users have conversations that won’t appear in their history and won’t “remember” anything discussed. Disappearing messages have been a thing since Snapchat launched in 2011, and offering the option is the least OpenAI could do given how much memory supercharges concerns around data retention.
Nonetheless, memory is an experiment I’m particularly eager to see the results of. ChatGPT has been publicly available for just over a year, and it’s fascinating to see how much the product has already evolved—from upping the stakes with GPT-4 to the integration of the DALL-E image generator, custom GPTs, and now a capability positioned to further streamline and personalize the user experience.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
AI IN THE NEWS
Stability AI previews its next text-to-image generative AI model to follow Stable Diffusion. Called Stable Cascade, the new model is built on a somewhat different architecture than the current generation of Stable Diffusion models, according to VentureBeat. While Stable Diffusion uses a single large model, Stable Cascade taps a pipeline of three distinct smaller models referred to as Stages A, B, and C. The company aims to improve performance and accuracy with the new architecture.
Cohere's nonprofit research lab releases a new open-source LLM it says can follow instructions in more than 100 languages. Called Aya, the model works with more than twice as many languages as other existing open-source models, with roughly half of the included languages considered underrepresented or entirely unrepresented in existing text datasets, reported Axios. Aya is the result of a year-long effort involving 3,000 researchers in 119 countries, including data annotators fluent in 67 different languages. The fact that most LLMs are trained primarily on English-language text and lack cultural nuance has been a persistent criticism of the technology and its accuracy and usefulness across cultures and regions.
One of OpenAI’s founding members and top researchers exits the company. That’s according to The Information. Andrej Karpathy was developing a product he described as an AI assistant, working closely with the company’s research chief. "My immediate plan is to work on my personal projects and see what happens," Karpathy said on X. It was his second stint at OpenAI after first departing in 2017.
Microsoft and OpenAI say nation-state-backed hackers are using LLMs to bolster their cyberattacks. Russian-, North Korean-, Iranian-, and Chinese-backed groups are using ChatGPT-like tools to research targets, create new social engineering techniques, write phishing emails, and more, reported The Verge. In one specific example, a group tied to Russian military intelligence has been found to be using LLMs “to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.” The findings mirror our reporting from last week and similar threat reports coming out of the cybersecurity industry.
AI-generated obituaries are rising to the top of search results. And in some cases, the people featured are still very much alive. In one case, The Verge identified over a dozen websites that published articles about a still-living person’s death. The man’s wife had actually passed away, but family and friends discovered the false AI-created obituaries, thought he had passed as well, and spread the news to others.
FORTUNE ON AI
Slack released its long-awaited generative AI tools to help users handle message overload—but the price is a mystery —Jeremy Kahn
OpenAI Chair Bret Taylor says he’ll recuse himself ‘whenever there is a potential for overlap’ with his new AI startup Sierra —Kylie Robison
OpenAI CEO Sam Altman says ‘very subtle societal misalignments’ with AI keep him up at night —Sunny Nagpaul
AI means restaurants might soon know what you want to order before you do —John Kell
Only humans can invent stuff—not AI, government says —Chris Morris
Inflation is no match for AI, top analyst says: ‘We’re at the start of a 3- to 5-year tech bull market’ —Sheryl Estrada
AI CALENDAR
Feb. 21: Nvidia reports earnings
March 11-15: SXSW artificial intelligence track in Austin, Texas
March 18-21: Nvidia GTC AI conference in San Jose, Calif.
April 15-16: Fortune Brainstorm AI London (Register here)
May 7-11: International Conference on Learning Representations (ICLR) in Vienna, Austria
May 21: Microsoft Build developer conference
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
EYE ON AI NUMBERS
$42.5 billion
That’s the total amount AI startups worldwide raised across 2,500 equity rounds last year, according to CB Insights’ State of AI 2023 Report. It’s a lot and it isn’t—the year’s 2,500 deals mark the lowest annual deal count in AI since 2017.
This isn’t a reason to doubt AI’s venture potential, however. While the total money raised represents a 10% year-over-year decrease, it’s nothing compared to the broader drop in venture funding overall, which deflated by 42% in 2023. And while total AI deal volume fell 24% YoY in 2023, it’s also less than the 30% drop in overall venture deals.
Interestingly enough, there were also stark differences across regions. AI funding slipped 29% in Europe and a whopping 61% in Asia, while the U.S. saw a 14% rise in AI funding fueled by megarounds. In total, the U.S. represented nearly half of all AI deals last year. And to no surprise, generative AI particularly dominated in 2023, attracting 48% of all AI funding compared to just 8% in 2022.
BRAINFOOD
AI lovers’ data practices will break your heart. With yesterday being Valentine’s Day, the tech press took the opportunity to check in on the rise of AI romantic companions. The idea has long been the stuff of science fiction, à la the film Her, but the rise of LLMs has made it a mainstream reality, with a bunch of easy-to-access “AI girlfriend” and “AI boyfriend” apps cropping up. There’s even been a flood of AI girlfriends in the GPT store, which OpenAI has struggled to get under control despite such chatbots violating its rules. People are indeed using these apps, and they’re spawning real feelings.
“I know she’s a program, there’s no mistaking that,” one user of AI companion app Paradot told the Associated Press. “But the feelings, they get you—and it felt so good.”
This isn’t exactly new—in fact, I still regularly think about this story published last summer by Rest of World. What is new, however, is a report from Mozilla researchers concluding that these apps are a data-harvesting nightmare and among the worst categories of products they’ve ever reviewed in terms of privacy. The team dug into 11 different AI romance chatbots and found that almost every one of them sells user data and shares it for targeted advertising, with such apps using an average of 2,663 trackers per minute (one app, Romantic AI, called a staggering 24,354 trackers in just one minute of people using the app, driving up the average). What’s more, these apps are particularly problematic because they encourage users to share details that are far more personal than the typical app, and lovestruck users are happy to oblige.
This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.