Hello and welcome to Eye on AI!
No, Taylor Swift did not endorse Donald Trump. Yes, the large crowds at a Kamala Harris rally were real. Do you agree?
Whether you do or not, the fact that it is even a question shows that we are all in the throes of an ongoing AI election nightmare, one in which examples of AI-generated disinformation related to the 2024 election are quickly piling up.
Just last week, Donald Trump falsely claimed that photos of large crowds at a Kamala Harris rally were generated by AI. And two days ago, Trump shared several images on Truth Social that implied Taylor Swift had endorsed him—some of which were clearly AI-generated. There was also the news that an Iranian group had used OpenAI’s ChatGPT to generate divisive US election-related content; and that Elon Musk’s Grok AI model on X had spewed false information about voting.
The chaos caused by generative AI during this election season, which builds on the spread of falsehoods that famously accompanied the 2016 election and the aftermath of the 2020 election, has long been predicted. Back in December, Nathan Lambert, a machine learning researcher at the Allen Institute for AI, told me that he thought AI would make the 2024 elections a “hot mess.”
It certainly feels that way to me: As Kamala Harris prepares to accept the Democratic nomination, I’m amazed to see chatter questioning whether crowds at the Democratic National Convention, as well as at Trump rallies, are real or AI-generated. As the Washington Post reported yesterday, many AI fakes are not necessarily meant to fool anyone—instead, they can be powerful, provocative memes meant to humiliate, or simply to grab a cheap laugh that delights a candidate’s base.
Either way, it feels like an insidious march toward mass self-doubt about what is real and what isn’t. I’ve noticed that even I have begun to question what I’m seeing—either assuming that everything is AI-generated or desperately scanning photos for clues.
It can, however, get worse. How about real-time live deepfake video? A tool called Deep-Live-Cam has made the viral rounds on X over the past two weeks: With a single image of Elon Musk, for example, a developer was able to swap Musk’s face onto his own and stream high-quality live video as the Tesla and SpaceX CEO. Combined with any of the easy-to-use AI voice clones available today, this type of technology could offer next-level opportunities for deepfakes.
“I’ve seen a lot of deepfake tech but this one is freaking me out a little,” said Ariel Herbert-Voss, founder of RunSybil and previously OpenAI’s first security research scientist, adding that Deep-Live-Cam is even “light-invariant”—meaning that as the lighting around the subject changes, the AI-generated face remains “in character.” That makes it “harder to detect in the moment,” he told Fortune.
Don’t expect much help from the platforms where these images and videos are shared, either. According to a panel held in Chicago yesterday and hosted by the University of Southern California’s Annenberg School for Communication and Journalism, social media companies have “sharply downsized” their election integrity departments. The panel cautioned that this will lead to a surge of AI-generated media and deepfakes in the lead-up to and aftermath of the 2024 election.
“This is only August—what’s going to happen in December?” said Adam Powell III, executive director of the USC Election Cybersecurity Initiative.
With any federal AI regulation at a standstill until after the election, it looks like there is little to do but wait—and hope we wake up from this AI election nightmare.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
AI IN THE NEWS
Another week, another AI copyright lawsuit. According to Reuters, three authors filed a lawsuit in California federal court yesterday against AI model developer Anthropic. They say the company trained its AI-powered chatbot Claude with their books and hundreds of thousands of others. The three plaintiffs—writers and journalists Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson—said that Anthropic “used pirated versions of their works and others to teach Claude to respond to human prompts.” It follows other lawsuits filed by authors against generative AI companies, including one by 11 nonfiction authors in December against OpenAI and Microsoft, and one filed in September 2023 by over a dozen authors, including John Grisham, against OpenAI.
Chip giant AMD acquires ZT Systems to challenge Nvidia. Yesterday, AMD announced it had signed a definitive agreement to acquire ZT Systems, which provides AI infrastructure to Big Tech companies, for $4.9 billion. According to Axios, the deal “shows just how far ahead rival Nvidia is in AI tech infrastructure than everybody else.” Artificial intelligence requires not just chips, but the right software and networking as well. AMD’s bid for ZT Systems is “a bit of an admission that they were weak here.”
Sparring over California AI bill. California Gov. Gavin Newsom has yet to publicly indicate where he stands on SB-1047, the state's seminal AI bill, but that hasn't stopped others from weighing in. Congresswoman Nancy Pelosi called the bill “ill-informed,” adding that “we must have legislation that is a model for the nation” and that California has “the opportunity and responsibility to enable small entrepreneurs and academia—not big tech—to dominate,” TechCrunch reports. State Sen. Scott Wiener, who sponsored the bill, issued his own statement in response, saying that while he has “enormous respect” for Pelosi, “I respectfully and strongly disagree with her statement.”
LVMH CEO Bernard Arnault’s family office goes shopping for AI startups. Bernard Arnault, founder and CEO of LVMH, has made a string of artificial intelligence investments this year through his family office, Aglaé Ventures, CNBC reports. According to the family office database Fintrx, the largest funding round this year went to H, a French startup formerly known as Holistic AI that’s working toward “full artificial general intelligence.” Fintrx says the funding rounds for the AI firms totaled more than $300 million.
FORTUNE ON AI
Alphabet’s robo-taxi service hits a major new milestone after doubling ridership in just a few months —by Jessica Mathews
TSMC’s first European plant builds momentum for the EU’s chip ambitions—but a key Intel decision is yet to arrive —by David Meyer
These boom-and-bust tech cycles show that if AI investment wanes, the recovery will be quick —by Jeff Grabow (Commentary)
The number of Fortune 500 companies flagging AI risks has soared 473.5% —by Jason Ma
Women are using ChatGPT to catch men in lies about their height on dating apps —by Sydney Lake
AI CALENDAR
Aug. 28: Nvidia earnings
Sept. 10-11: The AI Conference, San Francisco
Sept. 10-12: AI Hardware and AI Edge Summit, San Jose, Calif.
Sept. 17-19: Dreamforce, San Francisco
Sept. 25-26: Meta Connect, Menlo Park, Calif.
Oct. 22-23: TedAI, San Francisco
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago, Ill.
Dec. 2-6: AWS re:Invent, Las Vegas, Nev.
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
EYE ON AI RESEARCH
An AI-powered weather forecasting breakthrough during hurricane season. Nvidia announced new research in using AI to predict extreme weather and improve short-range weather forecasts. Nvidia claims its new generative AI model, StormCast, better simulates extreme weather events down to the kilometer scale. According to Axios, until now, AI weather and climate models from Nvidia, Microsoft, Google, and researchers elsewhere had demonstrated advances in using AI and machine learning to produce medium-range, global weather projections that rival or beat conventional, physics-based models run on supercomputers. In addition to producing more accurate forecasts, the new model could help scientists apply global climate change projections more accurately at local scales. “I’m convinced we’re at that moment now where AI can compete with physics for storm-scale prediction,” study coauthor Mike Pritchard, a climate scientist at Nvidia, told Axios.
BRAIN FOOD
Is a stance against generative AI good for business? The popular iPad design app Procreate went viral yesterday on X when it posted a video from its CEO, James Cuda, in which he said “I f***ing hate generative AI” and vowed never to introduce generative AI features into the company’s products. With over 8.5 million views, the video clearly hit a nerve, especially among artists who have protested the training of AI models on copyrighted images. But it also raises the question of whether a stance against generative AI could, in some cases, simply be good for business. After all, Google pulled an Olympics ad that showed AI writing a little girl’s fan letter to her favorite Olympian after the ad received significant online pushback. Apple also faced backlash earlier this year when it released an ad that showed creative tools being crushed by a giant hydraulic press and replaced by an iPad Pro.