Hello and welcome to Eye on A.I.
Tomorrow, Senate Majority Leader Chuck Schumer will kick off his AI Insight Forum with a packed lineup of A.I. executives in attendance. First announced back in June, it’s the first of nine listening sessions planned to discuss both the risks and opportunities posed by A.I. and how Congress might regulate the technology.
While we won’t know exactly what happens in the forum (more on that later), it’s a major show of how Congress is putting its ear to the ground on A.I. and who it’s listening to. It’s also an interesting contrast to what’s happening at the state and local levels, where we’re starting to see more action than listening.
“These forums will build on the longstanding work of our Committees by supercharging the Senate’s typical process so we can stay ahead of AI’s rapid development,” Schumer wrote in his latest “Dear Colleague” letter. “This is not going to be easy, it will be one of the most difficult things we undertake, but in the twenty-first century we cannot behave like ostriches in the sand when it comes to AI.”
And yet, while this type of investigation is desperately needed, the guest list and the forum’s closed format are already causing backlash. Executives expected to attend Wednesday’s forum include OpenAI CEO Sam Altman, Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, X owner Elon Musk, former Microsoft CEO Bill Gates, Nvidia CEO Jensen Huang, and Palantir CEO Alex Karp.
A few ethics researchers were invited, but critics have called out the Senate for seeking input largely from powerful executives who stand to profit from these technologies. Many of these same executives have a history of publicly saying they welcome regulation while deploying armies of lobbyists to campaign against it behind closed doors. Not to mention that several of these companies, such as Meta and Google, have recently been fined billions in the EU for mishandling data and user privacy, an issue at the core of A.I.
“This is the room you pull together when your staffers want pictures with tech industry AI celebrities. It’s not the room you’d assemble when you want to better understand what AI is, how (and for whom) it functions, and what to do about it,” tweeted Meredith Whittaker, who is president of the Signal Foundation and has previously testified before Congress regarding A.I. issues like facial recognition.
Triveni Gandhi, the responsible A.I. lead at Dataiku, shared a similar perspective with Eye on A.I., saying that “it’s vital Congress consults a complete ecosystem of A.I. innovators, not just goliaths.”
“The A.I. ecosystem is massive and is made up of many different organizations of all sizes. Congress has a checkered history of favoring the incumbents with regulations, and A.I. is too important to lock out participation in these critical conversations,” she said.
There’s also concern over the fact that these meetings will be closed to the public and press and are considered classified, resulting in calls for greater transparency from researchers, journalists, and advocates for responsible tech. And the call is coming from inside the house, too; just yesterday, Democratic Colorado Sen. John Hickenlooper convened a subcommittee hearing titled “The Need for Transparency in Artificial Intelligence.”
Given how significant A.I.’s impact will be across society, transparency doesn’t seem like an unreasonable thing to expect.
Before we get to the rest of this week’s A.I. news, a quick note about an online event Fortune is hosting next month called “Capturing A.I. Benefits: How to Balance Risk and Opportunity.”
In this virtual conversation, part of Fortune’s Brainstorm A.I., we will discuss the risks and potential harms of A.I., centering the conversation on how leaders can mitigate the technology’s potential negative effects so they can confidently capture its benefits. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
A.I. IN THE NEWS
The A.I. copilot parade continues—this time with Salesforce. Another week, another Big Tech A.I. digital assistant launch. Salesforce today announced Einstein Copilot, an A.I.-powered digital assistant that will let users chat with their CRM using natural language and receive recommendations and task-specific guidance. The company also unveiled Copilot Builder to enable IT teams, admins, and developers to build and further customize their Einstein Copilots. Google and Zoom recently released similar A.I.-powered digital assistants for their own products.
Neuroscience-based A.I. company Numenta comes out of stealth after 18 years of research. The company announced its first commercial project, the Numenta Platform for Intelligent Computing (NuPIC), which it says is designed for any developer to easily jump into with no deep learning experience required. And speaking of digital assistants, Numenta’s team is essentially the Palm senior leadership team reunited. Palm, of course, was the company that put the idea of digital assistants on the map with its PalmPilot.
Anthropic launches Claude Pro paid plan for consumers. Anthropic announced that for $20 per month, subscribers to the premium version of its Claude LLM chatbot get five times more usage than the free tier, including the ability to send more messages, priority access during high-traffic times, and early access to new features. The price puts it exactly on par with OpenAI’s ChatGPT Plus plan, which also costs $20 per month.
Amazon starts requiring authors selling through its e-book program to disclose if they used generative A.I. That’s according to the Associated Press. We can’t say for sure if it was the potentially dangerous A.I.-generated mushroom foraging books we covered last week that tipped the scales here, but Amazon is finally cracking down on A.I.-generated books after mounting pressure from the Authors Guild and various other groups. The company is differentiating between A.I.-generated content (including text, images, and translations created by A.I.-based tools), which must be disclosed, and A.I.-assisted content, which it says authors do not need to disclose.
EYE ON A.I. RESEARCH
A.I.'s water problem. A.I.'s incredible thirst for water is a crucial concern against a backdrop of rising temperatures and global water shortages. Over 2.2 billion people currently live in areas where more than 80% of freshwater has been depleted—a number that’s expected to rise significantly in the coming years. And now, thanks to research out of the University of California, Riverside, we’re getting close to a first look at comprehensive numbers on the impact A.I. is having on Big Tech’s water usage.
In a paper due to be published later this year, researchers at the university estimate that ChatGPT consumes about 16 ounces of water for every 5 to 50 prompts, according to the AP. The team is working to calculate the environmental impact of generative A.I. products and provide a more thorough picture than what companies self-report, accounting for indirect water usage companies don’t measure, like cooling the power plants that supply electricity to data centers.
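For a rough sense of scale, here’s a back-of-the-envelope sketch in Python that converts that reported range into per-prompt figures. It assumes only the 16-ounce estimate cited above plus a standard unit conversion; the output is no more precise than the underlying estimate.

```python
# Back-of-the-envelope conversion of the UC Riverside estimate into
# per-prompt figures. The 16-ounce-per-5-to-50-prompts range comes from
# the reporting above; the rest is straightforward unit conversion.

ML_PER_FLUID_OUNCE = 29.5735  # milliliters in one U.S. fluid ounce

def water_per_prompt_ml(ounces: float, prompts_low: int, prompts_high: int):
    """Return the (min, max) milliliters of water consumed per prompt."""
    total_ml = ounces * ML_PER_FLUID_OUNCE
    return total_ml / prompts_high, total_ml / prompts_low

low_ml, high_ml = water_per_prompt_ml(ounces=16, prompts_low=5, prompts_high=50)
print(f"Estimated water use: {low_ml:.0f} to {high_ml:.0f} mL per prompt")
# Estimated water use: 9 to 95 mL per prompt
```

That works out to somewhere between a few sips and nearly half a glass of water per prompt, depending on where a query falls in the researchers’ range.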
Microsoft and Google recently reported spikes of 34% and 20%, respectively, in their water usage between 2021 and 2022, a period that tracks with their massive LLM training efforts.
It’s all a reminder that while A.I. might be about intelligence that’s artificial, the technology requires a tremendous amount of resources that are anything but. Along with our data and the labor of barely-paid workers who prepare it for use in LLMs, natural resources are the backbone of the burgeoning artificial intelligence industry—from the various minerals being over-mined to create modern computing systems to the massive amounts of water being used throughout the A.I. supply chain. And that’s all aside from the enormous carbon footprint attached to these models.
FORTUNE ON A.I.
Elon Musk is a ‘jerk’ but was a ‘talent magnet’ for OpenAI early on, admits Sam Altman—who now faces direct competition from him —Steve Mollman
The authors of Section 230: ‘The Supreme Court has provided much-needed certainty about the landmark internet law–but A.I. is uncharted territory’ —Ron Wyden and Christopher Cox
Sam Altman risks sounding ‘arrogant’ to explain what’s wrong with Silicon Valley—and why OpenAI has no road map —Steve Mollman
BRAINFOOD
Generative A.I. who? No matter which metric you look at, generative A.I. is up. Funding is skyrocketing. News coverage is up 3,000% in the first half of 2023 compared to the latter half of 2022. And one-third of people say they’re reading more on the topic. That’s all according to a recent survey report from Shift Communications, which has been tracking the generative A.I. boom from the PR perspective. But despite an onslaught of product releases and wall-to-wall discussion that has governments, educational institutions, and workplaces evaluating how to deal with generative A.I., there is virtually no brand recognition in the space.
Other than ChatGPT, which was recognized by 49% of respondents in Shift’s survey of 1,000 general-population adults who have heard of artificial intelligence, no other generative A.I. unicorn even cracked double digits. This held true when the firm looked only at respondents who work in the tech industry, and even with ChatGPT’s wide recognition, only 22% said they had heard of its maker, OpenAI.
Clearly, generative A.I. companies have a lot of work to do if they want to make a name for themselves in this crowded space. They're not only competing against each other, but also against the dominance of Big Tech, the fast-moving nature of the industry, pending regulations, lawsuits, and public opinion. Already, this moment is drawing comparisons to the dotcom bubble, with many investors and technologists opining it could burst sooner rather than later.
This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.