Hello and welcome to Eye on AI.
The U.S. AI Safety Institute Consortium, the European AI Office, the Council of Europe’s Committee on AI, the UN AI Advisory Body: There are so many AI regulatory bodies popping up.
It feels like every day, there’s another task force, advisory group, or oversight body being created or proposed to tackle AI regulation. There’s also a flurry of guidelines being issued and legislation being proposed at every level of government around the globe, and the execution of President Joe Biden’s wide-ranging AI executive order alone is almost too much to follow. “There are a lot of cooks in the AI policy kitchen,” as Axios Pro said in its recent report examining the state of AI policy and regulations.
Between the pace of AI development and the increasingly complex regulatory picture, it’s becoming ever harder to follow what’s happening, who’s in charge of what, and what progress is actually being made. Even the Center for AI and Digital Policy, which I’d consider the premier resource for all things AI policy and which watches all this like a hawk, mentioned in its newsletter this week how difficult it is to track all the AI bills cropping up. But as I pointed out in a recent issue of Eye on AI discussing the challenges of actually enforcing the recently enacted EU AI Act, making sure these efforts are executed is the most important part.
“Following regulatory developments around AI is challenging because of their quantity, the rapid pace of their introduction, and the skills needed to decipher their content and implications,” said Ravit Dotan, an AI researcher and ethicist who created and has been continuously updating a collection of AI policy resources—such as her free AI legislation trackers—to help people cut through the noise.
Trackers like Dotan’s help bring all the information together in one place, and the Electronic Privacy Information Center, for example, also keeps a handy list of all the state AI laws being proposed and enacted. To see where AI policy is headed, one can also keep an eye on the lobbying efforts coming out of the tech industry. Much of this action is happening over private dinners and roundtables with heads of state around the world, as Politico detailed yesterday in a report going inside the “shadowy global battle” to tame AI technology. But as the article, based on conversations with three dozen politicians, policymakers, and tech executives, makes clear, these efforts have now hit the mainstream and will continue to draw increased public scrutiny. As calls for regulation surge, money spent on AI lobbying spiked 185% last year, CNBC reported.
Aside from these debates and the new AI-specific efforts unfolding, another challenge is following the activities of enforcement agencies that apply non-AI-specific laws to AI. Dotan pointed to the FTC’s case against Rite Aid for deploying AI facial recognition technology without safeguards, as well as the agency’s investigation into OpenAI.
“These activities are very consequential to AI regulation but they get much less attention, so it can be challenging to even hear about them,” Dotan said, adding that while some new laws can be helpful, there isn’t actually a need to reinvent the legal system.
“In fact, pointing the finger at new laws can be an effective way to distract the public’s attention from the fact that companies need to be held accountable to the laws that are already in place. More attention to enforcing existing, non-AI-specific laws on AI is a great, and probably faster, way to protect the public,” she said.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
Databricks joins the LLM race with new DBRX model. As I reported last week, the data analytics and machine learning company has been one of the big winners of the generative AI boom, providing the tools used by many companies adopting the tech. Now, the firm is launching DBRX, its own model similar to OpenAI’s GPT series and Google’s Gemini. The company describes the model—which is available on GitHub and Hugging Face—as “open source,” but as TechCrunch notes, it will be “exceptionally hard” to actually use DBRX unless you’re a Databricks customer. The company spent $10 million training the model and shared benchmark data showing it outperforming GPT-3.5, though it still falls short of OpenAI’s leading GPT-4 model.
Amazon invests an additional $2.75 billion in Anthropic. The venture investment is the largest ever made by the tech giant and adds to the initial $1.25 billion it poured into Anthropic last September. Overall, Amazon has said it will invest up to $4 billion in the AI startup, which recently released a new family of Claude models. The deal values Anthropic at $18.4 billion and marks the startup’s fifth funding deal in the past year; together, those deals total $7.3 billion, according to CNBC.
Leonardo is being used to generate nonconsensual sexual images. That’s according to 404 Media, which uncovered examples of such material circulating online and investigated how slight prompt tricks can get the AI image generator to easily bypass its guardrails and create nude images of celebrities. Leonardo is a popular platform for text-to-image models and has raised $31 million from investors including Samsung. Overall, the use of AI tools to create nonconsensual sexual content remains a pressing issue surrounding the proliferation of these models, with others like Microsoft having come under fire as well.
The AI boom is turbocharging Silicon Valley’s talent wars. Competition for talent with experience training LLMs and working on AI’s toughest problems is extraordinarily fierce among Big Tech companies and AI startups, the Wall Street Journal reported. According to data from Levels.fyi cofounder Zuhayeer Musa, the median salary among 344 machine learning and AI engineers at Meta was nearly $400,000, while OpenAI is offering salaries around $925,000 (both before bonuses and equity). A senior director of engineering at Meta made this point crystal clear yesterday in a LinkedIn post announcing his departure from the company to pursue independent research and see where it lands him: “This time there were no layoffs or anything thrilling involved. In fact, I am more bullish than ever about Meta with the company’s increased focus on AI. But given the incredible competitive pressure in the field, there is really no advantage to be inside a large corp if you want to build cool stuff on top of LLMs.”
FORTUNE ON AI
Marc Benioff alludes to the market’s possible AI oversaturation by joking about a “genius” toothbrush —Chloe Berger
1 in 3 Americans who die in hospital had sepsis—and that’s just one of the many areas where AI can improve early diagnosis —Carolyn Barber
Google’s former Andys—Conrad and Harrison—reunited at VC firm S32, talk AI —Allie Garfinkle
AI CALENDAR
April 15-16: Fortune Brainstorm AI London (register here)
May 7-11: International Conference on Learning Representations (ICLR) in Vienna
May 21-23: Microsoft Build in Seattle
June 5: FedScoop’s FedTalks 2024 in Washington, D.C.
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
July 15-17: Fortune Brainstorm Tech in Park City, Utah (register here)
Aug. 12-14: Ai4 2024 in Las Vegas
EYE ON AI RESEARCH
Scams and spams. In a new preprint paper out of Stanford, researchers put aside issues of disinformation and election lies to home in on a different side of AI-generated images and their presence on Facebook. They found that scammers and spammers are getting high engagement posting unlabeled AI-generated images on Facebook—and that the platform’s algorithms are recommending this content widely to users who don’t follow the pages doing the posting. Additionally, many users don’t seem to recognize that the images are synthetic.
These scammers and spammers are driving audiences to content farms and selling products that don’t appear to actually exist, and they appear to be manipulating their audiences in various ways, the researchers concluded. These aren’t brand-new practices, but AI image generators seem to be an exceptionally useful tool for such scams, thanks to how cheaply and instantaneously they can create attention-grabbing images. The researchers analyzed AI-generated images posted by 120 accounts and found users have interacted with them hundreds of millions of times. One post including an AI-generated image was even among the 20 most viewed pieces of content on Facebook in Q3 2023, garnering 40 million views. You can read the paper here.
PROMPT SCHOOL
ConciergeGPT. I usually love looking up restaurants and cafés, but I didn’t have the energy for it the other night. I was planning to spend the next morning hopping between art galleries and knew I’d need somewhere to grab lunch. I wanted to try somewhere new, and I was wishing that the perfect place—somewhere I could also relax and do some reading, and that’d be conveniently along my route—would just pop into my head without all the work. So I put ChatGPT to the test.
“Hi, I need your help finding a café or restaurant along a specific route for lunch tomorrow,” I told ChatGPT, specifically GPT-4, adding details about the destinations I was going to, the general vibe I was looking for, and other preferences.
“It'd be great if the place was affordable and more toward the middle of the route, so it'd be a nice break from the walk. Can you please research the area and give me at least 5 suggestions?” I wrote.
ChatGPT absolutely delivered on the recommendations. Interestingly enough, it replied with a split screen showcasing two separate lists of suggestions, each veering a little bit more into one of the preferences I listed, and asked me to choose which I preferred. For each suggestion, it gave the address and a brief description, though it said it wasn’t able to provide menu prices. All of the recommendations honestly sounded really good to me, and I did end up going to—and enjoying—one of the cafés it suggested.
I didn’t just take ChatGPT’s word for it, however. I did a little research on my own, both because I wanted to check prices and see where the spots fell on a map, and because I can’t simply trust one source. I turned to Google Maps, but I wondered if ChatGPT could be at all helpful for visualizing where each of these recommendations fell along my route. So I gave it one more prompt:
“Would it be possible to also make a rough map depicting the route and where each restaurant is along it?”
Readers, it was not possible. The map, which looked more like the chart of a horrible stock crash, could not have been more wrong. The way ChatGPT situated the destinations relative to one another was completely inaccurate, and the walking route it drew between them showed no awareness of the city blocks or buildings that would be in the way.
I was really impressed with ChatGPT’s recommendations, but it all came crashing down with the map request. This type of capability will likely improve with better multimodality, meaning a model’s ability to use various types of data, including images, videos, and graphics, as both inputs and outputs. But for now, ask ChatGPT to give you café recommendations; don’t ask it to draw you a map.
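And if you’d rather experiment with this kind of concierge prompt outside the chat window, the same request can be sent through OpenAI’s API. Here’s a minimal sketch assuming the openai Python package and an OPENAI_API_KEY environment variable; the route and preferences below are invented placeholders rather than the exact details I used:

```python
# A minimal sketch of a "concierge" prompt via OpenAI's Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
# The route and preferences below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

concierge_prompt = (
    "Hi, I need your help finding a café or restaurant along a specific "
    "route for lunch tomorrow. I'll be walking between art galleries, "
    "and I'd like a relaxed spot where I can sit and read. "
    "It'd be great if the place was affordable and more toward the middle "
    "of the route, so it'd be a nice break from the walk. Can you please "
    "research the area and give me at least 5 suggestions?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": concierge_prompt}],
)

print(response.choices[0].message.content)
```

As with the chat version, treat the output as a starting point: verify addresses, hours, and prices yourself before setting out.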
This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.