Good morning. If AI is a workplace superpower, should it come with a warning label? According to a new study by researchers at the University of California, Berkeley, people who use generative AI tools at work get such a productivity boost that they end up doing more work, not less.
“We found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so,” the researchers write in a Harvard Business Review article that’s getting a lot of buzz. The productivity boost may sound like a good thing to some managers, but the researchers warn that this “workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making,” and ultimately to lower quality work and turnover. A Faustian bargain for the knowledge worker.
Today’s tech news below.
Alexei Oreskovic
@lexnfx
alexei.oreskovic@fortune.com
Want to send thoughts or suggestions to Fortune Tech? Drop a line here.
Tech's landmark social media addiction trial kicks off

Some of the biggest players in tech are on trial in Los Angeles over allegations their platforms intentionally addict young users. The trial marks the first time tech giants Meta and YouTube will answer to a jury over the allegations—Snapchat and TikTok reached a settlement with the plaintiffs last month. The six-to-eight-week trial will likely feature testimony from Meta CEO Mark Zuckerberg, Instagram head Adam Mosseri, and YouTube CEO Neal Mohan. Legal experts have drawn parallels to the landmark tobacco litigation of the 1990s.
Opening statements began in Los Angeles on Monday, with plaintiffs' attorney Mark Lanier claiming that the two companies had purposely built "machines designed to addict the brains of children."
Lanier argued that his client, identified by the initials K.G.M., developed mental health problems due to a social media addiction. Meta's defense countered that the plaintiff's struggles stemmed from family difficulties rather than platform design, emphasizing the ongoing scientific debate about whether social media addiction exists.
"This case is about two of the richest corporations in history who have engineered addiction in children's brains," Lanier told the jury. "I'm going to show you the addiction machine that they built, the internal documents that people normally don't get to see, and emails from Mark Zuckerberg and YouTube executives." Meanwhile, Meta is facing another landmark trial that also kicked off on Monday. This one, in New Mexico, accuses the platform of failing to protect children from sexual predators.—Beatrice Nolan
Amazon plans marketplace for publishers and AI firms
Amazon is planning to launch a marketplace where publishers can sell their content to AI companies, according to The Information. The company, which develops its own LLMs and offers other companies' models through its AWS cloud service, has described the planned content marketplace in slides, the report says.
The move comes as publishers and AI firms clash over how content should be licensed and paid for, with publishers worried that AI-driven search and chat tools are eroding their traffic and ad revenue. Cloudflare and Akamai launched a similar marketplace effort last year. Microsoft piloted its own version and last week rolled it out more widely. But so far, it’s not clear how many AI companies are buying on these marketplaces, or at what volumes. Some large publishers have struck bespoke deals worth millions of dollars per year with OpenAI, Anthropic, and others.—Jeremy Kahn
Watchdog claims OpenAI violated California's new AI law
OpenAI may have violated California’s new AI safety law with the release of its latest coding model, GPT-5.3-Codex, according to allegations from AI watchdog group the Midas Project.
OpenAI CEO Sam Altman has said the new model qualifies as "high" on the company's internal rating system for potential risks—in this case, the risk that it could be used to automate cybersecurity attacks.
OpenAI’s policies require models with high cybersecurity risk to be released with special safeguards. But according to the Midas Project, OpenAI did not implement these safeguards before launching GPT-5.3-Codex, which the group argues violates California’s SB 53. The law, which went into effect in January, requires major AI companies to publish and stick to their own safety frameworks.
OpenAI says the Midas Project’s interpretation of its policy is wrong, though it acknowledges that the wording in its framework is “ambiguous” and says it sought to clarify the matter in the safety report it released with GPT-5.3-Codex last week.—BN
More tech
—Two xAI cofounders leave. Six of the company's 12 cofounders are no longer there.
—Paramount raises offer for Warner. Will pay Netflix's breakup fee.
—Spotify earnings wow Wall Street. The streamer adds 38 million new users and the stock surges.
—Facebook lets users animate their profile with AI. Wave to your friends!
—Ex-GitHub CEO raises $60 million seed round. His startup helps humans and AI agents interact.
—Mark Zuckerberg becomes Florida man. The billionaire buys waterfront Miami home.