Hello, Fortune senior writer David Meyer here in Berlin, filling in for Jeremy today.
The Chinese authorities are already cracking down on generative A.I. In a series of measures announced Tuesday, the country’s Cyberspace Administration said it wanted to “promote the healthy development and standardized application” of A.I.s that generate text, pictures, sounds, videos, code, and other content, and that means…well, pretty much the kind of rules you’d expect to come from Beijing.
Generative A.I. must produce content that reflects “the core values of socialism” and avoids anything that might subvert state power or “undermine national unity.” False and “extremist” information is out, as is “content that may disrupt economic and social order.” The wish list also includes a lot of things that governments around the world may soon be calling for, when they finally try to catch up with the astonishing pace of generative A.I.’s development—like non-discriminatory datasets and output, and respect for intellectual property rights.
And then there’s this: Chinese companies that want to use generative A.I. to serve the public will first have to submit their tech for an official security assessment.
Cast your mind back a week or two, to when Elon Musk et al. called for a six-month pause in the development of next-generation generative A.I. systems, for safety’s sake. Remember all those people, like Eric Schmidt, who said such a pause would only benefit China? Well, here’s China imposing limits of its own.
One can certainly see why Beijing is moving so quickly to regulate generative A.I. tech. These are more or less the same rules the Communist Party applies to the Chinese internet, in keeping with the government’s well-established track record of ensuring that new forms of information distribution comply with its censorship-friendly framework.
But it’s hard to see how the measures won’t seriously hold back big Chinese tech firms like Alibaba Group and SenseTime, which have in the last couple of days laid out major chatbot plans. On the one hand, we have Alibaba Group CEO Daniel Zhang trilling that generative A.I. and cloud have brought us to a “technological watershed moment” as “businesses across all sectors have started to embrace intelligence transformation to stay ahead of the game.” On the other, there’s state-run media warning against “excessive hype” and calling for “an orderly market with standards for information disclosure, to support the long-term development of A.I.” It’s not hard to see why, despite their recent reveals, the likes of Baidu, Alibaba Group and SenseTime all saw their share prices drop today.
If there’s an inherent tension here between international competitiveness and control by the CCP, President Xi Jinping’s Party is likely to win. And that means additional challenges for Chinese companies already hamstrung by Western sanctions curbing access to the powerful hardware needed for A.I. innovation. So I’m not sure that the China threat is such a solid argument against the rest of the world taking a breather to figure out its own regulatory responses to generative A.I.
More A.I. news below.
David Meyer
Twitter: @superglaze
david.meyer@fortune.com
A.I. IN THE NEWS
Quora’s Poe chatbot is becoming more useful. When the public first got invite-free access to Poe (“Platform for Open Exploration”) a couple of months back, it was primarily a convenient way for people to interact with a range of chatbots from the likes of OpenAI and Anthropic. Now, however, it lets users create customized bots using prompts, with those established chatbots providing the backend and Poe hosting the front end. Early examples include bots that are tailored to speak like a pirate, mildly insult the user, or automatically translate the user’s messages into emojis. Quora CEO Adam D’Angelo: “We hope this new feature can help people who are talented at prompting share their ability with the rest of the world, and provide simple interfaces for everyone to get the most out of A.I.”
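Quora hasn’t detailed the plumbing, but prompt-based bot customization typically boils down to pinning a fixed instruction to the front of every conversation and letting the backend model handle the rest. Here’s a minimal sketch in Python, assuming OpenAI’s chat completions API as the backend; the model name, prompt, and helper function are all illustrative, not Poe’s actual implementation:

```python
# Illustrative sketch of prompt-based bot customization: the whole
# "custom bot" is just a fixed system prompt prepended to each message.
# Assumes the openai Python package and an API key; Poe's internals
# are not public, so this is only a guess at the general pattern.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PIRATE_BOT_PROMPT = (
    "You are PirateBot. Answer every message in the voice of an "
    "18th-century pirate, but keep the answers accurate and helpful."
)

def pirate_bot(user_message: str) -> str:
    # Prepend the bot's prompt, then let the backend model do the work.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative backend model
        messages=[
            {"role": "system", "content": PIRATE_BOT_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message["content"]

print(pirate_bot("What's the weather like in Berlin?"))
```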
Baidu sues Apple over fake ERNIE apps. The Chinese tech giant Baidu has initiated a volley of lawsuits over a bunch of iOS apps that purport to be its ChatGPT-rivaling ERNIE chatbot. The targets include not only the developers of said apps but also Apple itself, for hosting the fakes. Baidu’s official WeChat account for its A.I. division: “At present, Ernie does not have any official app…Until our company's official announcement, any Ernie app you see from App Store or other stores are fake.” Indeed, the only way to access ERNIE currently is to apply to Baidu for a test account. Over the weekend there were at least four of these bogus apps in Apple’s App Store, Reuters reported.
Twitter is reportedly fiddling around with generative A.I. It’s not yet clear what Twitter intends to achieve with the technology, but Insider reported today that Elon Musk has purchased some 10,000 GPUs for the project. The article repeatedly points out that Musk was one of the most prominent signatories of that recent open letter calling for an A.I. moratorium, which is certainly worth mentioning. However, until we have more details, it’s hard to gauge whether he’s being hypocritical—the letter only called for a pause in the development of models more powerful than OpenAI’s GPT-4, and whatever Twitter is building may not cross that threshold.
EYE ON A.I. RESEARCH
Meta last week published a promptable foundation model—called Segment Anything Model, or SAM—that can identify and select objects within pictures in response to a user’s prompt. The model was trained on 11 million “licensed and privacy respecting images,” according to Meta’s paper.
Right now, someone could for example pick out the cat in a picture by typing “cat,” or they could just click on the animal. But as the SAM team makes clear on a dedicated website for the project, SAM could end up taking all sorts of input prompts, such as the gaze from someone’s AR/VR headset. “In the future, SAM could be used to help power applications in numerous domains that require finding and segmenting any object in any image,” the company said in a blog post that mentioned creative and scientific use cases. The code for running SAM can be found here, and the full underlying dataset here.
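The released code supports point and box prompts out of the box, and a click-style interaction looks roughly like this. This is a minimal sketch, assuming the segment_anything package from Meta’s GitHub repo and a downloaded model checkpoint; the image file, checkpoint file name, and click coordinates are illustrative:

```python
# Minimal sketch of click-based prompting with Meta's segment_anything
# package. Assumes the package is installed and a ViT-B checkpoint has
# been downloaded (file name below may differ; check Meta's repo).
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load the model from the downloaded checkpoint.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Embed the image once; prompts then run cheaply against that embedding.
image = np.array(Image.open("cat.jpg").convert("RGB"))  # illustrative file
predictor.set_image(image)

# A single foreground click at (x, y) on the cat; label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # illustrative coordinates
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)

# Keep the highest-scoring mask: a boolean array the size of the image.
best_mask = masks[np.argmax(scores)]
print(best_mask.shape, scores)
```

Note the design: set_image computes the expensive image embedding once, after which each prompt is just a lightweight decoder pass, which is what makes interactive, real-time prompting feasible.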
FORTUNE ON A.I.
Three of Meta’s top execs—including Mark Zuckerberg—are now spending most of their time on A.I. in a bid to claw into the market—by Eleanor Pringle
A.I. could lead to a ‘nuclear-level catastrophe’ according to a third of researchers, a new Stanford report finds—by Tristan Bove
Advanced A.I. like ChatGPT, DALL-E, and voice-cloning tech is already raising big fears for the 2024 election—by Jeremy Kahn
Artificial intelligence could make a difference for young readers around the world—or make literacy even less equitable—by David Risher
It’s time for Sundar Pichai to step up and be more clear about Google’s A.I. search plans—by David Meyer
BRAINFOOD
A.I.-generated response suggestions (“smart replies”) are super-useful, but there’s a catch. In a Nature paper last week, a group of Cornell University and Stanford University researchers described experiments that found people are more likely to perceive their conversational partners as cooperative when those partners use smart replies—the suggested responses tend to skew toward positive sentiment. “A.I.-generated sentiment affects the emotional language used in human conversation,” they wrote. However, people react badly when they suspect their correspondent is leaning on the feature: “People who appear to be using smart replies in conversation pay an interpersonal toll, even if they are not actually using smart replies.”
“One explanation is that people might project their negative views of A.I. on the person they suspect is using it,” one of the paper’s coauthors, Cornell associate professor Malte Jung, told The Register. “Another explanation could be that suspecting someone of using A.I. to generate their responses might lead to a perception of that person as less caring, genuine or authentic.”
It will be interesting to see the practical implications for companies taking advantage of the generative A.I. capabilities being baked into Microsoft’s and Salesforce’s customer relationship management software. Will the coming deluge of automated replies put smiles on their customers’ faces, or set their teeth on edge? And if everyone is suspected of taking the bot route, what could a holdout company do to reassure users that yes, there’s a real person back there?
I’ll leave you with a recommendation for a recent post by Hugging Face machine-learning scientist Nathan Lambert, entitled “Behind the curtain: what it feels like to work in A.I. right now.” If anyone’s burying a time capsule anytime soon (do people still do that?), they might consider dropping a copy in there.
This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays and Fridays. Sign up here.