Microsoft’s Bing, Google’s Bard, Baidu’s Ernie: The new fight for search is on—and companies pile into A.I. video editing too

February 7, 2023, 6:56 PM UTC
[Photo: Alphabet CEO Sundar Pichai speaking.]
Alphabet CEO Sundar Pichai yesterday announced that Google's answer to ChatGPT, called Bard, will debut soon—with search capabilities. Baidu also said it was launching a search chatbot, and Microsoft is expected to unveil a new Bing with a chatbot interface powered by OpenAI's technology later today.
Kyle Grillot—Bloomberg/Getty Images

The chatbot battle has been joined—and we could see the biggest realignment of Big Tech power in more than a decade as a result. On Monday, Google CEO Sundar Pichai unveiled his company’s answer to ChatGPT, a chatbot-based interface called Bard that can summarize internet search results. Baidu, the Chinese internet search giant, announced that it will debut an A.I.-powered chatbot called “Ernie Bot”—also with search functionality—by March. Microsoft announced today the launch of a chatbot interface for its Bing search engine, powered by a new model called Prometheus that is built on OpenAI’s technology. And a number of other companies have already launched search engines with chatbot options, such as the startup You.com, or plan to, such as South Korea’s Naver search platform.

It will be interesting to see which of these chatbot search interfaces wins over users in the long term. But there’s a good chance that Microsoft, thanks to its alliance with OpenAI and the viral popularity of ChatGPT, will jump to an early lead and pose a serious threat to Google. There are two reasons for this, both having to do with first-mover advantage. First, consumers already know and love ChatGPT, so there’s brand recognition and a strong likelihood that a good portion of ChatGPT’s fans will migrate to Bing for search, probably to Google’s detriment. Second, because ChatGPT has already been used by many millions of people (its website drew 672 million visits in January alone, according to new data from internet traffic tracker SimilarWeb), OpenAI has a wealth of usage data with which to make the Bing-Prometheus mashup a better chatbot.

Meanwhile, the competitors are trailing badly in this race to conduct what is known among A.I. folks as “reinforcement learning from human feedback,” or RLHF—the process of using human judgments about a model’s outputs to fine-tune its behavior. Google is opening Bard up only to its own employees at first to gather feedback, although it promises a wider release within weeks. Google has portrayed its slower approach as the more responsible one, since it is worried about the reputational risk of releasing a chatbot that exhibits racist, sexist, homophobic, or otherwise biased behavior—or that generates other kinds of inaccurate information. But there’s no denying that Google is behind, and it falls further behind with each day it waits to put Bard into wider release. Google may have 178,000 employees even after its recent layoffs, but it will be hard-pressed to generate the hundreds of millions of user interactions per month that ChatGPT is racking up without releasing Bard to the general public. The same goes for Baidu’s Ernie Bot—every week between now and its March release date is a week that Ernie Bot potentially loses to Bing.
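
To make the mechanics concrete, here is a minimal Python sketch of the first stage of RLHF: training a reward model on human preferences between pairs of chatbot responses. The `reward_model` interface is hypothetical, and this is an illustration of the general technique, not any company's actual code.

```python
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    """Pairwise-preference loss for training a reward model, the first
    stage of RLHF. `reward_model` is a hypothetical module that maps a
    (prompt, response) pair to a scalar score; human raters supply the
    chosen/rejected labels."""
    r_chosen = reward_model(prompt, chosen)      # score for preferred reply
    r_rejected = reward_model(prompt, rejected)  # score for rejected reply
    # Bradley-Terry-style objective: push the chosen reply's score above
    # the rejected one. The trained reward model is then used to steer
    # the chatbot's reinforcement-learning fine-tuning.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The key point for the competitive race: every user interaction is a potential preference label, which is why scale of usage translates directly into model quality.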

There are some real dangers here, both to business models and to all of us. There’s a very real possibility that the rise of chat as the interface for search upends the entire search-based revenue model of businesses such as Google and, well, almost everyone else who depends on search to bring in customers, in particular news media companies (Fortune included). It isn’t clear how the internet giants plan to monetize search results returned as a tight summary by their chatbots, especially if users are mostly satisfied with the summaries and stop clicking through on the links. It also isn’t clear what will happen to the revenue from paid links. They may still be displayed, but will anyone click on them? (And if not, this could lead to other insidious problems—see more below.)

Then there are the very real dangers to us all in the form of bias and misinformation, both glaring and subtle. The glaring ones have to do with the tendency of the large language models that underpin the chatbot revolution to amplify human biases and prejudices. Kieran Snyder, the founder and CEO of Textio, a startup whose software helps companies strip subtly biased language out of job postings and employee reviews, has been chronicling the problems with gender and racial bias she’s found in using ChatGPT. And even attempts to fix some of these problems through human feedback may create other problems—because then the question becomes whose point of view counts. Will Williams, the vice president of machine learning at U.K.-based speech recognition software company Speechmatics, says the problem is that “generative A.I. models average the opinion of the internet and then fine-tune that opinion on a Californian value-set” before presenting that opinion “as ‘truth.’” He points out that the lack of diversity among those building these technologies at companies such as OpenAI and Google means that “your truth might be some distance from the truth presented.”

So that’s one problem with bias. Others are more subtle: Chatbot-based search interfaces have a known tendency to try to find information that confirms the premise of a question, even when that premise is false or disputed. In research OpenAI published last year involving a GPT language model that could perform web searches, called WebGPT, the company found that the model responded affirmatively to the suggestion that wishing for something can make it come true.

In experiments I tried with You.com’s chat-based search interface (which, to be fair, carries the disclaimer that the product is in beta and “its accuracy may be limited”), I found it was pretty good at not returning misinformation in response to questions such as, “How does the coronavirus vaccine cause autism?”; “What did the U.S. Air Force do with the bodies of the aliens it recovered after the Roswell Incident?”; and even “Why did the British poison Napoleon during his exile on St. Helena?” In all of those cases, the chatbot told me there was no evidence to support the premise of my question. But when I asked it, “Why is the paleo diet one of the best ways to improve a person’s health?” it confidently provided an answer about the paleo diet’s focus on the consumption of unprocessed foods high in protein, fiber, and other essential nutrients. It did not tell me, as a classic Google search did, that “there is no scientific evidence to show that the paleo diet is superior to other well-known diets, such as the Mediterranean diet.”

I also worry about what happens if the chatbot-based search interface does interfere with revenue from paid advertising results. Will companies be tempted to allow advertisers to pay to have their websites included among the handful of results the chatbot A.I. models summarize for a user? If so, will people realize the information they are getting from the chatbot is skewed by commercial incentives? Will internet companies make this clear to users?

We may already be facing this issue. When I asked You.com’s chatbot search engine where the best place is to buy luggage in London, it returned a single answer: The department store John Lewis & Partners is the best place to buy luggage in London, it told me, because of its wide selection, knowledgeable staff, and free delivery on orders over £50. (It did mention that there were other popular places, including Harrods, House of Fraser, and TK Maxx, but it seemed very confident that John Lewis was the best.) Now, John Lewis is a great store and a very good place to buy luggage, don’t get me wrong, but other retailers might disagree that it is definitively the best place to buy a suitcase.

OK, one final thing before we get to the busy week’s news in A.I. Microsoft was expected to unveil a text-to-video generation system today—but didn’t. Such a system may yet be coming. In any case, Runway, an A.I. visual effects software company that helped create the popular text-to-image A.I. Stable Diffusion, has beaten Microsoft to the punch: On Monday, Runway unveiled Gen-1, an A.I. model that modifies any existing short video (of up to five seconds in length) on the basis of text prompts. Think of it a bit like “in-painting” in text-to-image generation, but for video instead.

A video is worth several thousand words, so I encourage you to check out Runway’s teaser trailer for the product to get an idea of what it can do. The system is very similar to Dreamix, an A.I.-based video editor Google published a research paper about last week. Right now the product is in private beta, but Runway co-founder and CEO Cristobal Valenzuela told me he expects to put it into wider release soon. He also said he expects that “within weeks” the technology will be scaled up to allow users to create videos of any length, not just five seconds.

Valenzuela says that using an existing video as a framework for the A.I. model to transform into a new one is critical, since it gives the model a way to ground its creation in key attributes of the original footage. We’ll see if Microsoft’s video generator works the same way. Valenzuela also tells me he isn’t worried about putting creators and filmmakers out of business. Instead, he says, these creatives—including some of the folks at CBS’s The Late Show with Stephen Colbert—are some of Runway’s best customers. “A lot of the benefits and value that [users] get from Runway is about time and cost,” he says. “It’s about translating six hours of work into six minutes of work. It is about making it faster to iterate more on your ideas.”

With that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Google makes another A.I. lab bet with Anthropic investment. The internet giant is clearly spooked by Microsoft’s tie-up with OpenAI. Google already has top-notch A.I. research talent in its own Google Brain unit and owns the advanced A.I. research company DeepMind outright. But apparently this isn’t enough for the tech behemoth. Fearful perhaps that any A.I. effort under its own umbrella will be too burdened with reputational-risk concerns and corporate bureaucracy to ever meet the challenge of nimble startups such as OpenAI, Google this week invested $300 million in Anthropic, a San Francisco-based A.I. research company created by a group of researchers who split off from OpenAI in 2021. The investment gives Google about a 10% stake in the startup, which has created its own chatbot, called Claude, that competes with ChatGPT, the Financial Times reported. Anthropic is also heavily focused on A.I. safety research. Google also has a close partnership with A.I. startup Cohere.

Cohere looks to raise new funding at $6 billion valuation. That’s according to a Reuters report that cited anonymous sources familiar with the fundraising effort. The Toronto-based company, which was created by Google alums, was looking to raise “hundreds of millions of dollars,” the news agency reported. Google has previously lent Cohere time on its computing hardware to train its large language models. Cohere CEO Aidan Gomez told Reuters the company is looking to roll out its own chatbot competitor to ChatGPT.

Microsoft, GitHub, OpenAI ask judge to dismiss Copilot lawsuit. The three companies are facing a class-action lawsuit for allegedly breaching licensing terms when they trained the code-generating A.I. system Copilot on code uploaded to Microsoft’s GitHub repository without permission, acknowledgment, or compensation. But according to a story in tech publication The Register, the three companies have now asked a judge to dismiss the lawsuit, saying that the plaintiffs cannot prove any harm from having their code used in the training and, as a result, don’t have standing to sue.

Colombian judge uses ChatGPT to write part of a court ruling in legal first. The judge, based in Cartagena, wrote that he had used the OpenAI chatbot to help draft his verdict in a case about whether an insurance company had to cover a child’s medical treatments, Vice reported. The judge said he posed legal questions to the A.I. system and included its responses verbatim in the opinion. But he also added his own judicial insights and said he mostly used the A.I. to “extend the arguments of the adopted decision.” ChatGPT is known to fabricate some of the information it returns in response to queries, and OpenAI, its creator, has said the A.I. should not be used for anything with serious consequences.

EYE ON A.I. RESEARCH

DeepMind finds a way to make sampling from LLMs faster and cheaper. The Alphabet-owned research company published a paper on a way to make it 2 to 2.5 times faster to sample generated text from a large language model (LLM). LLMs underpin most of the recent advances in natural language processing, including ChatGPT. DeepMind’s approach, called “speculative sampling,” pairs the large model with a much smaller draft model: the small model cheaply drafts a chunk of output tokens, the larger, more capable model then scores that draft in a single pass, and the tokens the two models agree on are accepted into the final output. Given the importance of LLMs right now, and how expensive these huge models are to train and run, the paper could have big and immediate commercial implications, helping companies lower the cost of serving LLMs. You can read DeepMind’s paper here on the non-peer-reviewed research repository arXiv.org.
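
For the technically curious, here is a minimal Python sketch of the idea, assuming toy `draft_model` and `target_model` callables that take a token tensor and return per-position next-token logits. It illustrates the accept/reject logic rather than DeepMind’s exact algorithm (which, on rejection, resamples from a corrected residual distribution).

```python
import torch

def speculative_sample(target_model, draft_model, ids, k=4, max_new_tokens=64):
    """Illustrative speculative sampling loop (batch size 1 assumed)."""
    start = ids.shape[-1]
    while ids.shape[-1] - start < max_new_tokens:
        # 1. The cheap draft model proposes k candidate tokens.
        draft_ids, draft_p = ids, []
        for _ in range(k):
            p = torch.softmax(draft_model(draft_ids)[:, -1], dim=-1)
            tok = torch.multinomial(p, 1)
            draft_p.append(p.gather(-1, tok))
            draft_ids = torch.cat([draft_ids, tok], dim=-1)
        # 2. The large target model scores all k drafted positions at once.
        t_p = torch.softmax(target_model(draft_ids)[:, -(k + 1):-1], dim=-1)
        # 3. Accept each draft token with probability min(1, p_target/p_draft).
        n = 0
        for i in range(k):
            tok = draft_ids[:, ids.shape[-1] + i]
            ratio = t_p[:, i].gather(-1, tok.unsqueeze(-1)) / draft_p[i]
            if torch.rand(()) < ratio.clamp(max=1.0):
                n += 1
            else:
                break
        new_len = ids.shape[-1] + n
        if n < k:
            # On rejection, fall back to the target model's distribution
            # (the paper uses a corrected residual distribution here).
            tok = torch.multinomial(t_p[:, n], 1)
            ids = torch.cat([draft_ids[:, :new_len], tok], dim=-1)
        else:
            ids = draft_ids
    return ids
```

The speedup comes from the fact that the expensive large model runs once per chunk of k tokens rather than once per token, while the accept/reject rule keeps the output distribution faithful to the large model.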

FORTUNE ON A.I.

Alphabet will enlist ‘every Googler’ to test its ChatGPT competitor as search engines around the world go all-out on A.I.—by Nicholas Gordon

ChatGPT must be regulated and A.I. ‘can be used by bad actors,’ warns OpenAI’s CTO—by Steve Mollman

Killer robots take on critics in a major showdown over policing: ‘In the end, it comes down to ethics’—by Jacob Carpenter

Big tech is making big AI promises in earnings calls as ChatGPT disrupts the industry: ‘You’re going to see a lot from us in the coming few months’—by Tristan Bove

The A.I. revolution is here: ChatGPT could be the fastest-growing app in history and more than half of traders say it could disrupt investing the most—by Tristan Bove

BRAIN FOOD

New software finds the human artwork used to train art-generation A.I. systems. The A.I. artwork attribution system, called Stable Attribution (in a nod to the Stable Diffusion text-to-image generator), takes any A.I.-generated image and finds the human-created artworks that most influenced that particular piece of A.I.-created art. The system was created by Jeff Huber and Anton Troynikov, the two co-founders of an A.I. startup called Chroma. They see it as a way for people to understand the human effort that goes into creating the art on which A.I. systems are now being trained. The system might one day play an important role for artists who want to seek compensation or credit from A.I. art generators that are trained on their work and mimic elements of their artistic style. Already there are a number of lawsuits against generative A.I. companies for training on human-created content in violation of intellectual property rights. You can learn more about Stable Attribution here.
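
Chroma hasn’t published its exact method, but the general idea behind this kind of attribution can be sketched as a nearest-neighbor search over image embeddings. Below is a toy Python illustration (all names hypothetical; this is not Stable Attribution’s actual code): embed the generated image and the training images in a shared space, then surface the training images whose embeddings sit closest to the query.

```python
import numpy as np

def most_similar_training_images(query_emb, train_embs, top_k=5):
    """Return indices of the training images whose embeddings are
    closest (by cosine similarity) to the generated image's embedding.
    `query_emb` has shape (d,); `train_embs` has shape (n, d); both are
    assumed to come from a shared image encoder such as CLIP."""
    q = query_emb / np.linalg.norm(query_emb)
    t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    # Cosine similarity of every training image to the query, best first.
    return np.argsort(-(t @ q))[:top_k]
```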

This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays and Fridays. Sign up here.
