
Google’s new ‘multisearch’ tool lets online shoppers browse for hot new items by searching both images and text

April 7, 2022, 1:00 PM UTC

Google believes online shoppers are looking for more compelling ways to discover new items for their wardrobes, so it is trialing a beta version of an upgraded search feature that lets consumers use both images and text to browse for chic new outfits and accessories.

The Alphabet-owned search giant debuted the new feature, called multisearch, on Thursday, making it available to English-language users in the U.S. Unlike standard Google search, which handles either written or visual prompts but not both at once, multisearch lets users combine images and text in a single query.

The idea is that multisearch will help people search more accurately for items they have trouble describing, such as a favorite shoe style in a different color from the pair they own. The tool can be used to find items other than clothes, but it seems particularly suited to fashion.

Beta users can access the feature using the Google Lens icon located on the far right of Google’s core search bar in the Google smartphone app, on both iPhones and Android devices. After the Lens icon is activated, people can snap a photo of their favorite jeans, for instance, and retrieve a list of similar-looking jeans on sale across various third-party websites. 

People can further refine their queries by swiping up and tapping the field that says, “Add to your search.” There they can add text such as the word “blue,” prompting Google to display a list of blue jeans that look similar to the original photo.

Google says the new feature works for screenshots as well, so if people screenshot a photo of an orange dress found on the web, they can then query the word “green” to find green variants of the dress, according to a blog post about the multisearch feature.

For now, multisearch directs users to shopping results; a Google spokesperson told Fortune that is because the company has seen “high user interest” from online shoppers in using visual searches to find new items.

Google competitors such as Meta’s Instagram and Pinterest have also recently debuted features that help consumers buy goods they see in photos, underscoring how consumer tech companies cater to users who scour the internet as if they were window-shopping.

Google touts that its new multisearch feature is powered by the company’s advanced artificial intelligence (A.I.) search algorithms. The company is also researching how another of its A.I. technologies, the Multitask Unified Model (MUM), could further improve multisearch.

MUM is an example of multimodal A.I., a type of software that is rising in popularity among researchers. Multimodal systems can analyze and find patterns within datasets that contain both images and text.

For instance, this week the A.I. company OpenAI debuted its DALL-E 2 software for researchers, which incorporates multimodal techniques to perform feats like automatically generating images based on text prompts. As my Fortune colleague Jeremy Kahn explained, “All a user has to do is type the command ‘a shiba inu wearing a beret and a black turtleneck,’ and DALL-E 2 spits out dozens of photorealistic variations on that theme.”

As researchers continue to make major advancements in A.I., expect tech companies like Google and OpenAI to incorporate them into products.
