How language A.I. transformed Bloomberg’s business—and may change yours too

June 7, 2022, 5:37 PM UTC

A few weeks ago, I promised more from my conversation with Gary Kazantsev, head of quant technology strategy in the office of the chief technology officer at financial news service Bloomberg. Previously, he ran machine learning engineering there. (Full disclosure: I worked at Bloomberg News before Fortune.) Gary, who also teaches courses on machine learning at Columbia University, is a font of knowledge about the current state of artificial intelligence in business.

You might know Bloomberg from its news service, which includes cable television and radio channels, as well as a news wire and website. But the company makes most of its money from financial data. Financial institutions subscribe to the “Bloomberg terminal”—once a dedicated piece of hardware, but now a software package that can be accessed online. A subscription gives users access to an immense range of data about stocks, bonds, commodities, and currencies, as well as the ability to search, parse, and graph that data. There is so much news and data available “on the terminal” that a perennial problem for Bloomberg is that most of its customers only ever use a very limited subset of its functions. Compounding this problem is the fact that, until recently, users had to memorize obscure three- and four-letter codes to run the terminal’s functions. I remember that as a new employee at Bloomberg in 2011, I spent an entire week in training just to learn the rudiments of using the terminal.

Kazantsev was keen to show me how Bloomberg has, in the past few years, used new capabilities in natural language processing (NLP) to transform how customers find content on the Bloomberg terminal. And the way Bloomberg has deployed NLP holds lessons for other companies hoping to use NLP to change how customers interact with their products and with the business as a whole.

Thanks to advances in NLP, a Bloomberg user no longer needs those obscure codes. She can simply write in the command line, “Find all the U.S. corporate bonds with a yield greater than 4%, a rating better than BBB, and a maturity before 2025,” or “Who are the top five holders of Apple stock?” and the system will provide the answer. Before, getting this information required a time-consuming, multi-step process involving several different commands and, in the case of screening searches for stocks or bonds, filling in fields in a database query interface. The new system also auto-completes as a user is typing, suggesting possible queries—much like the Google search bar does. This allows users to discover options—such as a type of analysis or a graphing feature—they may not otherwise have realized were available.
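To make the shift concrete, here is a toy sketch of what the interface change amounts to: free text goes in, structured screening criteria come out. It is my illustration, not Bloomberg's code; the production system relies on the learned components described below, while this version uses regular expressions purely to show the input and output shapes, and every name in it is invented.

```python
# A minimal, hypothetical sketch of turning a natural-language screening request
# into structured filter criteria. Bloomberg's real system uses learned models;
# this toy uses regular expressions only to illustrate the input/output shape.
import re

def parse_bond_screen(query: str) -> dict:
    """Extract simple filter criteria from a bond-screening request."""
    criteria = {}
    m = re.search(r"yield (?:greater than|above) (\d+(?:\.\d+)?)%", query, re.I)
    if m:
        criteria["min_yield_pct"] = float(m.group(1))
    m = re.search(r"rating better than ([A-D]{1,3}[+-]?)", query, re.I)
    if m:
        criteria["min_rating"] = m.group(1).upper()
    m = re.search(r"maturity before (\d{4})", query, re.I)
    if m:
        criteria["max_maturity_year"] = int(m.group(1))
    return criteria

print(parse_bond_screen(
    "Find all the U.S. corporate bonds with a yield greater than 4%, "
    "a rating better than BBB, and a maturity before 2025"
))
# -> {'min_yield_pct': 4.0, 'min_rating': 'BBB', 'max_maturity_year': 2025}
```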

Not only have these new NLP capabilities helped Bloomberg’s customers get more out of the product; they have also improved how the company’s customer service reps answer client questions. The “question answering” NLP A.I. is now used in about 50% of Bloomberg’s customer service calls, and in more than a third of those cases, the A.I.’s top suggested answer is the one the customer service rep ends up recommending to the customer.

While there’s been a lot of buzz about ultra-large language models, Kazantsev says that Bloomberg’s natural language question-answering functionality is not built from a single ultra-large language model. Instead, it is a modular system made up of many different components. These include a “query intent model” that tries to predict which function the user wants to run, and a “semantic parser” that tries to classify the relationships between the words in the sentence and then label those words as either entities (essentially proper nouns of some kind) or attributes (is it a date, for example?). Another module, Kazantsev says, essentially runs that semantic parser in reverse to generate the auto-complete suggestions. And for some aspects of what Bloomberg does, it uses a “large-ish” natural language model that has been fine-tuned on financial text.
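Here is a skeletal sketch of how such a modular pipeline might be composed, based only on the components Kazantsev describes. Every class, method, and keyword rule below is a hypothetical stand-in: in the real system these modules are learned models (some of them fine-tuned language models), not keyword lookups.

```python
# Hypothetical composition of a modular question-answering pipeline:
# an intent model, a semantic parser, and a "reverse parser" for auto-complete.
from dataclasses import dataclass

@dataclass
class ParsedQuery:
    intent: str        # which terminal function the user likely wants to run
    entities: dict     # proper nouns, e.g. {"company": "Apple"}
    attributes: dict   # typed values, e.g. {"top_n": 5}

class QueryIntentModel:
    """Predicts which function the user wants to run (stubbed with keywords)."""
    def predict(self, text: str) -> str:
        return "holders_screen" if "holders" in text.lower() else "bond_screen"

class SemanticParser:
    """Labels words in the query as entities or attributes (stubbed)."""
    def parse(self, text: str, intent: str) -> ParsedQuery:
        entities = {"company": "Apple"} if "apple" in text.lower() else {}
        attributes = {"top_n": 5} if "top five" in text.lower() else {}
        return ParsedQuery(intent, entities, attributes)

class SuggestionGenerator:
    """Runs the parser 'in reverse': maps a partial query to full candidate queries."""
    def suggest(self, prefix: str) -> list[str]:
        templates = [
            "Who are the top five holders of {company} stock?",
            "Graph {company} share price over the last year",
        ]
        return [t.format(company="Apple") for t in templates
                if prefix.lower() in t.lower()]

def handle(text: str) -> ParsedQuery:
    """Route a free-text query through the intent model and the parser."""
    intent = QueryIntentModel().predict(text)
    return SemanticParser().parse(text, intent)

print(handle("Who are the top five holders of Apple stock?"))
print(SuggestionGenerator().suggest("Who are"))
```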

Why doesn’t Bloomberg use an ultra-large language model, of the sort that OpenAI has built with GPT-3? Well, when models get that big—more than 100 billion parameters—it takes too long to run each query, Kazantsev says. Each answer would take seconds; Bloomberg needs to generate answers in fractions of a second.
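A back-of-the-envelope calculation suggests why. The numbers below are illustrative public figures, not anything Bloomberg has disclosed: at batch size one, generating each token of an answer requires streaming roughly all of the model's weights through the hardware, so memory bandwidth puts a floor on latency.

```python
# Rough, illustrative arithmetic (my assumptions, not Bloomberg's measurements):
# per-token latency floor when decoding is memory-bandwidth bound.
params = 175e9                     # GPT-3-scale parameter count
bytes_per_param = 2                # 16-bit weights
weight_bytes = params * bytes_per_param        # ~350 GB of weights
aggregate_bandwidth = 8 * 2e12     # e.g. 8 accelerators at ~2 TB/s each
per_token_floor = weight_bytes / aggregate_bandwidth   # seconds per generated token
tokens_in_answer = 50

print(f"~{per_token_floor * 1e3:.0f} ms per token; "
      f"~{per_token_floor * tokens_in_answer:.1f} s for a {tokens_in_answer}-token answer")
# -> roughly 22 ms per token and over a second per answer, before any other overhead
```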

Kazantsev says he’s fascinated by ultra-large language models from a research standpoint—they do seem to have remarkable emergent properties (such as explaining the logic of jokes without being trained to do so)—but for many practical business tasks, the problem remains: what do you do with them? They are simply too unwieldy to be practical—at least for now.

There are some key lessons for other companies here: The NLP revolution is real and can be transformative. Customers increasingly want to interact with technology in natural language—not complicated codes, or, for that matter, a series of drop-down menus and database fields.

This is true for computer programming too—one of the most impactful things to come out of the NLP revolution may be A.I.-enabled software that lets a person specify in natural language what they want a program to do, and then writes the appropriate code. But modular systems made of smaller components, rather than ultra-large language models, are more likely to be how most businesses bring natural language understanding to customers.
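On the programming point, tools such as GitHub Copilot, built on OpenAI's Codex model, already offer a version of this workflow: the developer writes a natural-language comment and the model proposes an implementation. Here is a hypothetical illustration of that interaction (the suggested function is hand-written for this newsletter, not actual model output, and the data is made up).

```python
# Hypothetical natural-language-to-code interaction.

# Developer's prompt:
#   "Return the n largest holders from a list of (name, shares) pairs,
#    sorted by share count, largest first."

# Code a Codex-style assistant might suggest:
def top_holders(holdings: list[tuple[str, int]], n: int = 5) -> list[tuple[str, int]]:
    return sorted(holdings, key=lambda pair: pair[1], reverse=True)[:n]

print(top_holders([("Fund A", 1300), ("Fund B", 1030), ("Fund C", 890),
                   ("Fund D", 620), ("Fund E", 350), ("Fund F", 300)]))
# -> the five funds with the largest share counts, in descending order
```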

Finally, before we get to this week’s news, I want to wish a fond farewell and good luck to my co-writer on this newsletter, Jonathan Vanian, who is leaving Fortune after seven years. Look out for him popping up on your TV on CNBC.

Also, a quick correction: In last week’s newsletter, I misspelled the name of Omnilert, one of the companies that makes gun detection software. I regret the error.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

A.I. ethics board members at police tech company resign over plan to arm drones with Tasers. Nine of 13 members of the A.I. ethics board at Axon, which sells technology to law enforcement agencies, resigned last week after the company decided to market drones armed with Tasers despite the board's opposition. Axon CEO Rick Smith proposed Taser drones as a possible answer to school shootings, tech publication Protocol reports. The resigning board members said in a statement that they did not think Taser-armed drones were the right solution. The company said it would pause work on the drones and "engage with key constituencies" to determine its next steps.

Robots that can pick raspberries. One of the most difficult tasks for robots to master is picking soft fruit—and raspberries may just be the toughest challenge there is. A robot needs to have sophisticated computer vision to spot the ripe fruit on the plant and then apply just enough pressure to pluck the berry, without bruising it. Now a startup called Fieldwork Robotics, a spinout from the University of Plymouth in the U.K., has created robots able to master this task. It has deployed them on a farm in Portugal that supplies raspberries to several major British supermarkets, according to a story in The Guardian.

PimEyes facial recognition app could erode privacy and feed "privacy protection" rackets. That is according to The New York Times, which in a trial of the facial recognition software found that PimEyes was able to surface a surprising number of accurate photos from the depths of the Internet. This was true even when the subject in the image used to initiate the search was wearing sunglasses or a mask, or was turned away from the camera. But, as the newspaper details, the app sometimes surfaced images that people would prefer to forget—and PimEyes controversially makes money not just from subscriptions to its app but also from premium services designed to help people remove unwanted photos from the Internet. At least one person the paper interviewed, who found pornographic photos of herself that were taken when she was young and vulnerable, said this practice was extortive. The paper also found that PimEyes sometimes wrongly identified women in pornographic material, raising serious concerns about the ramifications of such misidentification.

A.I. will increasingly be used to help reduce injuries in professional sports. Algorithms that can analyze video of a player and predict injuries are coming to an arena near you soon, The Wall Street Journal reports. The idea is to use the technology to replace old-fashioned guesswork and even new-fangled wearable devices. These devices feed data back to players and coaches, but can sometimes prove awkward or uncomfortable to wear and produce noisy data, the paper says. The use of the computer vision algorithms raises delicate questions about who owns the data—the player, the team, or the software vendor—and what they can do with it. Some also doubt there's a way to prove the injury prediction software is accurate without putting players at risk in a way that would violate research ethics.

Bureaucratic inertia is holding back A.I. analysis of satellite data. Wired takes a look at why A.I. systems that analyze satellite imagery, which could help with everything from insurance claims and disaster recovery to combating deforestation, aren't being adopted more quickly. The problem, the publication concludes, is that governments and companies are often hidebound and reluctant to adopt the new analytical methods, and that the political will to act on what the analysis shows is often lacking.

EYE ON A.I. TALENT

Ferret, a company in Los Angeles that uses A.I. to provide clients with "real-time, risk-assessment intelligence," has hired Greg Loos as its chief operating officer. Loos was a co-founder and former president of Pondera Solutions, a fraud analytics software company acquired by Thomson Reuters in 2020.

QBE Insurance Group, based in Sydney, Australia, has hired Christopher Bannocks to be its group chief data officer, trade publication Insurance Business Magazine reports. Bannocks, who will be based in London, was previously chief data and analytics officer at food company Danone.

EYE ON A.I. RESEARCH

Using A.I. to "listen" to the health of coral reefs. Scientists have used machine learning to analyze the health of coral reefs from underwater audio recordings. It turns out that healthy coral reefs generate a unique audio signature, which scientists tell Reuters is reminiscent, to human ears, of a "crackling, campfire-like" sound, because of the noise generated by all the life living in, on, and among the coral. From the story: The artificial intelligence (AI) system parses data points such as the frequency and loudness of the sound from the audio clips, and can determine with at least 92% accuracy whether the reef is healthy or degraded, according to the team's study published in Ecological Indicators journal. Researchers hope the system will help scientists track the health of coral reefs worldwide that are under threat from climate change, harmful fishing practices, and pollution.
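For the curious, here is a minimal sketch of the general approach, not the researchers' actual pipeline: extract simple loudness and frequency features from reef recordings and train a healthy-versus-degraded classifier. The file names, labels, and model choice below are all placeholders.

```python
# A minimal sketch of classifying reef health from audio. Placeholder data and
# a generic classifier stand in for the study's actual method.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def reef_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=None)
    rms = librosa.feature.rms(y=y)                            # loudness over time
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # "brightness" of the sound
    return np.array([rms.mean(), rms.std(), centroid.mean(), centroid.std()])

# Placeholder dataset: (recording path, label), where 1 = healthy, 0 = degraded.
clips = [("healthy_reef_01.wav", 1), ("degraded_reef_01.wav", 0)]  # ...many more clips
X = np.stack([reef_features(path) for path, _ in clips])
labels = np.array([label for _, label in clips])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)  # in practice, evaluate accuracy on held-out recordings
```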

FORTUNE ON A.I.

Elon Musk delays Tesla’s A.I. Day to finish work on the Optimus humanoid robot—by Christiaan Hetzner

Roblox is one of the biggest metaverse success stories. So why hasn’t it turned a profit?—by Rob Walker

Current and former Meta staffers describe confusion, disarray and declining confidence in Mark Zuckerberg as Sheryl Sandberg departs—by Jeremy Kahn and Jonathan Vanian

Microsoft breaks with Amazon and Starbucks on unions in vow to voluntarily recognize labor: ‘We have a lot to learn’—by Marco Quiroz-Gutierrez

Tech and crypto firms experienced massive layoffs in May. Here’s how bad it really is—by Andrew Marquardt

BRAIN FOOD

Did an A.I. just invent its own secret language? Giannis Daras, a computer science Ph.D. student at the University of Texas at Austin, created a Twitter firestorm last week when he tweeted out some findings from a non-peer-reviewed research paper he co-wrote with Alexandros Dimakis, a UT Austin professor. Daras claimed to have discovered that DALL-E 2, the ultra-large text-to-image generation A.I. built by OpenAI, had created its own strange language. The way DALL-E 2 normally works is that a user enters a text prompt, such as "two farmers arguing about vegetables," and DALL-E 2 generates images of that scene in different styles. Daras tweeted out some examples in which he found strange text strings that DALL-E 2 seemed to have associated with classes of images, such as: the text "Vicootess" with images of vegetables; the text "Apoploe vesrreaitars" with images of birds (or at least, Daras said later, "things that fly"); and the text "Contarra ccetnxniams luryca tanniounons" with "bugs or pests." Daras' tweet thread was soon picked up by media outlets that ran headlines such as The New York Post's "Artificial intelligence spotted inventing its own creepy language."

Not so fast, chimed in Benjamin Hilton, a researcher at London-based nonprofit 80,000 Hours who also has access to DALL-E 2. Hilton said he tried to recreate Daras and Dimakis's results and could not reproduce most of them. (The association between "Apoploe vesrreaitars" and bird-like images was the notable exception.) Hilton speculated that most of what Daras and Dimakis had found was simply random, and that in other cases DALL-E 2 may be drawing on the Latin taxonomic names for various animals it encountered in its training data.

Daras and Dimakis wound up revising their research paper, which was posted on arxiv.org. They basically fell back on the argument that all they had ever really wanted to highlight was that, because some of this nonsense text can get DALL-E 2 to generate specific kinds of images, the system is susceptible to "adversarial attacks." In other words, someone could deliberately use certain unusual prompts to elicit results most people wouldn't expect. Daras argued these results show that people will need to be careful how they use DALL-E 2 and other text-to-image generation software, as its output is far more unpredictable than most people realize.

That's an interesting point. But it's not an A.I. developing its own a secret language.
