In the early morning hours of Dec. 31, 2019, a company called BlueDot—founded in 2013 by a Toronto-based doctor as a software-as-a-service firm to spot, track, and study the spread of infectious diseases—detected an outbreak of “unusual pneumonia” cases near a market in Wuhan, China. The rest is, sadly, history: BlueDot was the first organization to alert the world to the possibility that a deadly global pandemic could be breaking out.
Just how did BlueDot predict the outbreak of a new coronavirus nine days before even the World Health Organization could do so? The answer lies in the fact that the startup uses data that is constantly updated from numerous sources around the world—including news media in 67 languages, information on animal and plant diseases, hospital data, public health announcements, blog posts, social media, data on movements of people and goods, and so on. The freshness of the data, as well as its variety, allows BlueDot’s algorithms to pick up weak signals of epidemics from the patterns and correlations they draw.
BlueDot lived up to its raison d’être because of what I call data alchemy—continuously gathering the freshest possible data from around the world, regardless of its perceived quality or utility, and feeding it to artificial intelligence algorithms so they can make forecasts that would otherwise be impossible. Data alchemy offers CEOs a new post-pandemic lens on A.I., showing them how to use it effectively and how to embed it in decision-making processes.
It has become fashionable to say that the COVID-19 crisis has changed everything, but when it comes to the use of A.I., it really has. Because of the unprecedented changes caused by the pandemic—such as new customer buying habits, remote working, employee engagement, shorter supply chains, and new health care norms for employees—some companies have completely reevaluated the kind of data they gather. Using past data for decision-making has become almost impossible because many variables, be they consumer preferences or supply routes, now change from one day to the next. The market leaders have been compelled to shift to data alchemy, which, it turns out, delivers better forecasts.
As the times continue to be volatile, uncertain, complex, and ambiguous, more companies would do well to switch to data alchemy. At present, they tend to use targeted data from a limited number of sources, investing time and resources to find rich data that is relevant to the problems at hand. When companies know what they’re looking for, akin to a close-ended validation process, that approach works well. Thus, the old data-collection methods work when companies know in which haystack they’re looking for a needle, but not when the haystacks themselves are constantly changing.
Fresh, raw, and broad data
Because traditional data mining tends to use richer but older data, it produces results that are less accurate than those of data alchemy. To improve forecasts, companies must learn to feed their A.I. algorithms fresh, raw data in an open-ended mode of discovery. There’s no such thing as data overload: the more data companies collect, the better. A.I. systems can process enormous amounts of data, and their accuracy increases with volume, variety, and freshness.
Ideally, the data should be:
- Raw. It should not be preselected or preprocessed.
- Broad. It should come from a wide variety of sources.
- Fresh. It should be gathered and analyzed in (close to) real time.
Drawing data with those three characteristics from across the ecosystem ensures the variety and timeliness that forecasts demand. No data is useless; no one knows where the weak signals of unforeseeable events will originate.
Data alchemy’s other component is, of course, the A.I. algorithm. To optimize decision-making, an objective function specifying the problem the algorithm is trying to solve has to be drawn up. The algorithm must combine different software-engineering techniques, such as natural-language processing, fuzzy logic, and neural networks, so it can tackle different types of data. It must also enable mechanisms that facilitate learning between A.I. and human beings.
Data alchemy has provided the pioneers with remarkable resilience. Take, for instance, the predictive analytics system that British Airways has built with the aid of the U.K.’s national data science organization, the Alan Turing Institute. When the COVID-19 pandemic struck last year, British Airways lost most of its customers overnight, with no idea of when they would return. Its flight-scheduling methods became obsolete as travelers altered their plans constantly, borders opened and closed suddenly, and quarantine rules and social distancing norms kept changing—without warning. Because of its data-gathering process, which uses the latest data from a variety of sources, the airline was able to reroute flights and adjust prices as circumstances changed.
As British Airways shows, the practice of data alchemy isn’t just confined to digital natives; a growing number of legacy companies are also updating their A.I.-based decision-making processes.
As the future stays unpredictable, companies in many industries are adopting data alchemy. Some luxury brands have developed data alchemy–based scenarios for inventory management, while logistics companies have used it to build more agile supply chains. A few smart insurance companies, too, are turning to data alchemy. Projections of would-be customers’ revenues and profits based on selective data from the past, such as the last five years’ performance, are no longer guides to the future. Only data alchemy enables insurers to make the right decisions. It also speeds up customer-facing decisions by turning multilayered procedures into one seamless process.
Look at China’s Ant Financial, whose approach embodies data alchemy. Its algorithm delivers a decision on each loan application from a small or medium-size enterprise—including the interest rate it can offer. The algorithm continually refines itself, relying on steady streams of fresh data from over 3,000 sources, such as Taobao, an e-commerce platform in the Alibaba Group. As a result, Ant Financial’s default rate is less than 1.5%—lower than at the average financial-services company that uses traditional data mining—and it will almost certainly drop over time as the algorithm learns and improves. If Ant Financial didn’t rely on fresh data, it could not keep pace with the speed of change in a post-pandemic world.
How do you get started with data alchemy? Across your organization, identify processes and decisions where the level of accuracy has dropped after the COVID-19 pandemic broke out. That decline in accuracy is probably due to the unprecedented speed and magnitude of changes your company is facing. If you also sense that you may not have all the data you need or haven’t identified all the parameters to make sound decisions in key areas, it’s an additional red flag. Shifting to data alchemy in those areas will generate value quickly.
Data alchemy isn’t going to be a short-lived phenomenon; it enables quicker, more granular, and more accurate decisions even as the speed of change increases, so it’s bound to lead to permanent changes in decision-making processes. But if you can see the need for data alchemy, so can your rivals. That’s why it’s imperative to act before they do and wrest a competitive advantage for your business.
François Candelon is a managing director and senior partner of BCG, and the global director of the BCG Henderson Institute.