There was a time when traders relied on reporters to tell them about market news. Today, computers can do a better job: Algorithms can detect everything from earnings data to social media sentiment, far faster than any keyboard-pounding journalist.
This recent wave of automated information has brought big changes for financial news, and economists and the media industry are still digesting the implications.
“Classic breaking news is a function of technology now. Computers find it, and traders trade on it,” said James Stewart, a journalism professor and New York Times columnist.
Stewart made the observation on Friday at “How the Media Affects Markets,” a Columbia University event that explored new trends in financial reporting.
The use of computers to track market-moving events is hardly new, of course. Organizations like the AP are already using “robots” to report on quarterly earnings, while Bloomberg trading terminals are able to analyze talk on Twitter (TWTR) that could affect a stock price.
But the rapid evolution of big data analytics and natural language processing means that machines are able to ingest and translate more news than ever before.
One example is a project, run by the Data Science Institute, that culls news reports to detect and explain disasters ranging from earthquakes to terrorist attacks. Drawing on tens of thousands of prior stories from the AP and New York Times, the project promises to provide timely, reliable information about a market-moving crisis.
All of this suggests that markets, which in theory perform better with more information, should be more efficient than ever. But despite the wealth of computer-generated data, the market still appears freighted with familiar forms of behavioral bias.
According to research by Paul Tetlock, a Columbia finance professor, investors react to negative news terms like “fear” or “snuffed out” more dramatically than they do to positive ones. And, in a preliminary study of survey data, Yale scholars found that investor sentiment sways irrationally when the word “crash” appears in the media.
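The kind of term-counting this research relies on can be sketched in a few lines. The snippet below is a minimal illustration, not Tetlock’s actual methodology: the word lists and headlines are invented for the example, and real studies use much larger dictionaries and controls.

```python
# Minimal sketch of dictionary-based media sentiment scoring.
# The term lists and headlines below are invented for illustration only.

NEGATIVE_TERMS = {"fear", "crash", "snuffed", "plunge", "crisis"}
POSITIVE_TERMS = {"rally", "surge", "record", "gain", "optimism"}

def sentiment_score(text: str) -> float:
    """Return (positive - negative) term counts, normalized by word count."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    neg = sum(w in NEGATIVE_TERMS for w in words)
    pos = sum(w in POSITIVE_TERMS for w in words)
    return (pos - neg) / len(words)

headlines = [
    "Stocks rally on record earnings optimism",
    "Fear of a crash grips markets as rally is snuffed out",
]
for h in headlines:
    print(f"{sentiment_score(h):+.3f}  {h}")
```

Even a toy scorer like this makes the asymmetry findable: one can check whether price moves following negative-scoring headlines are larger than those following positive-scoring ones.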
One reason these old biases persist is that the recent flood of online news means more information, but not always better information.
Stewart, the Times reporter, pointed to a proliferation of media reports last week that claimed, prematurely, that the European Central Bank’s new monetary measures had failed. He described many of the reports as a form of “spewed out” journalism that can obscure what is actually happening in the market.
This flood of instant news, which is often more noise than signal, can also be exacerbated by algorithms’ ability to curate and customize news for individual readers.
According to Financial Times journalist Gillian Tett, a broader cultural trend of customization, which began with coffee and music, now extends to information as well. The result, in the case of media, is what Tett describes as “intellectual rabbit holes” and tribal echo chambers.
All this might lead a pessimist to conclude that the growth of computer-driven reporting has failed to improve markets much at all. In the worst case, it means a cascade of unreliable reports that are then amplified in people’s respective filter bubbles. In the longer term, though, ongoing improvements in machine learning will likely help everyone, investors and media outlets alike, parse the swarm of news and filter out the reliable bits.
Finally, the age of automated news is also having an immediate effect on journalists, who were once central to bringing news to the market. According to Tett, the role of reporters today should be to act as “silo busters” who can acquire information from diverse sources and present it in context.
Stewart likewise said he sees an opportunity for reporters, even as technology makes many of their traditional roles obsolete.
“The mass of bad information will reinforce the importance of reliable journalism,” said Stewart. “For journalists, it’s now about connecting, synthesizing and analyzing.”