
Can A.I. help investors find the next hot technology?

June 8, 2021, 4:40 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Want to get in on the ground floor of the hottest new technology investment? There’s an app for that. Well, not exactly an app. But there is software.

The idea of scouring public sources of information to find clues to emerging trends is an old one. Human analysts, inside government agencies and businesses, have been doing it for decades. But there is a limit to how much information a human can read. Increasingly, companies are trying to automate this process, with help from machine learning and prediction algorithms. The software is intended to flag technologies that are just about to become commercially viable, allowing investors to place savvy bets.

L’Atelier is a small firm that specializes in providing investing, market, and geopolitical intelligence. Founded in 1978, it is wholly owned by the French bank BNP Paribas. For most of its existence, L’Atelier has relied on human forecasters. “The methodologies available at the time were lower fidelity and less complex, but the world was also less complex,” John Egan, the company’s chief executive, tells me. “Technology changed at a slower pace.”

As the world has become more complicated and faster-moving, L’Atelier has begun to integrate more data and more automation into its approach. In the past year, it created a forecasting engine for its technology prognostications that uses natural language processing to search through hundreds of millions of documents—academic papers, research grants, startup funding data, news stories, and social media posts—in multiple languages.

This NLP software identifies the technology being discussed in each document and feeds that information into an algorithm that scores the significance of that particular piece of information (it accounts for things such as the prestige of the academic journal in which an article has been published, or the influence within a particular field of a person tweeting about a particular technology). Each of these scores is then fed into another algorithm that aggregates them into an overall assessment of how likely the technology is to become commercially important within the next 10 years. “We are not trying to predict future events or future winners,” Egan says. “We just want to identify clear momentum in a particular direction.”

A person using the software can view the weightings applied to each document category. “We wanted the forecast to be as explainable as possible,” says Giorgio Tarraf, L’Atelier’s technology intelligence director. He says the firm also wanted L’Atelier’s clients to be able to manually adjust the weightings used for the final overall score (for instance, lowering the emphasis given to momentum in grant awards and raising the importance placed on patents), because each customer may have a different investing approach or risk tolerance. Users can also narrow the scores to focus on one particular country or region, or adjust the time horizon of the forecast to be shorter or longer within the 10-year window.
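For a rough sense of how that kind of pipeline hangs together, here is a minimal sketch in Python. It is purely illustrative, assuming hypothetical document categories, default weights, and scoring fields of my own invention rather than anything L’Atelier has published, but it shows how per-document significance scores can be rolled up into a single momentum score that a client can re-weight or filter by region.

```python
# Illustrative sketch only: L'Atelier has not published its model, so every
# category name, default weight, and field below is a hypothetical stand-in.
from dataclasses import dataclass

# Default emphasis placed on each document category (user-adjustable).
DEFAULT_WEIGHTS = {
    "academic_papers": 0.30,
    "research_grants": 0.20,
    "patents": 0.20,
    "startup_funding": 0.20,
    "social_media": 0.10,
}

@dataclass
class Document:
    category: str        # e.g. "academic_papers"
    technology: str      # the technology the NLP step identified, e.g. "GANs"
    significance: float  # 0-1 score, e.g. journal prestige or author influence
    region: str          # e.g. "EU", "US", "APAC"

def momentum_score(docs, technology, weights=DEFAULT_WEIGHTS, region=None):
    """Roll per-document significance scores up into one momentum score for a
    technology, weighting each document by its category and optionally
    filtering by region."""
    total = weight_sum = 0.0
    for doc in docs:
        if doc.technology != technology:
            continue
        if region is not None and doc.region != region:
            continue
        w = weights.get(doc.category, 0.0)
        total += w * doc.significance
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A client who trusts patents more than grant awards can simply re-weight:
custom_weights = dict(DEFAULT_WEIGHTS, research_grants=0.10, patents=0.30)

docs = [
    Document("academic_papers", "GANs", 0.9, "US"),
    Document("patents", "GANs", 0.6, "EU"),
    Document("social_media", "GANs", 0.4, "US"),
]
print(momentum_score(docs, "GANs"))                          # default weighting
print(momentum_score(docs, "GANs", weights=custom_weights))  # patent-heavy view
```

In a real system, the per-document significance scores and the category weights would come from learned models and calibration against historical outcomes rather than hand-entered numbers.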

L’Atelier is not the only firm using machine learning to try to create forecasting engines for tech investing. A number of venture capital firms also say they’ve built similar internal tools. And in some cases, those firms claim their A.I. tools don’t just help them identify the technologies that are likely to be good investments, as L’Atelier’s system does, but actually provide insight into the likelihood that a particular startup will succeed. EQT Ventures, the venture capital arm of Swedish private equity group EQT, has an A.I. system called “Motherbrain” that helps guide its investment decisions. The firm recently told Bloomberg it had tweaked its algorithm to help find more female founders to fund, with the result that it was now screening twice as many such startups as it had a year before. Moonfire Ventures, a new seed fund in London, also says it is building a machine learning system to help steer its funding decisions.

Startup Amplyfi, based in Cardiff, Wales, has built software that also uses machine learning to forecast technological developments and to find companies that may be worth investing in. In addition, its tech can be used to monitor competitors or, for governments, to look for tech advances with strategic implications. The company was co-founded by Chris Ganje and Ian Jones, who had built a similar forecasting system for BP. Among Amplyfi’s successes: working on behalf of Harvard University researchers. In 2017, the startup’s technology helped uncover a secretive North Korean biological weapons research effort that was making rapid advances with little public scrutiny, while most of the world’s proliferation experts remained focused on the country’s nuclear and missile development programs.

So what tech trends does L’Atelier’s tech forecasting engine think are poised to take off? Within artificial intelligence, it sees continued growth of neural networks, virtual avatars (systems that can act as autonomous agents for an individual in the digital world), and software-based generative adversarial networks (GANs), the same tech that lies behind deepfakes and which is increasingly being used in the entertainment industry and to help companies create synthetic data of various kinds.

Now, to me, those don’t seem like the kinds of counterintuitive predictions you’d need A.I. to help make. But when human and artificial intelligence concur, it is probably a stronger indicator of a prediction’s accuracy than relying on either human analysis or software alone.

And with that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

McDonald's tries voice recognition in drive-throughs, but the tech may not yet be accurate enough for wide-scale use. In a pilot program, the company is using A.I.-based voice recognition to take down customer orders at 10 drive-throughs in Chicago, according to a CNBC story. But McDonald's CEO Chris Kempczinski told an investment conference that the system is only about 85% accurate in its transcriptions and can handle only about 80% of orders, meaning a human must still step in for the remaining fifth. The CEO acknowledged that it was "a big leap" to go from this small trial to rolling out the A.I. order-taking tech across McDonald's 14,000 restaurants in the U.S. "with an infinite number of promo permutations, menu permutations, dialect permutations, weather — and on and on and on."

U.N. report mentioning use of lethal autonomous weapons raises eyebrows, concerns. Last week, in the news section of this newsletter, my colleague Jonathan Vanian highlighted that a United Nations report on the Libyan Civil War published in March has caught the attention of campaigners worried about the advent of "killer robots." Buried in the 550-page report is a single paragraph that says a retreating warlord's forces were attacked by unmanned aerial drones and "lethal autonomous weapons systems" and "suffered significant casualties." Some publications said this was the first time fully autonomous weapons had killed people in combat. But a closer look at the incident by tech publication The Verge noted that the systems involved seem to have included the STM Kargu-2, a kind of kamikaze attack drone built by a Turkish defense company, and similar "loitering munitions." These are essentially cruise missiles that can hang around a designated location and then strike any target within that area that meets certain criteria. There's a fine and blurry line between these kinds of arms and the lethal autonomous weapons campaigners have been trying to get banned, but as several security analysts quoted by The Verge said, loitering munitions have killed people in other recent conflicts. "It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems," Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, said.

Speaking of military uses of A.I....Britain's Royal Navy has been conducting a major live-fire exercise off the coasts of Scotland and Norway to see whether two A.I.-based decision-support systems for its warships can detect, and advise on how best to defend against, supersonic anti-ship missiles. Sailors involved in the exercise said the two systems, one that provides alerts to help a ship's operations room prioritize incoming threats and another that can recommend which weapons to deploy against them or other countermeasures to take, allowed the ships' crews to react much faster than they normally would be able to, according to a story in Naval News.

Chinese research group builds the biggest language A.I. system yet. The Beijing Academy of Artificial Intelligence (BAAI), a non-profit research institute funded in part by the Chinese government, has created a natural language processing system with 1.75 trillion parameters, the variables it adjusts to learn the relationships between words. That makes the system, which the researchers called WuDao 2.0, the biggest of the ultra-large language models created to date, surpassing systems such as Google's Switch Transformer, which used 1.6 trillion parameters, and OpenAI's GPT-3, which uses a mere 175 billion. These systems are capable of simulating conversational speech and generating text, including poems, long passages of prose, and even recipes. It is unclear, however, how much better WuDao 2.0 is than, say, GPT-3, despite its far greater size. Nonetheless, an article in the South China Morning Post detailing WuDao 2.0 put the development in the context of fears that the U.S. is beginning to lose ground to China in the technological arms race to develop advanced A.I.

EYE ON A.I. TALENT

Steve McMahon, the chief information officer at cloud-based big data company Splunk, is leaving, according to a post from McMahon on his LinkedIn page.

Joaquin Quiñonero Candela, a well-known A.I. engineer who has been the head of Facebook's Responsible AI research team, is leaving the company, citing burnout and a desire to spend more time with his family, according to a post on his own Facebook page. Candela had been the primary subject of a critical story in M.I.T. Technology Review in March that faulted the company's Responsible AI efforts for failing to address what many see as the most important A.I. ethics issues facing Facebook, namely the social network's role in political polarization and the spread of disinformation and hate online.

London-based A.I. firm Faculty, which helps companies and governments build bespoke A.I. systems and consults on A.I. strategy, has hired Janine Lloyd-Jones as marketing and communications director, according to PR Week. She was most recently deputy head of communications at the U.K. Foreign, Commonwealth and Development Office, the British government's foreign service and development aid organization.

EYE ON A.I. RESEARCH

A.I. that claims to detect COVID-19 from chest X-rays may not be what it seems. Several research teams have claimed to have created A.I. algorithms that can accurately diagnose COVID-19 by analyzing chest X-rays. But a team of researchers from the University of Washington has now taken a closer look at several of these systems and raised some red flags. The problem is one that has bedeviled previous A.I. systems dealing with medical imagery: "shortcut learning." That's when an algorithm finds a way to successfully categorize the images in a particular training set without learning the skill that its creators think they are teaching it. It turns out, according to a paper the University of Washington researchers published in Nature Machine Intelligence, that the COVID-19 X-ray algorithms are very good at homing in on secondary data present in the images that is correlated with a likely COVID-19 diagnosis but has nothing to do with finding indicators of disease in the X-ray image itself. For example, in many cases the algorithms gave a lot of weight to the age of the patient, because it was recorded in the X-ray's metadata. That makes sense, because people with severe COVID-19 were more likely to be older. But it doesn't mean the A.I. system actually knows how to detect the clinical features of COVID-19 in a patient's lungs. As Alex DeGrave, a medical science student at the university and co-author of the paper, told tech publication The Register, "The shortcut is not wrong per se, but the association is unexpected and not transparent. And, that could lead to an inappropriate diagnosis.”
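For a sense of why a shortcut like that can look convincing, here is a toy sketch in Python on synthetic data. It is not the researchers' code, and the numbers are made up; it simply shows that a classifier given nothing but a metadata field that happens to correlate with the label can score well without ever examining an image.

```python
# Toy demonstration of "shortcut learning" on synthetic data. This is NOT the
# University of Washington team's code; it just mimics the failure mode: a
# classifier that never sees an X-ray, only a metadata field (patient age),
# can still post a respectable accuracy number because age correlates with
# the COVID-19 label in this (made-up) cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic cohort: COVID-positive patients skew older.
covid = rng.integers(0, 2, size=n)              # 1 = COVID-19 positive
age = np.where(covid == 1,
               rng.normal(68, 10, size=n),      # positives tend to be older
               rng.normal(45, 12, size=n))      # negatives tend to be younger

X = age.reshape(-1, 1)  # the model's only "feature" is the metadata shortcut
model = LogisticRegression().fit(X[:1500], covid[:1500])
preds = model.predict(X[1500:])

# High accuracy despite learning nothing about lung pathology.
print("accuracy from age alone:", accuracy_score(covid[1500:], preds))
```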

FORTUNE ON A.I.

Inside the ad, ad, ad, ad world of YouTube—by Aaron Pressman

Facebook puts the final nail in Mark Zuckerberg’s free speech master plan—by Danielle Abril

BRAIN FOOD

What, after all, is the point of A.I. ethics? That is the question Ben Green, a public policy researcher and fellow at the Gerald R. Ford School of Public Policy at the University of Michigan, asks in a new paper posted to the research repository arxiv.org. "Tech ethics is vague and toothless," Green writes. Too often, he finds, "it is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with “ethics-washing”: promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior."

One fundamental problem, Green argues, is that a lot of the discussion about A.I. ethics and tech ethics "papers over" any real debate about what constitutes ethical behavior and who gets to make those decisions. "The superficial consensus around abstract ideals may thus be hindering substantive deliberation regarding whether the chosen values are appropriate, how those values should be balanced in different contexts, and what those values actually entail in practice," he writes. 

Green calls for a broader framing of the question of A.I. and tech ethics that is based on what he calls "a sociotechnical analysis," which looks at how technology and society interact. For instance, he notes, in response to concerns about algorithmic discrimination, companies focus on "algorithmic fairness, which often centers narrow mathematical definitions of fairness but leaves in place the structural and systemic conditions that generate a great deal of algorithmic harms." Case in point: a facial recognition system can work equally well for both white and Black faces, but if that system is used by a police department that focuses its enforcement resources more heavily on Black neighborhoods than on white ones, in a country where drug laws may be inherently unfair and the justice system biased, then the outcome of using that facial recognition can still be discriminatory.
