Disco, bell bottoms, big hair…and cutting-edge A.I.?
This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
What does a programming language invented in the 1970s have to do with today’s cutting-edge A.I.?
Quite a lot, as it turns out. But first, a little background: Most of today’s breakthroughs in artificial intelligence have been the result of neural networks. Self-driving cars and software that can beat the world’s best players at games like Go and StarCraft, write uncannily humanlike prose, and detect breast cancer in mammograms better than an experienced radiologist—that’s all neural networks.
But neural networks have some drawbacks. It is difficult to endow them with existing knowledge: The laws of physics, for instance, or the grammar of a language. Neural networks can learn these rules from scratch, by trial and error, but that takes lots of time, computing power, and data—all of which can be expensive.
Another problem: Neural networks have a tendency towards what data scientists call “overfitting.” That’s when a machine learning model finds correlations in its training data that seem to have predictive power, but which turn out to be spurious in the context of the algorithm’s intended purpose. This problem comes about because neural networks can ingest so much data and encode relationships along so many dimensions, they can always find patterns—but they can’t easily figure out causation.
A famous example: University of Pittsburgh researchers used a neural network to try to predict which patients with pneumonia were most at risk of sudden deterioration. After being trained on historical data, the algorithm falsely classified patients with asthma as extremely low risk. It turned out that (human) doctors, knowing asthma patients were at high risk, were more vigilant with them and intervened earlier and more aggressively. So, yes, in the training data, it looked like asthma correlated with good patient outcomes. But that wasn’t a very helpful correlation for an algorithm that triages pneumonia patients.
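The dynamic behind stories like this is easy to reproduce in miniature. Below is a deliberately tiny, invented illustration (all data and model names are made up): a model that simply memorizes its training set scores perfectly on that set but fails on new cases, while a more constrained model that matches the true underlying rule generalizes.

```python
# A deliberately tiny, invented illustration of overfitting. true_label is the
# real rule; the training labels are corrupted with some noise (every third
# example is flipped).

def true_label(x):
    return x > 5  # the underlying rule a model ought to learn

# Training data with noisy labels; held-out test data with clean labels.
train = [(x, true_label(x) if x % 3 else not true_label(x)) for x in range(10)]
test = [(x, true_label(x)) for x in range(10, 20)]

memorized = dict(train)  # the "overfit" model: a pure lookup table

def overfit_predict(x):
    # On unseen inputs, fall back to the label of the nearest memorized point.
    nearest = min(memorized, key=lambda k: abs(k - x))
    return memorized[nearest]

def simple_predict(x):
    return x > 5  # a constrained model that happens to match the true rule

def error(model, data):
    return sum(model(x) != y for x, y in data) / len(data)
```

The memorizing model has zero training error yet misclassifies every held-out case, while the constrained model disagrees with some of the noisy training labels but generalizes perfectly.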
Recently, a team of researchers from Johns Hopkins University and Bloomberg came up with a method with the potential to overcome some of these problems. (Full disclosure: I used to work at Bloomberg and know one of the people involved in the research.) Surprisingly, at the heart of their solution is Datalog, a logic programming language developed in the late 1970s.
Datalog is a derivative of Prolog, a programming language invented in 1972 by A.I. researchers interested in getting computers to understand French. Five years later, Datalog was specifically designed to create rules for querying a database of facts. Logic programming languages like this were important in that era of A.I. research, when scientists thought the best way to imbue computers with intelligence was through a series of high-level rules or instructions for how the software should manipulate data. This kind of “symbolic A.I.” reached its apogee in the 1970s and early 1980s with so-called “expert systems”—software that tried to mimic the decision-making of human specialists in various fields from accounting to chemistry.
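To get a feel for what "rules for querying a database of facts" means: a classic Datalog rule like `ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).` derives new facts from existing ones. Here is a rough, hand-rolled sketch in Python of how such rules are evaluated (real Datalog has its own syntax; the facts and rule names here are invented for illustration):

```python
# Illustrative forward-chaining evaluator in the spirit of Datalog.
# Facts are (relation, arg1, arg2) tuples; rules are applied repeatedly
# until no new facts can be derived (a "fixpoint").

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive(facts):
    """Apply two ancestor rules until no new facts appear."""
    facts = set(facts)
    while True:
        new = set()
        # Rule 1: ancestor(X, Y) :- parent(X, Y).
        for rel, x, y in facts:
            if rel == "parent":
                new.add(("ancestor", x, y))
        # Rule 2: ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
        for rel, x, y in facts:
            if rel == "ancestor":
                for rel2, y2, z in facts:
                    if rel2 == "parent" and y2 == y:
                        new.add(("ancestor", x, z))
        if new <= facts:
            return facts
        facts |= new
```

From the two stated parent facts, the evaluator derives that alice is an ancestor of both bob and carol, without either conclusion being stored explicitly.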
Computer science turned away from symbolic A.I. because, while it was good at logic, it wasn’t very good with perception (is that a cat in the photo? Can Alexa understand my spoken instructions?), struggled with exceptions and edge cases, and couldn’t cope well when data was missing or erroneous. It was also always limited by what experts already knew about the rules and correlations in a system—it couldn’t discover whole new approaches. Plus, the computer programs needed to run such systems were laborious to craft. “We had this hubris that we could write down everything and this sank expert systems in the 1980s,” says Jason Eisner, a computer scientist at Johns Hopkins who, along with his graduate student, Hongyuan Mei, worked on the new research with Bloomberg.
But Eisner says there was a lot about symbolic A.I. that was powerful. “There is a lot of old stuff and it is not bad just because it is old,” he says.
Eisner was particularly interested in trying to create A.I. that could help doctors draw insights from medical data. But there was a problem—different types of medical data are collected at vastly different time scales. A heart monitor takes a reading multiple times per second, while information from a routine medical examination might come only once or twice per year. Meanwhile, most neural network systems assume a consistent time interval.
At a conference, Eisner wound up talking to Gideon Mann, the head of data science in Bloomberg’s office of the chief technology officer, who had a similar problem: making sense of all the noisy data—economic forecasts, news stories, company filings, social media posts—that can affect a company’s stock performance. This information also arrives at very different time steps: Stocks trade in fractions of a second, news comes out daily or weekly, social media posts appear many times a day, and company earnings reports arrive only quarterly.
Eisner had an intuition that if you could limit the kinds of correlations the neural network could make, it would both help solve the overfitting problem and allow a single A.I. system to deal more efficiently with data collected at disparate time scales. And Datalog seemed like a good tool for doing so.
Eisner’s student Mei received a three-year Ph.D. fellowship from Bloomberg that included a 12-week summer internship with the company. Mei spent part of it working on this problem. Along with Eisner and Guanghui Qin, another Johns Hopkins graduate student, and Minjie Xu, a machine learning researcher and software engineer at Bloomberg, Mei created a hybrid system—they call it Neural Datalog Through Time (or NDTT)—that marries Datalog with a neural network.
Datalog is used to delineate possible events that can occur in a time series based on events that have already occurred. But the probability of each event happening is worked out by a neural network. Eisner compares this to a coloring book, in which a human uses Datalog to sketch the outline of the image—the hard facts—but then a neural network colors in the picture based on probabilities—softer tendencies—it’s learned from training data. “The network can only color within the lines,” he says. “But within those boundaries, it can choose what is useful to color, and its choice of colors is beyond what a human could do by hand.”
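The division of labor Eisner describes can be sketched roughly as follows. Everything in this snippet is hypothetical and invented for illustration (the real NDTT system is far more sophisticated), but it shows the core idea: symbolic rules delineate which events are even possible given the history, and a learned scoring function, here a trivial stand-in for a neural network, distributes probability only among those candidates.

```python
import math

# Hypothetical sketch of a rules-plus-neural-network hybrid. The event names
# and rules are invented; the scorer is a toy stand-in for a trained network.

def possible_events(history):
    """Hard, Datalog-style constraints: nothing can happen before kickoff."""
    if "kickoff" not in history:
        return ["kickoff"]
    return ["pass", "shot", "steal"]

def score(event, history):
    """Toy stand-in for a neural network's learned score for an event."""
    return float(len(event)) - 0.1 * len(history)

def event_probabilities(history):
    """Softmax over only the rule-permitted events: coloring within the lines."""
    candidates = possible_events(history)
    weights = [math.exp(score(e, history)) for e in candidates]
    total = sum(weights)
    return {e: w / total for e, w in zip(candidates, weights)}
```

Before kickoff, the single permitted event gets probability 1.0 no matter what the scorer says; afterwards, probability is shared only among the three events the rules allow.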
For instance, imagine a digital assistant that wanted to predict whether Ruth might want to book travel to Chicago for next Thursday. You might limit such a system to take into account certain variables—such as the weather in Chicago and whether Ruth had a board meeting already booked in New York that day—and to discount other variables, such as the score of the Cubs game or the weather in Seattle. But the exact way in which the weather in Chicago influenced the likelihood of Ruth wanting to book the trip would be up to the neural network to figure out.
The researchers tested NDTT on several small test problems. One was based on data about which TV programs 1,000 people watched over an 11-month period. NDTT had to learn to predict what show any given user would want to watch when. Another test was records of a soccer game played with robots, called RoboCup, in which NDTT had to predict the next event in the game—a pass to a certain player, a kick at the goal, or an attempt to steal. In both tests, NDTT achieved significantly lower error rates than pure neural network approaches, including so-called graph-based methods that try to take into account the structure of the data and how it evolves through time. It also outperformed a pure symbolic A.I. approach.
For a long time, critics of pure deep-learning approaches, such as Gary Marcus, have been calling for exactly the sort of hybrid approach that NDTT represents. And for a long time, deep-learning purists scoffed and kept plowing ahead with ever larger neural networks. But there is now a sense that the ground is shifting and hybrid approaches are gaining a tentative, but important, foothold. As Mei, the lead author on the NDTT research, says, if we want to apply A.I. to areas like physics or chemistry—where humans already have some well-proven knowledge of the rules of the system—hybrid approaches have real advantages. “In these fields, machine learning can help, but such fields have a lot of knowledge and sometimes it is risky or dangerous to do learning ignoring that knowledge,” he says. But, he cautions, “What matters is how much faith you have in that knowledge.”
And with that, here’s this week’s A.I. news.
JOIN US: The pandemic has rewritten business. Fortune is hosting a virtual discussion with experts across industries (Intel, Slack, Citi, Universal Pictures) to explore how companies can, through transformative tech such as A.I., become more resilient in a time of intense change. Register here for free to join on September 16 at 2:00-3:00 p.m. EDT.
A.I. IN THE NEWS
Detroit sued over use of facial recognition in false arrest. A Detroit man who was wrongly arrested for assault after a facial recognition system misidentified him as the person seen in a video of the incident is suing the city as well as the police officer who made the decision to run the video through the facial recognition software, according to a report in Motherboard. The man, Michael Oliver, is the second Detroit resident known to have been wrongly arrested due in part to erroneous identifications by the technology.
More face recognition troubles—an Illinois casino fails in bid to have biometric privacy breach lawsuit thrown out. Par-A-Dice Hotel Casino in Peoria, Illinois, and its parent company are facing a lawsuit for violating the state's strict biometric privacy law by using security cameras on its property that also scan and record geometric data of people's faces. Last week, a U.S. District judge ruled against the casino company's attempt to have the suit thrown out and ruled that the plaintiffs did meet the law's pleading standards, Reuters reports.
Google forms a new research institute with the U.S. National Science Foundation to study human-A.I. interaction. The company announced that it is investing $5 million to set up The National Research Institute for Human-A.I. Interaction and Collaboration as well as donating cloud computing resources to the new center. The institute will delve into topics around using A.I. for social benefit and will research inclusive design, safety and robustness, and privacy, Google said.
Fear of Chinese advances in A.I. is driving the U.S. and Europe closer together on regulation. That's what a story in Politico Europe claims. The publication reports that while the Trump Administration had, as recently as the start of this year, rejected attempts by other Western governments to form an international body to coordinate A.I. policy and regulation, saying that such an effort was "premature and would hamper innovation," it has since changed its tune due to fears over China's rapid development of A.I. The shift led to the G7 announcing the creation of the Global Partnership on Artificial Intelligence in June. But while the move is a significant shift, the publication notes that the U.S. and the European Union remain far apart on exactly how they want to regulate A.I., with Washington still wary of the prescriptive approaches that are more popular in Europe, and the Europeans skeptical of Washington's support for its home-based tech companies.
Google uses a technique developed by DeepMind to improve traffic estimates. Researchers at DeepMind, the London-based artificial intelligence company, have helped their sister company, Google, improve the "estimated time of arrival" predictions in Google Maps, according to a blog post from the A.I. company. The technique involves grouping adjacent roads that share traffic volume into large "supersegments" and then using a graph neural network to improve the traffic forecasts across these segments. The new method improved the accuracy of Google Maps ETAs significantly in a number of cities: by 54% in Taichung City, Taiwan; 31% in Singapore; 37% in Osaka; 43% in Sydney; and, closer to Google's home turf, 22% in San Jose.
Baidu Apollo signs deal with the Chinese city of Guangzhou. Baidu Apollo, the Chinese Internet giant's autonomous vehicle company, has inked an approximately $67 million agreement with Guangzhou-based SCI Group and the Guangzhou Public Transport Group to develop smart transportation technologies for the city, the company announced in a Medium post. The technologies include V2X (vehicle-to-everything wireless communication), connected cars, 5G-powered robobuses and self-driving taxis.
U.K. startup wins government backing to test A.I.-powered breast cancer screening software. Kheiron Medical Technologies, a London-based A.I. startup, has won a British government award to roll out its automated mammography screening tool to 15 clinical testing sites across the U.K. over the next three years, the company and the government announced. In studies it conducted, Kheiron says its software surpasses the U.S. national performance benchmarks for digital mammography screening, equaling the performance of human radiologists. The company's software has also undergone clinical trials in the city of Leeds—although the results of that trial have not yet been published. The startup says the new award will help it benchmark "how to integrate new, cutting edge AI technologies into the [U.K. National Health Service] safely and effectively" and will position it to potentially roll out the software to most of the NHS. The amount of government financial support was not specified, but Kheiron and the other nine companies chosen as finalists in the first U.K. Artificial Intelligence in Health Care Awards receive a share of £50 million ($65.3 million) earmarked for the program. The U.K. and many other European countries are facing an acute shortage of radiologists, which has already led to delays in mammography screening—a situation made worse by the postponement of routine screenings during the height of the pandemic in the spring.
EYE ON A.I. TALENT
Her Majesty's Revenue and Customs (HMRC), the U.K.'s tax collection authority, has appointed Daljit Rehal as chief digital and information officer, the agency announced. Rehal was previously global digital and data services director at British energy company Centrica.
Data science marketplace Pivigo has named Alex Willard its new chief executive officer, according to Information Age. Willard was previously executive entrepreneur-in-residence at U.K. chipmaker Imagination Technologies.
Data Lab, a Scottish government-funded innovation center for data and A.I., has appointed Mark Wilkinson as its new head of business development, the website of Scottish Business Insider reports. Wilkinson was previously an executive with database and analytics software company Teradata.
A.I.-enabled cybersecurity company Darktrace, based in Cambridge, England, has appointed Luk Janssens as its head of investor relations, according to Cambridge Network. Janssens was previously head of European technology research at investment bank Credit Suisse.
EYE ON A.I. RESEARCH
Dutch researchers show that 'patch camouflage' can be used to trick military image classification systems. Researchers from several Dutch universities, a government applied research lab and the Dutch Ministry of Defense have shown that applying relatively small patches printed with certain patterns to the tops of military aircraft and "other large military assets" can render them undetectable by an A.I. image classification system used to analyze aerial surveillance images, according to a paper recently published on the research repository arxiv.org. "Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities, and should therefore be considered in the automated analysis of aerial surveillance imagery," the authors write.
But there are some caveats, the most important being that, to develop effective patches, the defender needs access to the image classification system being used to analyze the images. While many commercially available image classification systems are built on the same handful of open-source algorithms—and it might theoretically be possible to develop camouflage patches that work against all of them at once—military-grade systems may be built on proprietary algorithms and training data. That's probably why the authors of the paper seem more concerned with the implications for the automated analytics tools the military already uses—in other words, how the "offensive" side can perfect its reconnaissance systems—than with the implications for the "defender," i.e. suggesting the Netherlands actually start putting specific camouflage patterns on its aircraft. When testing your own system, after all, you already have access to the algorithm and the training data.
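The basic mechanics of a white-box patch attack can be illustrated with a toy example (everything here is invented: a tiny fixed linear "classifier" over four pixel values stands in for a real aerial-image detector). Note that the search depends on knowing the classifier's weights, which is exactly the access caveat at issue.

```python
# Toy white-box "patch attack" with invented values. We search for a change to
# one pixel (the "patch") that flips a fixed linear classifier's decision.

WEIGHTS = [0.9, -0.2, 0.4, 0.7]  # the attacker knows these: white-box access

def classify(pixels):
    score = sum(w * p for w, p in zip(WEIGHTS, pixels))
    return "aircraft" if score > 0 else "background"

def find_patch(pixels, patch_index, steps=20, step_size=0.5):
    """Greedily push one pixel against its weight until the label flips."""
    original = classify(pixels)
    patched = list(pixels)
    direction = -1 if WEIGHTS[patch_index] > 0 else 1
    for _ in range(steps):
        patched[patch_index] += direction * step_size
        if classify(patched) != original:
            return patched
    return None  # no flip found within the search budget

image = [1.0, 0.5, 0.8, 0.6]  # classified as "aircraft" by this toy model
```

Against a proprietary model whose weights are unknown, this kind of directed search is not available, which is why the paper's camouflage results matter most to whoever controls the reconnaissance system.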
It is only a matter of time, though, before such camouflage begins to appear in the real world.
FORTUNE ON A.I.
TikTok: Everything to know about tech’s biggest soap opera—by Danielle Abril
For a few summers during and right after high school, I worked in a bank. I worked in a few different departments during my time there, but one summer, I worked in mortgage lending and my role was "internal audit."
If this sounds vaguely interesting, you're mistaken. The job consisted entirely of looking through piles of paper mortgage applications and making sure the data from the applications had been entered correctly into a computer database. With some regularity, I had to correct clerical errors. Every once in a while, I'd discover some important piece of information—like the applicant's social security number or her credit report—was missing and I'd flag that up to a supervisor. Once—and only once—during the whole summer, did I discover something more interesting: a case of potential fraud. It wasn't even very interesting fraud. The applicant had the same name as his father, say Jack Smith and Jack Smith Jr., and Jack Smith Jr. had decided it might be a good idea to use his father's social security number and address for the credit check and then later swap back in his own address and other details.
I got to thinking about this story while reading my Fortune colleague Jonathan Vanian's intriguing story about how American Express is turning to generative adversarial networks (or GANs)—the same A.I. technology that's given us deepfake videos—to create synthetic financial data on which to test its own fraud detection algorithms.
It's a fascinating idea. And, in general, fraud detection seems like a perfect use case for A.I. After all, you don't want to have to depend on a bored high school kid on his summer break, with one eye on the clock, counting down to quitting time.
But there are still some issues with using GANs to generate a dataset of synthetic financial data, especially if you're hoping to create data similar to what's found in fraud cases. As Jonathan explains:
"Humans can easily look at A.I.-generated images to see if they resemble the real thing. But with financial data, the technology is so new that there are no 'commonly accepted techniques' that the researchers can use to grade the software...The American Express researchers ended up using statistical techniques to analyze the A.I.-generated data and found that the results were good but not great."
Of course, there's another big problem with using A.I. fraud detection: The best frauds are those that are never detected. So it is pretty difficult to create a training set. I wonder if Amex's system would have caught my Jack Smith Jr.?