
It’s Not HAL, But It Sure Does Boost Revenue: Eye on A.I.

October 22, 2019, 1:30 PM UTC

Our views on artificial intelligence tend to be colored by science fiction, whether it's HAL from 2001: A Space Odyssey or the Minds from Iain M. Banks' Culture series. It's part of the reason there's so much attention around efforts to create artificial general intelligence—software that can match or exceed human abilities across a wide range of cognitive tasks. But sometimes that focus, and all the hype around research breakthroughs that seem to be helping us along the path towards that goal, makes it easy to ignore all the seemingly simple, narrow tasks in which machine learning is already having a big impact on bottom lines.

The other day, I interviewed Jean-Cyril Schütterlé, the chief product officer at Sidetrade, a French company that for 20 years has made financial software. Businesses use it to help manage the credit they extend to customers and to collect payments. Recently, Sidetrade created a machine learning-based tool, which it calls Aimie, that helps businesses pick the best strategy for collecting on invoices for any particular customer. “It recommends the best course of action given your available resources,” Schütterlé says, noting, for example, that the software won’t recommend making collection calls to 100 customers if you only have a team large enough to make 50 calls.

The French branch of staffing agency Manpower trialled Sidetrade's system. Manpower France has to collect 1.3 million invoices from 80,000 companies annually. It began using Aimie for a few customers and eventually ramped it up to 60% of the customers that have just one location where Manpower supplies staff. After nine months of testing Aimie, Manpower found that its collections increased 12%, according to Laurent Bueno, Manpower France's credit director.

It's the perfect example of A.I. that does one narrow thing and that results in an immediate increase to revenues. Aimie is also instructive in several other ways. First, as with most of today's A.I., getting Aimie to work well required a lot of data. Schütterlé says Sidetrade trained the software on 230 million business-to-business payment records.

Second, Aimie is an example of a point Jonathan has made in this newsletter recently: often companies don't need the most state-of-the-art A.I. techniques. Aimie doesn't use fancy neural networks or deep learning. Schütterlé says it is built from two older machine learning techniques: a random forest algorithm and a Hungarian Method algorithm.
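For readers curious what the Hungarian Method actually does: it solves the classic assignment problem, matching a limited pool of workers to tasks so that total payoff is maximized. Here's a minimal sketch of that idea in Python—the customers, payoff numbers, and brute-force solver are invented for illustration (the Hungarian Method solves the same problem far more efficiently, and Sidetrade's actual system is proprietary):

```python
# Toy assignment problem: two available collector "slots," four customers.
# Pick the assignment of calls that maximizes predicted recovery.
# All payoff numbers are invented for illustration.
from itertools import permutations

# predicted_recovery[slot][customer]: expected euros recovered
# if that collector slot spends its call on that customer.
predicted_recovery = [
    [500, 120, 300, 80],
    [450, 100, 280, 60],
]

def best_assignment(payoff):
    """Brute-force the assignment problem. The Hungarian Method finds
    the same optimum in polynomial time instead of trying every option."""
    n_slots, n_customers = len(payoff), len(payoff[0])
    best_total, best_plan = float("-inf"), None
    for perm in permutations(range(n_customers), n_slots):
        total = sum(payoff[slot][cust] for slot, cust in enumerate(perm))
        if total > best_total:
            best_total, best_plan = total, perm
    return best_plan, best_total

plan, total = best_assignment(predicted_recovery)
```

Here the solver sends the first slot after customer 0 (the biggest expected recovery) and the second slot after customer 2, rather than wasting both calls on the same high-value account—exactly the "best course of action given your available resources" behavior Schütterlé describes.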

Finally, A.I. isn't just about cost-cutting and eliminating jobs. In this case, the credit managers at many of the small- and medium-sized businesses using Sidetrade's software were already over-stretched. Aimie hasn't eliminated their jobs; it's simply enabled them to do those jobs more effectively.

Sure, Aimie’s not as sexy as something like HAL. But it doesn’t have to be.

Jeremy Kahn

Subscribe to Eye on A.I. here.


Google puts more A.I. into its devices. Google unveiled its latest lineup of hardware devices last week, including many A.I.-powered features. Its new Pixel 4 mobiles will now process some queries for Google's A.I.-driven digital assistant directly on the phone, without having to transmit data to the cloud. The phone also features an improved facial recognition system for unlocking, one that Google says will be both easier to use and more secure.

Controversy over OpenAI's Rubik's Cube demo continues. Last week's newsletter reported on OpenAI's breakthrough in training a robotic hand to solve a Rubik's Cube. While video of the robot went viral, some researchers, most notably New York University's Gary Marcus, have accused the A.I. company of overhyping its accomplishment. Among their criticisms: the robot hand mastered the physical dexterity needed to manipulate the Cube, but the solution to the puzzle was dictated by a static Cube-solving algorithm; and the hand could actually solve a fully-scrambled Cube without dropping it only about 20% of the time.

U.S. border agents want facial recognition for body cameras. U.S. Customs and Border Protection has put out a contract request for body cameras equipped with facial recognition. According to a report in The Register, which obtained a copy of the contracting specs, the system is supposed to allow agents to more easily match people's faces to their identity documents as well as to check them against lists of "people of interest."

A.I.-powered product placements. Chinese tech giant Tencent and London-based A.I.-driven advertising firm Mirriad have announced a two-year partnership that will see the two companies working closely together. Mirriad uses computer vision technology to spot opportunities for product placements in video content, such as television shows and movies, and then uses other A.I.-based techniques to automatically edit those products or ads into the video images.


Most of the discussion around deepfakes, realistic-looking fake videos created using widely-available A.I. software, has concerned malicious uses of the technology: from revenge porn to political disinformation. But Hollywood studios are eyeing the technique too, to help generate visual effects and perhaps even, in the future, entire films, The Financial Times reports. Some in the visual effects industry, however, told the newspaper that deepfakes can't yet reliably match the painstakingly rendered, and expensive, computer-generated imagery currently in use, and that it remains uncertain when, if ever, they might be able to do so.


Eskalera, a San Francisco-based startup developing an A.I.-powered human resources platform, has hired Dane Holmes, a long-time partner and head of human capital management at Goldman Sachs, to be its new CEO.

Vectra AI, which uses artificial intelligence to detect cybersecurity threats, has hired Dee Clinton, a former executive at Australian telecommunications firm Telstra, to be chief of its Asia-Pacific sales channel.  


Using A.I. to Restore and Decipher Fragmented Ancient Texts
Ancient inscriptions, whether on stone, papyrus or paper, are often fragmentary due to damage and deterioration over the course of thousands of years. Now a team of researchers from DeepMind, the London-based A.I. company owned by Google-parent Alphabet, has used a deep neural network to help fill in the missing pieces of ancient Greek inscriptions. Called Pythia, the system, which looks at the context of missing letters and words, achieved a 30% character error rate on a test text, compared with a 57% error rate for Oxford PhD students in ancient history, according to DeepMind's blog post on the research.
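A quick note on the metric: character error rate is conventionally the edit distance between the prediction and the reference text, divided by the reference's length. A minimal sketch (the example strings are invented; DeepMind's evaluation setup is more involved):

```python
# Character error rate (CER): Levenshtein edit distance between a
# predicted restoration and the reference text, divided by the
# reference length. Pure-stdlib sketch for illustration.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming,
    keeping only one row of the table at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                 # deletion from a
                curr[j - 1] + 1,             # insertion into a
                prev[j - 1] + (ca != cb),    # substitution (0 if match)
            ))
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    return edit_distance(prediction, reference) / len(reference)

# One wrong character out of six: CER of about 17%.
restored_cer = cer("arkhon", "archon")
```

By this yardstick, Pythia's 30% means it got roughly seven in ten characters of the missing text right, versus a bit over four in ten for the human experts.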

Optimizing Manufacturing Operations with A.I.
Researchers at Hitachi America's AI research lab in California have used deep reinforcement learning to optimize the movement of goods around a simulated factory floor, according to a paper published in the preprint repository arXiv. The researchers created a system of rewards for on-time delivery of items to the next stage in the manufacturing process and penalties for deliveries that were either tardy or too early. Their algorithm, which they called Deep Manufacturing Dispatching, or DMD, outperformed 18 other algorithms, both rule-based ones and different machine learning models, when tested in simulation.
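The reward scheme described above—bonus for on-time handoffs, penalties for both tardy and premature ones—can be illustrated with a toy function. The thresholds and magnitudes here are invented; the DMD paper's actual reward design is richer:

```python
# Toy illustration of the dispatching reward idea: a flat bonus for
# deliveries inside an on-time window, and a penalty that grows with
# how early OR how late the item arrives. Numbers are invented.

def dispatch_reward(delivery_time: float, due_time: float,
                    tolerance: float = 1.0) -> float:
    """Return the reward signal for one delivery.

    Inside the +/- tolerance window around the due time, the agent
    gets a fixed bonus; outside it, a linear penalty in either
    direction, so arriving too early is punished just like being late.
    """
    lateness = delivery_time - due_time
    if abs(lateness) <= tolerance:
        return 10.0              # on-time: flat bonus
    return -abs(lateness)        # early or late: linear penalty

on_time = dispatch_reward(10.5, 10.0)   # within the window
late = dispatch_reward(14.0, 10.0)      # four time units tardy
early = dispatch_reward(5.0, 10.0)      # five time units premature
```

Penalizing early deliveries as well as late ones is what keeps a policy from simply rushing everything forward and piling up inventory at the next station.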


Flaw in Google’s New Pixel 4 Raises Risk of Snooping While You Sleep — By Alyssa Newcomb

Everything to Know About Google’s New Pixel 4 and Pixel XL Smartphones — By JP Mangalindan

Now Hiring: People Who Can Translate Data Into Stories and Actions — By Anne Fisher


In Medical A.I., Performance on Disease Sub-Sets Really Matters. Two weeks ago, I highlighted an insightful analysis from Luke Oakden-Rayner, a PhD candidate at the University of Adelaide's School of Public Health, on the problem with medical A.I. competitions. Now he and co-authors Gustavo Carneiro, from Adelaide, and Jared Dunnmon and Chris Ré, from Stanford University, have written a paper examining a serious flaw in the way many medical A.I. algorithms are tested.

The researchers argue that average performance across a broad test set is far less important than how these algorithms perform in detecting the smaller subset of disease features associated with the worst patient outcomes. They contend that human doctors, by contrast, tend to be highly attuned to these outliers, even if humans tend to perform worse than the machines on average across all disease types. For instance, a computer vision algorithm might be great, on average, at spotting anomalies on chest X-rays, but fare poorly at finding a specific, rare tumor type that has a high mortality rate if not detected early.

Most medical A.I. algorithms are not rigorously tested on these disease subsets, the researchers say, and as a result, may misrepresent how safe they are. Medical A.I. ought to be assessed for its impact on actual patient care and patient outcomes—much in the way drugs are tested today—not just on how well it performs on a broad test set.