
How A.I. is helping doctors triage patients in urgent care

July 28, 2020, 3:59 PM UTC

Healthcare experts are hopeful that artificial intelligence could help doctors figure out which patients need urgent care.

At Stanford, physicians at the university’s healthcare facilities will debut, in a month or so, an experimental machine learning-powered triage system. Unlike the machine learning-powered chatbot used by healthcare giant Providence Health & Services that helps patients schedule appointments, Stanford’s software is intended to be used by internal staff to deal with sudden influxes of patients, which can overburden physicians. 

The heart of the triaging system is a machine-learning algorithm developed by healthcare firm Epic that analyzes data stored in a patient’s electronic health records. Using data like a patient’s respiratory rate, blood count, and heart rate, the machine learning software can predict whether a patient warrants a visit to the intensive care unit.
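Epic’s model is proprietary and the article doesn’t disclose how it scores patients, but a risk predictor of this general shape can be sketched as a logistic score over vitals that pages the care team above a threshold. Every weight, threshold, and field name below is an illustrative assumption, not Epic’s.

```python
import math

# Illustrative only: Epic's actual model, features, and weights are proprietary.
# This sketch maps a few vital signs to an ICU-risk probability and fires an
# alert when the predicted risk crosses a threshold.

# Hypothetical weights and bias (not from the article or from Epic).
WEIGHTS = {"resp_rate": 0.08, "heart_rate": 0.04, "wbc_count": 0.10}
BIAS = -8.0
ALERT_THRESHOLD = 0.7

def icu_risk(vitals: dict) -> float:
    """Logistic score over a patient's vitals (illustrative)."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def needs_alert(vitals: dict) -> bool:
    """Page the care team when predicted ICU risk crosses the threshold."""
    return icu_risk(vitals) >= ALERT_THRESHOLD

stable = {"resp_rate": 16, "heart_rate": 72, "wbc_count": 7}
deteriorating = {"resp_rate": 34, "heart_rate": 130, "wbc_count": 18}
print(needs_alert(stable), needs_alert(deteriorating))  # → False True
```

A real system would be trained on historical records rather than hand-set weights, but the alert-on-threshold pattern is the part that matters for how clinicians experience it.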

Stanford physician Ron Li told Fortune that the machine learning model isn’t “telling us something we wouldn’t know.” Doctors studying the same charts that the software is looking at would likely derive the same conclusion. After all, it doesn’t take a brain surgeon to deduce that patients would probably need immediate help if their heart rate suddenly skyrockets. 

When the system debuts, doctors and nurses will receive alerts on their smartphones and computers when the technology has identified a high-risk patient who may need attention, Li said. If a particularly ill patient is staying overnight, for instance, the doctor and nurse on the night shift may get an alert from the software to convene at the patient’s bedside and discuss what to do next. Part of the software’s appeal is that it could help nurses and clinicians, who may interpret a patient’s health differently, get on the same page about the best treatment in the heat of the moment, a notion Li refers to as “a shared mental model.”

Li sees the ultimate value of the software as changing the way Stanford’s clinicians behave in a way that leads to better care for patients. He acknowledged that it’s hard to quantify behavioral change into some form of statistic that indicates a return on investment, or ROI. Indeed, many companies are struggling to report seeing any “value” from their A.I. investments, since it’s difficult to quantify the worth of a successful project beyond merely reducing costs or generating sales.

Clearly, if Stanford’s upcoming test, or pilot, of the new software leads to a reduction in deaths, that would be a success. “I hope it does,” Li said.

Even if that’s not the case, merely getting clinicians and nurses on the same page will be beneficial, Li said. It’s a smaller milestone than reducing deaths, but a more realistic outcome. 

P.S. Although this project was originally intended to help triage COVID-19 patients, Li said that Stanford pivoted so that the pilot test will now help triage patients generally, mainly because the clinic did not collect enough data from COVID-19 patients to train a coronavirus-specific machine learning model.

Jonathan Vanian 


New Zealand sets some standards. New Zealand claims to be the “first in the world” to set standards for public agencies to follow when implementing A.I. technologies, The Guardian reported. Still, there is no “enforcement mechanism” to ensure that agencies follow the standards, the report said. From the article: “In it, departments pledge to be publicly transparent about how decision-making is driven by algorithms, including giving ‘plain English’ explanations; to make available information about the processes used and how data is stored unless forbidden by law (such as for reasons of national security); and to identify and manage biases informing algorithms.”

Facebook pays up. Facebook will pay $650 million to settle a class-action lawsuit related to the company’s use of facial-recognition technologies and data collection, Fortune’s Jeff John Roberts reported. As Roberts writes, “The Facebook lawsuit came about as a result of a unique state law in Illinois, which obliges companies to get permission before using facial recognition technology on their customers.”

Einstein loses his voice. Salesforce has decided to “retire” its Einstein Voice Assistant technology and is instead focusing on its Anywhere work-collaboration software as the way for people to interact with the company’s software using their voice, a tech publication reported. Just a year ago, Salesforce was heavily promoting its Einstein A.I. voice assistant to businesses, but it was never clear whether there were enough compelling reasons to use the technology at work beyond a handful of scenarios. Salesforce’s decision to shut it down comes shortly after the company’s chief scientist, Richard Socher, left Salesforce to start his own company. Socher was instrumental in leading Salesforce’s A.I. projects, including much of the company’s research into natural language processing and voice technologies.

LinkedIn gets deep. LinkedIn said it has made DeText, or Deep Text, its natural language processing technology, available as open source. One of DeText’s features is that it allows A.I. researchers to use multiple NLP models, trained on their own specific language data, to power different tasks via one system.

Please stop faking it until you are making it. The Securities and Exchange Commission said it charged Shaukat Shamim, the founder and CEO of the startup YouPlus, with “defrauding investors by making false and misleading statements about the company’s finances and sources of revenue.” The startup claimed that it developed machine learning software to analyze videos. Shamim was accused of raising “funds from investors while repeatedly misrepresenting the company’s financial condition.”


Luminar, a startup specializing in self-driving car technology, hired Aaron Jefferson as its vice president of product. A Luminar spokesperson told Fortune that this is the startup’s first executive position dedicated to products, a responsibility previously handled within research and development. The hire is one of several recent additions at the startup. Jefferson was previously the vice president of product strategy for Global Electronics.

Dunkin' Brands Group, Inc., which is the parent company of Dunkin' and Baskin-Robbins, hired Philip Auerbach to be the company’s chief digital and strategy officer, a new position. Auerbach was previously the chief commercial officer of travel company Lindblad Expeditions.

BJ’s Wholesale Club Holdings, Inc. picked Monica Schwartz to be the company’s senior vice president and chief digital officer. Schwartz was previously the vice president of online merchandising at Home Depot.


A.I. goes bird watching. Researchers from the French National Centre for Scientific Research, the Université de Montpellier, and the University of Porto in Portugal published a paper in the journal Methods in Ecology and Evolution about using deep learning to recognize different kinds of birds, specifically “three small bird species, the sociable weaver Philetairus socius, the great tit Parus major and the zebra finch Taeniopygia guttata.”

What’s interesting is the technique, or pipeline, for collecting the data used to train their deep learning system. The researchers describe in detail how they captured photos of the different birds, both in the wild and in captivity, using a simple Raspberry Pi, the same kind of low-cost computer used to teach children the basics of computer science.

The authors programmed the Raspberry Pi to take a picture of a bird on a feeder every two seconds. This two-second interval was important because they wanted to “avoid having near‐identical frames of the same bird,” which would lead to “too many near‐identical pictures.” That could lead to a common problem researchers experience when dealing with deep learning: overfitting. In this particular case, overfitting means that this bird-identification system would end up “‘memorizing’ the pictures instead of learning features that are key for recognizing the individuals,” which the authors note would “jeopardize the generalization capability of the models.”
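That spacing logic can be sketched in a few lines. The two-second interval is the detail the authors describe; the function name and the sample timestamps below are illustrative assumptions, not taken from the paper’s code.

```python
CAPTURE_INTERVAL = 2.0  # seconds; spacing the authors use to avoid near-identical frames

def keep_spaced(timestamps, interval=CAPTURE_INTERVAL):
    """Drop detections that arrive sooner than `interval` seconds after the
    last kept one, so the training set avoids near-duplicate frames."""
    kept, last = [], float("-inf")
    for t in timestamps:
        if t - last >= interval:
            kept.append(t)
            last = t
    return kept

# A bird sitting on the feeder might trigger detections every half second;
# the filter keeps roughly one frame per two seconds.
detections = [0.0, 0.5, 1.0, 2.1, 2.6, 4.2, 9.0]
print(keep_spaced(detections))  # → [0.0, 2.1, 4.2, 9.0]
```

The same idea applies whether the filtering happens at capture time (only snap a photo every two seconds) or afterward when assembling the training set.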

It turns out A.I. practitioners can learn a lot from observing birds.


What the hell just happened to Intel?—By Aaron Pressman

Facebook’s Messenger will allow users to unlock the app with their fingerprints and faces—By Danielle Abril

It’s getting harder to tell state-sponsored hackers and cybercriminals apart—By Robert Hackett


Language takes...somewhat of a leap. The A.I. research community has been captivated by the GPT-3 language system created by OpenAI. Researchers have been playing with the language tool and getting it to produce interesting results, such as automatically generating prime numbers “when prompted with the first 12 primes,” as one deep learning researcher noted via Twitter.

But despite the fascinating results, the GPT-3 system is likely not “a new contender for the most spectacularly newsworthy happening of 2020,” as one observer described in a Bloomberg opinion piece.

There are still many flaws in the system, such as its propensity to generate biased results, as VentureBeat noted. Indeed, even OpenAI CEO Sam Altman tweeted that the “hype is too much,” a comment seemingly intended to counter the notion that OpenAI over-inflates the capabilities of its technologies, as some A.I. researchers have alleged.

While the raw GPT-3 system may be fun for researchers to inspect, it’s a long way from being useful for general business-related tasks. In June, OpenAI released a commercial version of its language technology for companies, underscoring how A.I. research tools need extra safeguards and modifications before they can be safely used in corporate settings.

Still, the fact that A.I. researchers are doing some genuinely interesting things with GPT-3—like generating SQL queries for a database, a common way to pull insights out of datasets—shows the promise of modern-day NLP systems. Cutting-edge NLP systems won’t likely usher in an age of A.I. sentience (at least not today’s systems), but they could inspire developers to use the tech to automate some useful tasks.
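Those SQL demos typically rely on few-shot prompting: show the model a handful of question-to-SQL pairs, then append the new question and let it complete the pattern. A minimal sketch of how such a prompt is assembled, with entirely made-up example pairs and table names:

```python
# Illustrative few-shot prompt of the kind researchers used to coax GPT-3
# into emitting SQL. The instruction text, examples, and table names are
# assumptions, not taken from any specific demo.
EXAMPLES = [
    ("How many users signed up in 2019?",
     "SELECT COUNT(*) FROM users WHERE signup_year = 2019;"),
    ("List the ten most recent orders.",
     "SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;"),
]

def build_prompt(question: str) -> str:
    """Assemble a question-to-SQL few-shot prompt for a text-completion model."""
    parts = ["Translate English questions into SQL.\n"]
    for q, sql in EXAMPLES:
        parts.append(f"Q: {q}\nSQL: {sql}\n")
    # The trailing "SQL:" invites the model to complete with a query.
    parts.append(f"Q: {question}\nSQL:")
    return "\n".join(parts)

print(build_prompt("What is the average order value per customer?"))
```

The resulting string would be sent to a completion endpoint; the model’s continuation after the final “SQL:” is the generated query, which should of course be reviewed before ever being run against a real database.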

I’ll leave you with this Twitter joke from a data scientist about using GPT-3 to automatically create a music video from the prompt, “80s-style dance-pop led by a scrawny white guy on a $28 choreography budget.”