The pandemic is speeding up automation, putting jobs in question

Forced to tighten their belts financially by the coronavirus pandemic, businesses are increasingly using software that automates back-office tasks.

The technology handles repetitive duties like filling in numbers in a spreadsheet or matching invoice data to payment orders. The idea, of course, is for companies to save money by reducing the number of workers they need to handle clerical work.

Although the technology, called robotic process automation, or RPA, has existed for years, recent advances in machine learning and natural language processing have made it possible for it to do more complicated tasks. That includes deciphering financial jargon in PDF documents, analyzing that data, and then using it to fill in information in spreadsheets, which is helpful for cataloging invoices, among other tasks.
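The kind of rule-based clerical work RPA automates — say, matching invoice records to payment orders — can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual product logic; the field names and records are made up for the example.

```python
# A minimal sketch of rule-based invoice matching, the sort of repetitive
# back-office task RPA software automates. Data and field names are
# hypothetical illustrations.

invoices = [
    {"ref": "INV-001", "amount": 1200.00},
    {"ref": "INV-002", "amount": 450.50},
    {"ref": "INV-003", "amount": 99.99},
]

payments = [
    {"ref": "INV-002", "amount": 450.50},
    {"ref": "INV-001", "amount": 1200.00},
]

def match_invoices(invoices, payments):
    """Split invoices into (matched, unmatched) lists by reference and amount."""
    paid = {(p["ref"], p["amount"]) for p in payments}
    matched = [i for i in invoices if (i["ref"], i["amount"]) in paid]
    unmatched = [i for i in invoices if (i["ref"], i["amount"]) not in paid]
    return matched, unmatched

matched, unmatched = match_invoices(invoices, payments)
```

In practice, the machine-learning advances Shukla's industry touts sit upstream of a step like this: extracting the reference numbers and amounts from unstructured PDFs before the deterministic matching runs.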

Mihir Shukla, CEO of Automation Anywhere, which sells RPA software, pointed to the financial industry as one of many sectors adopting the technology. He cited unnamed banks that are using the software to help process the flood of small-business loans handed out by the federal government. 

“It would have taken two years to change 6 million records without the bots,” Shukla said. 

Echoing what other executives have told Fortune, Shukla said that companies have recently cut spending on big IT projects that have no quick payoff. But as Fortune CEO Alan Murray has previously written, RPA projects, in contrast, are “low-hanging fruit, with a potential for quick savings.”

The job-killing aspect of RPA is a sensitive topic. Rather than eliminating jobs, companies that sell the technology prefer to say that it lets customers shift their workers to more creative and higher-paying roles.

But He Wang, a healthcare analyst for CB Insights, recently told Fortune that some companies are primarily interested in automation tech to reduce spending on “human capital,” i.e. people. The pandemic is merely speeding up the phenomenon. 

Jonathan Vanian 


Guilt by association. A class action lawsuit was filed in Illinois against Macy’s, alleging that the retail giant violated the state’s biometrics and data privacy laws by using facial-recognition software sold by the controversial startup Clearview AI, Bloomberg News reported. Macy’s used Clearview’s tools “to identify shoppers from security-camera footage,” the report said. Numerous civil-rights groups and politicians have criticized Clearview AI for what they believe are egregious violations of data privacy. 

Another city says goodbye to facial recognition. City officials in Portland, Maine will no longer be able to use facial-recognition tools after local government representatives instituted a ban on the technology, the Portland Press Herald reported, via the Government Technology news website. Other U.S. cities that have enacted facial-recognition bans include San Francisco, Oakland, Cambridge, Mass., and Somerville, Mass.

When algorithms become racist. The United Kingdom’s Home Office, a government department that oversees immigration, will discard a decision-making algorithm used to process visa applications that critics have called racist, The Guardian reported. A director for the Joint Council for the Welfare of Immigrants charity said of the so-called streaming algorithm: “This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software.”

These telemarketers just don’t stop. Chinese residents are being bombarded by unsolicited sales calls, thanks to A.I. technologies being increasingly used in corporate call centers, The South China Morning Post reported. One problem for consumers, a tech analyst noted: “After the bot developers sell their technology to companies, they do not care to know whether it is used legally or ethically.” If you’re annoyed by unsolicited calls now, just think how many more you’ll get once more companies adopt call-center A.I. tech.

The report also cited a Shanghai resident who described her experience with one of the call-center bots:

“At first I didn’t even realise I was speaking to an automated service. I told the ‘person’ that I did not require an energy-boosting product and the voice went on to recommend another, similar product,” she said. “When I asked ‘didn’t you hear what I just said’, the voice did not appear to understand. Just as I was about to hang up the phone, the voice suddenly identified itself as an AI phone assistant.”

The automation of everything. The MIT Technology Review plans to debut a new podcast on Aug. 12 exploring A.I.’s various societal effects, touching on subjects like the influence of decision-making algorithms in education, criminal justice, and work. A description of "In Machines We Trust" from its Apple podcast page says it’s “a podcast about the automation of everything.”


Amwell has picked Serkan Kutan to be the healthcare technology firm’s chief technology officer. Kutan was previously the CTO of Haven and Zocdoc.

Bitfury Surround has hired George McIntyre to be the startup’s chief technology officer. McIntyre was previously the CTO of Active Media Platform.


Crime time. Researchers from University College London’s COMPASS (Computational Security Science) Group published a paper in the Crime Science journal about the possibility of criminals using A.I. for nefarious purposes. Leading the list of most dangerous uses of A.I. is the generative adversarial network (GAN) technique used to create seemingly realistic but fake videos, photos, and audio clips.

Here’s a sample of how people could use deepfake technology for crime:

Delegates envisaged a diverse range of criminal applications for such “deepfake” technology to exploit people’s implicit trust in these media, including: impersonation of children to elderly parents over video calls to gain access to funds; usage over the phone to request access to secure systems; and fake video of public figures speaking or acting reprehensibly in order to manipulate support.

Self-driving cars could also be used for crime:

Autonomous vehicles would potentially allow expansion of vehicular terrorism by reducing the need for driver recruitment, enabling single perpetrators to perform multiple attacks, even coordinating large numbers of vehicles at once. Driverless cars are certain to include extensive safety systems, which would need to be overridden, so driverless attacks will have a higher barrier to entry than at present, requiring technological skill and organisation. 

And while it isn’t ranked as severe a threat as the others, A.I.-assisted stalking is another concern:

Use of learning systems to monitor the location and activity of an individual through social media or personal device data. Also considered to encompass other crimes around coercive relationships, domestic abuse, gaslighting etc., and to relate to a current news story concerning the complicity of Western technology companies in the provision of apps for enforcing social norms in repressive societies (Hubbard 2019). Harms were rated as low, not because these crimes are not extremely damaging, but because they are inherently focused on single individuals, with no meaningful scope for operating at scale.



The TikTok effect: U.S. ban could doom the global ambition of Chinese tech—By Naomi Xu Elegant

Here’s what TikTok may look like if Microsoft buys it—By Jonathan Vanian

Why you can trust the new coronavirus contact-tracing apps to safeguard your privacy—By David Z. Morris

What would it be like if the Internet suddenly went dark?—By Robert Hackett


The quest for the vaccine. Fortune and Eye on A.I. columnist Jeremy Kahn has authored a gripping read about AstraZeneca’s quest to mass-produce a coronavirus vaccine. Although Kahn’s piece is more of a profile of AstraZeneca and how it came to be considered a key player in fighting COVID-19, it’s fitting to include in this newsletter because AstraZeneca was only able to embark on its ambitious project thanks to a massive “corporate reinvention”—the kind of business and cultural makeover that can help companies carry out transformational A.I. projects. CEO Pascal Soriot helped lead “AstraZeneca’s remarkable R&D rebound.”

From the article:

AZ has also used new digital technologies to reinvent its approach to clinical trials. Software called Merlin enables AZ to select trial sites 70% faster than before, according to Cristina Duran, chief digital health officer for the R&D division. Merlin produces trial cost estimates in minutes (it used to take days). In a research ecosystem where trial volunteers often skew disproportionately male and white, Merlin helps select trial populations that better reflect real-world demographics. Another software system, called Control Tower, allows managers to get a visual snapshot of all AZ’s trials on a single dashboard, helping them predict problems in patient recruitment. “We’ve taken systems that the company was using for 20 years, and in the past two years we changed them—which is either insane or brave, but it has been successful,” Duran says.
