
Robot lawyers are thriving during the pandemic

June 30, 2020, 4:11 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

A lot of A.I. companies are having a good year—and that includes those that sell A.I.-enabled tech to lawyers.

Last week, I spoke with Jason Brennan, the chief executive officer of U.K.-based legal A.I. company Luminance. He told me the company, which now has more than 250 customers across the globe, including a fifth of the world’s largest 100 law firms, has had a 30% increase in customers since the start of 2020.

Luminance has an interesting history: It was founded in 2015 by a group that included Mike Lynch, the founder and former CEO of one-time star U.K. technology company Autonomy. (Lynch is fighting extradition to the U.S. on fraud charges and is awaiting a verdict in a civil fraud suit in the U.K.) Lynch’s Invoke Capital has been one of Luminance’s biggest funders, but the company is also backed by venture capital firm Talis Capital and the big U.K.-based law firm Slaughter & May (which was also one of Luminance’s first customers and, what’s more, advised Autonomy on its sale to Hewlett-Packard, the deal that ultimately landed Lynch in legal hot water).

Luminance’s machine learning platform uses some elements of natural language processing, the branch of A.I. concerned with understanding language, and some elements of clustering, a machine learning technique that, as the name suggests, groups data based on similarities and differences.

Clustering is one of the unsupervised learning techniques (those that don’t rely on a pre-labeled training data set) that are increasingly gaining ground in business applications. Luminance’s system uses both unsupervised and supervised methods—while its software knows that a certain set of documents is similar, it doesn’t know what they are until a human lawyer training the system applies a label to them. Once the document set is labeled, the system can learn to predict the right label for similar documents it encounters in the future.
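To make that unsupervised-then-supervised workflow concrete, here is a minimal sketch in Python using scikit-learn. It is not Luminance's actual pipeline; the sample clauses, labels, and model choices (TF-IDF vectors, k-means clustering, a logistic-regression classifier) are illustrative stand-ins.

```python
# A minimal sketch of the workflow described above: cluster documents without
# labels, have a human name the groups, then train a classifier on those
# labels. This is NOT Luminance's actual pipeline; all data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

documents = [
    "The Supplier may terminate this Agreement upon 30 days' written notice.",
    "Either party may terminate for convenience with prior written notice.",
    "Neither party shall be liable for delay caused by events of force majeure.",
    "Performance is excused for acts of God, war, or government restriction.",
]

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(documents)

# Step 1 (unsupervised): group the documents by textual similarity.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print("cluster assignments:", clusters)  # similar clauses share a cluster id

# Step 2 (supervised): a lawyer names the groups, and a classifier learns
# to predict those names for documents it sees in the future.
labels = ["termination", "termination", "force majeure", "force majeure"]
classifier = LogisticRegression().fit(vectors, labels)

new_clause = ["This contract may not be terminated except by mutual agreement."]
print("predicted label:", classifier.predict(vectorizer.transform(new_clause)))
```

In a real document review, a system like this would also flag clauses that sit far from every cluster (the outliers described below) rather than just assigning labels.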

The result is a system that can tell lawyers which documents—and importantly, which clauses within documents—are most similar and which are outliers.

This is important because it turns out that a lot of the “grunt work” of Big Law involves doing exactly what Luminance does: combing through vast troves of documents, trying to find those clauses that might be problematic. Maybe they need to be updated due to a regulatory change. Or maybe they are part of the contracts held by a company that is being acquired and would open up a big liability issue for the buyer. Either way, law firms once deployed small armies of paralegals and junior associates to find them. It used to be that law firms could simply charge for all this labor and pass the cost on to the client. But that hasn’t been true for at least a decade. These days, clients are more likely to demand that law firms accept a flat fee for this sort of work, or to pay based on some pre-agreed outcome rather than hours billed. So firms have had to become much more efficient. Corporate in-house legal departments are also having to do more with less.

That’s good for Luminance. And Brennan tells me that during the pandemic, the company has seen its customers use its A.I. in novel ways. Take the Italian law firm Portolano Cavallo and the U.K. arm of the global firm Dentons. Both firms found their clients needed to quickly determine if any contracts had force majeure clauses and exactly how they could be triggered. “It’s the idea of needing to pivot very quickly to something new and unforeseen,” Brennan says. “Force majeure clauses have been out there forever, but the relevance of it couldn’t be predicted.” Using Luminance, both Dentons and Portolano were able to complete this task in record time—in fact, Portolano says in a client newsletter that it was able to complete a document review that might have taken many days in just 45 minutes.

Brennan says he’s seen other interesting uses such as companies gaming out which creditors might be on the hook in potential bankruptcies, investigating M&A transactions as many industries brace for consolidation, and even preparing for possible class action lawsuits against the cruise line industry.

Like many A.I. company CEOs, Brennan says his customers are not simply using his product to cut costs. He says A.I. is also enabling them to find new revenue streams. For instance, in the past, in big M&A transactions, the likes of the Fortune 500 would engage a big global law firm to conduct a due diligence review of contracts above a certain pre-agreed dollar value. It was simply too onerous and too expensive to look at everything. And, once a merger actually came together, the work of integrating the combined firms’ contracts was often farmed out to smaller law firms, since it was too pricey to get a large one to do it, Brennan says. But now A.I. systems such as Luminance are allowing the big firms to offer comprehensive document reviews and actually retain the post-merger integration work. At the same time, they’re also allowing some smaller firms to compete for work that they wouldn’t have had the manpower to handle before, he says.

So, how long will it be until we have fully robotic lawyers? Well, Brennan thinks the day when software like Luminance will actually perform legal analysis—assessing which clauses are most likely to trigger a legal issue, for instance—is still a long way off. “We want lawyers to be lawyers,” he says. “Right now we are focused on highlighting information to lawyers and letting them do what they do best.”

And with that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@Jeremyakahn
Jeremy.Kahn@fortune.com

This story has been corrected to clarify that law firm Slaughter & May represented Autonomy in its sale to Hewlett Packard but has not represented Mike Lynch personally. The spelling of Meri Williams’s first name in the “Eye on A.I. Talent” section has also been corrected.

A.I. IN THE NEWS

Democrats introduce a bill to ban facial recognition following false arrest case. A group of Democratic lawmakers has introduced a bill to ban the federal government and U.S. police departments from using facial recognition software. The bill comes in the wake of news that a Black man in Detroit was arrested in January for a theft he did not commit after facial recognition software erroneously identified him as the person in store surveillance video of the crime. Researchers have found that many leading facial recognition algorithms are less accurate at identifying Black people and others with darker skin. “In this moment, the only responsible thing to do is to prohibit government and law enforcement from using these surveillance mechanisms,” Senator Edward Markey (D-Mass.) said. For more on the proposed law, check out this story by my Eye on A.I. co-author Jonathan Vanian.

Trump suspends H-1B visa program. Donald Trump issued an executive order temporarily suspending approvals for H-1B and other work visas. Tens of thousands of skilled foreign workers, including many in the technology sector such as machine learning engineers and researchers, come to work in the U.S. each year under these programs. My Fortune colleague Grady McGregor has a story on the new policy's potential impact here. Many experts in the geopolitics of A.I. are alarmed by the move, given how reliant the U.S. is on foreign-born tech talent.

Amazon buys self-driving company Zoox. The Everything Store is paying about $1.2 billion to acquire the startup, according to a report in The Wall Street Journal. Reports of a possible deal had first surfaced a month ago, also in The Journal. Amazon is thought to be interested in self-driving technologies as a way to one day automate its delivery fleet, but for now, Zoox says Amazon will help roll out the self-driving taxi service it's been developing. Amazon has previously made minority investments in electric truck maker Rivian Automotive and self-driving company Aurora Innovation, which is now also working on a self-driving truck. Tesla's Elon Musk used the Zoox acquisition as an opportunity to try to troll Jeff Bezos, accusing him on Twitter of being a "copy cat." It is the latest example of Musk's displeasure with Amazon, which he has called a "monopoly" that should be broken up.

Nvidia teams up with Mercedes on A.I.-enabled cars. The chip giant and the carmaker have signed a deal that will have Nvidia's computer chips and software installed in new Mercedes vehicles beginning in 2024, according to an Nvidia press release. The partnership will help power some existing features, such as smart cruise control and automatic lane following and lane changing, and will also allow the cars to be upgraded over time with more and more autonomous-driving functions.

Google launches a new A.I. image app. Google has unveiled a new A.I.-powered app called Keen. It allows users to pull together many different kinds of information on a theme, including Google search results and links as well as images. The app then uses machine learning to suggest similar or related information. You can read more about it and play around with the service here.

Google also launches a way to test the privacy of A.I. models. Google has released a privacy testing library for TensorFlow, its popular deep learning framework. It will allow those who use TensorFlow Privacy, a toolkit for privacy-preserving machine learning that Google debuted last year, to test how well various A.I. classifiers maintain data privacy, according to VentureBeat.
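For a sense of what this kind of privacy test looks for, here is a conceptual sketch of a simple membership-inference check: it compares a model's confidence on its own training examples with its confidence on unseen examples, since a large gap is one signal that the model leaks information about its training data. This is a generic illustration with synthetic data and a scikit-learn classifier, not the actual API of Google's TensorFlow Privacy library.

```python
# Conceptual membership-inference sketch (NOT TensorFlow Privacy's API):
# a model that is much more confident on training data than on unseen data
# makes it easier to infer whether a given record was in the training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Confidence the model assigns to the true class, for members vs. non-members.
member_conf = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
nonmember_conf = model.predict_proba(X_test)[np.arange(len(y_test)), y_test]

print(f"mean confidence on training data (members): {member_conf.mean():.3f}")
print(f"mean confidence on held-out data (non-members): {nonmember_conf.mean():.3f}")
```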

Didi Chuxing says it will have one million self-driving taxis by 2030. The Chinese company, which runs a ride-hailing service similar to Uber, tells the BBC that it plans to have one million driverless taxis operating on public roads by 2030. But, as the BBC story points out, industry watchers have become skeptical of such bold predictions because many other self-driving taxi efforts have fallen behind on, or already missed, deadlines for putting large numbers of autonomous vehicles on the road.

EYE ON A.I. TALENT

Samsung Electronics has promoted Sebastian Seung to head its Samsung Research division, ZDNet reported. Seung, a Princeton University computer scientist specializing in A.I. and neuroscience, has helped lead Samsung's A.I. research efforts since joining the company in 2018. He will now oversee all of the company's 15 global R&D hubs and seven A.I. research centers.

MongoDB, the database platform company based in New York, has appointed Mark Porter as chief technology officer, the company said in a press release. Porter was previously CTO at Grab, the high-flying Southeast Asian startup that provides services ranging from food delivery to ride hailing to mobile payments. Porter joined MongoDB's board as a director in February, but will step down to assume his new role, the company said.

Healx, a Cambridge, England-based startup that uses A.I. to accelerate drug discovery for rare diseases, has hired Meri Williams to be its chief technology officer, according to trade publication BusinessCloud. Williams had previously been the CTO at U.K. challenger bank Monzo.

One Stop Systems Inc. has appointed David Raun as its president and chief executive officer, the company said in a press release. Raun had been in the role on an interim basis since February and had previously been president and chief operating officer at Assia, a Silicon Valley-based SaaS company. The Escondido, California-based One Stop Systems makes high-performance computer systems used for machine learning in the defense, finance, and entertainment industries, as well as by research scientists.

EYE ON A.I. RESEARCH

DeepMind expands its robot control software suite. The London-based A.I. research company, owned by Google parent Alphabet, has open-sourced its latest library of tools for training complex virtual robots in the physics simulator MuJoCo using reinforcement learning. The library includes a number of sophisticated pre-trained robots, including a "Phaero Dog" model, as well as a framework for training a wide range of new kinds of animated creatures. Interestingly, the pre-trained models also include a highly complex "Rodent" model, which the DeepMind researchers say may help biologists and other scientists run virtual experiments that could provide insights into how real rodents think and behave. Training virtual robots in these simulated environments is thought to be an important step toward deploying more capable robots in the real world.

Drone imagery helps count refugees. Researchers at the Johns Hopkins University Applied Physics Laboratory, the University of Kentucky, the U.S. Centers for Disease Control and Prevention (CDC), and the U.S. Agency for Toxic Substances and Disease Registry have created a dataset designed to train an algorithm to estimate refugee populations from overhead images. The training set consists of drone-gathered images of 34 different refugee camps in Bangladesh. In a paper published on the research repository arXiv.org, the scientists say that, so far, a model trained on the dataset has been able to estimate refugee populations with a mean estimation error of 7%, which the researchers imply may need to be improved further to be truly useful in real-world settings. Nevertheless, the researchers say their work is "an important step" toward creating tools that will help humanitarian groups respond faster and more effectively to refugee and other humanitarian crises.
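For readers wondering what a "mean estimation error of 7%" measures, here is a minimal sketch of one common way to compute such a relative error: average the absolute gap between predicted and true counts as a share of the true counts. The camp populations below are invented for illustration; they are not from the paper's Bangladesh dataset.

```python
# Hedged sketch of a mean relative estimation error; all numbers are made up.
import numpy as np

true_counts = np.array([12000, 8500, 20300, 4300])        # hypothetical ground truth
predicted_counts = np.array([11200, 9100, 21800, 4000])   # hypothetical model output

error = np.mean(np.abs(predicted_counts - true_counts) / true_counts) * 100
print(f"mean estimation error: {error:.1f}%")  # roughly 7% for these made-up figures
```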

FORTUNE ON A.I.

Accenture’s CEO: 5 rules for rethinking digital transformation during COVID-19—by Julie Sweet

New bill would bar federal agencies from using facial-recognition technology—by Jonathan Vanian

How a Bill Gates-backed startup plans to save farming with A.I.—by Aaron Pressman

IBM’s Ginni Rometty: The way we hire must change—and we must do it now—by Michal Lev-Ram

Algorithms won’t end racism—they’re making it worse—by Clay Chandler and Eamon Barrett

These big businesses are all boycotting Facebook ads—by Danielle Abril

BRAIN FOOD

Yann LeCun's telling missteps on A.I. bias

Yann LeCun is one of the world's best-known machine learning researchers. He's a pioneer of convolutional neural networks, a Turing Award winner, a distinguished New York University professor and Facebook's Chief A.I. Scientist. But this past week, LeCun demonstrated a major blind spot when it comes to thinking about A.I. bias—as well as how to engage in a constructive dialogue about racial injustice.

LeCun's trouble started when he responded to the controversy surrounding PULSE, a computer vision model created by researchers at Duke University that claimed to be able to take pixelated, low-quality images and smoothly "upscale" them into high-resolution images. The problem was, as my colleague Jonathan Vanian reported last week, when PULSE was fed a pixelated image that most of the world would have easily recognized as a fuzzy headshot of former President Barack Obama, it converted his face into that of a white man. Similar problems occurred when PULSE was fed pixelated images of other well-known nonwhite people, including Muhammad Ali, Samuel L. Jackson and Representative Alexandria Ocasio-Cortez. When an A.I. researcher tweeted that this was a great example of the problem of bias in algorithms, LeCun replied that “ML systems are biased when data is biased.” He also claimed that the consequences of bias were more of an issue for machine learning being deployed in commercial settings than for A.I. researchers.

This drew the ire of Timnit Gebru, a researcher who co-leads Google's Ethical A.I. team. Gebru is well-known for her work on racial and gender bias in facial recognition systems and other algorithms. She is also a co-founder of the group Black in A.I., which has pushed for better representation of Black people in the field of A.I. research. She tweeted that issues around biased A.I. could not simply be reduced to biased data—and that LeCun should essentially know better.

At first, LeCun did not engage with Gebru—even though he did respond to other, white researchers, which further inflamed the situation. Then he did reply to her in a long series of tweets, but said "people like us" should respond in a "non-emotional and rational manner." At this point, Gebru said that she was refusing to engage further for the sake of her own sanity. Other computer scientists jumped in, one suggesting LeCun read Gebru's published research (LeCun said, "I know this paper") and one accusing him of “gaslighting Black women and dismissing tons of scholarly work.” LeCun posted to his Facebook page his endorsement of an anonymous Twitter user who worried "the argumentative norms of the social justice movement are eroding the ability for people to actually debate ideas."

Eventually, Facebook's head of A.I., Jerome Pesenti, intervened to apologize "for how this situation has escalated." He said Facebook AI Research (which LeCun founded seven years ago) valued Gebru's "trailblazing work and that of the broader AI community focused on issues of bias and race in AI and this is not how we want to show up in these conversations." Meanwhile, LeCun announced on Monday that he was abandoning Twitter forever. This led to more outraged tweets from LeCun's supporters, who also pleaded with him to reconsider his decision, while his detractors accused LeCun of trying to seize the mantle of victimhood from Gebru, whom he had arguably victimized, and who, unlike LeCun, is a member of a historically victimized racial minority.

Well, at least Pesenti recognized how bad this was all looking for Facebook. And, as VentureBeat's A.I. reporter Khari Johnson makes clear in a well-written and pointed take on this whole episode (you can read the whole thing here), it comes at a time when Facebook is facing a growing advertiser boycott over its unwillingness to do more to police hate speech as well as mounting concerns over its lack of employee diversity. (As Johnson points out, the two problems are likely closely interlinked.)

I won't further pile on LeCun or Facebook here, but I will re-emphasize the points Gebru was making because they are critical for any organization thinking about deploying A.I. There is a worrying tendency among both A.I. researchers and companies pushing A.I. software to look for "silver bullet" solutions to these problems. I see researchers publishing papers on ways to automatically detect—and then supposedly automatically correct—bias in datasets, or advancing mathematical techniques for determining that an algorithm's output is fair. At best, these tools are only part of the solution. At worst, they lead to complacency, false confidence and an abdication of moral and ethical responsibility. 

As Gebru sought to highlight, fairness is about more than just biased inputs to A.I. systems or even biased outputs—it is also about biased outcomes. It has to do with why a business wants to use the algorithm in the first place—what is its purpose and intent—as well as how exactly that tool is being used. The scientists and engineers developing A.I. systems and the business executives deploying them must not be allowed to elide moral and ethical responsibility by shrugging and saying, well, we ran a test and found the data wasn't biased.

Ensuring ethical outcomes requires critical thinking, values and judgment. There is no algorithm for that.