“The first thing we do, let’s kill all the lawyers.” That line is from Shakespeare, of course, but it might as well have been spoken by ChatGPT, or perhaps the new Bing’s unhinged alter-ego Sydney, given the way OpenAI’s large language models are being rapidly incorporated into legal technology designed to make lawyers obso…well, the creators of the technology swear up and down that the point isn’t to eliminate lawyers but to make them “more efficient.” These legal language tools are one of the first examples of the kind of “copilot” technology that we are likely to see rolled out across almost every profession in the coming months and years.
Last week, I spoke to two of the half-dozen or so legal tech startups and law firms seeking to use OpenAI’s large language models. One of the companies, Casetext, a San Francisco-based legal software company founded back in 2013, has just debuted a product it calls “CoCounsel” that uses OpenAI’s large language models to create a kind of legal search engine with a chatbot interface. Casetext says CoCounsel can perform seven key legal tasks: searching a database, reviewing documents, summarizing those documents, reviewing contracts for compliance with a policy, extracting data from contracts, drafting a legal research memo, and helping lawyers prepare for a deposition, in part by suggesting questions to ask a witness.
The press release announcing CoCounsel came with some glowing endorsements from representatives of the law firms that had beta tested the product, including John Polson, the managing partner at Fisher Phillips, an Atlanta-headquartered law firm with offices all over the world and more than 500 lawyers. “The power of this tool to help our attorneys provide far more efficient legal research, document review, drafting, and summarizing, has already resulted in immediate, sustained benefits to our clients, and we have only scratched the surface of what it has to offer,” Polson was quoted as saying.
In a demo of CoCounsel for Fortune, Casetext’s three cofounders, Jake Heller, Laura Safdie, and Pablo Arredondo, showed me how the A.I. system could answer the question “Is interpretive dance protected by the First Amendment?” The chatbot provided a concise, cogent answer and cited the most relevant recent cases addressing the issue. In another example, Heller got CoCounsel to search for evidence of deception in a huge cache of corporate emails. In that case, the software worked almost too well—it flagged some emails in which executives were forwarding one another jokes that contained references to someone hiding something from their spouse—but better a few false positives than having it miss a smoking gun.
“This is a pretty big paradigm shift in the way that law will be practiced,” Heller tells me. He says it will mean that firms will no longer have to deploy teams of paralegals and junior lawyers to comb through vast document dumps during fast-paced mergers and acquisitions or in preparation for complex litigation. “It frees attorneys up to do things that only attorneys can do, like helping you with legal strategy or deciding whether to settle a case,” Arredondo says. “The attorney still has a central role here.” Safdie tells me that she thinks the biggest impact of this technology may actually be in helping smaller law firms and sole practitioners compete more effectively against much bigger, well-resourced ones. She also says it will give a major leg up to legal aid lawyers and beleaguered public defenders.
One thing Casetext’s Heller says the company is shying away from for the moment is allowing CoCounsel to draft legal briefs, even though Casetext’s founders acknowledge that the underlying large language models could probably be used for this purpose. “We look to arm you with the facts and the law,” Arredondo says. But “crafting a brief in a persuasive way for a judge” is something where a lawyer’s training, experience, and human intuition often play key roles. “How exactly to phrase something and what argument to bring, lawyers like to do that work,” Safdie says. Heller chimes in: “This affords them the time to write the best brief possible.”
But others are looking more seriously at using OpenAI’s technology for legal writing. Harvey, another San Francisco-based startup that was cofounded by former DeepMind A.I. research scientist Gabe Pereyra and former antitrust and securities lawyer Winston Weinberg, is using OpenAI’s technology to help draft contracts and client memos for major law firms. Harvey received $5 million in funding from backers that include OpenAI’s Startup Fund and Google’s senior vice president of research Jeff Dean, among others. It just signed its first major partnership with the international law firm Allen & Overy.
Daren Orzechowski, the global cohead of Allen & Overy’s technology practice, was—like the Casetext founders—keen to tell me that the firm’s decision to trial Harvey was not so much about reducing the need for lawyers as about allowing them to work more efficiently. “It’s a tool, not a replacement,” he says. “You still need to have lawyers checking the output.” He was also at pains to tell me that the technology would need to be used “responsibly,” in ways that preserve client confidentiality and trust. Orzechowski says Allen & Overy is looking at a lot of possible use cases for Harvey but for now is mostly deploying it to provide first drafts of contract clauses and legal research memos.
For years, law firms have been under pressure over their fees. Traditionally, lawyers have billed for their time. But in recent years, many large corporations have sought to reduce their legal costs by negotiating fixed-fee arrangements for certain amounts of work. In other cases, corporations hire specialized auditing firms to scrutinize invoices and challenge instances of alleged overbilling. These trends are partly what have driven big law firms to get serious about adopting new technology. But I wonder if these new A.I. legal eagles could significantly accelerate the downward pressure on legal fees. After all, how much will a client be willing to pay for something that they suspect was generated at the press of a button by A.I. software? Maybe clients will even be tempted to buy the software themselves and rely less and less on outside firms. Orzechowski demurred when I asked him about this. “I think our fees will always be between us and our clients,” he says. But when I put this same question to Casetext’s cofounders, Arredondo did allow that there was likely “to be an impact on billing paradigms.”
The other impact A.I. legal assistants may have is one that’s likely to apply to other fields too as A.I. copilots become more prevalent: They disrupt training practices. It used to be that one of the ways large law firms trained young associates was by having them do the kind of laborious—and often boring—legal trenchwork of digging through millions of pages of documentation in an M&A deal searching for the few aberrant contracts that might pose a problem for the acquiring company, or perform a similar needle-in-a-haystack search in the course of discovery, or do basic legal research on the most relevant case law and prepare a memo for a more senior attorney. In other words, all of the things the new legal chatbots are so good at. If junior lawyers no longer have to do any of these things, how will they learn to be lawyers? Well, Orzechowski, who says he “is obsessed with training and making it better,” says this old stereotype of how big law firms trained young lawyers, to the extent it was true, was probably not a great way to instill knowledge to begin with. The new software, he says, might provide an impetus to rethink the way big law firms use junior lawyers, finding more opportunities to “have them use the smarts they got in law school.”
It’s a rethink of training and development that many more professions and industries are going to have to confront as A.I. copilots spread to more and more fields.
With that, here’s the rest of this week’s A.I. news.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
Correction, March 8: An earlier version of this story misspelled Casetext co-founder Pablo Arredondo’s last name. It also erroneously capitalized the letter ‘t’ in Casetext’s company name.
A.I. IN THE NEWS
Salesforce, Microsoft add OpenAI-based enterprise applications. Salesforce rolled out a bunch of generative A.I. tools for its software this week, combining its own proprietary A.I. models with OpenAI’s large language models to create what it is calling Einstein GPT. The technology will allow people to use natural language queries to pull answers from Salesforce’s CRM software as well as to perform tasks such as drafting emails to sales prospects and summarizing conversation threads in Slack. Not to be outdone, Microsoft, which has a close partnership with OpenAI, also rolled out OpenAI-based enhancements for its own enterprise software, including Dynamics 365, its enterprise resource planning and customer relationship management software, and its Power Apps software. My Fortune colleague David Meyer has more on the developments in today’s Data Sheet newsletter.
Secretive A.I. startup Inflection seeks to raise $675 million in venture capital. That’s according to a report in the Financial Times that cited sources familiar with the company’s fundraising efforts. The startup was cofounded by DeepMind cofounder Mustafa Suleyman and Reid Hoffman, the billionaire cofounder of LinkedIn and a partner at Silicon Valley venture capital firm Greylock. Inflection, which raised $225 million in an initial funding round last year, has not yet released a product but has said it is working on a “new way for humans to interact with computers.”
And in other Reid Hoffman news: The venture capitalist recently announced in a post on LinkedIn that he was stepping down from the board of OpenAI’s non-profit foundation, citing increasing potential conflicts of interest as he invests in a portfolio of A.I. startups that are either customers of OpenAI or seeking to compete with the San Francisco-based advanced A.I. research shop.
Andreessen Horowitz invests in chatbot maker Character.ai at a reported $1 billion valuation. The company, which uses large language models to create chatbots that can imitate the conversational style of anyone from President Biden to fictional characters such as Nintendo’s Mario, has received $200 million in funding led by Andreessen Horowitz, the Financial Times reported. The newspaper said it was the venture firm’s first investment in a generative A.I. company. It also said the deal valued Character.ai at more than $1 billion. Character.ai’s cofounders were part of the team that built LaMDA, Google’s advanced conversational A.I. model, which Google is using to power its Bard chatbot and other chatbot integrations.
Voice cloning technology is increasingly being used in scams. The Washington Post reports on the alarming trend of scammers using voice cloning A.I. software to impersonate people. The scammers often call a victim pretending to be a loved one in distress and ask them to send money. Such impostor scams were already growing, with more than 36,000 reports of people being swindled this way in the U.S. in 2022. But new voice cloning software that requires only a few seconds of someone’s voice to credibly mimic them is making such crimes easier to pull off.
Meta’s LLaMA large language model leaked. Someone leaked the code and model weights for Meta’s powerful large language model LLaMA to 4chan, Vice reported. Just a week earlier, Meta had launched the model, making it available to academics and researchers to use for non-commercial purposes. Meta had intended the model to allow researchers who don’t have the computing resources to train such a powerful large language model to continue experimenting with these systems. It was meant to be a counterweight to the large language models from OpenAI, Microsoft, Google, and others, which are typically accessible only through an API that does not allow a user to see the underlying code or adjust the model weights directly. But with the entire model now leaked online, anyone will be able to take it, tweak it, and use it for any purpose.
EYE ON A.I. RESEARCH
Hungary emerges as a major test bed for A.I. in breast cancer detection. That’s according to a New York Times story that chronicled the success U.K.-based startup Kheiron Medical Technologies has been having in the European nation. The country does better than many at breast cancer screening, and in 2021 it began testing several different A.I. systems at five hospitals and clinics that collectively perform more than 35,000 screenings annually. The A.I. tools are being used in conjunction with human radiologists to help pinpoint lesions they may have missed. Scans flagged by the A.I. system are then re-reviewed by senior radiologists, who can order additional tests or tissue biopsies if they feel they are warranted. At one Hungarian clinic, Kheiron’s system was found to increase cancer detection rates by 13%. Across five sites run by one Hungarian medical group using Kheiron’s technology, 22 cases have been documented since 2021 in which the A.I. software identified cancer that radiologists had previously missed, with roughly 40 more possible cases still under review, the newspaper reported.
FORTUNE ON A.I.
Researchers used artificial intelligence to detect Alzheimer’s risk with over 90% accuracy and could transform how medicine is practiced—by Tristan Bove
Google’s head of ChatGPT rival Bard reassures employees it’s ‘a collaborative A.I. service’ and ‘not search’—by Steve Mollman
OpenAI rolls out ChatGPT for business customers—by Jeremy Kahn
BRAINFOOD
Should “foundation” models be regulated—and how? That’s the question facing European Union lawmakers as they debate the bloc’s landmark A.I. Act. Google and Microsoft are among the big technology companies that have been waging a no-holds-barred lobbying campaign to try to convince European lawmakers to exempt “general purpose” A.I. systems, such as many of the large models that make most generative A.I. possible, from the proposed law. The final form of that legislation is currently being negotiated behind closed doors, but it imposes a graduated set of burdens on those deploying A.I. technology based on risk assessments. TechCrunch picks up on the story of Big Tech’s lobbying from a report issued by the lobbying transparency group Corporate Europe Observatory, which found that large U.S.-based tech companies had engaged in a range of direct and indirect lobbying tactics, including the semi-covert use of interest groups that seemed to be independent but were quietly funded by Big Tech.
If general-purpose A.I. systems are exempted, the burden of assessing and mitigating the risks associated with A.I. will largely fall to other companies, including startups, that are users, but not creators, of these general-purpose models. Google, in a lobbying document the group obtained, told lawmakers that the general-purpose systems themselves are not “high-risk” and that any risk would arise only from the circumstances in which an end user deployed such a system. But these end users might have little insight into the risks they are unwittingly taking on by using large “foundation models,” since they often have no understanding of the data used to train the model or the circumstances under which a model might produce unexpected or unintended results.
What do you think? Who should be responsible for ensuring the safety of general-purpose A.I.?
This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays and Fridays. Sign up here.