LexisNexis exec says it’s ‘a matter of time’ before attorneys lose their licenses over using general-purpose AI chatbots in court

By Nino Paoli, News Fellow

    Nino Paoli is a Dow Jones News Fund fellow at Fortune on the News desk.

    A growing number of AI-generated errors in legal documents submitted to courts has brought attorneys under increased scrutiny.

    Courts across the country have sanctioned attorneys for misuse of general-purpose LLMs like OpenAI’s ChatGPT and Anthropic’s Claude, which have made up “imaginary” cases, suggested that attorneys invent court decisions to strengthen their arguments, and produced improper citations to legal documents.

    Experts tell Fortune more of these cases will crop up—and along with them steep penalties for the attorneys who misuse AI.

    Damien Charlotin, a lawyer and research fellow at HEC Paris, runs a database of AI hallucination cases. He’s tallied 376 cases to date, 244 of which are U.S. cases.

    “There is no denying that we were on an exponential curve,” he told Fortune.

    Charlotin pointed out that attorneys can be particularly prone to oversights: lawyers delegate tasks to teams, often don’t read all of the material their coworkers collect, and copy and paste strings of citations without properly fact-checking them. Now AI is making those habits more conspicuous as attorneys adjust to the new technology.

    “We have a situation where these (general-purpose models) are making up the law,” Sean Fitzpatrick, CEO of LexisNexis North America, UK & Ireland, told Fortune. “The stakes are getting higher, and that’s just on the attorney’s side.”

    Fitzpatrick, a proponent of purpose-built AI applications for the legal market, concedes the tech giants’ low-cost chatbots are good for tasks like summarizing documents and writing emails. But for “real legal work” like drafting motions, the models “can’t do what lawyers need them to do,” Fitzpatrick said.

    Courtroom-ready documents in cases involving Medicaid coverage decisions, Social Security benefits, or criminal prosecutions, for example, leave no room for AI-generated mistakes, he added.

    Other risks

    Entering sensitive information into general-purpose models also risks breaching attorney-client privilege.

    Frank Emmert, executive director of the Center for International and Comparative Law at Indiana University and a legal AI expert, told Fortune that general-purpose models can retain privileged information from the attorneys who use them.

    If someone else knows that, they could use the right prompts to reverse engineer a contract between a client and an attorney, for instance.

    “You’re not gonna find the full contract, but you’re going to find enough information out there if they have been uploading these contracts,” Emmert said. “Potentially you could find client names… or at least, you know, information that makes the client identifiable.”

    If an attorney uploads such material without permission, it can become findable, publicly available information, since general-purpose models don’t protect privilege, Fitzpatrick said.

    “I think it’s only a matter of time before we do see attorneys losing their license over this,” he said.

    Fitzpatrick said tools like his company’s generative AI product Lexis+ AI, which inked a seven-year contract as an information provider to the federal judiciary in March, may be the answer to both hallucination and client-privacy risks.

    LexisNexis doesn’t train its LLMs on its customers’ data, and prompts are encrypted. Plus, the tech is “most equipped” to solve hallucination issues since it pulls from a “walled garden of content,” or a closed, proprietary system that’s updated every day, Fitzpatrick said.

    Still, LexisNexis doesn’t claim to maintain privilege and recognizes that the obligation always rests with the attorney, the company said.

    But experts tell Fortune that AI used for legal purposes inherently comes with risks, purpose-built or not.

    AI’s legal infancy

    Emmert categorizes models into three baskets: open-access tools like ChatGPT, in-house applications he refers to as “small language models,” and “medium language models” like LexisNexis’ product.

    Fear of mistakes has pushed firms to restrict use of general-purpose models and instead develop in-house applications: essentially a server inside the firm where attorneys upload their contracts and documents and train an AI model on them, Emmert said.

    But compared with the vast amount of data available to general-purpose models, in-house applications will always produce inferior answers, Emmert said.

    He said medium-sized models can help with contract drafting, document review, evidence evaluation, or discovery procedures, but are still limited in what they can draw on compared with the open internet.

    “And the question is, can we fully trust them? … One, that they’re not hallucinating, and second, that the data really remains privileged and private,” Emmert said.

    He said that if he were part of a law firm, he would hesitate to contract with this type of provider and spend a lot of money on something that is still in its infancy and may not end up being truly useful.

    “Personally, I believe that these AI tools are fantastic,” Emmert said. “They can really help us get more work done at a higher level of quality with significantly lower investment of time.”

    Still, he warned, the industry is in a new era that requires accelerated education on a technology that was quickly adopted without being fully understood.

    “Starting in academia but continuing in the profession, we need to train every lawyer, every judge, to become masters of artificial intelligence—not in the technical sense, but using it,” Emmert said. “That’s really where the challenge is.”
