Hello and welcome to Eye on AI.
The outlook for AI in the courtroom didn’t look too promising last year, when multiple state prosecutors threatened a startup CEO with jail time if he went through with plans to equip a defendant with an “AI lawyer” that would give real-time legal guidance via an earpiece during proceedings. When two New York lawyers were sanctioned for submitting a legal brief that included fictitious case citations generated by ChatGPT, the picture looked even bleaker. Yet courts, judges, and law firms worldwide have begun adopting generative AI without any real standards or guidelines.
Brazil’s government, for example, made headlines earlier this summer for tapping ChatGPT to analyze in-progress cases, flag lawsuits to act on, and surface trends and suggestions for action. In Argentina, judges are using LLMs to generate plain-language summaries of their decisions. India’s Supreme Court is using AI to translate legal documents between English and 10 vernacular languages. And as my Eye on AI colleague Jeremy Kahn and I have both been reporting, law firms and corporate legal divisions are embracing a new crop of startups offering AI legal copilots.
With so much AI sweeping through the legal system, UNESCO is now calling for formal guidelines for the use of AI in courts and tribunals. The organization, a United Nations agency focused on education, science, and culture, published its final draft of guidelines aimed at helping ensure that the use of AI technologies by courts and tribunals aligns with the fundamental principles of justice, human rights, and the rule of law. UNESCO is seeking feedback on the guidelines from legal professionals and the public through Sept. 5 before releasing the final version in November.
While AI tools can be helpful in the judicial system, they can also “undermine human rights, such as fair trial and due process, access to justice and effective remedy, privacy and data protection, equality before the law, and non-discrimination, as well as judicial values such as impartiality, independence, and accountability,” reads a document introducing the draft guidelines. “Moreover, the misuse of AI systems may undermine society’s trust in the judicial system. AI tools are not a substitute for qualified legal reasoning, human judgment, or tailored legal advice.”
In a 2023 survey of judicial operators, UNESCO found that while 44% were already using AI tools such as ChatGPT for work-related activities, only 9% reported that their organization had issued guidelines or provided AI-related training. In addition to this disparity, the document cites the adoption of new regulations like the EU AI Act as a major reason such guidelines have become more urgent. The EU AI Act, which entered into force last week, classifies AI systems intended to be used by judicial authorities or for interpreting the law as “high risk,” subjecting them to various requirements around risk management and human oversight.
UNESCO’s guidelines are broken down into advice for organizations that are part of the judiciary and advice for individual members of the judiciary. For the former, recommendations include evaluating the necessity and appropriateness of using the technology for specific tasks and assessing an AI system’s impact on human rights and other areas before deploying it. The guidance also states that judiciary bodies should choose AI systems that offer greater transparency into their training data, obtain information from a system’s developers and providers about its limits and risks, require systems to allow for human intervention, and ensure that developers agree to cooperate with algorithmic audits the organization commissions from external parties. UNESCO also calls for more stringent data privacy protections, more robust data governance frameworks, improved cybersecurity, and the continuous publication of impact evaluations and performance reports. Another section recommends guidelines specific to generative AI, including ensuring the authenticity and integrity of content produced by such systems, knowing their limitations, and banning some uses altogether.
“When the terms of use of a generative AI tool indicate that the user’s prompts will be used by the provider to train its models or that third parties can access these prompts, then the use of such tool should be prohibited or restricted,” the document reads, adding that using AI in certain sensitive areas, such as the unilateral generation of binding legal decisions, should also be banned.
Guidelines for individuals center more on being aware of the uses and limitations of AI tools, avoiding over-reliance on them, and verifying that any outputs from AI systems are accurate. On transparency, the document states that individuals should provide meaningful information about when and how they use AI tools, as well as let interested parties or clients challenge decisions made with or supported by AI systems. The guidelines also specifically call out LLMs as unreliable both as search engines and for legal analysis.
Overall, the UNESCO draft recommendations represent the most comprehensive guidance yet for AI in the legal system. While several U.S. state bars, including California, New York, New Jersey, and Florida, have issued their own guidance, the finalized UNESCO recommendations can help jurisdictions that are still working to navigate these fast-moving and increasingly thorny issues.
Many legal departments have been quick to embrace the efficiencies AI could bring to their often tedious work, but it’s clear that AI in the legal realm carries risks as well, both for individuals navigating the legal system and for practitioners and judiciaries themselves. UNESCO is focused on the impact on human rights, but lawyers, judges, and governments should be concerned for themselves too (as those two New York lawyers who trusted ChatGPT to write their brief can attest). The American Bar Association recently warned lawyers to beware of deepfakes, which can be used for everything from fabricating evidence to making ransom demands, and which could put lawyers at risk of malpractice if they fail to detect them. Even U.S. Supreme Court Chief Justice John Roberts concluded the court’s 2023 year-end report with thoughts on AI’s role in the legal system and some words of caution.
“Any use of AI requires caution and humility,” he wrote.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
Correction: Last week’s edition (Aug. 1) misspelled the name of one of the startups OpenAI acquired in June. The company name is Rockset, not Rocketset.
AI IN THE NEWS
Palantir and Microsoft partner to deploy GPT-4 and other AI tools to U.S. defense and intelligence agencies. That’s according to FedScoop. The agreement between the companies will make an array of AI and analytics services available to U.S. defense and intelligence agencies in classified environments. Despite criticism of how AI could be used to accelerate warfare or infringe on civil liberties, the deal further cements AI’s role in defense and surveillance. Palantir has garnered intense scrutiny over the years for its secretive operations, its surveillance and predictive policing products, and its long-running contract with U.S. Immigration and Customs Enforcement (ICE) to track undocumented immigrants.
ByteDance-owned Faceu Technology launches a text-to-video generation app. That’s according to Reuters. The app, called Jimeng AI, is now available to Chinese users in Apple’s App Store. Since OpenAI unveiled Sora, its model that generates videos from short text prompts, in February (the model still has yet to be released), Chinese startups have been racing to offer similar technologies. Several have since launched text-to-video models that can create short clips and are easily accessible to users in app form.
More people are returning Humane’s AI Pin than are buying it. That’s according to The Verge. The company has hit $1 million in returns of its wearable LLM-in-a-box against only $9 million in sales, not counting 1,000 purchases that were canceled before shipping. Humane disputed the numbers to The Verge but didn’t provide any specifics about the supposed inaccuracies. Reviews for the AI Pin have been resoundingly negative, so the high rate of returns is not exactly a shock. The company raised over $200 million from Silicon Valley investors, however, and has seen executive turnover in recent months.
FORTUNE ON AI
Intel, now struggling against AI competitors, turned down an opportunity to own 15% of OpenAI —by Marco Quiroz-Gutierrez
Some European users will get access to Apple Intelligence after all —by David Meyer
Autodesk’s ‘discovery mentality’ gives managers permission to roll out AI experiments —by John Kell
Can Southeast Asian countries work together to regulate AI? ‘We love each other and yet…we’re always looking over our shoulder’ —by Nicholas Gordon
AI CALENDAR
Aug. 12-14: Ai4 2024 in Las Vegas
Aug. 28: Nvidia earnings
Sept. 25-26: Meta Connect in Menlo Park, Calif.
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
EYE ON AI NUMBERS
80
That’s how many U.S. research teams have so far been awarded access to computing power and other AI resources through the National AI Research Resource (NAIRR), a pilot program for a national AI infrastructure established by President Joe Biden’s 2023 executive order on AI. The White House shared the progress in a recent update, also reporting that federal agencies completed, on schedule, all of the actions the executive order called for within its first 270 days. The White House also announced it’s almost halfway to its goal of bringing 500 AI hires into the federal government by the end of fiscal 2025, having made over 200 hires already.