Hello and welcome to Eye on AI.
The outlook for AI in the courtroom didn’t look promising last year, when multiple state prosecutors threatened a startup CEO with jail time if he went through with plans to equip a defendant with an “AI lawyer” that would give real-time legal guidance via an earpiece during proceedings. When two New York lawyers were sanctioned for submitting a legal brief that included fictitious case citations generated by ChatGPT, the picture looked even bleaker. Yet courts, judges, and law firms worldwide have begun adopting generative AI without any real standards or guidelines.
Brazil’s government, for example, made headlines earlier this summer for tapping ChatGPT to analyze in-progress cases, flag lawsuits to act on, and surface trends and suggestions for action. In Argentina, judges are using LLMs to generate summaries of their decisions in plain language. India’s Supreme Court is using AI to translate legal documents between English and 10 local languages. And as my Eye on AI colleague Jeremy Kahn and I have both been reporting, law firms and corporate legal divisions are embracing a new crop of startups offering AI legal copilots.
With so much AI sweeping through the legal system, UNESCO is now calling for formal guidelines for the use of AI in courts and tribunals. The organization, a United Nations agency focused on education, science, and culture, published its final draft of guidelines aimed at helping ensure that the use of AI technologies by courts and tribunals aligns with the fundamental principles of justice, human rights, and the rule of law. The organization is seeking feedback on the guidelines from legal professionals and the public through Sept. 5 before releasing the final version in November.
While AI tools can be helpful in the judicial system, they can also “undermine human rights, such as fair trial and due process, access to justice and effective remedy, privacy and data protection, equality before the law, and non-discrimination, as well as judicial values such as impartiality, independence, and accountability,” reads a document introducing the draft guidelines. “Moreover, the misuse of AI systems may undermine society's trust in the judicial system. AI tools are not a substitute for qualified legal reasoning, human judgment, or tailored legal advice.”
In a 2023 survey of judicial operators, UNESCO found that while 44% are already using AI tools such as ChatGPT for work-related activities, only 9% reported that their organization had issued guidelines or provided AI-related training. In addition to this gap, the document cites the adoption of new regulations like the EU AI Act as a major reason why such guidelines are increasingly urgent. The EU AI Act, which formally entered into force last week, classifies AI systems intended to be used by judicial authorities or for interpreting the law as “high risk” and thus subject to various requirements around risk management and human oversight.
UNESCO’s guidelines are broken down into advice for organizations that are part of the judiciary and for individual members of the judiciary. For the former, recommendations include evaluating the necessity and appropriateness of using the technology for specific tasks and assessing an AI system’s impact on human rights and related concerns before deploying it. The guidance also states that judiciary bodies should choose AI systems that offer greater transparency into their training data, obtain information from a system’s developers and providers about its limits and risks, require that systems allow for human intervention, and ensure that developers agree to cooperate with algorithmic audits the organization commissions from external parties. UNESCO also calls for more stringent data privacy protections, more robust data governance frameworks, improved cybersecurity, and the continuous publication of impact evaluations and performance reports. Another section offers guidance specific to generative AI, including ensuring the authenticity and integrity of content produced by such systems, knowing their limitations, and banning some uses altogether.
“When the terms of use of a generative AI tool indicate that the user’s prompts will be used by the provider to train its models or that third parties can access these prompts, then the use of such tool should be prohibited or restricted,” the document reads, adding that using AI in certain sensitive areas, such as the unilateral generation of binding legal decisions, should also be banned.
Guidelines for individuals center more on being aware of the uses and limitations of AI tools, avoiding over-reliance on them, and verifying that any outputs from AI systems are accurate. On transparency, the guidelines state that individuals should provide meaningful information about when and how they use AI tools, and should let interested parties or clients challenge decisions taken with or supported by AI systems. The guidelines also specifically call out LLMs as unreliable both as search engines and as tools for legal analysis.
Overall, the UNESCO draft recommendations represent the most comprehensive guidance yet for AI in the legal system. While several U.S. state bars, including California, New York, New Jersey, and Florida, have issued their own guidance, the finalized UNESCO recommendations can help jurisdictions still working to navigate these fast-moving and increasingly thorny issues.
Many legal departments have been quick to embrace the efficiencies AI could bring to their often tedious work, but it’s clear that AI in the legal realm also carries risks, both for individuals navigating the legal system and for practitioners and judiciaries themselves. UNESCO is focused on the impact on human rights, but lawyers, judges, and governments should be concerned for themselves too (as those two New York lawyers who trusted ChatGPT to write their brief can attest). The American Bar Association recently warned lawyers to beware of deepfakes, which can be used for everything from fabricating evidence to making ransom demands, and which could put lawyers at risk of malpractice if they fail to detect them. Even U.S. Supreme Court Chief Justice John Roberts concluded the court’s 2023 year-end report with thoughts on AI’s role in the legal system and some words of caution.
“Any use of AI requires caution and humility,” he wrote.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
Correction: Last week’s edition (Aug. 1) misspelled the name of one of the startups OpenAI acquired in June. The company name is Rockset, not Rocketset.