Guest post by Victoria Fang, a JD candidate at Stanford Law. Before law school, Fang worked as a patent examiner at the USPTO in the medical imaging and diagnostics space.
In the past year, two “ChatGPT lawyers,” a California eviction law firm, Michael Cohen, and a Colorado attorney have each made headlines for the same mistake: citing fake cases in legal filings. In an attempt to speed up their legal research, these lawyers used generative AI tools like ChatGPT and Google Bard that “hallucinated” nonexistent case law.
Indeed, the use of generative AI by litigants raises issues of accuracy and confidentiality. ChatGPT is known to “hallucinate,” and it has other limitations: its training data is limited to information available on the internet before a certain cutoff date, and it does not actively search the internet or dedicated legal databases for new information.
Courts have responded to the increased use of generative AI by litigants through judge- or case-specific guidance, standing orders, and local rules, which I have summarized in this spreadsheet. These court mandates have been collated from various news articles, from Ropes & Gray’s Artificial Intelligence Court Order Tracker launched in January 2024, and from independent searches of uscourts.gov. As the spreadsheet shows, only a few courts or judges outright prohibit the use of AI. Among those that do not, some require disclosure and/or certification, while others make clear that human verification is required. Several judges also place special emphasis on confidentiality. More recently, judges have begun requiring litigants to keep a record of the prompts they used and the responses they received, in case issues arise.