Saturday, May 25, 2024

Catalog of Court-Mandated AI Disclosures (cf. USPTO Guidance)

Guest post by Victoria Fang, a JD candidate at Stanford Law. Before law school, Fang worked as a patent examiner at the USPTO in the medical imaging and diagnostics space.

In the past year, two “ChatGPT lawyers,” a California eviction law firm, Michael Cohen, and a Colorado attorney have each made headlines for the same mistake: citing fake cases in legal filings. In attempts to speed up their legal research, these lawyers used generative AI tools like ChatGPT and Google Bard that “hallucinated” nonexistent case law.

Indeed, litigants’ use of generative AI raises issues of accuracy and confidentiality. ChatGPT is known to “hallucinate” and has other limitations, including training-data cutoffs that exclude more recent information and an inability to actively search the internet or dedicated legal databases.

Courts have responded to the increased use of generative AI by litigants through judge- or case-specific guidance, standing orders, and local rules, which I have summarized in this spreadsheet. These court mandates were collated from various news articles, Ropes & Gray’s Artificial Intelligence Court Order Tracker (launched in January 2024), and independent searches of uscourts.gov. As the catalog shows, only a few courts or judges outright prohibit the use of AI. Among courts that do not, some require disclosure and/or certification, and others make clear that human verification is required. A number of judges also place special emphasis on confidentiality. More recently, judges have begun requiring litigants to keep a record of the prompts they used and the responses they received, in case issues arise.

The diversity of courts’ responses demonstrates how tricky it can be to set policies that reinforce age-old principles of legal ethics and regulate legal practice. The things that can go wrong with generative AI, such as inaccuracy or breach of confidentiality, are also issues when humans draft legal documents. At the same time, requiring disclosure can usefully flag risks to decision-makers and remind practitioners to take extra care.

Agencies likewise face the question of whether and how to regulate the use of AI in practice before them. For example, the USPTO released guidance in February 2024 and further guidance in April 2024 regarding practice before the PTAB, TTAB, and patent examining corps. The USPTO’s recent guidance on the use of AI-based tools in practice before the USPTO can be directly compared to courts’ responses in the above-linked catalog.1

  • Prohibition? No. “[T]here is no prohibition against using these computer tools in drafting documents for submission to the USPTO.”
  • Disclosure? Might be required. No “general obligation to disclose to the USPTO the use of such tools,” but “[a] duty to disclose the use of such tools is implicated when the use rises to the level of materiality under 37 CFR 1.56(b)” or when disclosure is “specifically requested by the USPTO” under 37 CFR 1.105 or 37 CFR 11.52.
  • Certification? Yes, under 37 CFR 11.18(b).
  • Human Verification? Required via certification. “[P]arties presenting a paper to the USPTO are under a duty to review the information in the paper and correct any errors.”
    • “Simply relying on the accuracy of an AI tool is not a reasonable inquiry.”
    • Claim Drafting. “In situations where an AI tool is used to draft patent claims, the practitioner is under a duty to modify those claims as needed to present them in patentable form before submitting them to the USPTO.”
    • IDS Disclosure. “Regardless of where prior art is found, submitting an IDS without reviewing the contents may be a violation of 37 CFR 11.18(b).”
  • No disclosure of confidential information? Required. Practitioners must “be cognizant of the risks and take steps to ensure confidential information is not divulged.”
  • Maintain record of prompts? Not discussed.

The USPTO’s guidance regarding disclosure of the use of AI tools in patent prosecution remains relatively muddy. When exactly does use of computer tools for document drafting rise to the level of 1.56(b) materiality? Section III.A of the USPTO’s guidance attempts to provide examples of using AI tools in the patent context that might be material to patentability or be inconsistent with a patentability position taken by an applicant, but it remains unclear how the duty to disclose the AI use is “implicated” in those situations.

I can imagine generative AI tools generating inaccurate statements about the technology underlying an invention, drafting disclosures that do not comply with the enablement or written description requirements of 35 U.S.C. § 112, and contributing ideas beyond what the listed inventors actually explained in an invention disclosure. In the last example, the USPTO’s guidance indicates a practitioner should disclose if any claim lacked a significant contribution by a human inventor, which might include disclosing AI use.2 In the first two cases, however, the USPTO’s guidance indicates that the practitioner should take “extra” and “appropriate care” in the area of § 112 and verify the patent application for accuracy and compliance before submission,3 and it is unclear whether the duty to disclose the AI use is implicated in those situations.

In another example, a practitioner may input ten prior art patent numbers into a generative AI system (or the AI tool may self-identify prior art patent numbers based on an inputted invention disclosure), and the AI system drafts a Background section based on the known prior art. If the practitioner then asks the AI system to remove the specific prior art patent numbers from the Specification (leaving only a vague or general description of the background art), and those prior art references could alone or together establish unpatentability, that omission would be material. However, this scenario appears to “implicate” the duty to disclose the prior art, not necessarily the duty to disclose the use of the AI tool.

The vague language of the USPTO’s guidance does not resolve much about when AI use must be disclosed. If the negative consequences of failing to disclose material information are narrow or unlikely, 1.56(b) materiality may not be enough to protect against the risks of AI use that the guidance intends to cabin. At the same time, the guidance helps put practitioners on notice of some specific risks of using AI tools in the patent and trademark contexts, and policymakers can continue to weigh the tricky policy questions around calibrating the scope and strength of the duty to disclose AI use.


1. In the background section of its April 2024 guidance, the USPTO even cites Eastern District of Pennsylvania Judge Baylson’s standing order on AI in a footnote (as is done here).

2. In separate guidance on inventorship for AI-assisted inventions, the USPTO indicated, “For example, in applications for AI-assisted inventions, this information could include evidence that demonstrates a named inventor did not significantly contribute to the invention because the person’s purported contribution(s) was made by an AI system.” At the same time, the guidance mentions only “potential” negative consequences if the duty to disclose material information is not satisfied.

3. “In situations where the specification and/or drawings of the patent application are drafted using AI tools, practitioners need to take extra care to verify the technical accuracy of the documents and compliance with 35 U.S.C. 112. Also, when AI tools are used to produce or draft prophetic examples, appropriate care should be taken to assist the readers in differentiating these examples from actual working examples.”
