I have been giving some talks on my article, “Keeping ChatGPT a Trade Secret While Selling It Too,” which is now published in the Berkeley Technology Law Journal. The article addresses a legal puzzle: How can companies protect generative AI technology through trade secret law, while also selling new AI products to the public? I have gotten some really interesting questions from audience members on AI and trade secrecy. I thought I'd share them along with my answers. If you disagree with my answers or how I am characterizing the technology, I'd really love to hear your thoughts.
Q: If I use ChatGPT to find someone else's trade secrets, can I be liable for misappropriating those trade secrets?
A: Yes, potentially. The answer would depend on, first, whether your use of AI is deemed an "improper means" of acquisition, and, second, whether the information still qualifies as a trade secret at all (e.g., has it been rendered "readily ascertainable through proper means" now that it can be discovered using AI?).
Beware: The fact that you used AI to get the secrets might itself count against you. I have a recent empirical article, co-authored with Joseph Avery (Miami) and Mike Schuster (Georgia), which strongly suggests that there is an “anti-AI bias” and that judges and jurors are more likely to find that a defendant “misappropriated” trade secrets if artificial intelligence was used to accomplish the task. I blog on that here.
Q: Can model weight parameters be trade secrets?
A: Potentially. We know AI companies view model weights as trade secrets because weights are a big part of what they choose to release, or withhold, when they market models as "open source." For example, OpenAI has released the weight parameters for its "open source" foundation model, but not for its other ChatGPT models; those weight parameters are treated as trade secrets. Meta AI similarly releases model weights for some of its "open source" Llama models.
Q: Can system prompts be trade secrets?
A: Potentially. A system prompt, or "system prompt code," refers to the instructions given to an AI model to guide its interactions with users. One company (OpenEvidence) is calling its system prompt code a "crown jewel" trade secret. OpenEvidence has brought multiple lawsuits against defendants it alleges are engaging in "prompt injection attacks." The litigation is ongoing. One of the defendants (Pathway Medical) filed a motion to dismiss, and Doximity, which has since purchased Pathway, has also filed a motion to dismiss, which is a very interesting read.
Q: How should businesses protect their AI-related trade secrets? Should they get patents or rely on trade secrecy?
A: The standard factors that are considered seem to weigh in favor of trade secrecy for at least some aspects of AI inventions, such as algorithms, code, training data, model architecture, model weights, and fine-tuning information like system prompts. The primary reason is the sheer ease of maintaining secrecy. Additional factors include the potential for a longer term of protection, the undesirability of disclosure through the patent system (in as soon as 18 months), patent eligibility challenges, and the likely difficulty of enforcing any resulting patent, given the difficulty of detecting infringement and the likely narrow claim scope. Thanks to Ken Corsello for this last point. As Ken pointed out to me, if the information (e.g., an algorithm) qualifies as a trade secret, it must be "not readily ascertainable," and if that is the case, then it would probably be very hard to detect infringement in a patent lawsuit based on that same information.
Q: Will patent eligibility challenges for AI-related technology affect the calculus of whether to choose trade secrecy instead?
A: It could, yes. The Patent & Trademark Office has not categorically excluded AI inventions, but eligibility challenges raised by Section 101 "abstract idea" rejections, along with enablement and obviousness challenges, can shift the calculus toward trade secrecy and away from patenting. Moreover, the Patent & Trademark Office's position against patents for non-human inventors (e.g., an invention developed purely by an AI with no human involvement) might make trade secrecy attractive. See Thaler v. Vidal (Fed. Cir. 2022) ("...the Patent Act requires an 'inventor' to be a natural person..."). Trade secrecy, in contrast, has no human inventor requirement. So for purely AI-generated inventions, where patents (and copyrights) may not be available, trade secrecy provides an alternative. Here's a phenomenal blog post on this issue by Sedona Conference WG 12 members Erik Weibust and Dean Pelletier, Protecting AI-Generated Inventions as Trade Secrets Requires Protecting the Generative AI as Well.
Q: Do courts consider whether AI companies who distribute generative AI models to the public have failed to take "reasonable measures" to protect their secrets?
A: Yes, they do, and they will likely do so in upcoming AI cases. This issue came up in the OpenEvidence filings I mentioned. As I've noted, information that can easily be extracted from an AI through simple prompting, without much time, cost, or effort, is quite arguably not the subject of "reasonable" secrecy precautions. That said, courts give great deference to attempts to keep information factually secret (e.g., compiling code to make it hard to figure out, requiring all users to log in with passwords, and adopting other cybersecurity measures). Courts also give great deference to contractual measures, such as requiring all users to adhere to "terms of use" that restrict what users can do with the underlying technology. On the other hand, some courts have held in the software context that releasing software features that are plainly visible to a user, without a confidentiality provision, constitutes a failure to take reasonable measures and forfeits trade secrecy.
Q: If I give an AI tool like ChatGPT my own trade secrets, could OpenAI adopt this information as their own trade secret?
A: Ideally not, but it could be very hard to detect this and to enforce your rights. Just hypothetically, imagine that you discuss your trade secret with ChatGPT, and that OpenAI gets ahold of this information. OpenAI should not be the rightful owner. Under 18 U.S.C. § 1839(4), the "owner" of a trade secret is defined as "the person or entity in whom or in which rightful legal or equitable title to, or license in, the trade secret is reposed." So even though trade secret law has no "originality" requirement (unlike copyright, where you simply cannot claim copyright in material you derived from another), only licensees or those with "rightful" title can be owners of trade secrets. Thus, given that the information was taken without a license or authorization, your trade secret should not be deemed rightfully "owned" by the taker. You might even have a trade secret claim of your own against OpenAI.
But there is a problem. The terms of use for ChatGPT do not promise users confidentiality. A separate terms of use governs the Enterprise licenses OpenAI offers to businesses, and that one does contain mutual confidentiality protections. But there is no such confidentiality guarantee in the terms of use that applies to ordinary users. This would make it very hard for you to sue to enforce your rights.
Note that users can, in theory, "opt out" of having a model train on their information. The general terms of use contain an "opt out" clause, stating: "If you do not want us to use your Content to train our models, you can opt out by following the instructions in this article. Please note that in some cases this may limit the ability of our Services to better address your specific use case." If you find out how to use that feature, let me know...