I’ve been busy with lots of interesting conferences and workshops in the past few weeks, and since I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. First, my comments from the We Robot conference two weeks ago at Stanford:
Ryan Abbott’s Everything is Obvious is part of an interesting series of articles Ryan has been writing on how developments in AI and computing affect legal areas such as patent law. In an earlier article, I Think, Therefore I Invent, he provocatively argued that creative computers should be considered inventors for patent and copyright purposes. Here, he focuses on how these creative computers should affect one of the most important legal standards in patent law: the requirement that an invention not be obvious to a person having ordinary skill in the art.
Ryan’s definition of “creative computers” is purposefully broad. The existing creative computers he discusses are all narrow or specific AI systems that are programmed to solve particular problems, like systems from the 1980s that were programmed to design new microchips based on certain rules and IBM’s Watson, which is currently identifying novel drug targets for pharmaceutical research. And Ryan thinks patent law already needs to change in response to these developments. But I think his primary concern is the coming of artificial general intelligence that surpasses human inventors.
I have some skepticism that computers will really be rivaling human inventors in what patent law recognizes as inventorship anytime soon, as opposed to merely aiding human inventors, much as many other technologies do. I am not going to focus on this because many in this audience are more qualified than I am to opine on the capabilities of inventive machines, but I will note that you should make sure you’re asking the right question. In particular, being an “inventor” for patent purposes is very different from standards of authorship for scientific papers: all the credit goes to those who come up with the initial idea, known as “conception,” while the hard work of figuring out whether it works, known as “reduction to practice,” isn’t enough to get listed on the patent. And there are prominent examples of inventions for which the lead author on the relevant scientific paper doesn’t qualify as an inventor on the patent. So the legal question under current doctrine is whether computers are really involved in that conception stage.
I’ll let you all ask Ryan about that during the Q&A, since I think it is more interesting for me to assume Ryan is right and that we either do or will have computers meeting that standard of independent conception, and to think about what that means for patent law.
For those who aren’t familiar with patent law, one of the most important legal hurdles to getting a patent is the requirement that the invention be “nonobvious.” The basic legal test is (1) figure out all the “prior art”—everything that’s already been done related to the invention, (2) figure out what makes the invention different from the prior art, and (3) decide whether those differences would be obvious to an ordinary researcher in that field—a hypothetical person, like the “reasonable person” in tort law, but in patents it’s the “person having ordinary skill in the art.” Note that this test doesn’t tell us anything about what it really means for the differences to be “obvious” to that ordinary researcher, and this continues to be the key point of disagreement on how the test should be implemented.
I’ll come back to this because I think it is something Ryan needs to grapple with, but first let’s think about Ryan’s proposal: He argues that the right doctrinal lever to deal with creative computers is the definition of the person of ordinary skill. To deal with the fact that human inventors are already relying on artificial intelligence for many tasks, he suggests that the factors considered when determining the level of ordinary skill should include “technologies used by active workers.” That means that once the standard means of research in a field includes creative machines, the test would judge obviousness from the perspective of a researcher using such a machine. And when human inventors are replaced by machines, the person of ordinary skill should be replaced by an inventive machine.
To facilitate this test, Ryan proposes a new requirement that patent applicants disclose when a machine contributes to conception of an invention.
He sees two key benefits: (1) Increased predictability because the issue would become whether machines could reproduce the subject matter of a patent application. (2) Substantively better results because the standard would be raised to account for increased machine invention. And he notes that if we get to the point that every invention is easily reproduced by commonly used computers, then everything will be obvious under this standard, and that’s a good thing because patents will no longer be needed to incentivize innovation.
I’m not sure that either benefit is really clear. On predictability, as Ryan notes, there are many ways his test could be implemented in practice. One could make somewhat arbitrary decisions to constrain the test, such as giving a specific AI system like Watson a particular problem to solve along with the relevant prior art, and seeing whether it can come up with the invention within a specified amount of time, such as 24 hours. There would be problems inherent in all of these judgments, and I think there are good reasons that we don’t currently assess obviousness from the perspective of some actual human. But more importantly, the outcome would still often depend on how you define the problem to be solved that you feed into the machine, and I’m not sure how you take out the human judgment involved in that decision. For many patentable inventions, the difficult part of the inventive process is actually coming up with the problem, and I don’t think that changing the perspective from which we then assess that problem makes it easier to figure out how patent law should deal with this kind of problem-identifying creativity.
And on the substantive standard, while I completely agree with Ryan that as computers change the costs of invention, the obviousness standard should adjust to deal with this, I’m not sure that changing the perspective from which we assess obviousness from an ordinary researcher to an ordinary inventive machine is the right doctrinal lever. Under current doctrine, there aren’t actually many cases that seem to turn on how skilled the hypothetical person having ordinary skill in the art really is—and I think this gets back to the issue of what it really means for an invention to be “obvious” to that researcher. So as a practical matter, I don’t think that asking courts to assess obviousness from the perspective of a computer that is even more skilled than the ordinary human researcher will have that much effect.
But I think Ryan’s broader point that computer inventors should affect how we think about obviousness is absolutely right, and that it actually helps illustrate a longstanding tension in obviousness doctrine and scholarship. As a formal legal matter, the obviousness inquiry is currently based on a cognitive approach, focused on the degree of cognitive difficulty in conceiving the invention. But this approach doesn’t provide any normative foundation, much less a clear doctrinal test, for making this judgment. To get that normative foundation, scholars typically turn to the economic rationale underlying the doctrine: the nonobviousness requirement is supposed to weed out inventions that we would get anyway even if we didn’t have a patent system, such that the costs of granting a patent are greater than the incentive benefit they provide.
Some scholars have argued that courts should be more explicit that this economic inquiry is what really underlies the test: it is the right standard as a policy matter, it can fit within the statutory language, it is not actually harder to implement than the current test, and it makes clearer that courts and patent examiners should consider inputs like alternative incentive mechanisms and outcomes like widespread independent invention when making this assessment. I think that’s right, and the rise of creative computers heightens this tension even more. But even if the economic inquiry is never made explicit, I think it is implicit in what courts do, and the current obviousness test has the flexibility to adapt to this issue.
Ryan may be right that some day, as AI improves, everything will be obvious in the economic sense that patents are not needed to efficiently incentivize any innovations. This is closely related to my colleague Mark Lemley’s argument in his 2015 article IP in a World Without Scarcity, in which he notes that IP may be less necessary in a world where creation, reproduction, and distribution are cheap. But both Mark and Ryan note that patents may still be needed in some areas, such as pharmaceuticals, and I think that’s right—pharma is actually a perfect example to illustrate the tension between the cognitive and economic approaches and how this won’t be solved by changing our standard to use the perspective of an inventive machine.
Many promising drug candidates are already obvious from a cognitive perspective, and perhaps will be more so as computers get better at early-stage drug development. But patents are still vitally necessary in the current pharmaceutical industry not for coming up with these inventions in the first place, but for giving pharma companies confidence that after they run clinical trials, they will be able to recoup those investments through above-marginal-cost pricing. Ryan notes that there are many other policy tools for incentivizing innovation, and in theory these certainly could be used to replace patents in areas where they are currently economically necessary and yet cognitively trivial. But I don’t know that throwing patents out of our innovation policy toolkit is the right approach.
In any case, how the patent system should adapt to AI is obviously a super important problem, and I’m glad we have someone as creative and thoughtful as Ryan focusing on it, since we don’t yet have AI that is capable of thinking through these kinds of problems.