Tuesday, December 22, 2015

Burk: Is Dolly patentable subject matter in light of Alice?

Dan Burk's work should already be familiar to those who follow patentable subject matter debates (see, e.g., here, here, and here). In a new essay, Dolly and Alice, he questions whether the Federal Circuit's May 2014 decision in In re Roslin, which held that clones such as Dolly are not patentable subject matter, should have come out differently under the Supreme Court's June 2014 decision in Alice v. CLS Bank. Short answer: yes.

Burk does not have kind words for either the Federal Circuit or the Supreme Court, and he reiterates his prior criticism of developments like the gDNA/cDNA distinction in Myriad. His analysis of how Roslin fares under Alice begins on p. 11 of the current draft:
[E]ven assuming that the cloned sheep failed the first prong of the Alice test, the analysis would then move to the second prong to look for an "inventive concept" that takes the claimed invention beyond an attempt to merely capture the prohibited category of subject matter identified in the first step. . . . The Roslin patent claims surely entail such an inventive concept in the method of creating the sheep. The claims recite "clones," which the specification discloses were produced by a novel method that is universally acknowledged to have been a highly significant and difficult advance in reproductive technology—an "inventive concept" if there ever was one . . . [which] was not achieved via conventional, routine, or readily available techniques . . . .
But while Burk thinks Roslin might have benefited from the Alice framework, he also contends that this exercise demonstrates the confusion Alice creates across a range of doctrines, and particularly for product-by-process claims. He concludes by drawing an interesting parallel to the old Durden problem of how the novelty of a starting material affects the patentability of a process, and he expresses skepticism that there is any coherent way out; rather, he thinks Alice "leaves unsettled questions that will haunt us for years to come."

Tuesday, December 15, 2015

3 New Copyright Articles: Buccafusco, Bell & Parchomovsky, Grimmelmann

My own scholarship and scholarly reading focus most heavily on patent law, but I've recently come across a few interesting copyright papers that seem worth highlighting:
  • Christopher Buccafusco, A Theory of Copyright Authorship – Argues that "authorship involves the intentional creation of mental effects in an audience," which expands copyrightability to gardens, cuisine, and tactile works, but withdraws it from aspects of photographs, taxonomies, and computer programs.
  • Abraham Bell & Gideon Parchomovsky, The Dual-Grant Theory of Fair Use – Argues that rather than addressing market failure, fair use calibrates the allocation of uses among authors and the public. A prima facie finding of fair use in certain categories (such as political speech) could only be defeated by showing the use would eliminate sufficient incentives for creation.
  • James Grimmelmann, There's No Such Thing as a Computer-Authored Work – And It's a Good Thing, Too – "Treating computers as authors for copyright purposes is a non-solution to a non-problem. It is a non-solution because unless and until computer programs can qualify as persons in life and law, it does no practical good to call them 'authors' when someone else will end up owning the copyright anyway. And it responds to a non-problem because there is nothing actually distinctive about computer-generated works."
Are there other copyright pieces posted this fall that I should take a look at?

Update: For readers not on Twitter, Chris Buccafusco added some additional suggestions.

Tuesday, December 8, 2015

Bernard Chao on Horizontal Innovation and Interface Patents

Bernard Chao has posted an interesting new paper, Horizontal Innovation and Interface Patents (forthcoming in the Wisconsin Law Review), on inventions whose value comes merely from compatibility rather than from improvements to existing technology. And I'm grateful to him for writing an abstract that concisely summarizes the point of the article:
Scholars understandably devote a great deal of effort to studying how well patent law works to incentivize the most important inventions. After all, these inventions form the foundation of our new technological age. But very little time is spent focusing on the other end of the spectrum, inventions that are no better than what the public already has. At first blush, studying such “horizontal” innovation seems pointless. But this inquiry actually reveals much about how patents can be used in unintended, and arguably, anticompetitive ways.
This issue has roots in one unintuitive aspect of patent law. Despite the law’s goal of promoting innovation, patents can be obtained on inventions that are no better than existing technology. Such patents might appear worthless, but companies regularly obtain these patents to cover interfaces. That is because interface patents actually derive value from two distinct characteristics. First, they can have “innovation value” that is based on how much better the patented interface is than prior technology. Second, interface patents can also have “compatibility value.” In other words, the patented technology is often needed to make products operate (i.e. compatible) with a particular interface. In practical terms, this means that an interface patent that is not innovative can still give a company the ability to foreclose competition.
This undesirable result is a consequence of how patent law has structured its remedies. Under current law, recoveries implicitly include both innovation and compatibility values. This Article argues that the law should change its remedies to exclude the latter kind of recovery. This proposal has two benefits. It would eliminate wasteful patents on horizontal technology. Second, and more importantly, the value of all interface patents would be better aligned with the goals of the patent system. To achieve these outcomes, this Article proposes changes to the standards for awarding injunctions, lost profits and reasonable royalties.
The article covers examples ranging from razor/handle interfaces to Apple's patented Lightning interface, so it is a fun read. And it also illustrates what seems like an increasing trend in patent scholarship, in which authors turn to remedies as the optimal policy tool for effecting their desired changes.

Wednesday, December 2, 2015

Sampat & Williams on the Effect of Gene Patents on Follow-on Innovation

Bhaven Sampat (Columbia Public Health) and Heidi Williams (MIT Econ) are two economists whose work on innovation is always worth reading. I've discussed a number of their papers before (here, here, here, here, and here), and Williams is now a certified genius. They've posted a new paper, How Do Patents Affect Follow-On Innovation? Evidence from the Human Genome, which is an important follow-up to Williams's prior work on gene patents. Here is the abstract:
We investigate whether patents on human genes have affected follow-on scientific research and product development. Using administrative data on successful and unsuccessful patent applications submitted to the US Patent and Trademark Office, we link the exact gene sequences claimed in each application with data measuring follow-on scientific research and commercial investments. Using this data, we document novel evidence of selection into patenting: patented genes appear more valuable — prior to being patented — than non-patented genes. This evidence of selection motivates two quasi-experimental approaches, both of which suggest that on average gene patents have had no effect on follow-on innovation.
Their second empirical design is particularly clever: they use the leniency of each application's assigned patent examiner as an instrumental variable for whether the application is granted a patent. Highly recommended.
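For readers unfamiliar with examiner-leniency designs, here is a minimal sketch of the underlying two-stage least squares logic on simulated data. Everything in it is hypothetical (the variable names, the leave-one-out leniency measure, the simulated magnitudes); it illustrates the general technique, not the authors' actual specification or data.

```python
# Hypothetical illustration of an examiner-leniency instrumental variable design.
# Simulated data only; names and magnitudes are made up for exposition.
import numpy as np

rng = np.random.default_rng(0)
n_apps, n_examiners = 5000, 200

examiner = rng.integers(0, n_examiners, n_apps)    # quasi-random examiner assignment
leniency = rng.uniform(0.2, 0.8, n_examiners)      # examiners differ in strictness
quality = rng.normal(size=n_apps)                  # unobserved value of the claimed gene

# Grant decisions depend on examiner leniency and (endogenously) on gene quality.
granted = (rng.uniform(size=n_apps) < leniency[examiner] + 0.15 * quality).astype(float)

# Follow-on research depends on quality but, by construction, not on the patent itself.
followon = quality + rng.normal(size=n_apps)

# Instrument: leave-one-out grant rate of the assigned examiner.
grant_sum = np.bincount(examiner, weights=granted, minlength=n_examiners)
app_count = np.bincount(examiner, minlength=n_examiners)
loo_leniency = (grant_sum[examiner] - granted) / (app_count[examiner] - 1)

def ols(x, y):
    """Least-squares coefficients [intercept, slope] of y on a single regressor x."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased: patented genes were selected for higher quality.
beta_ols = ols(granted, followon)

# 2SLS: first stage predicts grants from leniency; second stage uses the fitted values.
first_stage = ols(loo_leniency, granted)
granted_hat = first_stage[0] + first_stage[1] * loo_leniency
beta_iv = ols(granted_hat, followon)

print(f"OLS estimate of the patent effect:  {beta_ols[1]:+.3f}")  # biased upward by selection
print(f"2SLS estimate of the patent effect: {beta_iv[1]:+.3f}")   # near zero, the true effect here
```

The intuition is that which examiner reviews an application is plausibly unrelated to the application's underlying value, but examiners differ in how readily they grant, so examiner leniency shifts the probability of a grant without directly affecting follow-on research.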