Monday, July 16, 2018

What do Generic Drug Patent Settlements Say about Patent Quality?

An interesting study of Orange Book patents challenged both in Hatch-Waxman litigation and in inter partes review caught my eye this week, but perhaps not for the ordinary reasons. One of the hot topics in drug patent challenges today is reverse payments: settlements in which the patentee pays the generic to drop its challenge. The Supreme Court has ruled that these payments can constitute antitrust violations. Though the drug companies offer justifications, I'll admit that I've always been skeptical of these types of payments.

One of the key questions is whether the patent was going to survive. Most seem to assume that if a company pays to settle, then the patent was likely going to be invalidated. That's where the draft, Maintaining the Balance: An Empirical Study on Inter Partes Review Outcomes of Orange Book-Listed Drug Patents and its Effect on Hatch-Waxman Litigation, by Tulip Mahaseth (a recent Northwestern Law grad) comes in. Here is the abstract from SSRN:
The Hatch-Waxman Act intended to strike a delicate balance between encouraging pioneer drug innovation and promoting market entry of affordable generic versions of pioneer drugs by providing a streamlined pathway to challenge validity of Orange Book patents in federal district courts. In 2012, the America Invents Act introduced Inter Partes Review (IPR) proceedings which provide a faster, cheaper pathway to challenge Orange Book patents than Hatch-Waxman district court litigation. IPRs also have a lower evidentiary burden of proof and broader claim construction standard, which should make it easier, in theory, to obtain patent invalidation in IPRs as compared to Hatch-Waxman litigation. This empirical study on IPR outcomes of Orange Book patents in the past six years shows that both generic manufacturers and patent owners obtain more favorable final decisions in IPRs as compared to their Hatch-Waxman litigation outcomes because the rate of settlement in IPRs is much lower than in Hatch-Waxman litigation. Moreover, generic manufacturers do not appear to be targeting Orange Book patents in IPRs during their drug exclusivity period. Only 2 out of more than 400 IPRs against Orange Book patents were filed by generic petitioners during the patents’ New Chemical Entity exclusivity period. About 90% of the 230 Orange Book patents challenged in IPR proceedings were also challenged in Hatch-Waxman litigation. It is likely that generic manufacturers are not deterred from Hatch-Waxman litigation because of the lucrative 180-day exclusivity period, which gives the first generic filer 180 days to exclusively market their generic version without competition from other generics when the Orange Book drug patent is successfully invalidated in a subsequent district court proceeding. Therefore, IPR proceedings do not appear to be disrupting the delicate balance sought by the Hatch-Waxman Act. 
Instead, the IPR process has provided generic manufacturers a dual track option for challenging Orange Book patents by initiating Hatch-Waxman litigation in district courts and also pursuing patent invalidity in IPRs before the Patent Trial and Appeal Board, which has reduced rate of settlements resulting in more patents being upheld and invalidated.
There's a lot of great data in this paper, comparing Orange Book IPRs with non-Orange Book IPRs, including comparison of win rates and settlement rates.

But I want to focus on one seemingly minor point: as the number of IPRs has increased, the rate of settlement has decreased. And, more important, the decreasing rate of settlement has led to more invalidation and more affirmance of patents.

This result gives a nice window into how we might view settlements. Traditional Priest-Klein analysis predicts exactly this: the cases that once settled were the close calls, so when they proceed to judgment they should split roughly 50/50 between invalidation and affirmance. Proving this is harder, though, and this data set would allow for a nice difference-in-differences analysis in future work.
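A difference-in-differences design here would compare invalidation rates before and after the settlement-rate decline, for Orange Book IPRs against a control group such as non-Orange Book IPRs. A minimal sketch of the arithmetic, with entirely made-up rates (none of these figures come from the paper):

```python
# Difference-in-differences on invalidation rates (all numbers hypothetical).
# Keys: (group, period) -> invalidation rate among resolved IPRs.
rates = {
    ("orange_book", "early"): 0.30,
    ("orange_book", "late"): 0.45,
    ("non_orange_book", "early"): 0.35,
    ("non_orange_book", "late"): 0.40,
}

def diff_in_diff(rates):
    """Change in the treated group minus change in the control group."""
    treated = rates[("orange_book", "late")] - rates[("orange_book", "early")]
    control = rates[("non_orange_book", "late")] - rates[("non_orange_book", "early")]
    return treated - control

print(round(diff_in_diff(rates), 2))  # → 0.1
```

The point of the subtraction is to net out any trend common to both groups, isolating the change attributable to the drop in Orange Book settlements.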

Additionally, a split among outcomes implies that the settlements were not necessarily driven by the patentee's belief that the patent was at risk. If anticompetitive settlements were ruling the day, I would have predicted that most of the (recent) non-settlements would have resulted in patent invalidation. Then again, it is possible that a 50% chance was risky enough to merit a reverse payment settlement in the past. Regardless of how one comes out on this issue, this study provides some helpful details for the argument.

Tuesday, July 10, 2018

How did TC Heartland Affect Firm Value?

In Recalibrating Patent Venue, Colleen Chien and I did a nationwide study of forum shopping in patent cases (shocker - everybody did it, and not just in Texas), and predicted that many patent cases would shift from the Eastern District to the District of Delaware. And, lo, it has come to pass. Delaware is super busy. This has been good for us at Villanova (only 30 miles away from the court), as our students are getting some great patent experience in externships and internships.

But how much did firms value not being sued in Texas? The TC Heartland case is a clear shock event, so an event study can measure this. In Will Delaware Be Different? An Empirical Study of TC Heartland and the Shift to Defendant Choice of Venue, Ofer Eldar (Duke Law) and Neel Sukhatme (Georgetown Law) examine this question. The article is forthcoming in Cornell Law Review and a draft is on SSRN. Here is the abstract:
Why do some venues evolve into litigation havens while others do not? Venues might compete for litigation for various reasons, such as enhancing their judges’ prestige and increasing revenues for the local bar. This competition is framed by the party that chooses the venue. Whether plaintiffs or defendants primarily choose venue is crucial because, we argue, the two scenarios are not symmetrical.
The Supreme Court’s recent decision in TC Heartland v. Kraft Foods illustrates this dynamic. There, the Court effectively shifted venue choice in many patent infringement cases from plaintiffs to corporate defendants. We use TC Heartland to empirically measure the impact of this shift using an event study, which measures how the stock market reacted to the decision. We find that likely targets of “patent trolls”— entities that own and assert patented inventions but do not otherwise use them—saw their company valuations increase the most due to TC Heartland. This effect is particularly pronounced for Delaware-incorporated firms. Our results match litigation trends since TC Heartland, as new cases have dramatically shifted to the District of Delaware from the Eastern District of Texas, previously the most popular venue for infringement actions.
Why do investors believe Delaware will do better than Texas in curbing patent troll litigation? Unlike Texas, Delaware’s economy depends on attracting large businesses that pay high incorporation fees; it is thus less likely to encourage disruptive litigation and jeopardize its privileged position in corporate law. More broadly, we explain why giving defendants more control over venue can counterbalance judges’ incentives to increase their influence by encouraging excessive litigation. Drawing on Delaware’s approach to corporate litigation and bankruptcy proceedings, we argue that Delaware will compete for patent litigation through an expert judiciary and well- developed case law that balances both patentee and defendant interests.
As I discuss below, I have a like/dislike reaction to this paper.
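For readers unfamiliar with the method, an event study estimates a firm's "normal" relationship to the market over a window before the event, then measures how far realized returns deviate from that baseline around the event date. A minimal market-model sketch, with fabricated return series (nothing below comes from the paper's data):

```python
import numpy as np

# Market-model event study (all return series fabricated for illustration).
rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 120)           # daily market returns, estimation window
firm = 0.8 * market + rng.normal(0, 0.005, 120)  # firm tracks market with beta ~0.8

# Estimate alpha and beta by OLS over the estimation window.
beta, alpha = np.polyfit(market, firm, 1)

# Event window: three days of observed returns around the decision.
market_event = np.array([0.001, -0.002, 0.0015])
firm_event = np.array([0.010, 0.004, 0.006])     # firm rises more than the model predicts

expected = alpha + beta * market_event           # model-predicted "normal" returns
abnormal = firm_event - expected
car = abnormal.sum()                             # cumulative abnormal return
print(f"CAR over event window: {car:.4f}")
```

A positive cumulative abnormal return for likely patent-troll targets after TC Heartland is the kind of result the authors report; the sketch just shows where that number comes from.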

Monday, June 25, 2018

The False Hope of WesternGeco

The Supreme Court issued its opinion in WesternGeco last week. The holding (7-2) was relatively straightforward: if an infringer exports a component in violation of 35 USC 271(f)(2) (that is, the component has no substantial noninfringing use), then the presumption of extraterritoriality will not bar damages that occur overseas. And that's about all it ruled. It left harder questions, like proximate cause, for another day.

I spent the end of the week and weekend reading commentary on the case (and tussling a bit on Facebook and Twitter). A couple blog posts worth checking out are Tim Holbrook's and Tom Cotter's. I had just a few thoughts to add.

Monday, June 18, 2018

Evidence on Patent Disclosure via Depository Libraries

When I first started practice, the place to go for patents was the Patent Depository Library at the Sunnyvale Public Library. Not only did they have copies of all the patents, they had other disclosures, like the IBM Technical Disclosure series. For those who wonder whether people actually read patents, I can attest that I never went to that library and found it empty. Many people, mostly individual inventors who did not want to pay for Delphion or some other electronic service, went there to look at the prior art. Sadly, the library ceased to be at the end of 2017. Widespread free availability on the Internet, plus a new USPTO regional office in San Jose, siphoned off all the traffic.

Rather than rely on my anecdotal evidence, a new NBER paper examines the Patent Depository Library program for evidence on the value of patent disclosure. Jeffrey Furman (Boston U. Strategy & Policy Dept), Markus Nagler, and Martin Watzinger (both of Ludwig Maximilian U. in Munich) have posted Disclosure and Subsequent Innovation: Evidence from the Patent Depository Library Program to NBER's website (sorry, it's a paywall unless you've got .gov or .edu rights). The abstract is here:
How important is information disclosure through patents for subsequent innovation? Although disclosure is regarded as essential to the functioning of the patent system, legal scholars have expressed considerable skepticism about its value in practice. To adjudicate this issue, we examine the expansion of the USPTO Patent and Trademark Depository Library system between 1975 to 1997. Whereas the exclusion rights associated with patents are national in scope, the opening of these patent libraries during the pre-Internet era yielded regional variation in the costs to access the technical information (prior art) disclosed in patent documents. We find that after a patent library opens, local patenting increases by 17% relative to control regions that have Federal Depository Libraries. A number of additional analyses suggest that the disclosure of technical information in the patent documents is the mechanism underlying this boost in patenting: the response to patent libraries is significant and of important magnitude among young companies, library opening induces local inventors to cite more geographically distant and more technologically diverse prior art, and the library boost ceases to be present after the introduction of the Internet. We find that library opening is also associated with an increase in local business formation and job creation, which suggests that the impact of libraries is not limited to patenting outcomes. Taken together, our analyses provide evidence that the information disclosed in patent prior art plays an important role in supporting cumulative innovation.
The crux of the study is the match to other, similar areas with Federal Depository (but not patent) Libraries. The authors acknowledge that the opening of a patent library might well be a leading indicator of expected future patenting, but they discount this possibility by arguing that the libraries would have had to somehow predict the exact year of increased patenting, then apply in advance of that date and get approved just in time. The odds of this seem low, especially when the results are localized to within 15 miles of each library (and no further).

The first core finding, that patenting increased, is normatively ambiguous. The authors discuss enhanced innovation, but equally likely alternatives are that people simply got excited about patenting or that innovation already occurring was more easily patented. That said, they find no drop in patent quality, which implies that the additional patenting wasn't simply frivolous.

The second finding is more important: that the types of citations and disclosures changed (and that those changes disappeared when patents were more readily available on the Internet). This finding implies that somebody was reading these patents. The question is who. A followup study looking at how the makeup of inventorship changed would be interesting. Were the additional grants solo inventors or large companies? Who used these libraries?

Even without answering this question, this study was both useful and interesting, as well as a bit nostalgic.

Monday, June 11, 2018

Measuring Patent Thickets

Those interested in the patent system have long complained of patent thickets as a barrier to efficient production of new products and services. The more patents in an area, the argument goes, the harder it is to enter. There are several studies that attempt to measure the effect of patent thickets, with some studies arguing that thickets can ease private ordering. I'd like to briefly point out another (new) one. Charles deGrazia (U. London, Royal Holloway College), Jesse Frumkin, and Nicholas Pairolero (both of the USPTO) have posted a new draft on SSRN, called Embracing Technological Similarity for the Measurement of Complexity and Patent Thickets. Here is the abstract:
Clear and well-defined patent rights can incentivize innovation by granting monopoly rights to the inventor for a limited period of time in exchange for public disclosure of the invention. However, when a product draws from intellectual property held across multiple firms (including fragmented intellectual property or patent thickets), contracting failures may lead to suboptimal economic outcomes (Shapiro 2000). Researchers have developed several measures to gauge the extent and impact of patent thickets. This paper contributes to that literature by proposing a new measure of patent thickets that incorporates patent claim similarity to more precisely identify technological similarity, which is shown to increase the information contained in the measurement of patent thickets. Further, the measure is universally computable for all patent systems. These advantages will enable more accurate measurement and allow for novel economic research on technological complexity, fragmentation in intellectual property, and patent thickets within and across all patent jurisdictions.
The authors use natural language processing to measure overlap in patent claims (and just the claims, arguing that's where the thicket lies) for both backward and forward citations in "triads" - patents that all cite each other. Using this methodology, they compare their results to other attempts to quantify complexity and find greater overlap in more complex technologies - a sign that their method is more accurate. Finally, they validate their results by regressing thicket measures against examination characteristics, showing that the examination outcomes one would expect from thickets (e.g., longer pendency) are correlated with greater thicket measures.
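The paper's actual NLP pipeline is more sophisticated, but the core move - scoring pairwise overlap in claim language - can be illustrated with a simple bag-of-words cosine similarity. The claim fragments below are invented for illustration:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two claim texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented claim fragments: two overlapping claims and one unrelated claim.
claim1 = "a battery electrode comprising a lithium oxide layer"
claim2 = "a battery electrode comprising a cobalt oxide coating"
claim3 = "a method of transmitting packets over a wireless network"

# Claims in the same thicket score much higher than unrelated claims.
assert cosine_similarity(claim1, claim2) > cosine_similarity(claim1, claim3)
```

In practice one would weight terms (e.g., TF-IDF) and compute these scores only within citation triads, but the intuition is the same: high claim-text similarity among mutually citing patents signals a thicket.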

This is an interesting study. The use of citations (versus technological class) will always be a limitation because not every patent in a thicket winds up being cited by others. However, the method used here (using forward and backward citations) is better than the alternative, which is using only blocking prior art.

The real question is what to do with all this information. Can it be applied beyond mere study of which areas have thickets? I suppose it could be helpful for portfolio purchases, and maybe to help decisions about whether to enter into a new technology.

Wednesday, June 6, 2018

A Couple Thoughts on Apple v. Samsung (part ?100?)

I've done a few interviews about the latest Apple v. Samsung design patent jury verdict, but journalistic space means I only get a couple sentences in. So, I thought I would lay out a couple points I see as important. We'll see if they hold up as predictions.

There's been a lot written about the case, so I won't rehash the epic story. Here's the short version. Design patent law awards the winning plaintiff all of the infringer's profits on the infringing article of manufacture. The Supreme Court ruled (reversing about 100 years of contrary practice) that the article of manufacture could be less than the entire accused device offered for sale. Because the original jury instructions did not consider this, the Court remanded for a determination of what the infringing article of manufacture was in this case (the design patents covered the shape of the phone and the default screen). The Federal Circuit remanded in turn, and the District Court decided that, yes, the original jury instructions were defective and ordered a retrial on damages.

The District Court adopted the Solicitor General's suggested test to determine what the article of manufacture was, determined that under that test it was a disputed fact question, and sent it to the jury. Apple asked for $1 billion. Samsung asked for $28 million. The jury awarded $533 million, roughly $100 million more than the award that stood before the Supreme Court ruled.

After the trial, one or more jurors stated that the entire phone was the article of manufacture because you can't get the screen without the rest of the phone. I suppose the half billion reflects deductions for expenses that Apple argued should not be deducted.

So, here are my points:

Tuesday, May 29, 2018

New Ways to Determine Patent Novelty

Jonathan Ashtor (now at Paul, Weiss) has completed a few quality empirical studies in the past. His new foray is a new and creative way to determine novelty. It's on SSRN, and the abstract is here:
I construct a measure of patent novelty based on linguistic analysis of claim text. Specifically, I employ advanced computational linguistic techniques to analyze the claims of all U.S. patents issued from 1976-2014, nearly 5 million patents in total. I use the resulting model to measure the similarity of each patented invention to all others in its technology-temporal cohort. Then, I validate the resulting measure using multiple established proxies for novelty, as well as actual USPTO Office Action rejections on grounds of lack of novelty or obviousness. I also analyze a set of pioneering patents and find that they have substantially and significantly higher novelty measures than other patents.
Using this measure, I study the relationship of novelty to patent value and cumulative innovation. I find significant correlations between novelty and patent value, as measured by returns to firm innovation and stock market responses to patent issuance. I also find strong correlations between novelty and cumulative innovation, as measured by forward citations. Furthermore, I find that patents of greater novelty give rise to more important citations, as they are more frequently cited in Office Action rejections of future patents for lack of novelty or obviousness. I also investigate how novelty relates to the USPTO examination process. In particular, I find that novelty is an inherent feature of a patented invention, which can be measured based on the claim text of either an issued patent or an early-stage patent application.
Next, I use this measure to analyze the characteristics of novel patents. I find that novelty is an effective dimension along which to stratify patents, as key patent characteristics vary significantly across the distribution of novelty measures. Moreover, novel patents are closely linked to basic scientific research, as measured by public grant funding and citations to non-patent scientific literature.
Finally, I use this measure to observe trends in novelty over a forty-year timespan of American innovation. This reveals a noticeable, albeit slight, trend in novelty in certain technology fields in recent years, which corresponds to technological maturation in those sectors.
I'm skeptical of measures of patent quality based on claim language alone, but I like how he has used office actions to validate the measure. People will have to study this to see how it holds up, but I think it's an interesting and creative first step toward objectively judging quality.
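Ashtor's model is far richer, but the basic move - scoring a patent's novelty as its linguistic distance from its technology-temporal cohort - can be sketched as one minus the maximum similarity to any cohort member. All claim texts below are invented, and Jaccard similarity stands in for the paper's more advanced techniques:

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two claim texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def novelty(claim, cohort):
    """Novelty = 1 - max similarity to any claim in the technology-temporal cohort."""
    return 1.0 - max(jaccard(claim, other) for other in cohort)

# Invented cohort of prior claims in one technology area and time period.
cohort = [
    "a compound for treating inflammation comprising an aspirin derivative",
    "a pharmaceutical composition comprising an aspirin derivative and a binder",
]
incremental = "a compound for treating inflammation comprising a modified aspirin derivative"
pioneering = "a gene editing system comprising a guide rna and a cas9 nuclease"

# A pioneering claim sits far from its cohort; an incremental one sits close.
assert novelty(pioneering, cohort) > novelty(incremental, cohort)
```

The validation step in the paper then checks whether low-scoring patents are the ones that actually draw novelty and obviousness rejections in office actions.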

Thursday, May 24, 2018

Brian Soucek on Aesthetic Judgment in Law

As noted in my last post, one of the most quoted lines in copyright law is from Justice Holmes's 1903 opinion in Bleistein: "It would be a dangerous undertaking for persons trained only to the law to constitute themselves final judges of the worth of pictorial illustrations." This aesthetic neutrality principle has found purchase far beyond copyright law. But in a compelling new article, Aesthetic Judgment in Law, Professor Brian Soucek challenges this dogma: "Almost no one thinks the government should decide what counts as art or what has aesthetic value. But the government often does so, and often, it should." Soucek's article may have flown under the radar for most IP scholars because he does not typically focus on copyright law, but it is well worth a look.

Tuesday, May 22, 2018

Examining the Role of Patents in Firm Financing

Just this morning, an interesting new literature review arrived in my inbox via SSRN. In Is There a Role for Patents in the Financing of New Innovative Firms?, Bronwyn Hall (Berkeley economics) provides an extremely thorough, extremely helpful literature review on the subject. It's on SSRN, and the abstract is here:
It is argued by many that one of the benefits of the patent system is that it creates a property right to invention that enables firms to obtain financing for the development of that invention. In this paper, I review the reasons why ownership of knowledge assets might be useful in attracting finance and then survey the empirical evidence on patent ownership and its impact on the ability of firms to obtain further financing at different stages of their development, both starting up and after becoming established. Studies that attempt to separately identify the role of patent rights and the underlying quality of the associated innovation(s) will be emphasized, although these are rather rare.
This paper caught my eye for a few reasons.

Sunday, May 20, 2018

Barton Beebe on Bleistein

Barton Beebe’s recent article, Bleistein, the Problem of Aesthetic Progress, and the Making of American Copyright Law, was already highlighted on this blog by Shyamkrishna Balganesh, but I wanted to add a few thoughts of my own because I really enjoyed reading it—it is a richly layered dive into the intellectual history of U.S. copyright law, and a wonderful piece to savor on a weekend.

In one sense, this is an article about one case’s role in U.S. copyright law, but it uses that case to tackle a fundamental question of copyright theory: what does it mean “to promote the Progress”? Beebe’s goal is not just to correct longstanding misunderstandings of Bleistein; as I understand it, his real point is that we can and should “assess[] aesthetic progress according to the simple propositions that aesthetic labor in itself is its own reward and that the facilitation of more such labor represents progress.” He thinks Justice Holmes’s invocation of “personality” in Bleistein represents a normatively attractive “third way” between judges assessing aesthetic merit and simply leaving this judgment to the market—that aesthetic progress is shown “by the mere fact that someone was willing to make the work, either for sale or otherwise, and that in making it, someone had invested one’s personality in the work.”

This personality-centered view of copyright seems similar to the Hegelian personality theory that was drawn into IP by Peggy Radin and elaborated by Justin Hughes, though at times it seems more like Lockean theories based on the author’s labor. I think he could have done more to explain how his theory relates to this prior literature, and also how it’s different from a utilitarian theory that recognizes the value creators get from creating (à la Jeanne Fromer’s Expressive Incentives). In any case, I think Beebe’s take is interesting, particularly with the connection he draws to John Dewey’s American pragmatist vision of aesthetic progress.

Wednesday, May 16, 2018

Mark McKenna: Trademark Counterfeiting And The Problem Of Inevitable Creep

One of my favorite events at Akron Law this past school year was hearing Professor Mark McKenna deliver the Oldham Lecture on his fascinating paper, Criminal Trademark Enforcement And The Problem Of Inevitable Creep.  The completed article, forthcoming in the Akron Law Review, is now available on SSRN.

The story, in McKenna's telling, is simple. There is a criminal remedy for trademark "counterfeiting" because, most people would agree, using an identical trademark for goods or services that are identical to the trademark owner's is an economically and morally worse act than ordinary trademark infringement. A modern-day example of this atrocious crime is the company that has been hawking dysfunctional "Philips Sonicare" toothbrush replacement heads on Amazon.com. Consumers buy them thinking they are the real thing, and are sorely disappointed when the brush heads do not work. But to deserve the classification as criminal, as a legal matter, the act of counterfeiting must be proven "beyond a reasonable doubt" to fit within the exact text of the relevant statute, the Trademark Counterfeiting Act. According to McKenna, courts have veered from the statutory text, and are instead expanding criminal counterfeiting beyond Congressional authorization. Thus, the article's reference in its title to "inevitable creep."

There are parts of this well-done article with which people are likely to agree, and other parts with which people are likely to strongly disagree.

Tuesday, May 15, 2018

A Focus on Innovators Instead of Innovations

I noticed this week that my sometimes co-author Colleen Chien (Santa Clara) has posted the abstract for a new paper called Innovators on SSRN:
This Article argues for a shift in how we view and use the patent system, to a way of understanding and cultivating innovators that patent, not just patented innovation, for three reasons. First, who is innovating and where has relevance to a myriad of current social and policy debates, including the participation of women and minorities in innovation, high-skilled immigration, and national competitiveness. Second, though largely overlooked by academics, America’s patent system has long been innovator-, not only innovation-driven, and scholarly engagement can improve the quality of relevant policymaking. Third, the application of new computational tools to open patent datasets makes it possible to more easily approximate and track salient details about innovators that patent - including the geography and settings in which they innovate and the personal demographic traits of innovators - enabling the tailoring and tracking of impacts of interventions on disparate groups of innovators. This Article details why and how to do so by applying novel empirical methods to profiling patentees, revealing broad shifts over the past four decades, and demonstrating—through three mini-case studies pertaining to diversity in the technology sector, the promotion of small and individual inventors, and innovation in medical diagnostic technologies—how improving our understanding of innovators can improve our promotion of innovation.
A draft isn't available yet, but hopefully one will be soon. My thoughts on this abstract, though, are "hear, hear!" I think that too little attention has been paid to the people who innovate. There is, to be sure, a rich history of historians and economic historians who have focused on these points. Zorina Khan, Naomi Lamoreaux, and Ken Sokoloff (z''l) come to mind. In law, Adam Mossoff has provided several case studies and Chris Beauchamp has done outstanding historical work highlighting innovators in their time. Mark Lemley leveraged some historical work in an article about simultaneous inventing, and others have looked at those same innovators to tell competing stories.

But much of this work is historical. Of late, as the abstract notes, it's all about the what: What inventions? What classes? What litigation? How many claims? I think people clamor for stories about innovators; I believe my most downloaded (by far) SSRN paper, Patent Troll Myths, resonated because it looked hard at the innovators - individuals to small entities to large companies. Dan Burk looks at innovators (but without data) in Do Patents Have Gender? 

I'm sure there are examples I'm not thinking of, but more data and analysis in this area would be welcome. Patents exist in service to their inventors, and so it makes sense to understand who those are to better understand whether patents are achieving their goals...or even what the goals are.

Tuesday, May 8, 2018

When Should Actors Get Copyrights in their Performances?

In Garcia v. Google, the en banc Ninth Circuit ruled that actors can basically never obtain a copyright in their performances. I was one of, say, ten people troubled by this decision. My IP academic colleagues will surely recall (too) long debates on the listserv on this issue. It turns out that another of the ten is Justin Hughes (Loyola LA), who has now written an article exploring when and why actors might reasonably claim copyright in a performance. The article, called Actors as Authors in American Copyright Law, is on SSRN and is forthcoming in the Connecticut Law Review. The abstract is here:
Among the different kinds of works eligible for copyright, audiovisual works are arguably the most complex, often involving scores of contributors – screenwriters, directors, actors, cinematographers, producers, set designers, costume designers, lighting technicians, etc. Some countries expressly recognize which categories of these contributors are entitled to legal protection, whether copyright, ‘neighboring rights,’ or statutory remuneration. But American copyright law does not. Given that the complex relationship among these creative contributors is usually governed by contract, there is – for such a large economic sector – relatively little case law on issues of authorship in audiovisual works. This is especially true on the question of dramatic performers as authors of audiovisual works.
This Article provides the first in-depth exploration of whether, when, and how actors are authors under American copyright law. After describing how case law, government views, and scholarly commentary support the conclusion that actors are authors, the Article turns to the strange saga of the Ninth Circuit’s 2015 en banc Garcia v. Google decision – a decision more about fraud and fatwas than clear conclusions on how copyright law applies to acting. The Article then uses some simple thought experiments to establish how dramatic performers generally meet both the Constitutional and statutory standard for “authorship.” Finally, the Article reviews the various filters that prevent actors-as-authors legal struggles and how, when all else fails, we can consider actors as joint authors of the audiovisual works embodying their dramatic performances.
The article presents a detailed and nuanced view of what it means to be an author, as well as a good discussion of the development of the law in this area. As the abstract alludes to, it turns out that much of our view of actor protections is based on how things have been done and expediency (e.g. work made for hire) rather than a detailed examination of authorship in film.

For example, it has always been unclear to me why we protect a musician's performance of pre-scripted music in a sound recording, but not an actor's performance of a pre-scripted movie in an audiovisual work. The statute allows for both protections, and the primary reason seems to be that we don't think it's right.

Similarly, joint authorship is very strange. In Aalmuhammed v. Lee, the Ninth Circuit ruled that a contributor of several elements and several scenes must be either a full joint author or nothing. There is no in-between. Like Garcia v. Google, this appears to be for expedience (and Hughes examines several other reasons), as well as a view that the only "work" can be the final work, and not each scene before it is pieced together - a legal fiction in the modern era of copyrightability in unpublished works.

This article explores much of the thinking I had at the time of Garcia v. Google, so those who favored that ruling will likely think it is as crazy as they thought I was. However, I think the article is still worth a read, if only to pinpoint where you think it goes astray, if it does.

Friday, May 4, 2018

Academic IP Conferences

Three years ago, I posted some general advice about academic IP conferences, including links to sites that compile IP conference information. Most of that advice still stands, but the definitive IP conference compilation site has moved to https://emptydoors.com/conferences, where it is maintained by Professor Saurabh Vishnubhakat—who was a wonderful member of a conference panel I moderated last week.

The most recent entry on Saurabh's site is interesting: the AALS Remedies Section has a call for papers for a program on IP remedies at the Jan. 2019 AALS Annual Meeting in New Orleans. Abstracts are due June 1.

Tuesday, May 1, 2018

Jake Sherkow Guest Post: What the CRISPR Patent Appeal Teaches Us About Legal Scholarship

Guest post by Professor Jake Sherkow of New York Law School, who is currently a Visiting Scholar at Stanford Law School.

Yesterday, the Federal Circuit heard oral argument in the dispute between the University of California and the Broad Institute over a set of fundamental patents covering CRISPR-Cas9, the revolutionary gene-editing technology. Lisa has been kind enough to invite me to write a few words here about the dispute, and I thought I’d take that generous opportunity to discuss two aspects of yesterday’s argument: the basics of the appeal and, given that this blog is devoted to legal scholarship about patent law, what the argument can teach us, if anything, about IP scholarship in general. I think the short answer to the second question is, Quite a lot, although perhaps not for obvious reasons.

Measuring the Value of Patent Disclosure

How valuable is patent disclosure? It's a perennially asked question. There are studies, like Lisa's, that attack the problem using surveys, and the conventional wisdom seems to be that there are niche areas in which people read patents, but that for the most part patent disclosure holds little value because nobody reads patents.

Deepak Hegde (NYU Stern), Kyle Herkenhoff (Minn. Econ), and Chenqi Zhu (NYU Stern PhD candidate) have decided to attack the problem from a different angle: using the AIPA (which required patent disclosure at 18 months) as a natural experiment. The paper is on SSRN, and the abstract is here:
How does the disclosure of technical knowledge through patents affect knowledge diffusion, follow-on invention, and patenting? We study this by analyzing the American Inventor's Protection Act (AIPA), which required U.S. patent applications filed after November 28, 2000 to be published 18 months after filing, rather than at grant, and advanced the disclosure of most U.S. patents by about two years. We estimate AIPA’s causal effect by using a counterfactual sample of identical European “twins” (of U.S. patents) which were not affected by the U.S. policy change and find that AIPA (i) increased the rate and magnitude of knowledge diffusion associated with U.S. patents and (ii) increased overlap between technologically distant patents and decreased overlap between similar patents. Patent abandonments and scope decreased, while patent clarity improved, after AIPA. The findings are consistent with the predictions of our theoretical framework which models AIPA as provisioning current information about related technologies to inventors. The information, in turn, reduces follow-on inventors’ R&D and patenting costs. Patent disclosure promotes knowledge diffusion and clearer property rights while reducing R&D duplication.
This was a clever project. There have been AIPA studies before, but none that try to measure the value of the diffusion, so far as I know. What makes it work is the matching with European patents (which had always been published), which allows their measurements to be independent of the quality of the underlying inventions.

Friday, April 27, 2018

Berkeley Remarks on a Patent Small-Claims Tribunal

As noted in my earlier post today with my remarks from We Robot, I’ve been busy with lots of interesting conferences and workshops in the past few weeks. Because I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. Here are my comments from the fantastic BCLT/BTLJ symposium on the administrative law of IP, where I was on a panel discussing IP small-claims tribunals:

You have already heard some insightful comments from Ben Depoorter and Pam Samuelson on copyright small-claims courts, including their analysis of problems with the proposed Copyright Alternatives in Small-Claims Enforcement (CASE) Act of 2017, as well as their thoughts on how a more narrowly tailored small-claims system might be beneficial. The main justification for introducing such a tribunal is that high litigation costs prevent claimants from pursuing valid small claims.

I’m here to provide some perspective from the patent law side, and the short version of my comments is that the idea of a patent small-claims court seems mostly dead in the United States, and I don’t see a reason to revive it.

We Robot Comments on Ryan Abbot's Everything is Obvious

I’ve been busy with lots of interesting conferences and workshops in the past few weeks, and since I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. First, my comments from the We Robot conference two weeks ago at Stanford:

Ryan Abbott’s Everything is Obvious is part of an interesting series of articles Ryan has been working on related to how developments in AI and computing affect legal areas such as patent law. In an earlier article, I Think, Therefore I Invent, he provocatively argued that creative computers should be considered inventors for patent and copyright purposes. Here, he focuses on how these creative computers should affect one of the most important legal standards in patent law: the requirement that an invention not be obvious to a person having ordinary skill in the art.

Ryan’s definition of “creative computers” is purposefully broad. The existing creative computers he discusses are all narrow or specific AI systems that are programmed to solve particular problems, like systems from the 1980s that were programmed to design new microchips based on certain rules and IBM’s Watson, which is currently identifying novel drug targets for pharmaceutical research. And Ryan thinks patent law already needs to change in response to these developments. But I think his primary concern is the coming of artificial general intelligence that surpasses human inventors.

Tuesday, April 24, 2018

Naruto, the Article III monkey

The Ninth Circuit released its opinion in the "monkey selfie" case, reasonably ruling that Naruto the monkey doesn't have standing under the Copyright laws. The opinion dodges the hard questions about who can be an author (thus leaving for another day questions about artificial intelligence, for example) by instead focusing on mundane things like the ability to have heirs. As a result, it's not the strongest opinion, but one that's hard to take issue with.

But I'd like to focus on an issue that's received much less attention in the press and among my colleagues. The court ruled that Naruto has Article III standing because there is a case or controversy. I'll admit that I hadn't thought about this angle, having instead gone right to the copyright authorship question (when you're a hammer, everything looks like a nail). But I guess when you're an appellate court, that whole "jurisdiction and standing section" means something even though we often skim that in our non-civ pro/con law/fed courts classes in law school.

I'll first note that the court is doubtful that PETA has standing as "next friend." Footnote 3 is a scathing indictment of its actions in this case, essentially arguing that PETA leveraged the case for its own political ends rather than for any benefit of Naruto. Youch! More on this aspect here. The court also finds that the copyright statute does not allow for next friend standing, a completely non-shocking result given precedent.

Even so, the court looks to whether Naruto has individual standing even without some sort of guardian. Surprisingly enough, this was not an issue of first impression. The Ninth Circuit had already ruled that a group of whales had Article III standing. From this, the court very quickly decides that Naruto has standing: the allegation of ownership in the photograph easily creates a case or controversy.

Once again, the best part is in the footnotes. I'll reproduce part of note 5 here:
In our view, the question of standing was explicitly decided in Cetacean. Although, as we explain later, we believe Cetacean was wrongly decided, we are bound by it. Short of an intervening decision from the Supreme Court or from an en banc panel of this court, [] we cannot escape the proposition that animals have Article III standing to sue....
[The concurrence] insightfully identifies a series of issues raised by the prospect of allowing animals to sue. For example, if animals may sue, who may represent their interests? If animals have property rights, do they also have corresponding duties? How do we prevent people (or organizations, like PETA) from using animals to advance their human agendas? In reflecting on these questions, Judge Smith [in the concurrence] reaches the reasonable conclusion that animals should not be permitted to sue in human courts. As a pure policy matter, we agree. But we are not a legislature, and this court’s decision in Cetacean limits our options. What we can do is urge this court to reexamine Cetacean. See infra note 6. What we cannot do is pretend Cetacean does not exist, or that it states something other, or milder, or more ambiguous on whether cetaceans have Article III standing.
I was glad to see this, because when I read the initial account that Article III standing had been granted, I wondered why the court would come to that decision, and I thought of many of these questions (and more, such as what happens when there is no statute to deny standing, as in a diversity tort suit).

I'll end with perhaps my favorite part of the opinion: the award of attorneys' fees. The award itself is not surprising, but the commentary is. It notes that the court does not know how or whether the settlement in the case dealt with the possibility of such an award, but also that Naruto was not part of such a settlement. It's unclear what this means. Can Slater collect from Naruto? How would that happen? Can Slater collect from PETA because Naruto was not part of the settlement? The court, I'm sure, would say to blame any complexity on the whale case.

Sunday, April 22, 2018

Chris Walker & Melissa Wasserman on the PTAB and Administrative Law

Christopher Walker is a leading administrative law scholar, and Melissa Wasserman's excellent work on the PTO has often been featured on this blog, so when the two of them teamed up to study how the PTAB fits within broader principles of administrative law, the result—The New World of Agency Adjudication (forthcoming Calif. L. Rev.)—is self-recommending. With a few notable exceptions (such as a 2007 article by Stuart Benjamin and Arti Rai), patent law scholars have paid relatively little attention to administrative law. But the creation of the PTAB has sparked a surge of interest, including multiple Supreme Court cases and a superb symposium at Berkeley earlier this month (including Wasserman, Rai, and many others). Walker and Wasserman's new article is essential reading for anyone following these recent debates, whether you are interested in specific policy issues like PTAB panel stacking or more general trends in administrative review.

Monday, April 16, 2018

Comprehensive Data about Federal Circuit Opinions

Jason Rantanen (Iowa) has already blogged about his new article, but I thought I would mention it briefly as well. He has created a database of information about Federal Circuit opinions. An article describing it, forthcoming in the American University Law Review, is on SSRN, and the abstract is here:
Quantitative studies of the U.S. Court of Appeals for the Federal Circuit's patent law decisions are almost more numerous than the judicial decisions they examine. Each study painstakingly collects basic data about the decisions (case name, appeal number, judges, precedential status) before adding its own set of unique observations. This process is redundant, labor-intensive, and makes cross-study comparisons difficult, if not impossible. This Article and the accompanying database aim to eliminate these inefficiencies and provide a mechanism for meaningful cross-study comparisons.

This Article describes the Compendium of Federal Circuit Decisions ("Compendium"), a database created to both standardize and analyze decisions of the Federal Circuit. The Compendium contains an array of data on all documents released on the Federal Circuit's website relating to cases that originated in a federal district court or the United States Patent and Trademark Office (USPTO): essentially all opinions since 2004 and all Rule 36 affirmances since 2007, along with numerous orders and other documents.

This Article draws upon the Compendium to examine key metrics of the Federal Circuit's decisions in appeals arising from the district courts and USPTO over the past decade, updating previous work that studied similar populations during earlier time periods and providing new insights into the Federal Circuit's performance. The data reveal, among other things, an increase in the number of precedential opinions in appeals arising from the USPTO, a general increase in the quantity (but not necessarily the frequency) with which the Federal Circuit invokes Rule 36, and a return to general agreement among the judges following a period of substantial disuniformity. These metrics point to, on the surface at least, a Federal Circuit that is functioning smoothly in the post-America Invents Act world, while also hinting at areas for further study.
The article has some interesting details about opinions and trends, but I wanted to point out that this is a database now available for use in scholarly work, which is really helpful. The inclusion of non-precedential opinions adds a new wrinkle as well. Hopefully some useful studies will come of this.

Tuesday, April 10, 2018

Statute v. Constitution as IP Limiting Doctrine

In his forthcoming article, "Paths or Fences: Patents, Copyrights, and the Constitution," Derek Bambauer (Arizona) notices (and provides some data to support) a discrepancy in how boundary and limiting issues are handled in patent and copyright. He notes that, for reasons he theorizes, big copyright issues are often "fenced in" by the Constitution - that is, the Constitution limits what can be protected. But patent issues are often resolved by statute, because the Constitution creates a "path" that Congress may follow. Thus, he notes, we have two types of IP emanating from the same source but treated differently for unjustifiable reasons.

The article is forthcoming in Iowa Law Review, and is posted on SSRN. The abstract is here:
Congressional power over patents and copyrights flows from the same constitutional source, and the doctrines have similar missions. Yet the Supreme Court has approached these areas from distinctly different angles. With copyright, the Court readily employs constitutional analysis, building fences to constrain Congress. With patent, it emphasizes statutory interpretation, demarcating paths the legislature can follow, or deviate from (potentially at its constitutional peril). This Article uses empirical and quantitative analysis to show this divergence. It offers two potential explanations, one based on entitlement strength, the other grounded in public choice concerns. Next, the Article explores border cases where the Court could have used either fences or paths, demonstrating the effects of this pattern. It sets out criteria that the Court should employ in choosing between these approaches: countermajoritarian concerns, institutional competence, pragmatism, and avoidance theory. The Article argues that the key normative principle is that the Court should erect fences when cases impinge on intellectual property’s core constitutional concerns – information disclosure for patent and information generation for copyright. It concludes with two examples where the Court should alter its approach based on this principle.
The article is an interesting theory piece that has some practical payoff.

Wednesday, April 4, 2018

Tun-Jen Chiang: Can Patents Restrict Free Speech?

Guest post by Jason Reinecke, a 3L at Stanford Law School whose work has been previously featured on this blog.

Scholars have long argued that copyright and trademark law have the potential to violate the First Amendment right to free speech. But in Patents and Free Speech (forthcoming in the Georgetown Law Journal), Professor Tun-Jen Chiang explains that patents can similarly restrict free speech, and that they pose an even greater threat to speech than copyrights and trademarks because patent law lacks the doctrinal safeguards that have developed in that area.

Professor Chiang convincingly argues that patents frequently violate the First Amendment and provides numerous examples of patents that could restrict speech. For example, he uncovered one patent (U.S. Patent No. 6,311,211) claiming a “method of operating an advocacy network” by “sending an advocacy message” to various users. He argues that such “advocacy emails are core political speech that the First Amendment is supposed to protect. A statute or regulation that prohibited groups from sending advocacy emails would be a blatant First Amendment violation.”

Perhaps the strongest counterargument to the conclusion that patents often violate free speech is that private enforcement of property rights is generally not subject to First Amendment scrutiny, because the First Amendment only applies to acts of the government, not private individuals. Although Professor Chiang has previously concluded that this argument largely justifies copyright law’s exemption from the First Amendment, he does not come to the same conclusion for patent law for two reasons.

Monday, April 2, 2018

Masur & Mortara on Prospective Patent Decisions

Judicial patent decisions are retroactive. When the Supreme Court changed the standard for assessing obviousness in 2007 with KSR v. Teleflex, it affected not just patents filed after 2007, but also all of the existing patents that had been filed and granted under a different legal standard—upsetting existing reliance interests. But in a terrific new article, Patents, Property, and Prospectivity (forthcoming in the Stanford Law Review), Jonathan Masur and Adam Mortara argue that it doesn't have to be this way, and that in some cases, purely prospective patent changes make more sense.

As Masur and Mortara explain, retroactive changes might have benefits in terms of imposing an improved legal rule, but these changes also have social costs. Most notably, future innovators may invest less in R&D because they realize that they will not be able to rely on the law preserving their future patent rights. (Note that the private harm to existing reliance interests from past innovators is merely a wealth transfer from the public's perspective; the social harm comes from future innovators.) Moreover, courts may be less likely to implement improvements in patent law from the fear of upsetting reliance interests. Allowing courts to choose to make certain changes purely prospectively would ameliorate these concerns, and Masur and Mortara have a helpful discussion of how judges already do this in the habeas context.

The idea that judges should be able to make prospective patent rulings (and prospective judicial rulings more generally, outside habeas cases) seems novel and nonobvious and right, and I highly recommend the article. But I had lots of thoughts while reading about potential ways to further strengthen the argument:

Wednesday, March 28, 2018

Oracle v. Google Again: The Unicorn of a Fair Use Jury Reversal

It's been about two years, so I guess it was about time to write about Oracle v. Google. The trigger this time: in a blockbuster opinion (and I never use that term), the Federal Circuit has overturned a jury verdict finding that Google's use of 37 API headers was fair use, holding instead that the reuse could not be fair use as a matter of law. I won't describe the ruling in full detail - Jason Rantanen does a good job of it at Patently-O.

Instead, I'll discuss my thoughts on the opinion and some ramifications. Let's start with this one: people who know me (and who read this blog) know that my knee jerk reaction is usually that the opinion is not nearly as far-reaching and worrisome as they think. So, it may surprise a few people when I say that this opinion may well be as worrisome and far-reaching as they think.

And I say that without commenting on the merits; right or wrong, this opinion will have real repercussions. The upshot is: no more compatible compilers, interpreters, or APIs. If you create an API language, then nobody else can make a competing one, because doing so would necessarily entail copying the same structure of the input commands and parameters in your specification. If you make a language, you own the language. That's what Oracle argued for, and it won. No Quattro Pro interpreting old Lotus 1-2-3 macros, no competitive C compilers, no debugger emulators for operating systems, and potentially no competitive audio/visual playback software. This is, in short, a big deal.

So, what happened here? While I'm not thrilled with the Court's reasoning, I also don't find it to be so outside the bounds of doctrine as to be without sense. Here are my thoughts.

Tuesday, March 27, 2018

Are We Running out of Trademarks? College Sports Edition

As I watched the Kansas State Wildcats play the Kentucky Wildcats in the Sweet Sixteen this year, it occurred to me that there are an awful lot of Wildcats in the tournament (five, to be exact, or nearly 7.5% of the teams).  This made me think of the interesting new paper by Jeanne Fromer and Barton Beebe, called Are We Running Out of Trademarks? An Empirical Study of Trademark Depletion and Congestion. The paper is on SSRN, and is notable because it is the rare a) IP and b) empirical paper published by the Harvard Law Review. The abstract of the paper is here:
American trademark law has long operated on the assumption that there exists an inexhaustible supply of unclaimed trademarks that are at least as competitively effective as those already claimed. This core empirical assumption underpins nearly every aspect of trademark law and policy. This Article presents empirical evidence showing that this conventional wisdom is wrong. The supply of competitively effective trademarks is, in fact, exhaustible and has already reached severe levels of what we term trademark depletion and trademark congestion. We systematically study all 6.7 million trademark applications filed at the U.S. Patent and Trademark Office (PTO) from 1985 through 2016 together with the 300,000 trademarks already registered at the PTO as of 1985. We analyze these data in light of the most frequently used words and syllables in American English, the most frequently occurring surnames in the United States, and an original dataset consisting of phonetic representations of each applied-for or registered word mark included in the PTO’s Trademark Case Files Dataset. We further incorporate data consisting of all 128 million domain names registered in the .com top-level domain and an original dataset of all 2.1 million trademark office actions issued by the PTO from 2003 through 2016. These data show that rates of word-mark depletion and congestion are increasing and have reached chronic levels, particularly in certain important economic sectors. The data further show that new trademark applicants are increasingly being forced to resort to second-best, less competitively effective marks. Yet registration refusal rates continue to rise. The result is that the ecology of the trademark system is breaking down, with mounting barriers to entry, increasing consumer search costs, and an eroding public domain. 
In light of our empirical findings, we propose a mix of reforms to trademark law that will help to preserve the proper functioning of the trademark system and further its core purposes of promoting competition and enhancing consumer welfare.
The paper is really well developed and interesting. They consider common law marks as well as domain names. Also worth a read is Written Description's own Lisa Larrimore Ouellette's response, called Does Running Out of (Some) Trademarks Matter?, also in Harvard Law Review and on SSRN.

Wednesday, March 21, 2018

Blurred Lines Verdict Affirmed - How Bad is It?

The Ninth Circuit ruled on Williams v. Gaye today, the "Blurred Lines" verdict that found infringement and some hefty damages. I've replied to a few of my colleagues' Twitter posts today, so I figured I'd stop harassing them with my viewpoint and just make a brief blog post.

Three years ago this week, I blogged here that:
People have strong feelings about this case. Most people I know think it was wrongly decided. But I think that copyright law would be better served if we examined the evidence to see why it was wrongly decided. Should the court have ruled that the similarities presented by the expert were simply never enough to show infringement? Should we not allow juries to consider the whole composition (note that this usually favors the defendant)? Should we provide more guidance to juries making determinations? Was the wrong evidence admitted (that is, is my view of what the evidence was wrong)?
But what I don't think is helpful for the system is to assume straw evidence - it's easy to attack a decision when the court lets the jury hear something it shouldn't or when the jury ignores the instructions as they sometimes do. I'm not convinced that's what happened here; it's much harder to take the evidence as it is and decide whether we're doing this whole music copyright infringement thing the right way.
My sense then was that it would come down to how the appeals court would view the evidence, and it turns out I was right. I find this opinion to be...unremarkable. The jury heard evidence of infringement, and ruled that there was infringement. The court affirmed because that's what courts do when there is a jury verdict. There was some evidence of infringement, and that's enough.

To be clear, I'm not saying that's how I would have voted were I on the jury. I wasn't in the courtroom.

So, why are (almost) all my colleagues bent out of shape?

First, there is a definite view that the only thing copied here was a "vibe," and that the scenes a faire and other unprotected expression should have been filtered out. I am a big fan of filtration; I wrote an article on it. I admit to not being an expert on music filtration. But I do know that there was significant expert testimony here that more than a vibe was copied (which was enough to avoid summary judgment), and that once you're over summary judgment, all bets are off on whether the jury will filter out the "proper" way. Perhaps the jury didn't; but that's not what we ask on an appeal. So, the only way you take it from a jury is to say that there was no possible way to credit the plaintiff's expert that more than a vibe was copied. I've yet to see an analysis based on the actual evidence in the case that shows this (though I have seen plenty of folks disagreeing with Plaintiff's expert), though if someone has one, please point me to it and I'll gladly post it here. The court, for its part, is hostile to such "technical" parsing in music cases (in a way that it is not in photography and computer cases). But that's nothing new; the court cites old law for this proposition, so its hostility shouldn't be surprising, even if it is concerning.

Second, the court seems to double down on the "inverse ratio" rule:
We adhere to the “inverse ratio rule,” which operates like a sliding scale: The greater the showing of access, the lesser the showing of substantial similarity is required.
This is really bothersome, because just recently, the court said that the inverse ratio rule shouldn't be used to make it easier to prove improper appropriation:
That rule does not help Rentmeester because it assists only in proving copying, not in proving unlawful appropriation, the only element at issue in this case
I suppose that you can read the new case as just ignoring Rentmeester's statement, but I don't think so. First, the inverse ratio rule, for better or worse, is old Ninth Circuit law, which a panel can't simply ignore. Second, it is relevant to the question of probative copying (that is, was there copying at all?), which was disputed in this case, unlike in Rentmeester. Third, there is no indication that the rule had any bearing on the jury's verdict. The inverse ratio rule was not part of the instruction that asked the jury to determine unlawful appropriation, nor was the rule even stated in the terms used by the court anywhere in the jury instructions.
The defendants appealed this instruction, but only on filtration grounds (which were rejected), and not on inverse ratio type grounds.

In short, jury determinations of music copyright infringement are messy business. There's a lot not to like about the Ninth Circuit's intrinsic/extrinsic test (I'm not a big fan myself). The jury instructions could probably be improved on filtration (there were other filtration instructions, I believe).

But here's where I end up:
  1. This ruling is not terribly surprising, and is wholly consistent with Ninth Circuit precedent (for better or worse)
  2. The ruling could have been written more clearly to avoid some of the consternation and unclarity about the inverse ratio rule (among other things)
  3. This ruling doesn't much change Ninth Circuit law, nor dilute the importance of Rentmeester
  4. This ruling is based in large part on the evidence, which was hotly disputed at trial
  5. If you want to win a copyright case as a defendant, better hope to do it before you get to a jury. You can still win in front of the jury, but if it doesn't go your way the appeal will be tough to win.

Tuesday, March 20, 2018

Evidence on Polarization in IP

Since my coblogger Lisa Ouellette has not tooted her own horn about this, I thought I would do so for her. She, Maggie Wittlin (Nebraska), and Greg Mandel (Temple, its Dean, no less) have a new article forthcoming in UC Davis L. Rev. called What Causes Polarization on IP Policy? A draft is on SSRN, and the abstract is here:
Polarization on contentious policy issues is a problem of national concern for both hot-button cultural issues such as climate change and gun control and for issues of interest to more specialized constituencies. Cultural debates have become so contentious that in many cases people are unable to agree even on the underlying facts needed to resolve these issues. Here, we tackle this problem in the context of intellectual property law. Despite an explosion in the quantity and quality of empirical evidence about the intellectual property system, IP policy debates have become increasingly polarized. This disagreement about existing evidence concerning the effects of the IP system hinders democratic deliberation and stymies progress.
Based on a survey of U.S. IP practitioners, this Article investigates the source of polarization on IP issues, with the goal of understanding how to better enable evidence-based IP policymaking. We hypothesized that, contrary to intuition, more evidence on the effects of IP law would not resolve IP disputes but would instead exacerbate them. Specifically, IP polarization might stem from "cultural cognition," a form of motivated reasoning in which people form factual beliefs that conform to their cultural predispositions and interpret new evidence in light of those beliefs. The cultural cognition framework has helped explain polarization over other issues of national concern, but it has never been tested in a private-law context.
Our survey results provide support for the influence of cultural cognition, as respondents with a relatively hierarchical worldview are more likely to believe strong patent protection is necessary to spur innovation. Additionally, having a hierarchical worldview and also viewing patent rights as property rights may be a better predictor of patent strength preferences than either alone. Taken together, our findings suggest that individuals' cultural preferences affect how they understand new information about the IP system. We discuss the implications of these results for fostering evidence-based IP policymaking, as well as for addressing polarization more broadly. For example, we suggest that empirical legal studies borrow from medical research by initiating a practice of advance registration of new projects-in which the planned methodology is publicly disclosed before data are gathered-to promote broader acceptance of the results.
This work follows Lisa's earlier essay on Cultural Cognition in IP. I think this is a fascinating area, and it certainly seems to have become more salient as the stakes have increased. I am not without my own priors, but I do take pride in having my work cited by both sides of the debate.

The abstract doesn't do justice to the results - the paper is worth a read, with some interesting graphs as well. One of the more interesting findings is that political party has almost no correlation with views on copyright, but relatively strong correlation with views on patenting. This latter result makes me an odd duck, as I lean more (way, in some cases) liberal but have also leaned more pro-patent than many of my colleagues. I think there are reasons for that, but we don't need to debate them here.

In any event, there is a lot of work in this paper that the authors tie to cultural cognition - that is, motivated reasoning based on priors. I don't have an opinion on the measures they use to define it, but they seem reasonable enough and they follow a growing literature in this area. I think anyone interested in current IP debates (or cranky about them) could learn a few things from this study.

Tuesday, March 13, 2018

Which Patents Get Instituted During Inter Partes Review?

I recently attended PatCon 8 at the University of San Diego Law School. It was a great event, with lots of interesting papers. One paper I enjoyed from one of the (many) empirical sessions was Determinants of Patent Quality: Evidence from Inter Partes Review Proceedings by Brian Love (Santa Clara), Shawn Miller (Stanford), and Shawn Ambwani (Unified Patents). The paper is on SSRN and the abstract is here:
We study the determinants of patent “quality”—the likelihood that an issued patent can survive a post-grant validity challenge. We do so by taking advantage of two recent developments in the U.S. patent system. First, rather than relying on the relatively small and highly-selected set of patents scrutinized by courts, we study instead the larger and broader set of patents that have been subjected to inter partes review, a recently established administrative procedure for challenging the validity of issued patents. Second, in addition to characteristics observable on the face of challenged patents, we utilize datasets recently made available by the USPTO to gather detailed information about the prosecution and examination of studied patents. We find a significant relationship between validity and a number of characteristics of a patent and its owner, prosecutor, examiner, and prosecution history. For example, patents prosecuted by large law firms, pharmaceutical patents, and patents with more words per claim are significantly more likely to survive inter partes review. On the other hand, patents obtained by small entities, patents assigned to examiners with higher allowance rates, patents with more U.S. patent classes, and patents with higher reverse citation counts are less likely to survive review. Our results reveal a number of strategies that may help applicants, patent prosecutors, and USPTO management increase the quality of issued patents. Our findings also suggest that inter partes review is, as Congress intended, eliminating patents that appear to be of relatively low quality.
The study does a good job of identifying a variety of variables that do (and do not) correlate with whether the PTO institutes review of a patent. Some examples of interesting findings:
  • Pharma patents are less likely to be instituted
  • Solo/small firm prosecuted patents are more likely to be instituted
  • Patents with more words in claim 1 (i.e. narrower patents) are less likely to be instituted
  • Patents with more backward citations are more likely to be instituted (this is counterintuitive, but consistent with my own study of patent litigation)
  • Patent examiner characteristics affect likelihood of institution
There's a lot of good data here, and the authors did a lot of useful work to gather information that's not simply on the face of the patent. The paper is worth a good read. My primary criticism is the one I voiced during the session at PatCon - there's something about framing this as a generalized patent quality study that rankles me. (Warning, cranky old middle-age rambling ahead) I get that whether a patent is valid or not is an important quality indicator, and I've made similar claims. I just think the authors have to spend a lot of time/space (it's an 84-page paper) trying to support their claim.

For example, they argue that IPRs are more complete compared to litigation, because litigation has selection effects both in what gets litigated and in settlement post-litigation. But IPRs suffer from the same problem. Notwithstanding some differences, there's a high degree of matching between IPRs and litigation, and many petitions settle both before and after institution.

Which leads to a second point: these are institution decisions - not final determinations. Now, they treat instituted patents whose claims are ultimately upheld as non-instituted, but with 40% of the cases still pending (and a declining institution rate as time goes on) we don't know how the incomplete and settled cases will look. More controversially, they count as low quality any patent where even a single claim is instituted. So, you could challenge 100 claims, have one instituted, and the patent falls into the "bad" pile.

Now, they present data that shows it is not quite so bad as this, but the point remains: with high settlements and partial invalidation, it's hard work to make a general claim about patent quality. To be fair, the authors point out all of these limitations in their draft. It is not as though they aren't aware of the criticism, and that's a good thing. I suppose, then, it's just a style difference. Regardless, this paper is worth checking out.

Friday, March 9, 2018

Sapna Kumar: The Fate Of "Innovation Nationalism" In The Age of Trump

One of the biggest pieces of news last week was that President Trump will be imposing tariffs on foreign steel and aluminum because, he tweets, IF YOU DON'T HAVE STEEL, YOU DON'T HAVE A COUNTRY.  Innovation Nationalism, a timely new article by Professor Sapna Kumar at University of Houston School of Law, explains the role that innovation and patent law play in the "global resurgence of nationalism" in the age of Trump. After reading her article, I think Trump should replace this tweet with: IF YOU DON'T HAVE PATENTS, YOU DON'T HAVE A COUNTRY.

Tuesday, March 6, 2018

The Quest to Patent Perpetual Motion

Those familiar with my work will know that I am a big fan of utility doctrine. I think it is underused and misunderstood. When I teach about operable utility, I use perpetual motion machines as the type of fantastic (and not in a good way) invention that will be rejected by the PTO as inoperable due to violating the laws of thermodynamics.

On my way to a conference last week, I watched a great documentary called Newman about one inventor's quest to patent a perpetual motion machine. The trailer is here, and you can stream it pretty cheaply (I assume it will come to a service at some point):
The movie is really well done, I think. The first two-thirds is a great survey of old footage, along with interviews of many people involved in the saga. The final third focuses on what became of Newman after his court case, leading to a surprising ending that colors how we should look at the first part of the movie. The two parts work really well together, and I think this movie should be of interest to anyone, not just patent geeks.

That said, I'd like to spend a bit of time on the patent aspects, namely utility doctrine. Wikipedia has a pretty detailed entry, with links to many of the relevant documents. The Federal Circuit case, Newman v. Quigg, as well as the district court case, also lay out many of the facts. The claim was extremely broad:
38. A device which increases the availability of usable electrical energy or usable motion, or both, from a given mass or masses by a device causing a controlled release of, or reaction to, the gyroscopic type energy particles making up or coming from the atoms of the mass or masses, which in turn, by any properly designed system, causes an energy output greater than the energy input.
Here are some thoughts:

First, the case continues what I believe to be a central confusion in utility. The initial rejection was not based on Section 101 ("new and useful") but on Section 112 (enablement to "make and use"). This is a problematic distinction. As the Patent Board of Appeals even noted: "We do not doubt that a worker in this art with appellant's specification before him could construct a motor ... as shown in Fig. 6 of the drawing." Well, then one could make and use it, even if it failed at its essential purpose. Now, there is an argument that the claim is so broad that Newman didn't enable every device claimed (as in the Incandescent Lamp case), but that's not what the board was describing. The Section 101 defense was not added until 1986, well into the district court proceeding. The district court later makes some actual 112 comments (that the description is metaphysical), but this is not the same as failing to achieve the claimed outcome. The Federal Circuit makes clear that 112 can support this type of rejection: "neither is the patent applicant relieved of the requirement of teaching how to achieve the claimed result, even if the theory of operation is not correctly explained or even understood." But this is not really enablement - it's operable utility! The 112 theory of utility is that you can't enable someone to use an invention if it's got no use. But just about every invention has some use. I write about this confusion in my article A Surprisingly Useful Requirement.

Second, this leads to another key point of the case. The failed claim was primarily due to the insistence on claiming perpetual motion. Had Newman claimed a novel motor, then the claim might have survived (though there was a 102/103 rejection somewhere in the history). One of the central themes of the documentary was that Newman needed this patent to commercialize his invention, so others could not steal the idea. He could not share it until it was protected. But he could have achieved this goal with a much narrower patent that did not claim perpetual motion. That he did not attempt a narrower patent is quite revealing, and foreshadows some of the interesting revelations from the end of the documentary.

Third, the special master in the case, William Schuyler, had been Commissioner of Patents. He recommended that the Court grant the patent, finding sufficient evidence to support the claims. It is surprising that he would have issued a report finding operable utility here, putting the Patent Office in the unenviable position of attacking its former chief.

Fourth, the case is an illustration in waiver. Newman claimed that the device only worked properly when ungrounded. More important, the output was measured in complicated ways (according to his own witnesses). Yet, Newman failed to indicate how measurement should be done when it counted: "Dr. Hebner [of National Bureau of Standards] then asked Newman directly where he intended that the power output be measured. His attorney advised Newman not to answer, and Newman and his coterie departed without further comment." The court finds a similar waiver with respect to whether the device should have been grounded, an apparently key requirement. These two waivers allowed the courts to credit the testing over Newman's later objections that the testing was improperly handled.

I'm sure I had briefly read Newman v. Quigg at some point in the past, and the case is cited as the seminal "no perpetual motion machine" case. Even so, I'm glad I watched the documentary to get a better picture of the times and hoopla that went with this, as well as what became of the man who claimed to defy the laws of thermodynamics.

Monday, March 5, 2018

Intellectual Property and Jobs

During the 2016 presidential race, an op-ed in the New York Times by Jacob S. Hacker, a professor of political science at Yale, and Paul Pierson, a professor of political science at the University of California, Berkeley, asserted that "blue states" that support Democratic candidates, like New York, California, and Massachusetts, are "generally doing better" in an economic sense than "red states" that support Republican candidates, like Mississippi, Kentucky, and (in some election cycles) Ohio. The gist of their argument is that conservatives cannot honestly claim that "red states dominate" on economic indicators like wealth, job growth, and education, when the research suggests the opposite. "If you compare averages," they write, "blue states are substantially richer (even adjusting for cost of living) and their residents are better educated."

I am not here to argue over whether blue states do better than red states economically. What I do want to point out is how Professors Hacker and Pierson use intellectual property – and in particular patents – in making their argument. Companies in blue states, they write, "do more research and development and produce more patents[]" than red states. Indeed, "few of the cities that do the most research or advanced manufacturing or that produce the most patents are in red states." How, they ask rhetorically, can conservatives say red states are doing better when most patents are being generated in California?*

Hacker and Pierson's reasoning, which is quite common, goes like this. Patents are an indicator of innovation. Innovation is linked to economic prosperity. Therefore, patents – maybe even all forms of intellectual property – are linked to economic prosperity.

In my new paper, Technological Un/employment, I cast doubt on the connection between intellectual property and one important indicator of economic prosperity: employment.

This post is based on a talk I gave at the 2018 Works-In-Progress Intellectual Property (WIPIP) Colloquium at Case Western Reserve University School of Law on Saturday, February 17.

Saturday, March 3, 2018

PatCon8 at San Diego

Yesterday and today, the University of San Diego School of Law hosted the eighth annual Patent Conference—PatCon8—largely organized by Ted Sichelman. Schedule and participants are here. For those who missed it—or who were at different concurrent sessions—here's a recap of my live Tweets from the conference. (For those who receive Written Description posts by email: This will look much better—with pictures and parent tweets—if you visit the website version.)

Friday, March 2, 2018

Matteo Dragoni on the Effect of the European Patent Convention

Guest post by Matteo Dragoni, Stanford TTLF Fellow

Recent posts by both Michael Risch and Lisa Ouellette discussed the recent article The Impact of International Patent Systems: Evidence from Accession to the European Patent Convention, by economists Bronwyn Hall and Christian Helmers. Based on my experience with the European patent system, I have some additional thoughts on the article, which I'm grateful for the opportunity to share.

First, although Risch was surprised that residents of states joining the EPC continued to file in their home state in addition to filing in the EPO, this practice is quite common (and less unreasonable than it might seem at first glance) for at least three reasons:
  1. The national filing is often used as a priority application to file a European patent (via the PCT route or not). This gives one extra year of time (to gain new investments and to postpone expenses) and of protection (reaching 21 years instead of 20) compared to merely starting with an EPO application.
  2. Some national patent offices have the same (or very similar) patenting standards as the EPO but, de facto, apply those standards less strictly when a patent is examined. Therefore, it is sometimes easier to obtain a national patent than a European patent.
  3. Relatedly, the different application of patentability standards means that the national patent may be broader than the eventual European patent. The validity/enforceability of these almost duplicate patents is debatable and represents a complex issue, but a broader national patent is often prima facie enforceable and a valid ground to obtain (strong) interim measures.

Wednesday, February 28, 2018

How Difficult is it to Judge Patentable Subject Matter?

I've long argued that the Supreme Court's patentable subject matter jurisprudence is inherently uncertain, and that it is therefore nearly impossible to determine what is patentable. But this is only theory (a well-grounded one, I think, but still). A clever law student has now put the question to the test. Jason Reinecke (Stanford 3L) got IRB approval and conducted a survey in which he asked patent practitioners whether patents would withstand a subject matter challenge. A draft is on SSRN, and the abstract is here:
In four cases handed down between 2010 and 2014, the Supreme Court articulated a new two-step patent eligibility test that drastically reduced the scope of patent protection for software inventions. Scholars have described the test as “impossible to administer in a coherent, consistent way,” “a foggy standard,” “too philosophical and policy based to be administrable,” a “crisis of confusion,” “rife with indeterminacy,” and one that “forces lower courts to engage in mental gymnastics.”
This Article provides the first empirical test of these assertions. In particular, 231 patent attorneys predicted how courts would rule on the subject matter eligibility of litigated software patent claims, and the results were compared with the actual district court rulings. Among other findings, the results suggest that while the test is certainly not a beacon of absolute clarity, it is also not as amorphous as many commentators have suggested.
This was an ambitious study, and getting 231 participants is commendable. As discussed below, the results are interesting, and there are a lot of great takeaways. Though I think the takeaways depend on your goals for the system, no matter what your priors, this is a useful survey.

Tuesday, February 27, 2018

Tribal Sovereign Immunity and Patent Law, Part II: Lessons in Shoddy Reasoning from the PTAB

Guest post by Professor Greg Ablavsky, Stanford Law School

Per Lisa's request, I have returned to offer some thoughts on the PTAB's tribal sovereign immunity decision (you can find my earlier post here and some additional musings coauthored with Lisa here). I had thought I had retired my role of masquerading as an (entirely unqualified) intellectual property lawyer, but, as the PTAB judges clearly haven't relinquished their pretensions to be experts in federal Indian law, here we are.

The upshot is that I find the PTAB's decision highly unpersuasive, for the reasons that follow, and I hope to convince you that, however you feel about the result, the PTAB's purported rationales should give pause. I should stress at the outset that I have no expertise to assess the PTAB's conclusion that Allergan is the "true owner" of the patent, which may well be correct. But the fact that this conclusion could have served as an entirely independent basis for the judgment makes the slipshod reasoning in the first part of the decision on tribal immunity all the more egregious. Here are some examples—I hope you'll forgive the dive into Indian law and immunity doctrine:
1. Supreme Court Precedent: The tenor of the PTAB's decision is clear from its quotation of isolated dicta from Kiowa, where, in the process of considering off-reservation tribal sovereign immunity, the Supreme Court expressed some sympathy for the viewpoint of the dissenting Justices: "There are reasons to doubt the wisdom of perpetuating the [tribal immunity] doctrine." But the PTAB omits the key language that came at the end of the Court's discussion of this issue: "[W]e defer to the role Congress may wish to exercise in this important judgment," leaving the decision as to whether to abrogate tribal sovereign immunity—which Congress may do under its "plenary power"—to the legislature. In short, although you wouldn't know it from the PTAB's cherry-picked quotations, Kiowa actually determined that the right approach in the face of uncertainty was to uphold the doctrine of tribal sovereign immunity.
Nor was the 20-year-old Kiowa case the last word on this question. Astonishingly, the PTAB's decision never discusses the facts, holding, or reasoning of Bay Mills, even though the Court decided the case, unquestionably its most important recent statement on tribal sovereign immunity, in 2014. There, the Court rejected another effort to invalidate tribal sovereign immunity, stating that "it is fundamentally Congress's job, not ours, to determine whether or how to limit tribal immunity." This rule, the Court held, applied even more forcefully after Congress had had twenty years to revisit the holding in Kiowa and declined to eliminate tribal sovereign immunity. Id.
Arguably, the PTAB should give at least equal deference to congressional determinations as the Supreme Court, especially given the existence of pending legislation abrogating tribal immunity in this context. Or, setting the bar even lower, one would hope that the PTAB would at some point grapple with recent Supreme Court decisions directly on point. But they don't—in part because, as I'll discuss now, they mischaracterize the question as one of first impression.