As noted in my earlier post today with my remarks from We Robot, I’ve been busy with lots of interesting conferences and workshops in the past few weeks. Because I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. Here are my comments from the fantastic BCLT/BTLJ symposium on the administrative law of IP, where I was on a panel discussing IP small-claims tribunals:
You have already heard some insightful comments from Ben Depoorter and Pam Samuelson on copyright small-claims courts, including their analysis of problems with the proposed Copyright Alternatives in Small-Claims Enforcement (CASE) Act of 2017, as well as their thoughts on how a more narrowly tailored small-claims system might be beneficial. The main justification for introducing such a tribunal is that high litigation costs prevent claimants from pursuing valid small claims.
I’m here to provide some perspective from the patent law side, and the short version of my comments is that the idea of a patent small-claims court seems mostly dead in the United States, and I don’t see a reason to revive it.
Patent & IP blog, discussing recent news & scholarship on patents, IP theory & innovation.
Friday, April 27, 2018
We Robot Comments on Ryan Abbott's Everything is Obvious
Posted by
Lisa Larrimore Ouellette
I’ve been busy with lots of interesting conferences and workshops in the past few weeks, and since I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. First, my comments from the We Robot conference two weeks ago at Stanford:
Ryan Abbott’s Everything is Obvious is part of an interesting series of articles Ryan has been working on related to how developments in AI and computing affect legal areas such as patent law. In an earlier article, I Think, Therefore I Invent, he provocatively argued that creative computers should be considered inventors for patent and copyright purposes. Here, he focuses on how these creative computers should affect one of the most important legal standards in patent law: the requirement that an invention not be obvious to a person having ordinary skill in the art.
Ryan’s definition of “creative computers” is purposefully broad. The existing creative computers he discusses are all narrow or specific AI systems that are programmed to solve particular problems, like systems from the 1980s that were programmed to design new microchips based on certain rules and IBM’s Watson, which is currently identifying novel drug targets for pharmaceutical research. And Ryan thinks patent law already needs to change in response to these developments. But I think his primary concern is the coming of artificial general intelligence that surpasses human inventors.
Tuesday, April 24, 2018
Naruto, the Article III monkey
Posted by
Michael Risch
The Ninth Circuit released its opinion in the "monkey selfie" case, reasonably ruling that Naruto the monkey doesn't have standing under the Copyright laws. The opinion dodges the hard questions about who can be an author (thus leaving for another day questions about artificial intelligence, for example) by instead focusing on mundane things like the ability to have heirs. As a result, it's not the strongest opinion, but one that's hard to take issue with.
But I'd like to focus on an issue that's received much less attention in the press and among my colleagues. The court ruled that Naruto has Article III standing because there is a case or controversy. I'll admit that I hadn't thought about this angle, having instead gone right to the copyright authorship question (when you're a hammer, everything looks like a nail). But I guess when you're an appellate court, that whole "jurisdiction and standing section" means something even though we often skim that in our non-civ pro/con law/fed courts classes in law school.
I'll first note that the court is doubtful that PETA has standing as "next friend." Footnote 3 is a scathing indictment of its actions in this case, essentially arguing that PETA leveraged the case for its own political ends rather than for any benefit of Naruto. Youch! More on this aspect here. The court also finds that the copyright statute does not allow for next friend standing, a completely non-shocking result given precedent.
Even so, the court looks to whether Naruto has individual standing even without some sort of guardian. Surprisingly enough, this was not an issue of first impression. The Ninth Circuit had already ruled that a group of whales had Article III standing. From this, the court very quickly decides that Naruto has standing: the allegation of ownership in the photograph easily creates a case or controversy.
Once again, the best part is in the footnotes. I'll reproduce part of note 5 here:
In our view, the question of standing was explicitly decided in Cetacean. Although, as we explain later, we believe Cetacean was wrongly decided, we are bound by it. Short of an intervening decision from the Supreme Court or from an en banc panel of this court, [] we cannot escape the proposition that animals have Article III standing to sue....
[The concurrence] insightfully identifies a series of issues raised by the prospect of allowing animals to sue. For example, if animals may sue, who may represent their interests? If animals have property rights, do they also have corresponding duties? How do we prevent people (or organizations, like PETA) from using animals to advance their human agendas? In reflecting on these questions, Judge Smith [in the concurrence] reaches the reasonable conclusion that animals should not be permitted to sue in human courts. As a pure policy matter, we agree. But we are not a legislature, and this court’s decision in Cetacean limits our options. What we can do is urge this court to reexamine Cetacean. See infra note 6. What we cannot do is pretend Cetacean does not exist, or that it states something other, or milder, or more ambiguous on whether cetaceans have Article III standing.

I was glad to see this, because when I read the initial account that Article III standing had been granted, I wondered why the court would come to that decision and thought of many of these questions (and more - like what if there's no statute to deny standing, like diversity tort liability).
I'll end with perhaps my favorite part of the opinion: the award of attorneys' fees. The award itself is not surprising, but the commentary is. It notes that the court does not know how or whether the settlement in the case dealt with the possibility of such an award, but also that Naruto was not part of such a settlement. It's unclear what this means. Can Slater collect from Naruto? How would that happen? Can Slater collect from PETA because Naruto was not part of the settlement? The court, I'm sure, would say to blame any complexity on the whale case.
Sunday, April 22, 2018
Chris Walker & Melissa Wasserman on the PTAB and Administrative Law
Posted by
Lisa Larrimore Ouellette
Christopher Walker is a leading administrative law scholar, and Melissa Wasserman's excellent work on the PTO has often been featured on this blog, so when the two of them teamed up to study how the PTAB fits within broader principles of administrative law, the result—The New World of Agency Adjudication (forthcoming Calif. L. Rev.)—is self-recommending. With a few notable exceptions (such as a 2007 article by Stuart Benjamin and Arti Rai), patent law scholars have paid relatively little attention to administrative law. But the creation of the PTAB has sparked a surge of interest, including multiple Supreme Court cases and a superb symposium at Berkeley earlier this month (including Wasserman, Rai, and many others). Walker and Wasserman's new article is essential reading for anyone following these recent debates, whether you are interested in specific policy issues like PTAB panel stacking or more general trends in administrative review.
Monday, April 16, 2018
Comprehensive Data about Federal Circuit Opinions
Posted by
Michael Risch
Jason Rantanen (Iowa) has already blogged about his new article, but I thought I would mention it briefly as well. He has created a database of data about Federal Circuit opinions. An article describing it is forthcoming in the American University Law Review and is available on SSRN; the abstract is here:
Quantitative studies of the U.S. Court of Appeals for the Federal Circuit's patent law decisions are almost more numerous than the judicial decisions they examine. Each study painstakingly collects basic data about the decisions (case name, appeal number, judges, precedential status) before adding its own set of unique observations. This process is redundant, labor-intensive, and makes cross-study comparisons difficult, if not impossible. This Article and the accompanying database aim to eliminate these inefficiencies and provide a mechanism for meaningful cross-study comparisons.

This Article describes the Compendium of Federal Circuit Decisions ("Compendium"), a database created to both standardize and analyze decisions of the Federal Circuit. The Compendium contains an array of data on all documents released on the Federal Circuit's website relating to cases that originated in a federal district court or the United States Patent and Trademark Office (USPTO): essentially all opinions since 2004 and all Rule 36 affirmances since 2007, along with numerous orders and other documents.

This Article draws upon the Compendium to examine key metrics of the Federal Circuit's decisions in appeals arising from the district courts and USPTO over the past decade, updating previous work that studied similar populations during earlier time periods and providing new insights into the Federal Circuit's performance. The data reveal, among other things, an increase in the number of precedential opinions in appeals arising from the USPTO, a general increase in the quantity (but not necessarily the frequency) with which the Federal Circuit invokes Rule 36, and a return to general agreement among the judges following a period of substantial disuniformity. These metrics point to, on the surface at least, a Federal Circuit that is functioning smoothly in the post-America Invents Act world, while also hinting at areas for further study.

The article has some interesting details about opinions and trends, but I wanted to point out that this is a database now available for use in scholarly work, which is really helpful. The inclusion of non-precedential opinions adds a new wrinkle as well. Hopefully some useful studies will come of this.
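To make concrete the kind of cross-study work such a database enables, here is a minimal sketch of tallying document types by origin and year. The field names and records are hypothetical, invented for illustration; the Compendium's actual schema may differ.

```python
from collections import Counter

# Hypothetical records in the spirit of the Compendium: each Federal
# Circuit document tagged with its origin tribunal, year, and type.
# (Illustrative field names and data only.)
decisions = [
    {"origin": "USPTO", "year": 2015, "type": "precedential opinion"},
    {"origin": "USPTO", "year": 2015, "type": "Rule 36 affirmance"},
    {"origin": "District Court", "year": 2015, "type": "Rule 36 affirmance"},
    {"origin": "USPTO", "year": 2016, "type": "precedential opinion"},
    {"origin": "USPTO", "year": 2016, "type": "precedential opinion"},
    {"origin": "District Court", "year": 2016, "type": "nonprecedential opinion"},
]

# Tally document types per (origin, year) -- the kind of metric the
# article reports, e.g. precedential opinions in appeals from the USPTO.
tally = Counter((d["origin"], d["year"], d["type"]) for d in decisions)

def count(origin, year, doc_type):
    """Return how many documents match the given origin/year/type."""
    return tally[(origin, year, doc_type)]

print(count("USPTO", 2016, "precedential opinion"))  # -> 2
```

Because every study would draw from the same underlying records, two researchers running this sort of query would get comparable numbers instead of independently re-coding the same opinions.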
Tuesday, April 10, 2018
Statute v. Constitution as IP Limiting Doctrine
Posted by
Michael Risch
In his forthcoming article, "Paths or Fences: Patents, Copyrights, and the Constitution," Derek Bambauer (Arizona), notices (and provides some data to support) a discrepancy in how boundary and limiting issues are handled in patent and copyright. He notes that, for reasons he theorizes, big copyright issues are often "fenced in" by the Constitution - that is the constitution limits what can be protected. But patent issues are often resolved by statute, because the Constitution creates a "path" which Congress may follow. Thus, he notes, we have two types of IP emanating from the same source, but treated differently for unjustifiable reasons.
The article is forthcoming in Iowa Law Review, and is posted on SSRN. The abstract is here:
Congressional power over patents and copyrights flows from the same constitutional source, and the doctrines have similar missions. Yet the Supreme Court has approached these areas from distinctly different angles. With copyright, the Court readily employs constitutional analysis, building fences to constrain Congress. With patent, it emphasizes statutory interpretation, demarcating paths the legislature can follow, or deviate from (potentially at its constitutional peril). This Article uses empirical and quantitative analysis to show this divergence. It offers two potential explanations, one based on entitlement strength, the other grounded in public choice concerns. Next, the Article explores border cases where the Court could have used either fences or paths, demonstrating the effects of this pattern. It sets out criteria that the Court should employ in choosing between these approaches: countermajoritarian concerns, institutional competence, pragmatism, and avoidance theory. The Article argues that the key normative principle is that the Court should erect fences when cases impinge on intellectual property’s core constitutional concerns – information disclosure for patent and information generation for copyright. It concludes with two examples where the Court should alter its approach based on this principle.

The article is an interesting theory piece that has some practical payoff.
Wednesday, April 4, 2018
Tun-Jen Chiang: Can Patents Restrict Free Speech?
Posted by
Lisa Larrimore Ouellette
Guest post by Jason Reinecke, a 3L at Stanford Law School whose work has been previously featured on this blog.
Scholars have long argued that copyright and trademark law have the potential to violate the First Amendment right to free speech. But in Patents and Free Speech (forthcoming in the Georgetown Law Journal), Professor Tun-Jen Chiang explains that patents can similarly restrict free speech, and that they pose an even greater threat to speech than copyrights and trademarks because patent law lacks the doctrinal safeguards that have developed in that area.
Professor Chiang convincingly argues that patents frequently violate the First Amendment and provides numerous examples of patents that could restrict speech. For example, he uncovered one patent (U.S. Patent No. 6,311,211) claiming a “method of operating an advocacy network” by “sending an advocacy message” to various users. He argues that such “advocacy emails are core political speech that the First Amendment is supposed to protect. A statute or regulation that prohibited groups from sending advocacy emails would be a blatant First Amendment violation.”
Perhaps the strongest counterargument to the conclusion that patents often violate free speech is that private enforcement of property rights is generally not subject to First Amendment scrutiny, because the First Amendment only applies to acts of the government, not private individuals. Although Professor Chiang has previously concluded that this argument largely justifies copyright law’s exemption from the First Amendment, he does not come to the same conclusion for patent law for two reasons.
Monday, April 2, 2018
Masur & Mortara on Prospective Patent Decisions
Posted by
Lisa Larrimore Ouellette
Judicial patent decisions are retroactive. When the Supreme Court changed the standard for assessing obviousness in 2007 with KSR v. Teleflex, it affected not just patents filed after 2007, but also all of the existing patents that had been filed and granted under a different legal standard—upsetting existing reliance interests. But in a terrific new article, Patents, Property, and Prospectivity (forthcoming in the Stanford Law Review), Jonathan Masur and Adam Mortara argue that it doesn't have to be this way, and that in some cases, purely prospective patent changes make more sense.
As Masur and Mortara explain, retroactive changes might have benefits in terms of imposing an improved legal rule, but these changes also have social costs. Most notably, future innovators may invest less in R&D because they realize that they will not be able to rely on the law preserving their future patent rights. (Note that the private harm to existing reliance interests from past innovators is merely a wealth transfer from the public's perspective; the social harm comes from future innovators.) Moreover, courts may be less likely to implement improvements in patent law from the fear of upsetting reliance interests. Allowing courts to choose to make certain changes purely prospectively would ameliorate these concerns, and Masur and Mortara have a helpful discussion of how judges already do this in the habeas context.
The idea that judges should be able to make prospective patent rulings (and prospective judicial rulings more generally, outside habeas cases) seems novel and nonobvious and right, and I highly recommend the article. But I had lots of thoughts while reading about potential ways to further strengthen the argument:
Wednesday, March 28, 2018
Oracle v. Google Again: The Unicorn of a Fair Use Jury Reversal
Posted by
Michael Risch
It's been about two years, so I guess it was about time to write about Oracle v. Google. The trigger this time: in a blockbuster opinion (and I never use that term), the Federal Circuit has overturned a jury verdict finding that Google's use of 37 API headers was fair use, holding instead that the reuse could not be fair use as a matter of law. I won't describe the ruling in full detail - Jason Rantanen does a good job of it at Patently-O.
Instead, I'll discuss my thoughts on the opinion and some ramifications. Let's start with this one: people who know me (and who read this blog) know that my knee jerk reaction is usually that the opinion is not nearly as far-reaching and worrisome as they think. So, it may surprise a few people when I say that this opinion may well be as worrisome and far-reaching as they think.
And I say that without commenting on the merits; right or wrong, this opinion will have real repercussions. The upshot is: no more compatible compilers, interpreters, or APIs. If you create an API language, then nobody else can make a competing one, because to do so would necessarily entail copying the same structure of the input commands and parameters in your specification. If you make a language, you own the language. That's what Oracle argued for, and it won. No Quattro Pro interpreting old Lotus 1-2-3 macros, no competitive C compilers, no debugger emulators for operating systems, and potentially no competitive audio/visual playback software. This is, in short, a big deal.
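The compatibility point is easy to see in code. Here is a toy sketch (illustrative function names, not drawn from the litigation): a drop-in replacement for a library must reproduce the original's declarations verbatim, even when every line of the implementation underneath is independently written.

```python
# "Original" library: the declaration (name + parameters + defaults)
# is the API that callers depend on.
def find_max(values, start=0, end=None):
    """Return the largest element of values[start:end]."""
    end = len(values) if end is None else end
    best = values[start]
    for v in values[start + 1:end]:
        if v > best:
            best = v
    return best

# Independent reimplementation: entirely different code inside, but the
# declaration must be copied exactly or existing callers break.
def find_max_compat(values, start=0, end=None):
    """Drop-in replacement with an original implementation."""
    end = len(values) if end is None else end
    return max(values[start:end])

# A caller written against the original works unchanged with either one,
# precisely because the "header" matches.
for impl in (find_max, find_max_compat):
    assert impl([3, 1, 4, 1, 5], start=1) == 5
```

If copying that declaration structure is infringement absent fair use, the second implementation is unlawful no matter how original its body is, which is why the opinion sweeps in compilers, interpreters, and emulators.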
So, what happened here? While I'm not thrilled with the Court's reasoning, I also don't find it to be so outside the bounds of doctrine as to be without sense. Here are my thoughts.
Tuesday, March 27, 2018
Are We Running out of Trademarks? College Sports Edition
Posted by
Michael Risch
As I watched the Kansas State Wildcats play the Kentucky Wildcats in the Sweet Sixteen this year, it occurred to me that there are an awful lot of Wildcats in the tournament (five, to be exact, or nearly 7.5% of the teams). This made me think of the interesting new paper by Jeanne Fromer and Barton Beebe, called Are We Running Out of Trademarks? An Empirical Study of Trademark Depletion and Congestion. The paper is on SSRN, and is notable because it is the rare a) IP and b) empirical paper published by the Harvard Law Review. The abstract of the paper is here:
American trademark law has long operated on the assumption that there exists an inexhaustible supply of unclaimed trademarks that are at least as competitively effective as those already claimed. This core empirical assumption underpins nearly every aspect of trademark law and policy. This Article presents empirical evidence showing that this conventional wisdom is wrong. The supply of competitively effective trademarks is, in fact, exhaustible and has already reached severe levels of what we term trademark depletion and trademark congestion. We systematically study all 6.7 million trademark applications filed at the U.S. Patent and Trademark Office (PTO) from 1985 through 2016 together with the 300,000 trademarks already registered at the PTO as of 1985. We analyze these data in light of the most frequently used words and syllables in American English, the most frequently occurring surnames in the United States, and an original dataset consisting of phonetic representations of each applied-for or registered word mark included in the PTO’s Trademark Case Files Dataset. We further incorporate data consisting of all 128 million domain names registered in the .com top-level domain and an original dataset of all 2.1 million trademark office actions issued by the PTO from 2003 through 2016. These data show that rates of word-mark depletion and congestion are increasing and have reached chronic levels, particularly in certain important economic sectors. The data further show that new trademark applicants are increasingly being forced to resort to second-best, less competitively effective marks. Yet registration refusal rates continue to rise. The result is that the ecology of the trademark system is breaking down, with mounting barriers to entry, increasing consumer search costs, and an eroding public domain. 
In light of our empirical findings, we propose a mix of reforms to trademark law that will help to preserve the proper functioning of the trademark system and further its core purposes of promoting competition and enhancing consumer welfare.

The paper is really well developed and interesting. They consider common law marks as well as domain names. Also worth a read is Written Description's own Lisa Larrimore Ouellette's response, called Does Running Out of (Some) Trademarks Matter?, also in Harvard Law Review and on SSRN.
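A toy illustration of the depletion idea (made-up word lists, not the authors' data): as more of the most frequent words are claimed as marks, the share left for new applicants shrinks, pushing entrants toward what the authors call second-best marks.

```python
# Hypothetical proxy for the most competitively effective one-word marks:
# the most frequent English words. Illustrative lists only.
frequent_words = ["apple", "delta", "united", "prime", "shell",
                  "target", "amazon", "oracle", "sky", "visa"]

# Hypothetical marks already registered in some sector.
registered = {"apple", "delta", "united", "prime", "shell", "target", "amazon"}

# Depletion rate: share of the top-N word list already claimed.
claimed = [w for w in frequent_words if w in registered]
depletion_rate = len(claimed) / len(frequent_words)
print(f"{depletion_rate:.0%} of the top {len(frequent_words)} words are taken")  # -> 70%

# A new applicant must fall back to what's left.
available = [w for w in frequent_words if w not in registered]
print(available)  # -> ['oracle', 'sky', 'visa']
```

The paper's actual analysis runs this sort of comparison at scale, against millions of applications, frequency-ranked vocabulary, surnames, and phonetic representations, but the core depletion metric is a claimed-versus-available calculation like this one.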
Wednesday, March 21, 2018
Blurred Lines Verdict Affirmed - How Bad is It?
Posted by
Michael Risch
The Ninth Circuit ruled on Williams v. Gaye today, the "Blurred Lines" verdict that found infringement and some hefty damages. I've replied to a few of my colleagues' Twitter posts today, so I figured I'd stop harassing them with my viewpoint and just make a brief blog post.
Three years ago this week, I blogged here that:
To be clear, I'm not saying that's how I would have voted were I on the jury. I wasn't in the courtroom.
So, why are (almost) all my colleagues bent out of shape?
First, there is a definite view that the only thing copied here was a "vibe," and that the scenes a faire and other unprotected expression should have been filtered out. I am a big fan of filtration; I wrote an article on it. I admit to not being an expert on music filtration. But I do know that there was significant expert testimony here that more than a vibe was copied (which was enough to avoid summary judgment), and that once you're over summary judgment, all bets are off on whether the jury will filter out the "proper" way. Perhaps the jury didn't; but that's not what we ask on an appeal. So, the only way you take it from a jury is to say that there was no possible way to credit the plaintiff's expert that more than a vibe was copied. I've yet to see an analysis based on the actual evidence in the case that shows this (though I have seen plenty of folks disagreeing with Plaintiff's expert), though if someone has one, please point me to it and I'll gladly post it here. The court, for its part, is hostile to such "technical" parsing in music cases (in a way that it is not in photography and computer cases). But that's nothing new; the court cites old law for this proposition, so its hostility shouldn't be surprising, even if it is concerning.
Second, the court seems to double down on the "inverse ratio" rule:
The defendants appealed this instruction, but only on filtration grounds (which were rejected), and not on inverse ratio type grounds.
In short, jury determinations of music copyright is messy business. There's a lot not to like about the Ninth Circuit's intrinsic/extrinsic test (I'm not a big fan, myself). The jury instructions could probably be improved on filtration (there were other filtration instructions, I believe).
But here's where I end up:
Three years ago this week, I blogged here that:
People have strong feelings about this case. Most people I know think it was wrongly decided. But I think that copyright law would be better served if we examined the evidence to see why it was wrongly decided. Should the court have ruled that the similarities presented by the expert were simply never enough to show infringement? Should we not allow juries to consider the whole composition (note that this usually favors the defendant)? Should we provide more guidance to juries making determinations? Was the wrong evidence admitted (that is, is my view of what the evidence was wrong)?
But what I don't think is helpful for the system is to assume straw evidence - it's easy to attack a decision when the court lets the jury hear something it shouldn't or when the jury ignores the instructions as they sometimes do. I'm not convinced that's what happened here; it's much harder to take the evidence as it is and decide whether we're doing this whole music copyright infringement thing the right way.My sense then was that it would come down to how the appeals court would view the evidence, and it turns out I was right. I find this opinion to be...unremarkable. The jury heard evidence of infringement, and ruled that there was infringement. The court affirmed because that's what courts do when there is a jury verdict. There was some evidence of infringement, and that's enough.
To be clear, I'm not saying that's how I would have voted were I on the jury. I wasn't in the courtroom.
So, why are (almost) all my colleagues bent out of shape?
First, there is a definite view that the only thing copied here was a "vibe," and that the scenes a faire and other unprotected expression should have been filtered out. I am a big fan of filtration; I wrote an article on it. I admit to not being an expert on music filtration. But I do know that there was significant expert testimony here that more than a vibe was copied (which was enough to avoid summary judgment), and that once you're over summary judgment, all bets are off on whether the jury will filter out the "proper" way. Perhaps the jury didn't; but that's not what we ask on an appeal. So, the only way you take it from a jury is to say that there was no possible way to credit the plaintiff's expert that more than a vibe was copied. I've yet to see an analysis based on the actual evidence in the case that shows this (though I have seen plenty of folks disagreeing with Plaintiff's expert); if someone has one, please point me to it and I'll gladly post it here. The court, for its part, is hostile to such "technical" parsing in music cases (in a way that it is not in photography and computer cases). But that's nothing new; the court cites old law for this proposition, so its hostility shouldn't be surprising, even if it is concerning.
Second, the court seems to double down on the "inverse ratio" rule:
We adhere to the “inverse ratio rule,” which operates like a sliding scale: The greater the showing of access, the lesser the showing of substantial similarity is required.

This is really bothersome, because just recently, the court said that the inverse ratio rule shouldn't be used to make it easier to prove improper appropriation:
That rule does not help Rentmeester because it assists only in proving copying, not in proving unlawful appropriation, the only element at issue in this case.

I suppose that you can read the new case as just ignoring Rentmeester's statement, but I don't think so. First, the inverse ratio rule, for better or worse, is old Ninth Circuit law, which a panel can't simply ignore. Second, it is relevant for the question of probative copying (that is, was there copying at all?), which was disputed in this case, unlike Rentmeester. Third, there is no indication that this rule had any bearing on the jury's verdict. The inverse ratio rule was not part of the instruction that asked the jury to determine unlawful appropriation (and the Defendants did not appear to appeal the inverse ratio instruction), nor was the rule even stated in the terms used by the court at all in the jury instructions:
The defendants appealed this instruction, but only on filtration grounds (which were rejected), and not on inverse ratio type grounds.
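For what it's worth, the "sliding scale" idea can be pictured as a toy function - purely my own illustration, with invented thresholds; no court assigns numbers to this rule:

```python
def required_similarity(access: float) -> float:
    """Toy model of the inverse ratio rule's sliding scale.

    `access` runs from 0 (no evidence of access) to 1 (overwhelming
    evidence of access). The numeric thresholds are invented purely
    for illustration.
    """
    ceiling = 0.8  # hypothetical similarity showing needed with weak access
    floor = 0.3    # some similarity showing is always required
    return floor + (ceiling - floor) * (1.0 - access)

# The greater the showing of access, the lesser the showing of
# substantial similarity required:
assert required_similarity(0.9) < required_similarity(0.1)
```

Even in this cartoon, note that the scale bears only on whether copying happened at all; per Rentmeester, it should not lower the bar for unlawful appropriation.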
In short, jury determination of music copyright is a messy business. There's a lot not to like about the Ninth Circuit's intrinsic/extrinsic test (I'm not a big fan, myself). The jury instructions could probably be improved on filtration (there were other filtration instructions, I believe).
But here's where I end up:
- This ruling is not terribly surprising, and is wholly consistent with Ninth Circuit precedent (for better or worse)
- The ruling could have been written more clearly to avoid some of the consternation and confusion about the inverse ratio rule (among other things)
- This ruling doesn't much change Ninth Circuit law, nor dilute the importance of Rentmeester
- This ruling is based in large part on the evidence, which was hotly disputed at trial
- If you want to win a copyright case as a defendant, better hope to do it before you get to a jury. You can still win in front of the jury, but if it doesn't go your way the appeal will be tough to win.
Tuesday, March 20, 2018
Evidence on Polarization in IP
Posted by
Michael Risch
Since my co-blogger Lisa Ouellette has not tooted her own horn about this, I thought I would do so for her. She, Maggie Wittlin (Nebraska), and Greg Mandel (Temple, its Dean, no less) have a new article forthcoming in UC Davis L. Rev. called What Causes Polarization on IP Policy? A draft is on SSRN, and the abstract is here:
Polarization on contentious policy issues is a problem of national concern for both hot-button cultural issues such as climate change and gun control and for issues of interest to more specialized constituencies. Cultural debates have become so contentious that in many cases people are unable to agree even on the underlying facts needed to resolve these issues. Here, we tackle this problem in the context of intellectual property law. Despite an explosion in the quantity and quality of empirical evidence about the intellectual property system, IP policy debates have become increasingly polarized. This disagreement about existing evidence concerning the effects of the IP system hinders democratic deliberation and stymies progress.
Based on a survey of U.S. IP practitioners, this Article investigates the source of polarization on IP issues, with the goal of understanding how to better enable evidence-based IP policymaking. We hypothesized that, contrary to intuition, more evidence on the effects of IP law would not resolve IP disputes but would instead exacerbate them. Specifically, IP polarization might stem from "cultural cognition," a form of motivated reasoning in which people form factual beliefs that conform to their cultural predispositions and interpret new evidence in light of those beliefs. The cultural cognition framework has helped explain polarization over other issues of national concern, but it has never been tested in a private-law context.
Our survey results provide support for the influence of cultural cognition, as respondents with a relatively hierarchical worldview are more likely to believe strong patent protection is necessary to spur innovation. Additionally, having a hierarchical worldview and also viewing patent rights as property rights may be a better predictor of patent strength preferences than either alone. Taken together, our findings suggest that individuals' cultural preferences affect how they understand new information about the IP system. We discuss the implications of these results for fostering evidence-based IP policymaking, as well as for addressing polarization more broadly. For example, we suggest that empirical legal studies borrow from medical research by initiating a practice of advance registration of new projects-in which the planned methodology is publicly disclosed before data are gathered-to promote broader acceptance of the results.

This work follows Lisa's earlier essay on Cultural Cognition in IP. I think this is a fascinating and interesting area, and it certainly seems to be more salient as stakes have increased. I am not without my own priors, but I do take pride in having my work cited by both sides of the debate.
The abstract doesn't do justice to the results - the paper is worth a read, with some interesting graphs as well. One of the more interesting findings is that political party has almost no correlation with views on copyright, but relatively strong correlation with views on patenting. This latter result makes me an odd duck, as I lean more (way, in some cases) liberal but have also leaned more pro-patent than many of my colleagues. I think there are reasons for that, but we don't need to debate them here.
In any event, there is a lot of work in this paper that the authors tie to cultural cognition - that is, motivated reasoning based on priors. I don't have an opinion on the measures they use to define it, but they seem reasonable enough and they follow a growing literature in this area. I think anyone interested in current IP debates (or cranky about them) could learn a few things from this study.
Tuesday, March 13, 2018
Which Patents Get Instituted During Inter Partes Review?
Posted by
Michael Risch
I recently attended PatCon 8 at the University of San Diego Law School. It was a great event, with lots of interesting papers. One paper I enjoyed from one of the (many) empirical sessions was Determinants of Patent Quality: Evidence from Inter Partes Review Proceedings by Brian Love (Santa Clara), Shawn Miller (Stanford), and Shawn Ambwani (Unified Patents). The paper is on SSRN and the abstract is here:
We study the determinants of patent “quality”—the likelihood that an issued patent can survive a post-grant validity challenge. We do so by taking advantage of two recent developments in the U.S. patent system. First, rather than relying on the relatively small and highly-selected set of patents scrutinized by courts, we study instead the larger and broader set of patents that have been subjected to inter partes review, a recently established administrative procedure for challenging the validity of issued patents. Second, in addition to characteristics observable on the face of challenged patents, we utilize datasets recently made available by the USPTO to gather detailed information about the prosecution and examination of studied patents. We find a significant relationship between validity and a number of characteristics of a patent and its owner, prosecutor, examiner, and prosecution history. For example, patents prosecuted by large law firms, pharmaceutical patents, and patents with more words per claim are significantly more likely to survive inter partes review. On the other hand, patents obtained by small entities, patents assigned to examiners with higher allowance rates, patents with more U.S. patent classes, and patents with higher reverse citation counts are less likely to survive review. Our results reveal a number of strategies that may help applicants, patent prosecutors, and USPTO management increase the quality of issued patents. Our findings also suggest that inter partes review is, as Congress intended, eliminating patents that appear to be of relatively low quality.

The study does a good job of identifying a variety of variables that do (and do not) correlate with whether the PTO institutes a review of patents. Some examples of interesting findings:
- Pharma patents are less likely to be instituted
- Solo/small firm prosecuted patents are more likely to be instituted
- Patents with more words in claim 1 (i.e. narrower patents) are less likely to be instituted
- Patents with more backward citations are more likely to be instituted (this is counterintuitive, but consistent with my own study of patent litigation)
- Patent examiner characteristics affect likelihood of institution
(Old middle-age rambling ahead.) I get that whether a patent is valid or not is an important quality indicator, and I've made similar claims. I just think the authors have to spend a lot of time/space (it's an 84 page paper) trying to support their claim.

For example, they argue that IPRs are more complete compared to litigation, because litigation has selection effects both in what gets litigated and in settlement post-litigation. But IPRs suffer from the same problem. Notwithstanding some differences, there's a high degree of matching between IPRs and litigation, and many petitions settle both before and after institution.
Which leads to a second point: these are institutions - not final determinations. Now, they treat instituted patents whose claims are upheld as non-instituted, but with 40% of the cases still pending (and a declining institution rate as time goes on) we don't know how the incomplete and settled institutions look. More controversially, they count as low quality any patent where any single claim is instituted. So, you could challenge 100 claims, have one instituted, and the patent falls into the "bad" pile.
Now, they present data that shows it is not quite so bad as this, but the point remains: with high settlements and partial invalidation, it's hard work to make a general claim about patent quality. To be fair, the authors point out all of these limitations in their draft. It is not as though they aren't aware of the criticism, and that's a good thing. I suppose, then, it's just a style difference. Regardless, this paper is worth checking out.
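To make the coding concern concrete, here is a toy sketch (all numbers hypothetical, and this is my own illustration of the coding rule as I read the draft) showing how a patent-level "any claim instituted" coding can diverge from a claim-level view of the same data:

```python
# Toy illustration (hypothetical data): under an "any instituted claim ->
# low quality" rule, a patent with 1 of 100 challenged claims instituted
# is coded the same as a patent with all 10 of its claims instituted.
patents = [
    {"challenged": 100, "instituted": 1},
    {"challenged": 20, "instituted": 0},
    {"challenged": 10, "instituted": 10},
]

# Patent-level coding: any institution puts the patent in the "bad" pile
bad_patents = sum(1 for p in patents if p["instituted"] > 0)
patent_level_rate = bad_patents / len(patents)  # 2 of 3 patents coded "bad"

# Claim-level view of the same data
total_claims = sum(p["challenged"] for p in patents)
instituted_claims = sum(p["instituted"] for p in patents)
claim_level_rate = instituted_claims / total_claims  # 11 of 130 claims

print(f"patent-level 'bad' rate: {patent_level_rate:.2f}")
print(f"claim-level institution rate: {claim_level_rate:.2f}")
```

The gap between the two rates is the point: with partial institutions (and settlements) in the mix, the patent-level number alone can overstate how much "bad patent" there is.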
Friday, March 9, 2018
Sapna Kumar: The Fate Of "Innovation Nationalism" In The Age of Trump
Posted by
Camilla Hrdy
One of the biggest pieces of news last week was that President Trump will be imposing tariffs on foreign steel and aluminum because, he tweets, IF YOU DON'T HAVE STEEL, YOU DON'T HAVE A COUNTRY. Innovation Nationalism, a timely new article by Professor Sapna Kumar at the University of Houston School of Law, explains the role that innovation and patent law play in the "global resurgence of nationalism" in the age of Trump. After reading her article, I think Trump should replace this tweet with: IF YOU DON'T HAVE PATENTS, YOU DON'T HAVE A COUNTRY.
Tuesday, March 6, 2018
The Quest to Patent Perpetual Motion
Posted by
Michael Risch
Those familiar with my work will know that I am a big fan of utility doctrine. I think it is underused and misunderstood. When I teach about operable utility, I use perpetual motion machines as the type of fantastic (and not in a good way) invention that will be rejected by the PTO as inoperable due to violating the laws of thermodynamics.
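For the non-physicists: the reason these machines are per se inoperable is the first law of thermodynamics. Stated loosely (my gloss, not anything from the case), for any closed device running over a complete cycle, energy is conserved, so

```latex
E_{\text{out}} \;\le\; E_{\text{in}}
\qquad \text{(first law: energy can be converted or lost, never created)}
```

A claim reciting an energy output greater than the energy input - i.e., $E_{\text{out}} > E_{\text{in}}$ - asserts energy from nowhere, which is why the PTO treats such claims as inoperable on their face.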
On my way to a conference last week, I watched a great documentary called Newman about one inventor's quest to patent a perpetual motion machine. The trailer is here, and you can stream it pretty cheaply (I assume it will come to a service at some point):
The movie is really well done, I think. The first two-thirds is a great survey of old footage, along with interviews of many people involved in the saga. The final third focuses on what became of Newman after his court case, leading to a surprising ending that colors how we should look at the first part of the movie. The two acts work really well together, and I think this movie should be of interest to anyone, and not just patent geeks.
That said, I'd like to spend a bit of time on the patent aspects, namely utility doctrine. Wikipedia has a pretty detailed entry, with links to many of the relevant documents. The Federal Circuit case, Newman v. Quigg, as well as the district court case, also lay out many of the facts. The claim was extremely broad:
38. A device which increases the availability of usable electrical energy or usable motion, or both, from a given mass or masses by a device causing a controlled release of, or reaction to, the gyroscopic type energy particles making up or coming from the atoms of the mass or masses, which in turn, by any properly designed system, causes an energy output greater than the energy input.

Here are some thoughts:
First, the case continues what I believe to be a central confusion in utility. The initial rejection was not based on Section 101 ("new and useful") but on Section 112 (enablement to "make and use"). This is a problematic distinction. As the Patent Board of Appeals even noted: "We do not doubt that a worker in this art with appellant's specification before him could construct a motor ... as shown in Fig. 6 of the drawing." Well, then one could make and use it, even if it failed at its essential purpose. Now, there is an argument that the claim is so broad that Newman didn't enable every device claimed (as in the Incandescent Lamp case), but that's not what the board was describing. The section 101 defense was not added until 1986, well into the district court proceeding. The district court later makes some actual 112 comments (that the description is metaphysical), but this is not the same as failing to achieve the claimed outcome. The Federal Circuit makes clear that 112 can support this type of rejection: "neither is the patent applicant relieved of the requirement of teaching how to achieve the claimed result, even if the theory of operation is not correctly explained or even understood." But this is not really enablement - it's operable utility! The 112 theory of utility is that you can't enable someone to use an invention if it's got no use. But just about every invention has some use. I write about this confusion in my article A Surprisingly Useful Requirement.
Second, this leads to another key point of the case. The failed claim was primarily due to the insistence on claiming perpetual motion. Had Newman claimed a novel motor, then the claim might have survived (though there was a 102/103 rejection somewhere in the history). One of the central themes of the documentary was that Newman needed this patent to commercialize his invention, so others could not steal the idea. He could not share it until it was protected. But he could have achieved this goal with a much narrower patent that did not claim perpetual motion. That he did not attempt a narrower patent is quite revealing, and foreshadows some of the interesting revelations from the end of the documentary.
Third, the special master in the case, William Schuyler, had been Commissioner of Patents. He recommended that the Court grant the patent, finding sufficient evidence to support the claims. It is surprising that he would have issued a report finding operable utility here, putting the Patent Office in the unenviable position of attacking its former chief.
Fourth, the case is an illustration in waiver. Newman claimed that the device only worked properly when ungrounded. More important, the output was measured in complicated ways (according to his own witnesses). Yet, Newman failed to indicate how measurement should be done when it counted: "Dr. Hebner [of National Bureau of Standards] then asked Newman directly where he intended that the power output be measured. His attorney advised Newman not to answer, and Newman and his coterie departed without further comment." The court finds a similar waiver with respect to whether the device should have been grounded, an apparently key requirement. These two waivers allowed the courts to credit the testing over Newman's later objections that the testing was improperly handled.
I'm sure I had briefly read Newman v. Quigg at some point in the past, and the case is cited as the seminal "no perpetual motion machine" case. Even so, I'm glad I watched the documentary to get a better picture of the times and hoopla that went with this, as well as what became of the man who claimed to defy the laws of thermodynamics.