Wednesday, March 28, 2018

Oracle v. Google Again: The Unicorn of a Fair Use Jury Reversal

It's been about two years, so I guess it was about time to write about Oracle v. Google. The trigger this time: in a blockbuster opinion (and I never use that term), the Federal Circuit has overturned a jury verdict finding that Google's use of 37 API headers was fair use, holding instead that the reuse could not be fair use as a matter of law. I won't describe the ruling in full detail - Jason Rantanen does a good job of it at Patently-O.

Instead, I'll discuss my thoughts on the opinion and some ramifications. Let's start with this one: people who know me (and who read this blog) know that my knee-jerk reaction is usually that a given opinion is not nearly as far-reaching and worrisome as people think. So, it may surprise a few people when I say that this opinion may well be as worrisome and far-reaching as they think.

And I say that without commenting on the merits; right or wrong, this opinion will have real repercussions. The upshot is: no more compatible compilers, interpreters, or APIs. If you create an API or a language, then nobody else can make a competing, compatible one, because doing so would necessarily entail copying the same structure of input commands and parameters from your specification. If you make a language, you own the language. That's what Oracle argued for, and it won. No Quattro Pro interpreting old Lotus 1-2-3 macros, no competitive C compilers, no debugger emulators for operating systems, and potentially no competitive audio/visual playback software. This is, in short, a big deal.
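To see why compatibility forces copying, here is a minimal, hypothetical sketch in Java (the interface, class, and method names are invented for illustration and are not drawn from the litigation): a competing implementation of an existing API must repeat the declarations that callers already depend on, even though every method body is written from scratch.

    // The published "specification" that existing programs are written against.
    interface SpreadsheetMacros {
        double sum(double[] cells);
        double average(double[] cells);
    }

    // An independent, competing implementation. The declarations above are
    // necessarily reproduced verbatim; only the method bodies are original.
    class CompetingEngine implements SpreadsheetMacros {
        public double sum(double[] cells) {
            double total = 0;
            for (double c : cells) total += c;
            return total;
        }

        public double average(double[] cells) {
            return cells.length == 0 ? 0 : sum(cells) / cells.length;
        }
    }

Rename a method or change a parameter list and every program written against the original specification breaks, which is why, under the Federal Circuit's reasoning, a would-be compatible competitor cannot simply write around the structure.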

So, what happened here? While I'm not thrilled with the Court's reasoning, I also don't find it to be so outside the bounds of doctrine as to be without sense. Here are my thoughts.

Tuesday, March 27, 2018

Are We Running out of Trademarks? College Sports Edition

As I watched the Kansas State Wildcats play the Kentucky Wildcats in the Sweet Sixteen this year, it occurred to me that there are an awful lot of Wildcats in the tournament (five, to be exact, or nearly 7.5% of the teams).  This made me think of the interesting new paper by Jeanne Fromer and Barton Beebe, called Are We Running Out of Trademarks? An Empirical Study of Trademark Depletion and Congestion. The paper is on SSRN, and is notable because it is the rare a) IP and b) empirical paper published by the Harvard Law Review. The abstract of the paper is here:
American trademark law has long operated on the assumption that there exists an inexhaustible supply of unclaimed trademarks that are at least as competitively effective as those already claimed. This core empirical assumption underpins nearly every aspect of trademark law and policy. This Article presents empirical evidence showing that this conventional wisdom is wrong. The supply of competitively effective trademarks is, in fact, exhaustible and has already reached severe levels of what we term trademark depletion and trademark congestion. We systematically study all 6.7 million trademark applications filed at the U.S. Patent and Trademark Office (PTO) from 1985 through 2016 together with the 300,000 trademarks already registered at the PTO as of 1985. We analyze these data in light of the most frequently used words and syllables in American English, the most frequently occurring surnames in the United States, and an original dataset consisting of phonetic representations of each applied-for or registered word mark included in the PTO’s Trademark Case Files Dataset. We further incorporate data consisting of all 128 million domain names registered in the .com top-level domain and an original dataset of all 2.1 million trademark office actions issued by the PTO from 2003 through 2016. These data show that rates of word-mark depletion and congestion are increasing and have reached chronic levels, particularly in certain important economic sectors. The data further show that new trademark applicants are increasingly being forced to resort to second-best, less competitively effective marks. Yet registration refusal rates continue to rise. The result is that the ecology of the trademark system is breaking down, with mounting barriers to entry, increasing consumer search costs, and an eroding public domain. In light of our empirical findings, we propose a mix of reforms to trademark law that will help to preserve the proper functioning of the trademark system and further its core purposes of promoting competition and enhancing consumer welfare.
The paper is really well developed and interesting. They consider common law marks as well as domain names. Also worth a read is Written Description's own Lisa Larrimore Ouellette's response, called Does Running Out of (Some) Trademarks Matter?, also in Harvard Law Review and on SSRN.

Wednesday, March 21, 2018

Blurred Lines Verdict Affirmed - How Bad is It?

The Ninth Circuit ruled on Williams v. Gaye today, the "Blurred Lines" verdict that found infringement and some hefty damages. I've replied to a few of my colleagues' Twitter posts today, so I figured I'd stop harassing them with my viewpoint and just make a brief blog post.

Three years ago this week, I blogged here that:
People have strong feelings about this case. Most people I know think it was wrongly decided. But I think that copyright law would be better served if we examined the evidence to see why it was wrongly decided. Should the court have ruled that the similarities presented by the expert were simply never enough to show infringement? Should we not allow juries to consider the whole composition (note that this usually favors the defendant)? Should we provide more guidance to juries making determinations? Was the wrong evidence admitted (that is, is my view of what the evidence was wrong)?
But what I don't think is helpful for the system is to assume straw evidence - it's easy to attack a decision when the court lets the jury hear something it shouldn't or when the jury ignores the instructions as they sometimes do. I'm not convinced that's what happened here; it's much harder to take the evidence as it is and decide whether we're doing this whole music copyright infringement thing the right way.
My sense then was that it would come down to how the appeals court would view the evidence, and it turns out I was right. I find this opinion to be...unremarkable. The jury heard evidence of infringement, and ruled that there was infringement. The court affirmed because that's what courts do when there is a jury verdict. There was some evidence of infringement, and that's enough.

To be clear, I'm not saying that's how I would have voted were I on the jury. I wasn't in the courtroom.

So, why are (almost) all my colleagues bent out of shape?

First, there is a definite view that the only thing copied here was a "vibe," and that the scenes a faire and other unprotected expression should have been filtered out. I am a big fan of filtration; I wrote an article on it. I admit to not being an expert on music filtration. But I do know that there was significant expert testimony here that more than a vibe was copied (which was enough to avoid summary judgment), and that once you're past summary judgment, all bets are off on whether the jury will do the filtering the "proper" way. Perhaps the jury didn't; but that's not what we ask on appeal. So, the only way you take it from a jury is to say that there was no possible way to credit the plaintiff's expert that more than a vibe was copied. I've yet to see an analysis based on the actual evidence in the case that shows this (though I have seen plenty of folks disagreeing with the plaintiff's expert); if someone has one, please point me to it and I'll gladly post it here. The court, for its part, is hostile to such "technical" parsing in music cases (in a way that it is not in photography and computer cases). But that's nothing new; the court cites old law for this proposition, so its hostility shouldn't be surprising, even if it is concerning.

Second, the court seems to double down on the "inverse ratio" rule:
We adhere to the “inverse ratio rule,” which operates like a sliding scale: The greater the showing of access, the lesser the showing of substantial similarity is required.
This is really bothersome, because just recently, the court said that the inverse ratio rule shouldn't be used to make it easier to prove improper appropriation:
That rule does not help Rentmeester because it assists only in proving copying, not in proving unlawful appropriation, the only element at issue in this case.
I suppose that you can read the new case as just ignoring Rentmeester's statement, but I don't think so. First, the inverse ratio rule, for better or worse, is old Ninth Circuit law, which a panel can't simply ignore. Second, it is relevant to the question of probative copying (that is, was there copying at all?), which was disputed in this case, unlike in Rentmeester. Third, there is no indication that this rule had any bearing on the jury's verdict. The inverse ratio rule was not part of the instruction that asked the jury to determine unlawful appropriation (and the defendants did not appear to appeal the inverse ratio instruction), nor was the rule even stated in the jury instructions in the terms the court used.
The defendants appealed this instruction, but only on filtration grounds (which were rejected), and not on inverse ratio type grounds.

In short, jury determination of music copyright infringement is a messy business. There's a lot not to like about the Ninth Circuit's intrinsic/extrinsic test (I'm not a big fan, myself). The jury instructions could probably be improved on filtration (there were other filtration instructions, I believe).

But here's where I end up:
  1. This ruling is not terribly surprising, and is wholly consistent with Ninth Circuit precedent (for better or worse)
  2. The ruling could have been written more clearly to avoid some of the consternation and confusion about the inverse ratio rule (among other things)
  3. This ruling doesn't much change Ninth Circuit law, nor does it dilute the importance of Rentmeester
  4. This ruling is based in large part on the evidence, which was hotly disputed at trial
  5. If you want to win a copyright case as a defendant, better hope to do it before you get to a jury. You can still win in front of the jury, but if it doesn't go your way the appeal will be tough to win.

Tuesday, March 20, 2018

Evidence on Polarization in IP

Since my coblogger Lisa Ouellette has not tooted her own horn about this, I thought I would do so for her. She, Maggie Wittlin (Nebraska), and Greg Mandel (Temple, its Dean, no less) have a new article forthcoming in UC Davis L. Rev. called What Causes Polarization on IP Policy? A draft is on SSRN, and the abstract is here:
Polarization on contentious policy issues is a problem of national concern for both hot-button cultural issues such as climate change and gun control and for issues of interest to more specialized constituencies. Cultural debates have become so contentious that in many cases people are unable to agree even on the underlying facts needed to resolve these issues. Here, we tackle this problem in the context of intellectual property law. Despite an explosion in the quantity and quality of empirical evidence about the intellectual property system, IP policy debates have become increasingly polarized. This disagreement about existing evidence concerning the effects of the IP system hinders democratic deliberation and stymies progress.
Based on a survey of U.S. IP practitioners, this Article investigates the source of polarization on IP issues, with the goal of understanding how to better enable evidence-based IP policymaking. We hypothesized that, contrary to intuition, more evidence on the effects of IP law would not resolve IP disputes but would instead exacerbate them. Specifically, IP polarization might stem from "cultural cognition," a form of motivated reasoning in which people form factual beliefs that conform to their cultural predispositions and interpret new evidence in light of those beliefs. The cultural cognition framework has helped explain polarization over other issues of national concern, but it has never been tested in a private-law context.
Our survey results provide support for the influence of cultural cognition, as respondents with a relatively hierarchical worldview are more likely to believe strong patent protection is necessary to spur innovation. Additionally, having a hierarchical worldview and also viewing patent rights as property rights may be a better predictor of patent strength preferences than either alone. Taken together, our findings suggest that individuals' cultural preferences affect how they understand new information about the IP system. We discuss the implications of these results for fostering evidence-based IP policymaking, as well as for addressing polarization more broadly. For example, we suggest that empirical legal studies borrow from medical research by initiating a practice of advance registration of new projects - in which the planned methodology is publicly disclosed before data are gathered - to promote broader acceptance of the results.
This work follows Lisa's earlier essay on Cultural Cognition in IP. I think this is a fascinating area, and it certainly seems to be more salient as the stakes have increased. I am not without my own priors, but I do take pride in having my work cited by both sides of the debate.

The abstract doesn't do justice to the results - the paper is worth a read, with some interesting graphs as well. One of the more interesting findings is that political party has almost no correlation with views on copyright, but a relatively strong correlation with views on patenting. This latter result makes me an odd duck, as I lean more (way, in some cases) liberal but have also leaned more pro-patent than many of my colleagues. I think there are reasons for that, but we don't need to debate them here.

In any event, there is a lot of work in this paper that the authors tie to cultural cognition - that is, motivated reasoning based on priors. I don't have an opinion on the measures they use to define it, but they seem reasonable enough and they follow a growing literature in this area. I think anyone interested in current IP debates (or cranky about them) could learn a few things from this study.

Tuesday, March 13, 2018

Which Patents Get Instituted During Inter Partes Review?

I recently attended PatCon 8 at the University of San Diego School of Law. It was a great event, with lots of interesting papers. One paper I enjoyed from one of the (many) empirical sessions was Determinants of Patent Quality: Evidence from Inter Partes Review Proceedings by Brian Love (Santa Clara), Shawn Miller (Stanford), and Shawn Ambwani (Unified Patents). The paper is on SSRN and the abstract is here:
We study the determinants of patent “quality”—the likelihood that an issued patent can survive a post-grant validity challenge. We do so by taking advantage of two recent developments in the U.S. patent system. First, rather than relying on the relatively small and highly-selected set of patents scrutinized by courts, we study instead the larger and broader set of patents that have been subjected to inter partes review, a recently established administrative procedure for challenging the validity of issued patents. Second, in addition to characteristics observable on the face of challenged patents, we utilize datasets recently made available by the USPTO to gather detailed information about the prosecution and examination of studied patents. We find a significant relationship between validity and a number of characteristics of a patent and its owner, prosecutor, examiner, and prosecution history. For example, patents prosecuted by large law firms, pharmaceutical patents, and patents with more words per claim are significantly more likely to survive inter partes review. On the other hand, patents obtained by small entities, patents assigned to examiners with higher allowance rates, patents with more U.S. patent classes, and patents with higher reverse citation counts are less likely to survive review. Our results reveal a number of strategies that may help applicants, patent prosecutors, and USPTO management increase the quality of issued patents. Our findings also suggest that inter partes review is, as Congress intended, eliminating patents that appear to be of relatively low quality.
 The study does a good job of identifying a variety of variables that do (and do not) correlate with whether the PTO institutes a review of patents. Some examples of interesting findings:
  • Pharma patents are less likely to be instituted
  • Solo/small firm prosecuted patents are more likely to be instituted
  • Patents with more words in claim 1 (i.e. narrower patents) are less likely to be instituted
  • Patents with more backward citations are more likely to be instituted (this is counterintuitive, but consistent with my own study of patent litigation)
  • Patent examiner characteristics affect likelihood of institution
There's a lot of good data here, and the authors did a lot of useful work to gather information that's not simply on the face of the patent. The paper is worth a good read. My primary criticism is the one I voiced during the session at PatCon - there's something about framing this as a generalized patent quality study that rankles me. (Warning: cranky middle-aged rambling ahead.) I get that whether a patent is valid or not is an important quality indicator, and I've made similar claims. I just think the authors have to spend a lot of time and space (it's an 84-page paper) trying to support that claim.

For example, they argue that IPRs are more complete compared to litigation, because litigation has selection effects both in what gets litigated and in settlement post-litigation. But IPRs suffer from the same problem. Notwithstanding some differences, there's a high degree of matching between IPRs and litigation, and many petitions settle both before and after institution.

Which leads to a second point: these are institutions - not final determinations. Now, they treat instituted patents whose claims are ultimately upheld as non-instituted, but with 40% of the cases still pending (and a declining institution rate as time goes on) we don't know how the incomplete and settled institutions will look. More controversially, they count as low quality any patent where any single claim is instituted. So, you could challenge 100 claims, have one instituted, and the patent falls into the "bad" pile.

Now, they present data that shows it is not quite so bad as this, but the point remains: with high settlements and partial invalidation, it's hard work to make a general claim about patent quality. To be fair, the authors point out all of these limitations in their draft. It is not as though they aren't aware of the criticism, and that's a good thing. I suppose, then, it's just a style difference. Regardless, this paper is worth checking out.

Friday, March 9, 2018

Sapna Kumar: The Fate Of "Innovation Nationalism" In The Age of Trump

One of the biggest pieces of news last week was that President Trump will be imposing tariffs on foreign steel and aluminum because, he tweets, IF YOU DON'T HAVE STEEL, YOU DON'T HAVE A COUNTRY. Innovation Nationalism, a timely new article by Professor Sapna Kumar of the University of Houston Law Center, explains the role that innovation and patent law play in the "global resurgence of nationalism" in the age of Trump. After reading her article, I think Trump should replace this tweet with: IF YOU DON'T HAVE PATENTS, YOU DON'T HAVE A COUNTRY.

Tuesday, March 6, 2018

The Quest to Patent Perpetual Motion

Those familiar with my work will know that I am a big fan of utility doctrine. I think it is underused and misunderstood. When I teach about operable utility, I use perpetual motion machines as the type of fantastic (and not in a good way) invention that will be rejected by the PTO as inoperable because it violates the laws of thermodynamics.

On my way to a conference last week, I watched a great documentary called Newman, about one inventor's quest to patent a perpetual motion machine. The trailer is here, and you can stream it pretty cheaply (I assume it will come to a service at some point).
The movie is really well done, I think. The first two-thirds is a great survey of old footage, along with interviews of many people involved in the saga. The final third focuses on what became of Newman after his court case, leading to a surprising ending that colors how we should look at the first part of the movie. The two acts work really well together, and I think this movie should be of interest to anyone, and not just patent geeks.

That said, I'd like to spend a bit of time on the patent aspects, namely utility doctrine. Wikipedia has a pretty detailed entry, with links to many of the relevant documents. The Federal Circuit case, Newman v. Quigg, as well as the district court case, also lays out many of the facts. The claim was extremely broad:
38. A device which increases the availability of usable electrical energy or usable motion, or both, from a given mass or masses by a device causing a controlled release of, or reaction to, the gyroscopic type energy particles making up or coming from the atoms of the mass or masses, which in turn, by any properly designed system, causes an energy output greater than the energy input.
Here are some thoughts:

First, the case continues what I believe to be a central confusion in utility. The initial rejection was not based on Section 101 ("new and useful") but on Section 112 (enablement to "make and use"). This is a problematic distinction. As the Patent Board of Appeals even noted: "We do not doubt that a worker in this art with appellant's specification before him could construct a motor ... as shown in Fig. 6 of the drawing." Well, then one could make and use it, even if it failed at its essential purpose. Now, there is an argument that the claim is so broad that Newman didn't enable every device claimed (as in the Incandescent Lamp case), but that's not what the board was describing. The section 101 defense was not added until 1986, well into the district court proceeding. The district court later makes some actual 112 comments (that the description is metaphysical), but this is not the same as failing to achieve the claimed outcome. The Federal Circuit makes clear that 112 can support this type of rejection: "neither is the patent applicant relieved of the requirement of teaching how to achieve the claimed result, even if the theory of operation is not correctly explained or even understood." But this is not really enablement - it's operable utility! The 112 theory of utility is that you can't enable someone to use an invention if it's got no use. But just about every invention has some use. I write about this confusion in my article A Surprisingly Useful Requirement.

Second, this leads to another key point of the case. The claim failed primarily because of Newman's insistence on claiming perpetual motion. Had Newman claimed a novel motor, the claim might have survived (though there was a 102/103 rejection somewhere in the history). One of the central themes of the documentary was that Newman needed this patent to commercialize his invention, so others could not steal the idea. He could not share it until it was protected. But he could have achieved this goal with a much narrower patent that did not claim perpetual motion. That he did not attempt a narrower patent is quite revealing, and foreshadows some of the interesting revelations at the end of the documentary.

Third, the special master in the case, William Schuyler, had been Commissioner of Patents. He recommended that the court order the patent to issue, finding sufficient evidence to support the claims. It is surprising that he would have issued a report finding operable utility here, putting the Patent Office in the unenviable position of attacking its former chief.

Fourth, the case is an illustration of waiver. Newman claimed that the device only worked properly when ungrounded. More important, the output was measured in complicated ways (according to his own witnesses). Yet Newman failed to indicate how measurement should be done when it counted: "Dr. Hebner [of the National Bureau of Standards] then asked Newman directly where he intended that the power output be measured. His attorney advised Newman not to answer, and Newman and his coterie departed without further comment." The court found a similar waiver with respect to whether the device should have been grounded, an apparently key requirement. These two waivers allowed the courts to credit the testing over Newman's later objections that the testing was improperly handled.

I'm sure I had briefly read Newman v. Quigg at some point in the past, and the case is cited as the seminal "no perpetual motion machine" case. Even so, I'm glad I watched the documentary to get a better picture of the times and the hoopla that went with the case, as well as what became of the man who claimed to defy the laws of thermodynamics.

Monday, March 5, 2018

Intellectual Property and Jobs

During the 2016 presidential race, an op-ed in the New York Times by Jacob S. Hacker, a professor of political science at Yale, and Paul Pierson, a professor of political science at the University of California, Berkeley, asserted that "blue states" that support Democratic candidates, like New York, California, and Massachusetts, are "generally doing better" in an economic sense than "red states" that support Republican candidates, like Mississippi, Kentucky, and (in some election cycles) Ohio. The gist of their argument is that conservatives cannot honestly claim that "red states dominate" on economic indicators like wealth, job growth, and education, when the research suggests the opposite. "If you compare averages," they write, "blue states are substantially richer (even adjusting for cost of living) and their residents are better educated."

I am not here to argue over whether blue states do better than red states economically. What I do want to point out is how professors Hacker and Pierson use intellectual property – and in particular patents – in making their argument. Companies in blue states, they write, "do more research and development and produce more patents" than red states. Indeed, "few of the cities that do the most research or advanced manufacturing or that produce the most patents are in red states." How, they ask rhetorically, can conservatives say red states are doing better when most patents are being generated in California?*

Hacker and Pierson's reasoning, which is quite common, goes like this. Patents are an indicator of innovation. Innovation is linked to economic prosperity. Therefore, patents – maybe even all forms of intellectual property – are linked to economic prosperity.

In my new paper, Technological Un/employment, I cast doubt on the connection between intellectual property and one important indicator of economic prosperity: employment.

This post is based on a talk I gave at the 2018 Works-In-Progress Intellectual Property (WIPIP) Colloquium at Case Western Reserve University School of Law on Saturday, February 17.

Saturday, March 3, 2018

PatCon8 at San Diego

Yesterday and today, the University of San Diego School of Law hosted the eighth annual Patent Conference—PatCon8—largely organized by Ted Sichelman. Schedule and participants are here. For those who missed it—or who were at different concurrent sessions—here's a recap of my live Tweets from the conference. (For those who receive Written Description posts by email: This will look much better—with pictures and parent tweets—if you visit the website version.)

Friday, March 2, 2018

Matteo Dragoni on the Effect of the European Patent Convention

Guest post by Matteo Dragoni, Stanford TTLF Fellow

Recent posts by both Michael Risch and Lisa Ouellette discussed the article The Impact of International Patent Systems: Evidence from Accession to the European Patent Convention, by economists Bronwyn Hall and Christian Helmers. Based on my experience with the European patent system, I have some additional thoughts on the article, which I'm grateful for the opportunity to share.

First, although Risch was surprised that residents of states joining the EPC continued to file in their home state in addition to filing in the EPO, this practice is quite common (and less unreasonable than it might seem at first glance) for at least three reasons:
  1. The national filing is often used as a priority application for a later European patent application (via the PCT route or not). Compared with starting directly at the EPO, this gives one extra year of time (to attract new investment and to postpone expenses) and of protection (reaching 21 years from the first filing instead of 20).
  2. Some national patent offices have the same (or very similar) patentability standards as the EPO but apply those standards less strictly in practice when examining applications. It is therefore sometimes easier to obtain a national patent than a European patent.
  3. Relatedly, the different application of patentability standards means that the national patent may be broader than the eventual European patent. The validity and enforceability of these near-duplicate patents is a debatable and complex issue, but a broader national patent is often prima facie enforceable and a valid basis for obtaining (strong) interim measures.