Monday, December 19, 2016

(Un)Reasonable Royalties

For the small subgroup of people who read this blog, but don't read Patently-O, I thought I would point to a new article that I've posted called (Un)Reasonable Royalties. I won't write much about it here. Dennis Crouch did a nice review, for which I'm thankful. Here is the abstract:
Though reasonable royalty damages are ubiquitous in patent litigation, they are only one-hundred years old. But in that time they have become deeply misunderstood. This Article returns to the development and origins of reasonable royalties, exploring both why and how courts originally assessed them.
It then turns a harsh eye toward all that we think we know about reasonable royalties. No current belief is safe from criticism, from easy targets such as the 25% “rule of thumb” to fundamental dogma such as the hypothetical negotiation. In short, the Article concludes that we are doing it wrong, and have been for some time.
This Article is agnostic as to outcome; departure from traditional methods can and has led to both over- and under-compensation. But it challenges those who support departure from historic norms—all the while citing cases from the same time period—to justify new rules, many of which fail any economic justification.

Blockbuster IP Term for 8-Member SCOTUS

After the Senate's failure to move forward with Judge Garland's nomination to the Supreme Court, the conventional wisdom was that the Justices would shy away from politically sensitive cases that could lead to 4-4 splits, focusing instead on areas such as intellectual property in which cases tend to be unanimous. So far, that's spot on. The Court added a hot patent venue case to its docket this week, its seventh IP case for the Term so far. For those who have lost count, here's a quick round-up. You can also see my list of all Supreme Court patent cases back to 1952 here.

TC Heartland v. Kraft Foods (cert granted Dec. 14): Does a change to the general federal venue statute affect the venue rule in patent cases? Most observers think this is the end of E.D. Tex.'s patent dominance—and perhaps the end of the incongruous Samsung ice-skating rink outside the Marshall, TX courthouse.

Tuesday, December 6, 2016

Samsung v. Apple: Drilling Down on Profit Calculations

The Supreme Court unanimously ruled in Samsung v. Apple today. The opinion was short and straightforward: an article of manufacture under 35 USC 289 (allowing all the profits for an infringing article of manufacture as damages) need not be the entire product. If the article is less than the entire product, then all the profits of that smaller part should be awarded per the statute. The court remanded the case for determination of what the proper article of manufacture should be.

I have three brief comments on the opinion:

1. This is not a surprising result given the oral argument. The only surprise here is that the "entire product" rule had been in place so long, essentially unchallenged, that this now counts as a new way to look at damages.

2. One reason why the old rule was in place a long time is that it is difficult to square this opinion with the historical context of section 289 (or rather, its predecessor). The carpet in the Dobson case (which had awarded only nominal damages) had a design on the front, but also had an unpatented backing, etc. Although there were at least two components, no one at the time Congress passed the law thought for a second that the profits for each component should be considered separately. Indeed, that's what the court had done in Dobson, finding that the design added nothing to the profits -- the whole reason Congress passed a statute in the first place. This context is why (as I've written before), I've always been torn about this case. As a matter of statutory interpretation, the Court is surely right. But if that's true, then the statute has been misapplied literally since day one - and that doesn't sit well with me.

3. This is a "live by the sword, die by the sword" opinion. If patentees want to claim articles of manufacture that are less than whole products, then they have to accept that articles of manufacture for damages are less than whole products. The court cites In re Zahn (an often maligned case) approvingly. In Zahn, the patentee claimed the shank of a drill bit but not the bit itself:

[Figure: the Zahn drill bit design, claiming only the shank]
Now comes the difficult/fun (in the eye of the beholder) part: how do we calculate profits on an infringing drill bit? The patentee argues that drill bits are fungible, so that 100% of the profits must be assigned to the shank "article of manufacture." The infringer argues that people buy drill bits to drill, not to look at, so that 100% of the profits must be assigned to the bit "article of manufacture." If the infringer wins, even in substantial part, then we are back in the pre-§ 289 world of Dobson. That can't be right. But if the patentee wins, it means that the Supreme Court's opinion means nothing.

I'm sure that some happy medium will be determined by experts, judges, and juries, but this is an area that is not going to be getting clarity any time soon.

Saturday, November 12, 2016

Rochelle Dreyfuss: Whither Experimental Use?

Since the Federal Circuit's formation in 1982, Rochelle Dreyfuss has witnessed patent law go through some significant changes, and the climate for university research and biotechnology innovation today is markedly different from what it was thirty years ago. However, as Dreyfuss stated at the beginning of remarks she gave at Akron Law on Friday, November 4, "sometimes facts change; but one's views on a subject do not." Patent law's common law "experimental use" defense - or lack thereof - is, she believes, a case in point. When she wrote on this topic over ten years ago in her paper Protecting the Public Domain of Science: Has the Time for an Experimental Use Defense Arrived?, she felt largely the same as she does now: universities need the benefit of "some version of an invigorated experimental use defense." In her new paper, "Reconsidering Experimental Use," to be published in the Akron Law Review this spring, Dreyfuss returns to the important question of whether, and to what extent, patented inventions can be used for experimental purposes during the term of a patent. Read more at the jump.

Wednesday, November 2, 2016

Guest Post by Greg Reilly: Forcing Patent Applicants to Internalize Costs from Overclaiming

Guest post by Greg Reilly (IIT Chicago-Kent College of Law), whose work on patent "forum selling" and patent discovery has previously been featured on this blog.

Conventional wisdom is that patent prosecutors should obtain the broadest possible claim scope or, more precisely, should obtain at least one claim that is as broad as the U.S. Patent and Trademark Office examiner will allow while also hedging with other, narrower claims. A new paper by Oskar Liivak (Cornell), Overclaiming Is Criminal, provocatively argues that this standard practice is not just sub-optimal or improper, but is in fact illegal under federal law – “it is a felony to willfully overclaim in a patent application.” Liivak’s paper is a crucial contribution to the debate over improving patent quality, highlighting the need to alter the patent applicant’s and patent prosecutor’s incentives, not just attempt to improve the Patent Office’s performance.

Liivak’s legal argument is virtually air-tight, and I am sympathetic to his policy concerns, though the practical impact of his proposal is less certain because of the difficulty of identifying “willful overclaiming.” In any event, criminal sanctions are not the only way to alter the incentives of applicants and prosecutors to ensure that they internalize costs from overclaiming. Another possibility is restricting claim amendments in Patent Office post-issuance proceedings, an issue currently before the en banc Federal Circuit in In re Aqua Products, Inc. (More after the jump.)

Thursday, October 20, 2016

IP and Climate Change

My colleague and friend Josh Sarnoff (DePaul) sent me a review copy of the book he edited: Intellectual Property and Climate Change, even though I told him I wouldn't have much time to look at it. Wouldn't you know, on a quick skim I found it pretty interesting, and thought I would talk about it a bit.

The book is part of the Elgar Research Handbook series. I wrote a chapter that I really like (who am I kidding, I just love that book chapter) in the Research Handbook on Trade Secret Law. But because it's in an expensive book, nobody seems to know about it (and my colleagues in trade secret law will attest that I remind them whenever I review one of their drafts that is remotely in the area of trade secrets and incentives).

So I thought I would flag this book so readers know it's out there. IP will have a growing role in climate change, as this cool story from this week illustrates. The book is comprehensive - it has 26 chapters from a variety of authors. Some of the topics:

  • International law and TRIPS
  • Enforcement
  • Technology transfer
  • Innovation funding and university research
  • Antitrust, patents, copyrights, trade secrets, trademarks
  • Rights in climate data
  • Privacy (this one surprised me)
  • Standards
  • Energy, transportation, food, natural resources
There is something for everyone in this book. Though it is focused on climate change, much of the discussion can be generalized to other emerging areas of law. In that sense, it does present a little bit like the law of the horse, but given that this is a research handbook, I'm not so sure that's a bad thing.

Tuesday, October 11, 2016

Apple v. Samsung Oral Argument Thoughts

The Supreme Court heard Apple v. Samsung (technically Samsung v. Apple) today. The transcript is here. This post is short and assumes some knowledge of the issues: the tea leaves appear to be that many justices are uncomfortable with the current rule that treats the "article of manufacture" as the entire product. Even Apple seemed to give ground (surprisingly) and admit that identifying the article of manufacture is a factual determination (but that Apple satisfies those facts in this case). This was an interesting concession, quite frankly, as it is not a given that the Court would accept its view that no new trial is needed. The Court did not seem to even want to hear the "don't remand" argument.

But it also seems clear that the Court has no idea how a rule would operate in practice. After all, it's been 140 years and, as far as I can tell, only one important case has ever denied profits to a design that was not sold separately (on a refrigerator latch). And Justice Kennedy, in particular, was worried about how one would distinguish a method of determining profits of the patented article of manufacture (OK) from apportioning profits associated with the patented design (Not OK).

I've made no secret that I favor Samsung's view here. I signed on to an amicus that said as much, and I believe that there is case law mentioned briefly by counsel for Apple (and cited in the amicus) that allows for assessing profits only associated with the infringement, not the entire product. For example, in Westinghouse, the Court held that even if all profits were to be assessed with no apportionment, the defendant could still provide evidence that separated profits unrelated to the infringement.

But that said, I think the argument highlights the central tension here. In Dobson v. Hartford Carpet (1885), the Court considered the very same arguments made today: that people buy carpet for a variety of reasons, that design might drive sales, but that there are also other inventions that affect quality in the carding, spinning, dyeing, and weaving (not to mention the backing). Thus, the Court held, any profits must be allocated based on the patented design. It would be unfair, the Court ruled, to force defendants to pay out their entire profits twice if they infringed another patent.

It was this ruling that Congress promptly acted to reverse by making a rule that all profits on the article of manufacture would be owed. If the proposals discussed today were in place then, the patentee in Dobson would not have received all of the defendant's profits; I cannot believe that Congress intended this outcome then, and so I wonder how one can read the statute differently now.

And that's where I'm left. The utilitarian in me says this statute is wrong in many ways (and I've looked for ways to achieve that result through statutory interpretation, like Westinghouse). But the statutory interpreter in me finds it hard to believe that we've somehow misunderstood this statute for the last 140 years, and that the solutions today would leave us with the same outcome that Congress quite clearly intended to reverse. If the Court does decide on narrowing damages, I wish it good luck in finding a happy medium.

Friday, October 7, 2016

Apple v. Samsung, Part ?? (not the Supreme Court case)

I've lost count of the rounds back and forth in Apple v. Samsung. But another opinion issued today, and it was a doozy. When we last left our intrepid litigants in late March, the panel had reversed the $120m verdict in favor of Apple, ruling a) non-infringement of a patent, and b) obviousness of two patents (slide to unlock and autocorrect). These rulings were as a matter of law - that is, they reversed jury findings to the contrary.

Apple filed for an en banc hearing, and we never heard anything again...until today. The en banc Federal Circuit (except Judge Taranto, who did not participate) vacated the panel opinion. The decision came without briefing, and it was unanimous...except for the three members of the original panel, who all dissented.

The opinion begins with a statement (rebuke?) about what appellate review should do: take the facts as found by the factfinder, and then review them for substantial evidence. The opinion takes umbrage at the fact that substantial evidence is not really addressed by the original panel at all. The opinion also takes issue with "extra record" material being considered for claim construction on appeal. The opinion then goes through each patent and shows the substantial evidence that would support a verdict, even if the appeals court would disagree with it.

Perhaps most telling of the deferential approach is the slide to unlock patent, which I think is the weakest of the bunch. The prior art, when combined, clearly has all the elements. But in finding non-obviousness, the jury found that the person with skill in the art would not have combined the references. That might be wrong, but evidence was submitted to support it, and thus the claim is non-obvious. The dissent takes issue with this, saying that KSR loosened up the combination standard. More on this later.

This opinion has a lot of important aspects:
1. It is another en banc opinion without briefing. I am sure the litigants (especially the losing ones) hate that. As an observer, I'm not so bothered in this case. The briefing was full and complete, and there was little to add in the way of analysis.

2. Why is the en banc circuit showing up only now to defend substantial evidence? I can think of at least two prominent cases in which juries made findings of fact supporting nonobviousness, a Federal Circuit panel disregarded those findings to find the patent obvious, and the en banc request was denied. Why now?

3. Just what is the obviousness standard of review and who is supposed to make these decisions? In general, the final obviousness determination is one of law, based on underlying findings of fact. Many judges have juries decide those underlying findings of fact, such as the scope of the prior art or the motivations to combine references. Some jury instructions ask in detail, and some just say "is it obvious?" If it is the latter case, then any jury finding is entitled to all inferences on appeal - if the jury said non-obvious, then it must have found no motivation to combine. The problem with this approach (and even the specific question approach) is that it makes it hard to "loosen" a standard as KSR v. Teleflex says we should. KSR affirmed a grant of summary judgment by the district court - in other words, it affirmed a finding that references could be combined as a matter of law. It is unclear why an appellate court could not have made the same determination here. At the same time, why have trials and findings of fact if we are simply going to ignore them? Deciding how obviousness should get decided is almost as important as the obviousness standard itself.

4. This opinion shows the importance of which panel you draw at the Federal Circuit. Apple had the bad fortune to draw the only three judges to disagree here. Of course, we don't know how the en banc dynamics work, and perhaps some in the majority here would have concurred in the original panel opinion. To generalize, though, judicial preferences may be driving the disparity in Section 101 opinions right now, and a couple of recently issued cases are ripe targets for en banc review that could help resolve it.

This is my final takeaway - if any part of this case is to make it to the Supreme Court, it will be the slide to unlock patent. This is a patent where all the elements are in the prior art, and there is a real dispute about the procedure for determining obviousness. This question has been presented to the Court before, but perhaps this version will take hold.

Thursday, October 6, 2016

The Long Awaited FTC Study on Patent Assertion and Nuisance Litigation

After months of speculation that the FTC's long awaited study on patent assertion entities was going to issue any day now, the study has finally issued. The press release is here, and the full PDF is here. There is a lot to learn from this study - the FTC had subpoena power to obtain data unavailable to mere mortals like the scholars who study this area. I thought that the study was generally balanced and well researched. A scan of the footnotes alone will make a great literature review for anyone new to this area (even if it is missing a couple of my articles...).

Some of the highlights of the study come right out of the press release:
The report found two types of PAEs that use distinctly different business models. One type, referred to in the report as Portfolio PAEs, were strongly capitalized and purchased patents outright. They negotiated broad licenses, covering large patent portfolios, frequently worth more than $1 million. The second, more common, type, referred to in the report as Litigation PAEs, frequently relied on revenue sharing agreements to acquire patents. They overwhelmingly filed infringement lawsuits before securing licenses, which covered a small number of patents and were generally less valuable.
The report found that, among the PAEs in the study, Litigation PAEs accounted for 96 percent of all patent infringement lawsuits, but generated only about 20 percent of all reported PAE revenues. The report also found that 93 percent of the patent licensing agreements held by Litigation PAEs resulted from litigation, while for Portfolio PAEs that figure was 29 percent.
The distinction between portfolio and litigation business models is an important one - something I discuss in Patent Portfolios as Securities and Lemley & Melamed discuss in Missing the Forest for the Trolls.

The study also debunks the notion of widespread demand letter abuse, but does show that PAEs tend to sue first and demand later. It shows about $4B in revenues for all study respondents over a 6 year period, 80% of which came from portfolio companies (and mostly not after litigation). The study does not say what this $4B implies if extended to all PAEs, but it provides some statistics that might aid in a ballpark calculation for the whole market.

There's more -- a lot more -- to this report, which runs 150 pages before the appendices. The study concludes with some reasonable and mostly uncontroversial ways the system can be made better, such as limiting expensive discovery at the early stages of a case. That said, I do take issue with one point of the study - relating to nuisance litigation. More on this after the jump.

Wednesday, October 5, 2016

Helsinn v. Teva Oral Argument Recap

In March, I posted about an amicus brief filed by 42 IP profs in Helsinn v. Teva, which argued that contrary to the district court's opinion and position taken by the USPTO, the America Invents Act (AIA) did not change the meaning of "on sale" and "public use" in 35 U.S.C. § 102(a)(1). The case was argued yesterday before the Federal Circuit, and the panel (Judge Dyk, Judge Mayer, and Judge O'Malley) didn't seem eager to conclude that the AIA wrought a significant change.

The appeal involves Teva's challenge to Helsinn's post-AIA patent on the nausea drug palonosetron, which was filed over a year after a secret licensing and supply contract for the drug. In Pfaff v. Wells Electronics (1998), the Supreme Court held that the on-sale bar applies when a product is (1) "the subject of a commercial offer for sale" and (2) "ready for patenting" as of the critical date (one year before filing). Both issues are contested here, as the district court said that the drug was neither ready for patenting nor on sale within the meaning of the post-AIA § 102. I'll focus here just on the AIA issue, but note that Judge O'Malley asked about remanding for further factfinding and whether it is necessary to reach the AIA issue.

The only line of questioning on the AIA issue for Teva was Judge Dyk's criticism of the dueling canons of statutory interpretation for figuring out what "or otherwise available to the public" means in the new § 102. Teva argued that under the "last antecedent" canon, "to the public" modifies only "otherwise available"; Helsinn countered that under the "series qualifier" canon, the concluding phrase "otherwise available to the public" qualifies everything in the series, including "on sale." But Judge Dyk stated that neither canon can apply because the modifier would be "available to the public," leaving just the word "otherwise," which doesn't make sense. Teva pivoted to its argument that "or otherwise available to the public" is a catchall category for new technologies, which the panel seemed comfortable with; Judge Dyk suggested "an oral description at a conference" as something that might fall into this bucket.

Tuesday, October 4, 2016

A Comprehensive Study of Trade Secret Damages

Elizabeth Rowe (Florida) has shared a draft of "Unpacking Trade Secret Damages" on SSRN. The paper is an ambitious one, examining all of the federal trade secret verdicts issued between 2000 and 2014 that she could find (a set she believes is reasonably complete based on her methods). The abstract is:
This study is the first to conduct an in-depth empirical analysis of damages in trade secret cases in the U.S. From an original data set of cases in federal courts from 2000 to 2014, I assess the damages awarded on trade secret claims. In addition, a wide range of other variables are incorporated into the analysis, including those related to background court and jurisdiction information, the kinds of trade secrets at issue, background details about the parties, the related causes of action included with claims of trade secret misappropriation, and details about the damages awarded.
Analysis of this data and the relationship between and among the variables yields insightful observations and answers fundamental questions about the patterns and the nature of damages in trade secret misappropriation cases. For instance, I find average trade secret damage awards comparable to those in patent cases and much larger than trademark cases, very positive overall outcomes for plaintiffs, and higher damages on business information than other types of trade secrets. The results make significant contributions in providing deeper context and understanding for trade secret litigation and IP litigation generally, especially now that we enter a new era of trade secret litigation in federal courts under the Defend Trade Secrets Act of 2016.
I think this study has a lot to offer. Although it doesn't include state court cases, it provides a detailed look at trade secret cases in the first part of this century. Of course, the verdicts, which were about 6% of all trade secret cases filed, are subject to the same selection effects as any other verdict analysis - there is a whole array of cases (more than 2000 of them in the federal system alone) that never made it this far, and we don't know what the tried cases tell us about the shorter-lived cases.

The study offers a lot of details: amounts of awards, states with the highest awards, states with the most litigation, judge v. jury, attorneys' fees, punitive damages, the effect of NDAs on damages, etc. It goes a step further and offers information about the types of information at issue, and even the types of information that garner different sizes of awards. It's really useful information, and I recommend this study to anyone interested in the state of trade secret litigation today.

There are, however, a couple of ways I think the information could have been presented differently. First, the study has some percentile information, which is great, but most of it focuses on averages. This is a concern because the data is highly skewed; one nearly billion-dollar verdict drives much of the relevant totals. Thus, it is difficult to get a real sense for how the verdicts look, and no standard deviation is reported.

Of course, the median award according to the paper is zero, so reporting medians is a problem. I particularly liked the percentile table and discussion, and I wonder whether a 25/50/75 presentation would work. Speaking of zero dollar awards, though, I thought the paper could be improved by clarifying what is calculated in the average. Is it the average of all verdicts? All verdicts where the plaintiff wins? All non-zero verdicts? Related to this, I thought that clearly disaggregating defendant verdicts would be helpful. The paper reports how many plaintiffs won, but this is not reflected in either the median or average award data (that I can tell - only total cases are reported). At one point the paper discusses the average verdict for defendants (more than $800,000), which is confusing since defendants shouldn't win any damages. Are these part of the averages? Are they calculated as a negative value? If these are fee awards, they should be reported separately, I would think.
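To see why a single outsized verdict makes averages misleading, here is a minimal sketch with entirely hypothetical award amounts (not Rowe's data):

```python
import statistics

# Hypothetical verdict amounts in $M: several zeros, modest awards,
# and one outsized verdict that dominates the total.
verdicts = [0, 0, 0, 0.5, 1, 2, 3, 5, 10, 900]

print(statistics.mean(verdicts))            # 92.15 -- driven by the outlier
print(statistics.median(verdicts))          # 1.5   -- closer to a "typical" award
print(statistics.quantiles(verdicts, n=4))  # the 25/50/75 percentile cut points
```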

Though I would like more data resolution, I should note that this really is just a presentation issue. The hard part is done, and the data is clearly available to slice and dice in a variety of ways, and I look forward to further reporting of it.

Tuesday, September 27, 2016

Network Neutrality in 1992

I've been remiss in blogging of late; I had a really busy summer and beginning of Fall. I have a bit more time now and hope to resume some blogging about papers and cases shortly. In the meantime, it doesn't take long to write a post about your own work, so I figured it would be an easy way to (re)break the ice.

I've written an essay with a former student, Christie Larochelle, who is now clerking in Delaware (she was a tenured physics professor before attending law school). As part of a hometown symposium for the Villanova Law Review's 60th anniversary, we tackled an interesting topic: rumblings about network neutrality at the birth of the commercial internet. More on the article and on coauthoring after the jump.

Tuesday, September 6, 2016

Sandeen and Seaman: Toward a Federal Jurisprudence of Trade Secret Law

If you're interested in the fate of trade secrets law, and you like conflicts of law, I recommend taking a look at Sharon Sandeen and Chris Seaman's paper, Toward a Federal Jurisprudence of Trade Secret Law, when it comes out. As we know, Congress has now passed a federal civil cause of action for trade secret misappropriation: the Defend Trade Secrets Act of 2016 (DTSA). For the first time, civil trade secret plaintiffs can choose to sue under federal law. But the DTSA leaves courts with some major interpretive challenges. This is the subject of Sandeen and Seaman's project. I was fortunate to get a glimpse of it at the IP Scholars trade secrets panel last month. Read more at the jump.

Thursday, September 1, 2016

Mark Rose: The Authors and their Personalities that Shaped Copyright Law

“Great cases, like hard cases, make bad law,” said Justice Holmes at the turn of the twentieth century. By contrast, in copyright law, complex personalities and facts seem to allow the law to work itself pure. That seems to be the principal takeaway from Mark Rose’s illuminating new book Authors in Court: Scenes from the Theater of Copyright.

A literary historian of copyright whose prior book is considered a seminal contribution to the field, Rose sets out in Authors in Court to tell the story behind several of copyright’s leading cases through an investigation of the personalities that prompted the dispute and its eventual resolution. The book’s main chapters each tell the story of a major copyright case that is today part of the copyright canon: Pope v. Curll, Stowe v. Thomas, Burrow-Giles v. Sarony, Nichols v. Universal, Salinger v. Random House, and Rogers v. Koons. Some of these cases (e.g., Nichols, Koons) continue to be cited by courts to this day.

Rich in detail, and lucidly written, each chapter showcases what the idea of “authorship” meant to the protagonists in each dispute and the range of values and influences that motivated the construct. To some, it involved the maintenance and policing of their public personae (e.g. Pope), to others it involved balancing the conflation of art and value (e.g. Sarony), and to yet others it involved melding authorship with narratives of honesty and authenticity. Rose does an excellent job of bringing to life the colorful personalities that initiated these famed copyright disputes.

Introducing New Blogger: Shyam Balganesh

I am very pleased to welcome Professor Shyam Balganesh as a new Written Description blogger. He is a Professor of Law at the University of Pennsylvania Law School, where he is also a Co-Director of the Center for Technology, Innovation & Competition (CTIC). His scholarship focuses on understanding how copyright law and intellectual property can benefit from the use of ideas, concepts and structures from different areas of the common law, especially private law. His article Causing Copyright (forthcoming in the Columbia Law Review) was featured on this blog in March, and Foreseeability and Copyright Incentives (published in the Harvard Law Review) is one of the most cited IP articles. I look forward to reading his contributions to Written Description!

Tuesday, August 30, 2016

Brennan, Kapczynski, Monahan & Rizvi: Leveraging Government Patent Use for Health

The federal government can and should use its power to buy generic medicines at a fraction of their current price, according to Hannah Brennan, Amy Kapczynski, Christine H. Monahan, and Zain Rizvi in their new article, A Prescription for Excessive Drug Pricing: Leveraging Government Patent Use for Health. They note that 28 U.S.C. § 1498 allows the federal government to use patents without license as long as it pays "reasonable and entire compensation for such use." This provision "is regularly used by the government in other sectors, including defense," and was relied on "numerous times to procure cheaper generic drugs in the 1960s," and should "once again be used to increase access to life-saving medicines." The article is chock-full of interesting details and is a recommended read even for those who disagree with their ultimate policy conclusions.

The authors discuss how § 1498 has been used recently to acquire patented inventions ranging from electronic passports to genetically mutated mice, and how the Defense Department used § 1498 to buy generic antibiotics from Italian firms before Italy started issuing patents on drugs. They synthesize the § 1498 caselaw and note that it is not a replication of the patent damages award; e.g., lost profits are strongly disfavored, and the cases show concern with "excessive compensation" to the patent owner. Adjustments to § 1498 royalties have been made based on risks and expenses incurred by the patentee in developing and creating a market for the products, and to account for "reasonable" profits, so the authors advocate awarding pharmaceutical patentees their risk-adjusted R&D costs plus average industry returns (perhaps a 10-30% bounty). This approach to calculating patent royalties is similar in many ways to that advocated by Ted Sichelman for all patent cases, as discussed on this blog in June.
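As a rough illustration of the proposed measure - this is my reading of it, and every number below is hypothetical - the calculation might look like this:

```python
# Toy sketch of the proposed § 1498 royalty measure: risk-adjusted R&D
# costs plus average industry returns. All numbers are hypothetical.
rd_cost = 500           # $M actually spent developing this drug
p_success = 0.25        # hypothetical odds of success at this development stage
industry_return = 0.20  # within the suggested 10-30% range

risk_adjusted_rd = rd_cost / p_success  # compensates for failed candidates
royalty = risk_adjusted_rd * (1 + industry_return)
print(f"reasonable compensation: ${royalty:,.0f}M")  # $2,400M
```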

Wednesday, August 10, 2016

16th IP Scholars Conference at Stanford

The 16th IP Scholars Conference (IPSC) will be held at Stanford Law School tomorrow and Friday, with about 150 presentations and 200 attendees from around the world. The conference schedule is here (which is largely thanks to the work of our terrific fellow, Shawn Miller). IPSC was my first law conference right out of law school, and I was overwhelmed by how welcoming and generous the IP scholarly community was.

I'll post in the coming weeks about some of the papers I hear about at the conference, but for those who want more immediate updates, Jake Linford (whose work I've written about here) will be live-blogging at PrawfsBlawg, and Rebecca Tushnet (who was just featured on this blog) likely will do the same at her 43(B)log. Given the size of the IP prof twitter network, I'm sure there also will be plenty of tweets about #IPSC16.

Looking forward to welcoming IP scholars to Stanford! (And if you read Written Description but haven't met me yet, please find me and say hello.)

Monday, July 25, 2016

Rebecca Tushnet: The Inconsistent and Confusing Role of Registration in American Trademark Law

Patent law scholars argue over how much time and money the Patent & Trademark Office (PTO) should spend on pre-grant review of patent applications. Likewise, they argue over the degree to which patents should be given a "presumption of validity" once granted. But what if patent law scholars completely ignored the details of the PTO's review, and simply assumed that patents are generally properly and efficiently granted? What if courts in patent infringement cases treated PTO review as a mere formality and focused exclusively on the appropriate scope of patents and whether they have been infringed in the marketplace? We would probably conclude this to be error. Given the time and public money expended each year on the process of patent examination, not to mention the role of published patent specifications in establishing and providing public notice of granted rights, it is worth paying attention to the administrative procedure through which patents are created and preserving the relevance of the PTO's analysis through doctrines like presumption of validity and prosecution history estoppel.

Yet, according to Rebecca Tushnet's new article in the Harvard Law Review, Registering Disagreement: Registration in Modern American Trademark Law, foundational trademark law and scholarship suffer from a similar form of tunnel vision. Not only do they place inconsistent weight on the PTO's assessments during registration for establishing substantive trademark rights, but they do not apply a consistent vision of what the role of trademark registration actually is. Read more at the jump.

Friday, July 22, 2016

Merges & Mattioli on the Costs and (Enormous) Benefits of Patent Pools

Patent pools bundle related patents for a single price, reducing the transaction costs of negotiating patent licenses but creating the threat of anti-competitive harm. So are they a net benefit from a social welfare perspective? Professors Rob Merges and Mike Mattioli empirically tackle this difficult question in their new draft, Measuring the Costs and Benefits of Patent Pools, which at least for now is available on SSRN (though since its takeover by Elsevier, SSRN has conducted some egregious takedowns). Spoiler: They don't reach a one-size-fits-all answer, but they conclude that "[p]ools save enormous amounts of money," which means that "those who are concerned with the potential downside of pools will, from now on, need to make a good faith effort to quantify the costs they describe."

To address the benefit side of the equation, Merges and Mattioli interviewed senior personnel at two patent pool administrators: MPEG-LA, which administers 13 pools and provided information on the High Efficiency Video Encoding (HEVC) pool, and Via Licensing, which administers 9 pools and provided information on the MPEG Audio pool. The two pools focused on were "believed [to] represent[] the average (in terms of scale and cost) among the set of pools they administer." Based on these interviews, the authors estimate the total estimated setup expenses over a two-year period as $4.6M for HEVC and $7.8M for MPEG Audio. (Of course, pool administrators may not be the most unbiased source of information, but the authors itemize the costs in a way that makes it easy for others to check.) Merges and Mattioli then consider the counterfactual world in which all the associated licenses were negotiated individually, in which they estimate the transaction costs at $400M for HEVC and $600M for MPEG Audio. This suggests that the pools resulted in a staggering savings of about two orders of magnitude. They also estimate that the pooling arrangement reduces the ongoing transaction costs.
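As a quick sanity check on the "two orders of magnitude" claim, here is the arithmetic on the estimates just described:

```python
# Back-of-the-envelope check of the savings ratios, using the draft's
# estimates (in $M) for pool setup vs. individually negotiated licenses.
pools = {
    "HEVC":       {"pool_setup": 4.6, "individual": 400},
    "MPEG Audio": {"pool_setup": 7.8, "individual": 600},
}

for name, cost in pools.items():
    ratio = cost["individual"] / cost["pool_setup"]
    print(f"{name}: ~{ratio:.0f}x savings")
# HEVC: ~87x; MPEG Audio: ~77x -- roughly two orders of magnitude
```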

On the cost side of the equation, Merges and Mattioli state that patent pool critics have raised two main consumer welfare concerns: (1) combining substitutes, such that firms that should have been competitors are able to act as monopolists; and (2) grantback clauses, which could allow pools to suppress future competitors. They note that in practice, these are unlikely to be significant problems: most pools require members to make their patents available independently, which "makes technology suppression through a pool impossible." But if a pool does not have such a provision, how big are the potential consumer welfare losses?

Wednesday, July 20, 2016

New GAO Patent Studies

The Government Accountability Office released two new reports on the PTO today: one on search capabilities and examiner monitoring, one on patent quality and clarity. They also released the underlying data from examiner surveys.

I think the survey data is more interesting than the conclusions; examiners were asked questions including how much time they spend on different parts of examination, how useful they found PTO training, how often they searched for/used different types of prior art, what factors make prior art searching/examination difficult, how much uncompensated overtime they worked to meet production goals, how confident they were that they found the most relevant prior art, what they think of PTO quality initiatives, etc. Lots of rich data here!

Here are the concluding recommendations from each report, for which the GAO will track the PTO's responses on their website. (I believe the PTO is already working on quite a number of these.)

Wednesday, July 13, 2016

Neel Sukhatme: Make Patent Examination Losers Pay

Why do patent applicants pay higher fees when they succeed than when they fail? In a terrific new draft posted last week, "Loser Pays" in Patent Examination, Neel Sukhatme (Georgetown Law) argues that "such pricing is precisely backwards, penalizing good patent applications instead of bad ones." Instead of this "winner pays" system, he argues that the PTO should discourage weak applications by forcing unsuccessful applicants to pay more.

I think most patent scholars would agree that the PTO issues many patents that do not meet the legal standards for validity; see, e.g., work by Michael Frakes and Melissa Wasserman showing that time-crunched examiners have higher grant rates, or work by John Allison, Mark Lemley, and David Schwartz on the large number of patents invalidated during litigation. To address this problem, it may be worth increasing the resources devoted to examination—e.g., examiners could be given more time, and I have argued for some form of peer review. But I think most patent scholars would also agree with Mark Lemley that at some point the costs of increased scrutiny would outweigh the actual social costs of the invalid patents that slip through (most of which are never asserted or litigated).

Sukhatme's paper builds on a growing body of scholarship that tackles the problem of invalid patents not through increasing the resources spent on examination, but rather through price structures designed to disincentivize socially costly behavior by patent applicants. For example, Jonathan Masur has argued that the high costs of obtaining a patent disproportionately select against socially harmful patents, and Stephen Yelderman suggests a number of more fine-grained ways to rationalize application fees. Economists Bernard Caillaud and Anne Duchêne have developed some of these ideas in a formal model, including the possibility of "a penalty for rejected applications" which "would unambiguously encourage R&D" because it would have "no impact on the submission strategy for non-obvious projects." And it seems clear from work by scholars such as Gaétan de Rassenfosse and Adam Jaffe that changes in fees do affect applicant behavior in practice.

Sukhatme does a wonderful job expanding on these ideas to explore how loser-pays rules might be adapted to patent examination. For example, he suggests that applicants could be required to post a bond at the outset of examination, some of which would be returned if the applicant is successful. Continuations could be discouraged by reducing the recoverable bond amount as prosecution proceeds. Additional revenues could be returned to successful applicants, providing stronger incentives for filing valid patents. To reduce unfairness to individual inventors, the PTO could continue the current practice of offering discounts for small and micro entities.
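To make the incentive effect concrete, here is a minimal sketch of how a refundable examination bond would shift expected costs; the bond and refund amounts are entirely hypothetical, not figures from the paper:

```python
# Toy model of "loser pays" patent examination (all numbers hypothetical).
# The applicant posts a bond at filing; part is refunded on allowance.
BOND = 10_000             # posted with the application
REFUND_IF_ALLOWED = 8_000

def expected_fee(p_allowance: float) -> float:
    """Expected net cost to an applicant with a given allowance probability."""
    return BOND - p_allowance * REFUND_IF_ALLOWED

print(expected_fee(0.9))  # 2800.0 -- a strong application pays less in expectation
print(expected_fee(0.3))  # 7600.0 -- a weak application pays more
```

Weak applications face a higher expected price, which is precisely the disincentive Sukhatme is after.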

The PTO could implement this system using its fee-setting authority under current law. Indeed, I think Sukhatme could go further and argue that the PTO has not only authority but also some obligation to do so. I've previously blogged about Jonathan Masur's work on cost-benefit analysis (CBA) at the PTO, in which he explains why the PTO's attempt to use CBA for its recent fee-setting regulations was a misapplication of basic principles of patent economics. If the PTO's current fee structure were compared with Sukhatme's under a correct application of CBA, I think Sukhatme's would win.

Friday, July 1, 2016

Use of IP and Alternatives by UK Firms (1997-2006)

This isn't a new report, but it's new on SSRN: the UK Intellectual Property Office commissioned a 2012 report by Bronwyn Hall, Christian Helmers, Mark Rogers, and Vania Sena, The Use of Alternatives to Patents and Limits to Incentives, which presents data on the choice of different IP protection mechanisms by UK firms from 1997 to 2006. The report starts with a terrific literature review, covering works like the Levin et al. (1987) "Yale" survey, the Cohen et al. (2000) "Carnegie Mellon" survey, and the Graham et al. (2010) "Berkeley" survey. Discussion of the UK data begins on p. 41. There are 38,760 observations, but few firms show up over the whole time period.

Just a few teaser results: Only about 30% of firms report any form of innovation, and only 1.3% of firms patent. Within formal IP categories, trademarks are considered the most important right. 92% of firms that do not report product innovations regard patents as unimportant, compared with slightly less than 30% of product innovators. Trade secrecy complements use of patents: almost 40% of patentees consider secrecy to be crucial, compared with 9% of non-patenting firms. The full PDF is 138 pages, so I won't even attempt a summary, but I thought patent scholars who hadn't discovered this yet might be interested.

Monday, June 27, 2016

Can You Induce Yourself to Infringe?

The Supreme Court granted certiorari in Life Tech v. Promega Corp. today to resolve an interesting conundrum of statutory interpretation having to do with foreign infringement. I won't provide all the details here; Jason Rantanen and Dennis Crouch have ably done so. [Note: it turns out that the issue on which the court granted cert. is the one I find less interesting. Thus, I've edited this!]

The question is deceptively simple: when a manufacturer creates an infringing product in a foreign country, here a "kit," is it infringement of the patent for the manufacturer to buy or make a key component of that kit and then export it from the U.S. to the foreign manufacturing facility?

You'd think there would be a clear answer to this question, but there isn't. The statute, 35 U.S.C. § 271(f)(1), states:
Whoever without authority supplies or causes to be supplied in or from the United States all or a substantial portion of the components of a patented invention, where such components are uncombined in whole or in part, in such manner as to actively induce the combination of such components outside of the United States in a manner that would infringe the patent if such combination occurred within the United States, shall be liable as an infringer.
I had always assumed that the exported product is a substantial portion. The jury found it was, and if it turns out it wasn't, then that's just not that interesting a question. But I guess not - as this is the question that the court granted cert on. Indeed, on this issue only, as discussed further below.

The only thing that's interesting about the substantial portion question is that the exported product is a commodity, and thus 35 U.S.C. § 271(f)(2) - which imposes liability for exporting specially made products used for contributory infringement - doesn't apply. In other words, this is an interesting case because inducement liability is the only type of liability available, and inducement liability is really hard to prove, especially with a commodity.

But inducement is much easier here, theoretically, because the exporter is the same company as the foreign manufacturer. In other words, you can assume that the intent of the export was to combine the product into the infringing combination. And, yet, the court did not grant cert on that issue.

Instead, it will have to answer the question in a roundabout way - whether self-inducement is possible, but not when the component is a commodity. I suppose the Court could say, well, it's not the commodity that matters, but that it was too small a component. But isn't that a jury question? It seems to me that the only way the Court can reasonably make a distinction is either to set some new threshold for what "substantial" is, or to say that a commodity can never qualify. I don't like either of those options much, though - as I note below - the commodity angle has more legs given that the same commodity could be purchased from a third party without liability.

When this type of combination is done by a single manufacturer in the U.S., we call it direct infringement under § 271(a). The concept of inducement simply never comes up, and thus all the precedent to date discusses inducing another and spends no time on the importance of each component (indeed, § 271(b) says you can induce without selling anything!). So, to say that Life Tech is liable under § 271(f)(1) is to say that it has induced itself to infringe by exporting the commodity component that it could have bought from someone else who would not have induced it.

I don't think this was such a clear cut case on the self-inducing point, and I think that granting cert. on only the substantial component issue muddles the question. I offer two opposing viewpoints.

On the one hand, of course one can induce oneself for this statute. Foreign infringement liability was written in order to stop parties from avoiding the reach of a U.S. patent by shipping parts overseas to be assembled there. Viewed from this angle, it is not only rational but mandated that a company be held liable for shipping components overseas for the purpose of combining them into an infringing product. From this perspective, the policy goals of the statute dictate liability - even if the component shipped is a commodity.

On the other hand, the key component is a commodity, supplied by any number of companies. If Life Tech had only ordered the commodity from one of the other companies for shipment from the U.S., rather than supplying it from its U.S. arm, it surely would not be liable. Viewed from this perspective, the extraterritorial reach of the statute makes little sense -- is a little silly even -- if it turns on a detail as minute as whether a company bought a commodity and shipped it overseas to itself or whether it bought a commodity and had the seller ship it overseas to itself.

My gut says follow the statute and find liability, even if avoiding liability is ridiculously easy. There are lots of statutes like that, and there's no reason why this shouldn't be one of them. In that sense, the Court's denial of cert. on the self-inducement issue makes sense, but I don't know what to make of the issue on which it did grant review.

Thursday, June 23, 2016

Cuozzo v. Lee and the Potential for Patent Law Deference Mistakes

I wrote a short post on Monday's decision in Cuozzo v. Lee for Stanford's Legal Aggregate blog, which I'm reposting here. My co-blogger Michael Risch has already posted his initial reactions to the opinion on Monday, and he also wrote about deference mistakes in the context of the "broadest reasonable interpretation" standard in an earlier article, The Failure of Public Notice in Patent Prosecution.

The Federal Circuit's patent law losing streak was broken Monday with the Supreme Court's decision in Cuozzo v. Lee. At issue were two provisions of the 2011 America Invents Act related to the new "inter partes review" (IPR) procedure for challenging granted patents before the Patent and Trademark Office. IPR proceedings have been melodramatically termed "death squads" for patents—only 14% of patents that have been through completed trials have emerged unscathed—but the Supreme Court dashed patent owners' hopes by upholding the status quo. Patent commentators are divided on whether the ease of invalidating patents through IPR spurs or hinders innovation, but I have a more subtle concern: the Supreme Court's affirmance means that the PTO and the courts will evaluate the validity of granted patents under different standards of review and different interpretive rules, providing ample possibilities for what Prof. Jonathan Masur and I have termed "deference mistakes" if decisionmakers aren't careful about distinguishing them.

Monday, June 20, 2016

Cuozzo: So Right, Yet So Wrong

The Supreme Court issued its basically unanimous opinion in Cuozzo today. I won't give a lot of background here; anyone taking the time to read this likely understands the issues. The gist of the ruling is this: USPTO institution decisions in inter partes review (IPR) are unappealable, and the PTO can set the claim construction rules for IPRs, so the current broadest reasonable construction rule will surely remain unchanged.

I have just a few thoughts on the ruling, which I'll discuss here briefly.

First, the unappealability ruling seems right to me. That is, what part of "final and non-appealable" do we not understand? Of course, this reading draws a partial dissent arguing that the provision bars only interlocutory appeals, and that you can still appeal upon a final disposition. But that's just a statutory interpretation difference in my book. I'm not a general admin law expert, but the core of the reading - that Congress can give the right to institute a proceeding and make it unreviewable, so long as the outcome of the proceeding is reviewable - seems well within the range of rationality here.

But, even so, the ruling is unpalatable based on what I know about some of the decisions that have been made by the PTO. (Side note: my student won the NYIPLA writing competition based on a paper discussing this issue.) The court dismisses the patentee's complaint that the PTO might institute review on claims that weren't even petitioned for as simply quibbling with the particularity of the petition and not raising any constitutional issue. This is troublesome, and it sure doesn't ring true in light of Twiqbal.

Second, the broadest reasonable construction ruling seems entirely, well, broadly reasonable. The PTO uses that method already in assessing claims, and it has wide discretion in the procedures it uses to determine patentability. Of course the PTO can do this.

But, still, it's so wrong. The Court understates, I believe, the difficulty of obtaining amendments during IPR. The Court also points to the opportunity to amend during the initial prosecution; of course, the art in the IPR is newly applied, so it is not as if the BRC rule had been used in prosecution to narrow the claim. And that is the entire point of the rule: to read claims broadly to invalidate them, so that they may be narrowed during prosecution. But this goal often fails, as I wrote in my job talk article, The Failure of Public Notice in Patent Prosecution, in which I suggested dumping the BRC rule about 10 years ago.

Whatever the merits of the BRC rule in prosecution, they are lost in IPR, where the goal is to test a patent for validity, not to engage in an iterative process of narrowing the claims with an examiner. I think more liberal allowance of amendments (which is happening a bit) would solve some of the problems of the rule in IPRs.

Thus, my takeaway is a simple one: sometimes the law doesn't line up with preferred policy. It's something you see on the Supreme Court a lot. See, e.g., Justice Sotomayor's dissent today in Utah v. Strieff.

Thursday, June 16, 2016

Halo v. Pulse and the Increased Risks of Reading Patents

I wrote a short post on Monday's decision in Halo v. Pulse for Stanford's Legal Aggregate blog, which I'm reposting here.

The Supreme Court just made it easier for patent plaintiffs to get enhanced damages—but perhaps at the cost of limiting the teaching benefit patents can provide to other researchers. Chief Justice Roberts's opinion in Halo v. Pulse marks yet another case in which the Supreme Court has unanimously rejected the Federal Circuit's efforts to create clearer rules for patent litigants. Unlike most other Supreme Court patent decisions over the past decade, however, Halo v. Pulse serves to strengthen rather than weaken patent rights.

Patent plaintiffs typically may recover only their lost profits or a “reasonable royalty” to compensate for the infringement, but § 284 of the Patent Act states that “the court may increase the damages up to three times the amount found or assessed.” In the absence of statutory guidance on when the court may award these enhanced damages, the Federal Circuit created a two-part test in its 2007 en banc Seagate opinion, holding that the patentee must show both “objective recklessness” and “subjective knowledge” on the part of the infringer. The Supreme Court has now replaced this “unduly rigid” rule with a more uncertain standard, holding that district courts have wide discretion “to punish the full range of culpable behavior” though “such punishment should generally be reserved for egregious cases.”

Monday, June 13, 2016

On Empirical Studies of Judicial Opinions

I've always found it odd that we (and I include myself in this category) perform empirical studies of outcomes in judicial cases. There's plenty to be gleaned from studying the internals of opinions - citation analysis, judge voting, issue handling, etc., but outcomes are what they are. It should simply be tallying up what happened. Further, modeling those outcomes on the internals becomes the realest of realist pursuits.

And, yet, we undertake the effort, in large part because someone has to. Otherwise, we have no idea what is happening out there in the real world of litigation (and yes, I know there are detractors who say that even this isn't sufficient to describe reality because of selection effects).

But as data has become easier to come by, studies have become easier too. When I started gathering data for Patent Troll Myths in 2009, there was literally no publicly aggregated data about NPE activity. By the time my third article in the series, The Layered Patent System, hit the presses last month (it had been on SSRN for 16 months, mind you), there was a veritable cottage industry of litigation reporting - studies published by my IP colleagues at other schools, annual reports by firms, etc.

Even so, they all measure things differently, even when they are measuring the same thing. This is where Jason Rantanen's new paper comes in. It's called Empirical Analyses of Judicial Opinions: Methodology, Metrics and the Federal Circuit, and the abstract follows:

Despite the popularity of empirical studies of the Federal Circuit’s patent law decisions, a comprehensive picture of those decisions has only recently begun to emerge. Historically, the literature has largely consisted of individual studies that provide just a narrow slice of quantitative data relating to a specific patent law doctrine. Even studies that take a more holistic approach to the Federal Circuit’s jurisprudence primarily focus on their own results and address only briefly the findings of other studies. While recent developments in the field hold great promise, one important but yet unexplored dimension is the use of multiple studies to form a complete and rigorously supported understanding of particular attributes of the court’s decisions.

Drawing upon the empirical literature as a whole, this Article examines the degree to which the reported data can be considered in collective terms. It focuses specifically on the rates at which the Federal Circuit reverses lower tribunals — a subject whose importance is likely to continue to grow as scholars, judges, and practitioners attempt to ascertain the impact of the Supreme Court’s recent decisions addressing the standard of review applied by the Federal Circuit, including in the highly contentious area of claim construction. The existence of multiple studies purportedly measuring the same thing should give a sense of the degree to which researchers can measure that attribute.

Surprisingly, as this examination reveals, there is often substantial variation of reported results within the empirical literature, even when the same parameter is measured. Such variation presents a substantial hurdle to meaningful use of metrics such as reversal rates. This article explores the sources of this variability, assesses its impact on the literature and proposes ways for future researchers to ensure that their studies can add meaningful data (as opposed to just noise) to the collective understanding of both reversal rate studies and quantitative studies of appellate jurisprudence more broadly. Although its focus is on the Federal Circuit, a highly studied court, the insights of this Article are applicable to virtually all empirical studies of judicial opinions.
I liked this paper. It provides a very helpful overview of the different types of decisions researchers make that can affect their empirical "measurements" (read: counting) and render them inconsistent with others. It also provides some suggestions for solving this issue in the future.

My final takeaway is mixed, however. On the one hand, Rantanen is right that the different methodologies make it hard to combine studies to get a complete picture. More consistent measures would be helpful. On the other hand, many folks count the way they do because they see deficiencies with past methodologies. I know I did. For example, when counting outcomes, I was sure to count how many cases settled without a merits ruling either way (almost all of them). Why? Because "half of patents are invalidated" and "half of the 10% of patents ever challenged are invalidated" are two very different claims.
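To put numbers on that denominator problem, here is a minimal sketch in Python; all of the figures are invented for illustration:

    # Hypothetical numbers: 10,000 patents, 10% ever challenged to a
    # merits ruling, half of those invalidated.
    patents = 10_000
    challenged = int(0.10 * patents)      # 1,000 reach a merits ruling
    invalidated = int(0.50 * challenged)  # 500 invalidated

    rate_among_challenged = invalidated / challenged  # 0.50
    rate_among_all = invalidated / patents            # 0.05

    print(f"{rate_among_challenged:.0%} of challenged patents invalidated")
    print(f"{rate_among_all:.0%} of all patents invalidated")

The same 500 invalidations read as "half of patents are invalidated" under one denominator and as 5% under the other - which is exactly why the cases that settle without a merits ruling have to be counted.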

Thus, I suspect one reason we see inconsistency is that each later researcher has improved on the methodology of those who went before, at least in his or her own mind. If that's true, the only way we get to consistency now is if we are in some sort of "post-experimental" world of counting. And if that's true, then I suspect we won't see multiple studies in the first place (at least not for the same time period). Why bother counting the same thing the same way a second time?

Friday, June 10, 2016

Patent Damages Conference at Texas Law

Numerous patent academics, practitioners, and judges gathered in Austin at the University of Texas School of Law yesterday and today for a conference on patent damages, organized by Prof. John Golden and supported by a gift from Intel. Here's a quick overview of the 12 papers that were presented, the suggestions from the paper commenters, and some notes from the Q&A. (We're following a modified Chatham House Rule under which only statements from academics can be attributed, but it was great having others in the room.)

Jason Bartlett & Jorge Contreras, Interpleader and FRAND Royalties – There is no reason to believe the sum of the bottom-up royalty determinations from FRAND proceedings will be reasonable in terms of the overall value the patents contribute to the standard. To fix this, statutory interpleader should be used to join all patent owners for a particular standard into a single proceeding that starts with a top-down approach. Arti Rai asks whether the bottom-up approach really creates such significant problems. Why can’t courts doing the bottom-up approach look at what prior courts have done? And doesn’t this vary depending on what product you’re talking about? But ultimately, this is a voluntary proposal that individual clients could test out. Doug Melamed notes that even if royalties in individual cases are excessive, standard implementers won't have an incentive to interplead unless their aggregate burden is excessive—and given the large number of "sleeping dog" patents, it's not clear that's true.
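To see the stacking problem the interpleader proposal targets, here is a toy numeric sketch in Python; the per-patentee rate, the number of SEP holders, and the top-down aggregate cap are all invented for illustration, not taken from the paper:

    # Bottom-up: each of 20 SEP holders separately wins a rate that looks
    # reasonable in isolation, but the stack exceeds any plausible
    # aggregate value of the standard. All numbers are hypothetical.
    n_holders = 20
    per_holder_rate = 0.015                         # 1.5% of product price each
    bottom_up_stack = n_holders * per_holder_rate   # 30% of product price

    # Top-down: start with an aggregate cap and apportion it (here, equally).
    aggregate_cap = 0.10
    top_down_rate = aggregate_cap / n_holders       # 0.5% each

    print(f"bottom-up stack: {bottom_up_stack:.0%} of price")
    print(f"top-down per-holder rate: {top_down_rate:.2%} of price")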

Ted Sichelman, Innovation Factors for Reasonable Royalties – Instead of calculating royalties based on the infringer's revenues, let's use the patentee's R&D costs (including related failures and commercialization costs) and award a reasonable rate of return. This would be better aligned with the innovation-focused goals of patent law. Becky Eisenberg notes that it is stunning that patentee costs aren't in the kitchen-sink Georgia-Pacific list, and she thinks the idea of moving toward a cost-based approach more broadly has significant normative appeal, but she doesn't think it would be easier to apply (see, e.g., criticisms of the DiMasi estimates of pharmaceutical R&D costs). I think this paper is tapping into the benefits of R&D tax credits as an innovation reward. Daniel Hemel and I have compared the cost-based reward of R&D tax credits with the typical patent reward (in a paper Ted has generously reviewed), and it seems worth thinking more about whether and when it makes sense to move this cost-based reward into the patent system.
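As a gloss on the proposal, here is a minimal sketch of what a cost-plus-return royalty could look like; the dollar figures, the rate of return, the horizon, and the compounding formula are my own assumptions for illustration, not the paper's actual method:

    # Hypothetical cost-based reasonable royalty: the patentee's R&D
    # investment (including related failures and commercialization costs)
    # plus an assumed reasonable rate of return, compounded.
    rd_cost = 5_000_000            # direct R&D on the patented invention
    failures = 2_000_000           # allocated cost of related failed projects
    commercialization = 1_000_000  # cost of bringing the invention to market
    annual_return = 0.12           # assumed "reasonable" rate of return
    years = 4                      # assumed investment horizon

    invested = rd_cost + failures + commercialization
    royalty = invested * (1 + annual_return) ** years
    print(f"cost-based royalty: ${royalty:,.0f}")  # about $12.6 million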

Tuesday, June 7, 2016

Does Europe Have Patent Trolls?

There have been countless articles—including in the popular press—about the problems (or lack thereof) with "patent trolls" or "non-practicing entities" (NPEs) or "patent-assertion entities" (PAEs) in the United States. Are PAEs and NPEs a uniquely American phenomenon? Not exactly, says a new book chapter, Patent Assertion Entities in Europe, by Brian Love, Christian Helmers, Fabian Gaessler, and Max Ernicke.

They study all patent suits filed from 2000-2008 in Germany's three busiest courts and most cases filed from 2000-2013 in the UK. They find that PAEs (including failed product companies) account for about 9% of these suits and that NPEs (PAEs plus universities, pre-product startups, individuals, industry consortiums, and IP subsidiaries of product companies) account for about 19%. These are small numbers by U.S. standards, but still significant. Most European PAE suits involve computer and telecom technologies. Compared with the United States, more PAE suits are initiated by the alleged infringer, fewer suits involve validity challenges, fewer suits settle, and more suits involve patentee wins.

Many explanations have been offered for the comparative rarity of PAE suits in Europe, including higher barriers to patenting software, higher enforcement costs, cheaper defense costs, smaller damages awards, and more frequent attorney's fee awards. The authors think their "data suggests that each explanation plays a role," but that "the European practice of routinely awarding attorney's fees stands out the most as a key reason why PAEs tend to avoid Europe."

Tuesday, May 31, 2016

Rachel Sachs: Prizing Insurance

If anyone is looking for a clear and comprehensive review of the ways in which patents can distort investment in innovation, as well as a summary of the literature on incentives "beyond IP", I highly recommend Rachel Sachs' new article Prizing Insurance: Prescription Drug Insurance as Innovation Incentive, forthcoming in the Harvard Journal of Law & Technology. Sachs' article is specific to the pharmaceutical industry but is very useful for anyone writing on the general topics of non-patent alternatives and patent-caused distortion of innovation. Sachs follows in the footsteps of IP scholars like Amy Kapczynski, along with Rebecca Eisenberg, Nicholson Price, Arti Rai, and Ben Roin, and draws on plentiful literature in the health law field that IP scholars may never see. Her analysis is far more detailed and sophisticated than this brief summary. Read more at the jump.

Friday, May 27, 2016

Thoughts on Google's Fair Use Win in Oracle v. Google

It seems like I write a blog post about Oracle v. Google every two years. My last one was on May 9, 2014, so the time seems right (and a fair use jury verdict indicates now or never). It turns out that I really like what I said last time, so I'm going to reprint what I wrote at Madisonian.net a couple years ago at the bottom. Nothing has changed about my views of the law and of what the Federal Circuit ruled.

So, this was a big win for Google, especially given the damages Oracle was seeking. But it was a costly win. It was expensive to have a trial, and it was particularly expensive to have this trial. But it is also costly because it leaves so little answered: what happens the next time someone wants to do what Google did? I don't know. Quite frankly, I don't know how often people make compatible programs already, how many were holding back, or how many will be deterred.

Google did this a long time ago thinking it was legal. How many others have done similar work and haven't been sued? Given how long it has been since Lotus v. Borland quieted things, has the status quo changed at all? My thoughts after the jump.

Thursday, May 19, 2016

Galasso & Schankerman on the Effect of Patent Invalidation on Subsequent Innovation by the Patentee

In a paper previously featured on this blog, economists Alberto Galasso (University of Toronto) and Mark Schankerman (London School of Economics) pioneered the use of effectively random Federal Circuit panel assignments as an instrumental variable for patent invalidation. That paper looked at the effect of invalidation on citations to the patent; they now have a new paper, Patent Rights and Innovation by Small and Large Firms, examining the effect of invalidation on subsequent innovation by the patent holder. They summarize their results as follows:
Patent invalidation leads to a 50 percent decrease in patenting by the patent holder, on average, but the impact depends critically on characteristics of the patentee and the competitive environment. The effect is entirely driven by small innovative firms in technology fields where they face many large incumbents. Invalidation of patents held by large firms does not change the intensity of their innovation but shifts the technological direction of their subsequent patenting.
Their measure of post-invalidation patenting is the number of applications filed by the patent owner in a 5-year window after the Federal Circuit decision. They also present results suggesting that large firms tend to redirect their research efforts after invalidation of a non-core patent (but not for a core patent), whereas "the loss of a patent leads small firms to reduce innovation across the board, rather than to redirect it." (A "core" patent is one whose two-digit technology field accounts for at least 2/3 of the firm's patenting.)
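The "core" definition is mechanical enough to state in code; here is a minimal sketch (the field codes and the portfolio are hypothetical, not from the paper's data):

    from collections import Counter

    def is_core(firm_fields: list[str], patent_field: str) -> bool:
        # Per the paper's definition: a patent is "core" if its two-digit
        # technology field accounts for at least 2/3 of the firm's patenting.
        counts = Counter(firm_fields)
        return counts[patent_field] / len(firm_fields) >= 2 / 3

    # Hypothetical portfolio: 8 of 10 patents in two-digit field "46".
    portfolio = ["46"] * 8 + ["54", "61"]
    print(is_core(portfolio, "46"))  # True: 80% >= 2/3
    print(is_core(portfolio, "54"))  # False: 10%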

This is a rich paper with many, many results and nuances and caveats—highly recommended for anyone interested in patent empirics.

Monday, May 16, 2016

Rules, Standards, and Change in the Patent System (Keynote Speech Transcript)

Last weekend I was honored to give the keynote speech at the Giles S. Rich Inn of Court annual dinner held at the Supreme Court. It was a great time, and I met many judges, lawyers, clerks, and consultants that I had not met before.

Several people asked me what I planned to discuss, so I thought I would post a (very lightly edited) transcription of my talk. I'll note that the kind words I mention at the beginning refer to my introduction, given by Judge Taranto, which really was too kind and generous by at least half.

The text after the jump.

Wednesday, May 11, 2016

Buccafusco, Heald & Bu: Do Pornographic Knock-offs Tarnish the Original Work?

Trademark law provides a remedy against "dilution by tarnishment of [a] famous mark" and the extension of copyright term was justified in part by concerns about tarnishment if Mickey Mouse fell into the public domain. But there has been little evidence of what harm (if any) trademark and copyright owners suffer due to unwholesome uses of their works. Chris Buccafusco, Paul Heald, and Wen Bu provide some new experimental evidence on this question in their new article, Testing Tarnishment in Trademark and Copyright Law: The Effect of Pornographic Versions of Protected Marks and Works. In short, they exposed over 1000 MTurk subjects to posters of pornographic versions of popular movies and measured perceptions of the targeted movie. They "find little evidence of tarnishment, except for among the most conservative subjects, and some significant evidence of enhanced consumer preferences for the 'tarnished' movies."

Before describing the experiments, their article begins with a thorough review of tarnishment theory and doctrine, as well as consumer psychology literature on the role of sex in advertising. For both experiments, subjects were shown numerous pairs of movie posters, and were asked questions like which movie a theater should show to maximize profits. In the first experiment, treatment subjects saw a poster for a pornographic version of one of the movies; e.g., before comparing Titanic vs. Good Will Hunting, treatment subjects had to compare the porn parody Bi-Tanic vs. another porn movie. Overall, control subjects chose the target movie (e.g., Titanic) 53% of the time, whereas treatment subjects who saw the porn poster (e.g., Bi-Tanic) chose the target movie 58% of the time, and this increase was statistically significant. Women were no less affected by the pornographic "tarnishment" than men, and familiarity with the target movie did not have any consistent effect.
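As a back-of-the-envelope look at what "statistically significant" means for a 53%-versus-58% split, here is a two-proportion z-test sketch in Python. The per-cell observation counts aren't reported in this summary, so the n's below are hypothetical (each subject compared numerous pairs, so observations exceed subjects); the study's actual test may differ:

    from math import sqrt

    n1, p1 = 1500, 0.53  # control observations choosing the target (assumed n)
    n2, p2 = 1500, 0.58  # treatment observations (assumed n)

    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    print(f"z = {z:.2f}")  # ~2.76 here; |z| > 1.96 is significant at the 5% level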

Sunday, May 8, 2016

Jotwell Post: Is It Time To Overrule the Trademark Classification Scheme?

As I've noted before, Jotwell is a great way to keep up with interesting recent scholarship in IP and other areas of law. My latest Jotwell review, of Jake Linford's Are Trademarks Ever Fanciful?, was just published on Friday. As I describe in the post, this is the latest in an impressive trifecta of recent articles that have attacked the Abercrombie spectrum for word marks from all sides. The full review is available here.

Tuesday, May 3, 2016

[with Colleen Chien] Recap of the Berkeley Software IP Symposium

Slides and papers from the 20th Annual Berkeley Center for Law and Technology/Berkeley Technology Law Journal Symposium - focused on IP and software - are now posted. Colleen Chien and I thought we would discuss a few highlights (with some commentary sprinkled in):

David Hayes' opening keynote on the history of software and IP was terrific. The general tenor was that copyright rose and fell, with a lot of uncertainty in between. Just as copyright fell, patent rose, and it is now falling, with a lot of uncertainty in between. And trade secret law has remained generally steady throughout. David has long been the Chair of the Intellectual Property Group of Fenwick & West - former home to USPTO Director Michelle Lee, as well as IP professors Brenda Simon, Steve Yelderman, and Colleen Chien - and is one of the wisest and most experienced IP counselors in the valley. (Relatedly, Michael Risch's former firm was founded by former Fenwick & West lawyers.)

Peter Menell's masterful presentation on copyright and software spanned decades and ended with a Star Wars message, "May the Fair Use Be With You."

Randall Picker took a different view of copyright and software, focusing instead on whether reuse was simply an add-on/clone or a new platform/core product. Thus, he thought Sega v. Accolade came out wrong because allowing fair use for an unlicensed game undermined the discount pricing for game consoles, but thought Whelan v. Jaslow (a case nearly everyone hates) came out properly because the infringing software was a me-too clone. Borland, on the other hand, created a whole new spreadsheet program to compete. In related work, Risch published "How can Whelan v. Jaslow and Lotus v. Borland Both be Right?" some 15 years ago.

Felix Wu presented an interesting talk about how the copyright "abstraction-filtration-comparison" test might be used to determine the meaning of "means plus function" claims in patent law.

MIT's Randall Davis's "technical talk" explained how software is made and why abstractions are the essence of software. It's turtles all the way down: one level that seems concrete is merely an abstraction when viewed from the level below. The challenge, it seems, is that "abstract" can therefore describe almost anything.
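One way to picture the turtles in code (my own illustration, not from the talk): each level looks like a primitive operation until you peel it back to the level beneath.

    import statistics

    data = [3, 1, 4, 1, 5]

    # Level 1: a library call - feels like a primitive operation.
    m1 = statistics.mean(data)

    # Level 2: one turtle down - a sum and a division.
    m2 = sum(data) / len(data)

    # Level 3: another turtle down - sum() is itself just a loop.
    total = 0
    for x in data:
        total += x
    m3 = total / len(data)

    # (And += is bytecode, which is machine instructions, which are gates...)
    assert m1 == m2 == m3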

Rob Merges further discussed how we might define abstract. His suggestion was to treat abstract as the opposite of concrete and definite. Thus, patents would need to be far more detailed than many that are being rejected now, but such a standard might be clearer to apply.

Arti Rai discussed a similar solution, noting that lower levels of abstraction were more likely to be affirmed. Furthermore, solutions to computer-specific problems seem to hold a key. Rai and Merges should be posting papers on these topics soon.

Kevin Collins presented a draft paper on Williamson v. Citrix Online. He posited that Williamson would present difficult challenges for courts trying to determine structure - including structure that's supposedly present in the claim. He presented some ideas about how to think about solutions to the problem.

Similarly, Lee Van Pelt showed some difficulties with Williamson (including Williamson itself) in practice.

Michael Risch's talk and paper leave off where Hayes ended, with the fall of patents. The paper explores whether, in the wake of the trouble software patents are in, developers might turn to trade secret to protect visible features, and what the implications might be. It turns out that less than a week after the conference, a software company won a $940m jury verdict on exactly this theory.

Colleen Chien's talk explored, if software is eating the world (H/T Marc Andreessen), how much IP and its default allotments matter in a world where contract is king and monopolies come from data, network effects, scale (a la Thiel), and, possibly, winner-take-all dynamics (as discussed on Mike Masnick's recent podcast) rather than from patents and copyrights. It presents early results and an early draft paper from an analysis of ~2000 technology agreements and some 30k sales involving software, finding evidence of both technology and liability transfers.

Aaron Perzanowski's presentation and forthcoming book with Jason Schultz suggest that perhaps the IoT should be known as the IoThings-We-Don't-Own.

Relatedly, John Duffy addressed the first sale doctrine and presented his recent paper with Richard Hynes, which shows how commercial law ties to exhaustion and explains how exhaustion should work. This is relevant to the Federal Circuit's recent decision in the Lexmark case on international exhaustion.

The second-day lunchtime keynote speaker, William Raduchel, talked about the importance of culture to innovation and IP. As Mark Zuckerberg mentioned on an investor call, Facebook develops openly (some of its IT infrastructure and non-core innovation, at least) because that's what its developers demand and need to get the job done. He also discussed how "deep learning" may change how we consider IP, because computers will now be writing the code that produces creative and inventive output.

The empirical panel provided a helpful overview of recent studies. Pam Samuelson's talk highlighted changes in the software industry - particularly the growth of software as a service (SaaS), the cloud, the app market, the IoT, and embedded software - as well as changes in the software IP protection landscape since the Berkeley Patent Survey was carried out in 2007. Samuelson also discussed how recent invalidations of algorithm and data structure patents will affect copyright. If those features are too abstract for patenting, then we should consider whether they are too abstract for copyright protection, even if they might be expressed in multiple ways. (NB: A return to the old Baker v. Selden conundrum: bookkeeping systems are the province of patents, not copyrights. But can you patent a bookkeeping system? Maybe a long time ago, but surely not today.)

John Allison gave an overview of what we know (empirically) about software patents. And the chief IP officers panel was a highlight, as each panelist had a different perspective on the system based on his or her company's position - though they did agree on a few basics, such as the need for some way to appropriate investments and a preference for clear lines.

There is much more at the link to the symposium, including slides, drafts, and past (but relevant) papers. It's well worth a look! TAP is also running a seven-part series on the conference, starting with this overview of David Hayes' talk.