Tuesday, December 22, 2015

Burk: Is Dolly patentable subject matter in light of Alice?

Dan Burk's work should already be familiar to those who follow patentable subject matter debates (see, e.g., here, here, and here). In a new essay, Dolly and Alice, he questions whether the Federal Circuit's May 2014 In re Roslin decision—holding that clones such as Dolly are not patentable subject matter—should have come out differently under the Supreme Court's June 2014 decision in Alice v. CLS Bank. Short answer: yes.

Burk does not have kind words for either the Federal Circuit or the Supreme Court, and he reiterates his prior criticism of developments like the gDNA/cDNA distinction in Myriad. His analysis of how Roslin would fare under Alice begins on p. 11 of the current draft:
[E]ven assuming that the cloned sheep failed the first prong of the Alice test, the analysis would then move to the second prong to look for an "inventive concept" that takes the claimed invention beyond an attempt to merely capture the prohibited category of subject matter identified in the first step. . . . The Roslin patent claims surely entail such an inventive concept in the method of creating the sheep. The claims recite "clones," which the specification discloses were produced by a novel method that is universally acknowledged to have been a highly significant and difficult advance in reproductive technology—an "inventive concept" if there ever was one . . . [which] was not achieved via conventional, routine, or readily available techniques . . . .
But while Burk thinks Roslin might have benefited from the Alice framework, he also contends that this exercise demonstrates the confusion Alice creates across a range of doctrines, and particularly for product-by-process claims. He concludes by drawing an interesting parallel to the old Durden problem of how the novelty of a starting material affects the patentability of a process, and he expresses skepticism that there is any coherent way out; rather, he thinks Alice "leaves unsettled questions that will haunt us for years to come."

Tuesday, December 15, 2015

3 New Copyright Articles: Buccafusco, Bell & Parchomovsky, Grimmelmann

My own scholarship and scholarly reading focuses most heavily on patent law, but I've recently come across a few interesting copyright papers that seem worth highlighting:
  • Christopher Buccafusco, A Theory of Copyright Authorship – Argues that "authorship involves the intentional creation of mental effects in an audience," which expands copyrightability to gardens, cuisine, and tactile works, but withdraws it from aspects of photographs, taxonomies, and computer programs.
  • Abraham Bell & Gideon Parchomovsky, The Dual-Grant Theory of Fair Use – Argues that rather than addressing market failure, fair use calibrates the allocation of uses among authors and the public. A prima facie finding of fair use in certain categories (such as political speech) could only be defeated by showing the use would eliminate sufficient incentives for creation.
  • James Grimmelmann, There's No Such Thing as a Computer-Authored Work – And It's a Good Thing, Too – "Treating computers as authors for copyright purposes is a non-solution to a non-problem. It is a non-solution because unless and until computer programs can qualify as persons in life and law, it does no practical good to call them 'authors' when someone else will end up owning the copyright anyway. And it responds to a non-problem because there is nothing actually distinctive about computer-generated works."
Are there other copyright pieces posted this fall that I should take a look at?

Update: For readers not on Twitter, Chris Buccafusco added some additional suggestions:

Tuesday, December 8, 2015

Bernard Chao on Horizontal Innovation and Interface Patents

Bernard Chao has posted an interesting new paper, Horizontal Innovation and Interface Patents (forthcoming in the Wisconsin Law Review), on inventions whose value comes merely from compatibility rather than improvements on existing technology. And I'm grateful to him for writing an abstract that concisely summarizes the point of the article:
Scholars understandably devote a great deal of effort to studying how well patent law works to incentivize the most important inventions. After all, these inventions form the foundation of our new technological age. But very little time is spent focusing on the other end of the spectrum, inventions that are no better than what the public already has. At first blush, studying such “horizontal” innovation seems pointless. But this inquiry actually reveals much about how patents can be used in unintended, and arguably, anticompetitive ways.
This issue has roots in one unintuitive aspect of patent law. Despite the law’s goal of promoting innovation, patents can be obtained on inventions that are no better than existing technology. Such patents might appear worthless, but companies regularly obtain these patents to cover interfaces. That is because interface patents actually derive value from two distinct characteristics. First, they can have “innovation value” that is based on how much better the patented interface is than prior technology. Second, interface patents can also have “compatibility value.” In other words, the patented technology is often needed to make products operate (i.e. compatible) with a particular interface. In practical terms, this means that an interface patent that is not innovative can still give a company the ability to foreclose competition.
This undesirable result is a consequence of how patent law has structured its remedies. Under current law, recoveries implicitly include both innovation and compatibility values. This Article argues that the law should change its remedies to exclude the latter kind of recovery. This proposal has two benefits. It would eliminate wasteful patents on horizontal technology. Second, and more importantly, the value of all interface patents would be better aligned with the goals of the patent system. To achieve these outcomes, this Article proposes changes to the standards for awarding injunctions, lost profits and reasonable royalties.
The article covers examples ranging from razor/handle interfaces to Apple's patented Lightning interface, so it is a fun read. And it also illustrates what seems like an increasing trend in patent scholarship, in which authors turn to remedies as the optimal policy tool for effecting their desired changes.

Wednesday, December 2, 2015

Sampat & Williams on the Effect of Gene Patents on Follow-on Innovation

Bhaven Sampat (Columbia Public Health) and Heidi Williams (MIT Econ) are two economists whose work on innovation is always worth reading. I've discussed a number of their papers before (here, here, here, here, and here), and Williams is now a certified genius. They've posted a new paper, How Do Patents Affect Follow-On Innovation? Evidence from the Human Genome, which is an important follow-up to Williams's prior work on gene patents. Here is the abstract:
We investigate whether patents on human genes have affected follow-on scientific research and product development. Using administrative data on successful and unsuccessful patent applications submitted to the US Patent and Trademark Office, we link the exact gene sequences claimed in each application with data measuring follow-on scientific research and commercial investments. Using this data, we document novel evidence of selection into patenting: patented genes appear more valuable — prior to being patented — than non-patented genes. This evidence of selection motivates two quasi-experimental approaches, both of which suggest that on average gene patents have had no effect on follow-on innovation.
Their second empirical design is particularly clever: they use the leniency of the assigned patent examiner as an instrumental variable for which patent applications are granted patents. Highly recommended.
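For readers who want to see the intuition behind the examiner-leniency design, here is a minimal sketch of a two-stage least squares estimator on synthetic data. The variable names and numbers are mine, invented for illustration, not the authors' dataset or code:

```python
# Toy sketch of an examiner-leniency instrumental-variables design.
# Synthetic data and invented variable names -- not the authors' data or code.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved gene "value" drives both patenting and follow-on research --
# the selection problem the instrument is meant to solve.
value = rng.normal(size=n)
leniency = rng.uniform(0.2, 0.8, size=n)  # assigned examiner's historical grant rate
granted = (rng.uniform(size=n) < leniency + 0.1 * value).astype(float)
followon = 2.0 + 1.5 * value + rng.normal(size=n)  # true causal effect of a grant: zero

# Naive OLS is biased upward: valuable genes are both more likely to be
# patented and more likely to attract follow-on work.
X_ols = np.column_stack([np.ones(n), granted])
print("OLS:", np.linalg.lstsq(X_ols, followon, rcond=None)[0][1])

# 2SLS: stage 1 predicts grants from examiner leniency (as-if random assignment
# assumed); stage 2 regresses follow-on outcomes on the predicted grant.
X1 = np.column_stack([np.ones(n), leniency])
granted_hat = X1 @ np.linalg.lstsq(X1, granted, rcond=None)[0]
X2 = np.column_stack([np.ones(n), granted_hat])
print("2SLS:", np.linalg.lstsq(X2, followon, rcond=None)[0][1])  # ~0: no effect
```

Because the instrument (which examiner you happen to draw) is unrelated to a gene's underlying value, the second-stage estimate recovers the null effect even though the naive regression shows a large spurious one.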

Saturday, November 28, 2015

Tim Holbrook on Induced Patent Infringement at the Supreme Court

Tim Holbrook (Emory Law) has a new article, The Supreme Court's Quiet Revolution in Induced Patent Infringement (forthcoming in the Notre Dame Law Review), arguing that with all the hand-wringing over Supreme Court patentable subject matter cases, scholars have missed the substantial changes the Court has wrought in induced patent infringement. Here is the abstract:
The Supreme Court over the last decade or so has reengaged with patent law. While much attention has been paid to the Court’s reworking of what constitutes patent eligible subject matter and enhancing tools to combat “patent trolls,” what many have missed is the Court’s reworking of the contours of active inducement of patent infringement under 35 U.S.C. § 271(b). The Court has taken the same number of § 271(b) cases as subject matter eligibility cases – four. Yet this reworking has not garnered much attention in the literature. This article offers the first comprehensive assessment of the Court’s efforts to define active inducement. In so doing, it identifies the surprising significance of the Court’s most recent case, Commil USA, LLC v. Cisco Systems, Inc., where the Court held that a good faith belief on the part of the accused inducer cannot negate the mental state required for inducement – the intent to induce acts of infringement. In so doing, the Court moved away from its policy of encouraging challenges to patent validity as articulated in Lear, Inc. v. Adkins and its progeny. This step away from Lear is significant and surprising, particularly where critiques of the patent system suggest there are too many invalid patents creating issues for competition. This article critiques these aspects of Commil and then addresses lingering, unanswered questions. In particular, this article suggests that a good faith belief that the induced acts are not infringing, which remains as a defense, should only act as a shield against past damages and not against prospective relief such as injunctions or ongoing royalties. The courts so far have failed to appreciate this important temporal dynamic.
The four cases he's talking about are Grokster, Global-Tech, Limelight, and Commil. (You might say, "Wait, Grokster is a copyright case!" But Holbrook explains the substantial impact it had on patent law.) I think the article is worth a read, and that the concluding point on damages is quite interesting.

Tuesday, November 24, 2015

Decoding the Patent Venue Statute

Last Friday, Colleen Chien and I published an op-ed in the Washington Post arguing that the courts and/or Congress should take a hard look at venue provisions. It was a fun and challenging project, because we worked hard to delineate where we agreed and where we disagreed. One area where we weren't sure if we disagreed or not was whether the 2011 amendment to the general venue provisions should affect patent venue.

This is a thorny statutory interpretation issue, and because we didn't have space to discuss it in the op-ed (nor did we agree on all the details), I thought I would lay out my view of the issues here. My views don't speak for Colleen. Further, while my views fall on one side, they do so based solely on statutory interpretation. I don't have a horse in the policy race other than to say that it's important, it's complicated, and it should be considered.

Here is my tracing of the history:

Thursday, November 19, 2015

Defending a Federal Trade Secrets Law

This past week, 42 professors sent a letter to Congress opposing the Defend Trade Secrets Act. The same week, James Pooley, a well-known attorney and former Deputy Director General of WIPO, released a draft of The Myth of the Trade Secret Troll: Why We Need a Federal Civil Claim for Trade Secret Misappropriation, forthcoming in the George Mason Law Review.

Jim Pooley is the author of a treatise on trade secrets, and I respect him greatly. He has forgotten more about trade secrets than most people will ever know, and it should be no surprise that his defense of the DTSA is the most well-reasoned I've seen. He considers each of the objections one by one and addresses the underlying concerns: that the UTSA is not that uniform, that the seizure provision is narrow, that global trade secret risks require federal jurisdiction, and that state trade secret laws will not be preempted.

The paper is worth a read. It is likely to be persuasive to those who are already on board, and it might sway those who are undecided.

I'm right in the middle on this one. I signed on to the professors' letter, but barely. I think the latest version of the proposed law is much improved from earlier drafts, but I have concerns about inevitable disclosure and the seizure rules.

Here is my take on four of the primary defenses of/needs for the new act:

It's a cyber world that needs federal procedures: A big part of the push for a federal law is that it opens up federal courts to trade secret cases and thus better procedures. While I understand this, I wonder whether the argument proves too much. If the concern is about foreign actors acting over the network, then federal courts will have diversity jurisdiction, and all the procedural hurdles melt away. Further, if it's about procedure, then a simple solution would be to allow filing of trade secret actions in federal court. Nor are the procedural advantages a panacea; sometimes state court judges are more accessible and move faster than federal judges. Pooley argues the opposite, but my own, admittedly more limited, experience is that it depends on the judge, not the forum. And while procedures are improved in the federal system, it was only in 2014 that out-of-district subpoenas could issue in the local district court, and one must still file motions in the remote district to enforce them.

The UTSA is not really that uniform: This is true. Indeed, I wrote an empirical essay and book chapter showing that courts routinely fail to cite Uniform Trade Secrets Act decisions from other states' courts. But it is not clear that the non-uniformities, either in statute or in practice, are of the type that will affect important outcomes when there is a real trade secret misappropriation. I've yet to hear of a case where the venue's peculiar trade secret laws made a difference in the type of "global cyber-espionage" misappropriation that Pooley is concerned with.

Seizure rules are narrow, and they are narrower than they were in prior drafts: In my studies, injunctions were the types of decisions least likely to result in citation to UTSA cases, as opposed to "general" injunction law. This seems to favor the need for a specialized procedure. That said, I've yet to hear a convincing reason why TRO practice is insufficient or why seizures must be handled in federal court. I've represented clients on both sides of seizures, and I've seen how court papers can be manipulated to get desired results. This is not to say that we shouldn't have ex parte seizures; just that the case for a specialized procedure is not clear. I've yet to hear of a case with a real misappropriation of the "global cyber-espionage" type where a TRO was refused and the bad guys got away. I also think that the seizure laws are not quite as narrow as they could be - there is still plenty of room for abuse. [UPDATE: Eric Goldman provides good analysis--and critique--of the seizure provisions.]

The proposed act is neutral on inevitable disclosure: The paper makes a good point: inevitable disclosure is not substantive trade secret law, but instead how courts apply "threatened" misappropriation injunctions. This is true, but it doesn't answer the concern. Some states have a stronger policy of employee mobility than others, and thus require more evidence of a threat. The concern with a federal law is that precedent in one circuit (or district court) will be applied in other district courts. That state trade secret law is not preempted is no answer, because the supremacy clause will dictate how federal law applies. State trademark laws aren't preempted either, and we don't see those applied very often - and never to allow more use by the defendant, only less. Thus, the standard for what constitutes a "threat" could be weakened, and that concerns those who view employee mobility as important for competition policy. While I agree with Pooley on the doctrine, I don't think the doctrinal view is enough to persuade that this is not a concern.

So, where does that leave us? Quite frankly, I don't know. I think that the case for a federal trade secret law is not that strong. There are benefits to such a law, but I'm not convinced they are so great that we should supplant 150 years of state regulation of trade secrets. On the other hand, I don't think the case for keeping state trade secret laws is that great either. I like the federalist experiment argument, but there are a whole lot of states that have rules that I don't like (such as the inevitable disclosure doctrine). In my view, no one has made the case that one system is better than the other.

What I do think would be helpful - to me, at least - would be to hear the horror stories of trade secret misappropriation where current law and procedure failed, and the misappropriator got away because we did not have this law. I bet there are some such stories, but I haven't heard one yet.

Wednesday, November 18, 2015

McCarthy & Roumiantseva on Federal Circuit Exclusive Trademark Jurisdiction: "We Think Not"

Even though the Federal Circuit is often called "the patent court," it hears appeals in a wide variety of areas. Statistics on caseload by origin are available here; note that patent appeals have ballooned to 62% of the docket in FY2015 from 29% in FY2006.

A number of commentators have proposed "fixing" the Federal Circuit by adjusting its jurisdiction; for example, Paul Gugliuzza's creative suggestion has been discussed on this blog. One proposal has been to give the court exclusive jurisdiction over trademark cases. Tom McCarthy and Dina Roumiantseva think it is time to squelch any enthusiasm for this idea:
With some regularity over the years, a proposal is made to change the Lanham Act so that appeals in all Lanham Act trademark and false advertising cases from district courts across the United States will be diverted from the regional circuit courts of appeal to the Court of Appeals for the Federal Circuit. We think it is time to discuss this proposal head on and hopefully to convince the reader that this diversion is not a good idea and should never be implemented. Advocates of this proposal claim that trademark law would benefit from the consistency that a single appeals court could provide and that the Federal Circuit has exceptional expertise in trademark law. We believe, however, that trademark law does not suffer from the kind of circuit conflict that led to the channeling of all patent appeals to the Federal Circuit in 1982. Moreover, our review of case law suggests that some regional circuits have a comparable or greater experience with trademark law. We argue that no change in the present system of trademark appeals is needed.
Given the benefits of policy diversity and the lack of a compelling argument for centralizing trademark appeals, I tend to agree. Their full essay, Divert All Trademark Appeals to the Federal Circuit? We Think Not, is on SSRN.

Will there ever be changes to the Federal Circuit's jurisdiction? Given the lack of consensus on whether the current jurisdiction creates problems and, if so, how best to fix it in a way that is both sound and politically palatable, I don't foresee any imminent changes.

Thursday, November 12, 2015

Erika Lietzan on the Myths of Data Exclusivity

Erika Lietzan joined the Missouri Law faculty last fall after a distinguished career practicing FDA law, which included making partner at Covington & Burling and serving as Assistant General Counsel for PhRMA (the pharmaceutical lobbying organization). She thus has a wealth of knowledge about the intricacies of pharmaceutical patents and FDA approval, and scholars and reporters interested in the details of pharma news (like the Daraprim controversy) could learn a lot by contacting her.

Lietzan recently posted The Myths of Data Exclusivity. The data exclusivity period for a drug is the period during which the FDA won't allow a generic to rely on the drug's clinical trial data for approval—typically 5 years for new non-biological drugs and 12 years for biological drugs. Data exclusivity was most recently in the news as the "final sticking point" in the TPP negotiations.

Data exclusivity is typically viewed as a patent-like benefit for innovative firms that provides a market-based reward by allowing firms to charge supracompetitive prices. But Lietzan attempts to reframe data exclusivity not as a benefit for pioneer pharmaceutical firms, but rather as "a period of time during which all firms are subject to the same rules governing market entry." She explains:
In 1984 [with the Hatch-Waxman Act], pioneers with non-biological drugs approved after 1962 lost something; their right to perpetual exclusive use of their research became a right to only five years of exclusive use. And in 2010 [with the Biosimilars Act], pioneers with licensed biological drugs lost something; their perpetual exclusive right was shortened to twelve years. This reframing identifies the primary beneficiary of the choice made by policymakers as follow-on applicants rather than pioneers.

Thursday, October 29, 2015

Understanding the Role of Patents for Small Smartphone Companies

When I think of smartphones and smartphone patents, I think of the big battles and players: Apple v. Samsung, Motorola v. Microsoft, NTP v. RIM, Nokia, Ericsson, Google, Sony, and other mega-companies. But what about small smartphone companies? Do they have patents? And, if so, how do those patents affect important issues like fundraising and litigation?

Joel R. Reidenberg, N. Cameron Russell, Maxim Price & Anand Mohan (Fordham Law School and Fordham CLIP) answer some of these questions in their article, Patents and Small Participants in the Smartphone Industry (18 Stan. Tech. L. Rev. 375 (2015)). Here is the abstract:
For intellectual property law and policy, the impact that patent rights may have on the ability of small companies to compete in the smartphone market is a critically important issue for continued robust innovation. Open and competitive markets provide vitality for the development of smartphone technologies. Nevertheless, the impact of patent rights on the smartphone industry is an unexplored area of empirical research. Thus, this Article seeks to show how patent rights affect the ability of small participants to enter, compete, and exit smartphone markets. The study collected and used comprehensive empirical data on patent grants, venture funding, mergers and acquisitions, initial public offerings, patent litigation, and marketing research data. This Article shows empirically that small participants succeed in the market when they have a low and specific critical mass of patents and that this success exceeds the general norms in the startup world. Surprisingly, the analysis demonstrates that the level of financing and market success do not increase with larger patent portfolios. Lastly, despite the controversies over patent trolls, this Article demonstrates that patent litigation, whether from operating companies or NPEs, does not appear to be a significant concern for small players and does not appear to pose barriers to entry. The Article concludes by arguing that patent rights are providing incentives for innovation among small industry players and that contrary to some expectations, patent rights support competitiveness in the smartphone industry for small market players.
This is an interesting article - my comments after the jump.

Tuesday, October 20, 2015

How Often are DMCA Takedown Notices Wrong?

A couple weeks ago, I blogged about Lenz v. Universal Music and wondered how often "bad" DMCA notices are actually sent. My theory was one of availability and salience - we talk about the few nutty requests, but largely ignore the millions of real takedown requests. I wrote:
How important is this case in the scheme of things? On the one hand, it seems really important - it's really unfair (pardon the pun) to take down fair use works. But how often does it happen? Once in a while? A thousand times a month? Ten thousand? It seems like often, because these are the takedowns we tend to hear about; blogs and press releases abound. However, I've never seen an actual number discerned from data (though the data is available).
While there are some older studies on smaller data sets, no one has attempted to tease out the millions of notices that come in each month now (like 50 million requests per month!). It turns out, though, that someone has attempted a comprehensive study through 2012. Daniel Seng (Assoc. Prof. at NUS/JSD student at Stanford) downloaded Google's transparency data and performed cross-checks with Chilling Effects data to give us 'Who Watches the Watchmen?' An Empirical Analysis of Errors in DMCA Takedown Notices:
Under the Digital Millennium Copyright Act (DMCA) takedown system, to request for the takedown of infringing content, content providers and agents issuing takedown notice are required to identify the infringed work and the infringing material, and attest to the accuracy of such information and their authority to act on behalf of the copyright owner. Online service providers are required to evaluate such notices for their effectiveness and compliance before successfully acting on them. To this end, Google and Twitter as service providers are claiming very different successful takedown rates. There is also anecdotal evidence that many of these successful takedowns are "abusive" as they do not contain legitimate complaints of copyright or erroneously target legitimate content sites. This paper seeks to answer these questions by systematically examining the issue of errors in takedown notices. By parsing each individual notice in the dataset of half a million takedown notices and more than fifty million takedown requests served on Google up to 2012, this paper identifies the various types of errors made by content providers and their agents when issuing takedown notices, and the various notices which were erroneously responded to by Google. The paper finds that up to 8.4% of all successfully-processed requests in the dataset had "technical" errors, and that additionally, at least 1.4% of all successfully-processed requests had some "substantive" errors. As all these errors are avoidable at little or no cost, this paper proposes changes to the DMCA that would improve the takedown system. By strengthening the attestation requirements of notices, subjecting notice senders to penalties for submitting notices with unambiguously substantive errors and clarifying the responsibilities of service providers in response to non-compliant notices, the takedown system will remain a fast, efficient and nuanced system that balances the diverse interests of content providers, service providers and the Internet community at large.
I think this is a really interesting and useful paper, and the literature review is also well worth a read. I think the takeaways, though, depend on your priors. Some thoughts on the paper after the jump.

Monday, October 12, 2015

The Banality of the TPP

I've now skimmed through the leaked IP Chapter of the final TPP agreement, and I've read some commentary on it. I'm not sure what to make of it, but at the moment I'm trying to decide if there's anything to make of it. As I read it, almost all of the provisions are an exportation of US law to the agreeing countries - for good or bad. But as a US scholar, that has me thinking, "meh" - at least as to the content of it. I'm not aware of any consensus that the way these other countries were doing things is so much better than our way. Or so much worse, for that matter. I understand concerns that the TPP represents the U.S. getting other countries to agree to its view of the world in exchange for whatever other benefit those countries think they will be getting, but that's not what this post is about. Instead, this post is about whether the TPP is creating some new law. (Disclaimer: I haven't looked at the pharma/biologics sections in detail. I know that these sections, in particular, might create issues in other countries in a way they don't currently experience).

But the hand-wringing I've read in content reviews seems odd. Many complain that copyright term will increase from life + 50 to life + 70. I think that's no big deal - forever + 20 isn't that much longer than forever. This doesn't mean I agree that the duration is a good thing; I don't. Life + 50 is already too long. I just think that if this is what you have to complain about, then there's not so much to complain about.

Other analysis is similar. The EFF points out the shocking clause that breaking digital locks may be punished even if there is no copyright infringement. The italics are theirs, as if this is some new thing. But it turns out that's been the law in the US since the DMCA was passed more than 15 years ago. Similarly, another website complains that devices used to break digital locks may be forfeited and destroyed. Shocking again! Except that this, too, has long been a potential remedy under the law in the U.S.

Don't get me wrong. People who like US law as is will be pleased with the TPP. People who don't like US law as it is will not be pleased with the TPP. But regardless of which camp one is in, I am not convinced that the world will be a significantly better or worse place because of the IP provisions of the TPP.

So, where does this leave us? The TPP has real problems, but they aren't substantive -- at least not newly substantive:

1. The secret negotiation process was not great. But I'm a cynic and think the outcome would have been the same.

2. The TPP locks in US law as it is, so dreams of orphan works and reduced IP protections are gone. But I'm a cynic and think that we are locked in anyway.

3. The TPP exports US law to other countries, extending its hegemony. I'm not convinced that this will change things one way or the other. But I'm a cynic, so time will tell. In the meantime, I don't think anyone's prediction will be accurate.

4. This is long and dense, and there may be parts that will change U.S. law in some way that isn't being discussed now. And I'm a cynic, so I'm sure there are.

Thursday, October 8, 2015

Policy Issues in Lexmark Argument on International Patent Exhaustion

Last Friday, the Federal Circuit heard en banc argument on whether it should adopt a U.S. rule of international patent exhaustion in Lexmark v. Impression Products. This case has important distributive implications for foreign consumers, as Daniel Hemel and I describe in our new essay, Trade and Tradeoffs: The Case of International Patent Exhaustion (forthcoming in the Columbia Law Review Sidebar).

In a Patently-O post last week, we asked whether the Federal Circuit would recognize the U.S.–foreign tradeoff at stake. And the answer appears to be yes. Tony Dutra summed up the argument for Bloomberg (subscription required): Policy Focus in Fed. Cir. Patent Exhaustion Review. Here's an excerpt of his analysis:
Most members of the court appeared prepared to distinguish patent law because there is no Patent Act statutory equivalent to the Copyright Act's provision. However, the discussion turned more to policy questions as the 90-minute argument proceeded. Some judges essentially said that the harm to the copyright holder in Kirtsaeng—books priced more cheaply overseas and imported for less than the U.S. price—was minimal compared to the harm to, for example, AIDS patients in Africa, unless patentees can engage in drug price discrimination.
You can listen to the oral argument yourself here. (Bill Vobach also maintains a helpful key to judge voices.) The most extensive discussion of the issue of AIDS drugs starts at 1:16:02. Barbara Fiacco, arguing for BIO as amicus, discusses the importance of a no-exhaustion rule for allowing regional pricing and preventing arbitrage at 1:05:21.

Thursday, October 1, 2015

How Does the Economy Affect Patent Litigation?

When I was in practice, the conventional wisdom was that litigation (of all kinds) grew during recessions, because people were less optimistic, less willing to let slights go, and instead fought over every dollar. Alan Marco (Chief Economist, PTO), Shawn Miller (Stanford Law Fellow), and Ted Sichelman (San Diego) have attempted to tackle this question with respect to patent litigation. They examine litigation rates from 1971-2009 in conjunction with a variety of macroeconomic factors.

Their paper is coming out in the Journal of Empirical Legal Studies, but a draft is on SSRN. The abstract follows:

Recent studies estimate that the economic impact of U.S. patent litigation may be as large as $80 billion per year and that the overall rate of U.S. patent litigation has been growing rapidly over the past twenty years. And yet, the relationship of the macroeconomy to patent litigation rates has never been studied in any rigorous fashion. This lacuna is notable given that there are two opposing theories among lawyers regarding the effect of economic downturns on patent litigation. One camp argues for a substitution theory, holding that patent litigation should increase in a downturn because potential plaintiffs have a greater incentive to exploit patent assets relative to other investments. The other camp posits a capital constraint theory that holds that the decrease in cash flow and available capital disincentivizes litigation. Analyzing quarterly patent infringement suit filing data from 1971-2009 using a time-series vector autoregression (VAR) model, we show that economic downturns have significantly affected patent litigation rates. (To aid other researchers in testing and extending our analyses, we have made our entire dataset available online.) Importantly, we find that these effects have changed over time. In particular, patent litigation has become more dependent on credit availability in a downturn. We hypothesize that such changes resulted from an increase in use of contingent-fee attorneys by patent plaintiffs and the rise of non-practicing entities (NPEs), which unlike most operating companies, generally fund their lawsuits directly from outside capital sources. Over roughly the last twenty years, we find that macroeconomic conditions have affected patent litigation in contrasting ways. Decreases in GDP (particularly economy-wide investment) are correlated with significant increases in patent litigation and countercyclical economic trends. On the other hand, increases in T-bill and real interest rates as well as increases in economy-wide financial risk are generally correlated with significant decreases in patent suits, leading to procyclical trends. Thus, the specific nature of a downturn predicts whether patent litigation rates will tend to rise or fall.
The authors also have a guest post at Patently-O discussing their findings.

I don't have too much to add to their analysis; the notion that a credit crunch will reduce litigation makes a lot of sense.
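For those curious what a model like this looks like in practice, here is a minimal sketch of a quarterly vector autoregression using statsmodels. The series, coefficients, and lag structure are all invented for illustration - this is not the authors' dataset or specification:

```python
# Minimal sketch of a quarterly VAR like the one the paper describes, run on
# synthetic data -- not the authors' dataset or specification.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
quarters = pd.period_range("1971Q1", "2009Q4", freq="Q")
n = len(quarters)

gdp_growth = rng.normal(0.7, 0.8, size=n)  # % per quarter, invented
tbill = np.clip(5 + np.cumsum(rng.normal(0, 0.3, size=n)), 0.0, None)
# Invented relationship: filings countercyclical in GDP, falling when rates rise.
filings = 300 - 20 * gdp_growth - 5 * tbill + rng.normal(0, 15, size=n)

data = pd.DataFrame(
    {"filings": filings, "gdp_growth": gdp_growth, "tbill": tbill},
    index=quarters.to_timestamp(),
)

results = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
print(results.summary())

# Impulse responses trace how a macro shock propagates into filings over time.
irf = results.irf(periods=8)
irf.plot(impulse="gdp_growth", response="filings")
```

The payoff of the VAR approach is the impulse-response step at the end: it lets you ask how a one-time shock to GDP growth or interest rates ripples through filing rates over subsequent quarters, which is exactly the pro-/countercyclical question the paper poses.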

My two primary additional comments are as follows:

1. There is a lot more to the findings and the authors' analysis than presented in the Patently-O post. For example, there was a shift as litigation changed from competitor-based to licensor-based claims. The full paper is worth a read.

2. I am not sure what this tells us about the period from 2010-2014. The authors hint that economic growth during that time correlates with a drop in litigation, but the drop in litigation came only in late 2014 (and reversed itself in early 2015, as they note). This is further complicated by the change in how we count litigation after the America Invents Act effectively required that unrelated defendants be sued in separate cases. I think a lot more work (and creative thought) needs to be done to meld the pre- and post-AIA data into a coherent data set.

[UPDATE: I've been corrected - patent litigation by defendant count apparently decreased more than I let on (see, e.g., here) if you exclude false marking claims. This tempers some of my skepticism, though I would still like to see the post-AIA data combined with the pre-AIA data.]

Wednesday, September 30, 2015

Trade and Tradeoffs: The Case of International Patent Exhaustion

When I read all the briefs for Lexmark v. Impression Products—the en banc Federal Circuit case on patent exhaustion that will be argued Friday—it seemed like there were pieces missing, including arguments related to an article Daniel Hemel and I are working on. So we've written and posted a short Essay about the case, Trade and Tradeoffs: The Case of International Patent Exhaustion. If ten pages is too long, we also have an even shorter guest post up at Patently-O today, Will the Federal Circuit Recognize the U.S.–Foreign Tradeoff in Friday’s Lexmark Argument? Comments welcome!

Sunday, September 27, 2015

Supreme Court To Consider 12 Patent Petitions Monday

So far, there are zero patent cases (or other IP cases) on the Supreme Court's docket this Term. But tomorrow is the first conference since the Court's summer break, also known as the "Long Conference," at which the Justices will consider twelve petitions in Federal Circuit patent cases. Only one of the twelve involves patentable subject matter, and I don't think the chances of the Court taking it are high. What other issues have been teed up?

The only one of the twelve to make SCOTUSblog's Petitions We're Watching page is W.L. Gore v. Bard, but I'm not sure why they're watching. The long-running dispute over the Gore-Tex patent has now turned to an effort to overturn the longstanding rule that patent licenses may be either express or implied, but the arguments don't seem particularly compelling.

Perhaps somewhat more worth watching is Life Technologies v. Promega, which involves extraterritorial application of U.S. patent laws in a case where LifeTech was found to have actively induced its own foreign subsidiary. The case has strong advocates (Carter Phillips for Life Technologies and Seth Waxman for Promega), and the petition is supported by amici Agilent Technologies and Professor Tim Holbrook, and by a dissent below from Chief Judge Prost.

There are two petitions related to whether the recent Supreme Court § 285 decisions (Octane Fitness and Highmark) also changed the standard for willful infringement under § 284: Halo v. Pulse and Stryker v. Zimmer. As Jason Rantanen noted at Patently-O, Judge Taranto's concurrence from the denial of rehearing en banc in Halo explained that this is not the right case, but that some § 284 issues could warrant en banc review in a future case. I think the Supreme Court might give the Federal Circuit time to work this out.

I/P Engine v. AOL questions whether the Federal Circuit's de facto standard of review in obviousness cases (including implementation of KSR's "common sense" approach) is insufficiently deferential to factual findings. The Federal Circuit's obviousness holding knocked out a $30 million jury verdict (over a dissent by Judge Chen), and the petition is supported by the Boston Patent Law Association and i4i. But this doesn't look like a winner to me: obviousness is a mixed question of fact and law; the Federal Circuit has always articulated what seems like the right standard of review; and it's hard to say the Federal Circuit has vigorously embraced KSR (see, e.g., the end of this post).

None of these seem like must-takes, but we'll see! Grant decisions will likely be released later in the week.

Thursday, September 24, 2015

The Difficulty of Measuring the Impact of Patent Law on Innovation

I'm teaching an international and comparative patent law seminar this fall, and I had my students read pages 80–84 of my Patent Experimentalism article to give them a sense of the difficulty of evaluating any country's change in patent policy. For example, although there is often a correlation between increased patent protection and increased R&D spending, it could be that the R&D causes the patent changes (such as through lobbying by R&D-intensive industries), rather than vice versa. There is also the problem that patent law has transjurisdictional effects: increasing patent protection in one country will have little effect if firms were already innovating for the global market, meaning that studies of a patent law change will tend to understate the policy's impact.

It is thus interesting that some studies have found significant effects from increasing a country's patent protection. One example I quote is Shih-tse Lo's Strengthening Intellectual Property Rights: Experience from the 1986 Taiwanese Patent Reforms (non-paywalled draft here). In 1986, Taiwan extended the scope of patent protection and improved patent enforcement. Lo argues that this change was plausibly exogenous (i.e., externally driven) because it was caused by pressure from the United States rather than domestic lobbying, and he concludes that the strengthening of patent protection caused an increase in R&D intensity in Taiwan.

One of my students, Tai-Jan Huang, made a terrific observation about Lo's paper, which he has given me permission to share: "My first intuition when I see the finding of the article is that the increase of R&D expenses may have something to do with the tax credits for R&D expenses rather than stronger patent protection." He noted that in 1984, Taiwan introduced an R&D tax credit through Article 34-1 of the Investment Incentives Act, which he translated from here:
If the R&D expenses reported by a manufacturing firm exceed its highest annual R&D spending over the previous five years, 20% of the excess may be claimed as a credit against income tax. The total credit used may not exceed 50% of the annual income tax, but unused credits may be carried forward for up to five years.
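As I read the translation, the credit's arithmetic works roughly as follows - a sketch of my own reading of the translated provision, not of the statute itself:

```python
# Sketch of the credit arithmetic as I read the translation above --
# an illustration of my reading, not the statute itself.
def rd_tax_credit(rd_expenses, prior_five_years_rd, annual_income_tax):
    """Return (credit usable this year, credit carried forward up to five years)."""
    baseline = max(prior_five_years_rd)             # highest annual R&D of last 5 years
    excess = max(0.0, rd_expenses - baseline)
    credit = 0.20 * excess                          # 20% of the excess qualifies
    usable = min(credit, 0.50 * annual_income_tax)  # capped at 50% of income tax
    return usable, credit - usable

# Example: R&D jumps to 120 against a prior high of 100, with income tax due of 3.
usable, carryforward = rd_tax_credit(120.0, [80, 90, 100, 95, 85], 3.0)
print(usable, carryforward)  # 1.5 usable now; 2.5 carried forward
```

Note that the credit rewards only R&D growth above the firm's own recent peak, which is exactly the kind of marginal incentive that could plausibly move R&D intensity independent of any patent reform.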
Additional revisions were made in 1987, related to a tax credit for corporations that invest in technology companies, which might indirectly lead to an increase in R&D spending by tech companies. As I've argued (along with Daniel Hemel) in Beyond the Patents–Prizes Debate, R&D tax credits are a very important innovation incentive, and Lo doesn't seem to have accounted for these changes in the tax code. Yet another addition to the depressingly long list of reasons it is hard to measure the impact of patent laws on innovation!

Friday, September 18, 2015

The Availability Heuristic and IP

I'm reading (or more accurately, listening to) Thinking, Fast and Slow, by Daniel Kahneman. The book is an outstanding survey of the psychological literature on how we form judgments and take mental shortcuts. The number of studies in which even highly trained statisticians make basic statistical errors in everyday tasks is remarkable.

The book is a must-read, I think, for scholars of all types. Not only does it provide a variety of food for thought on how to think about forming judgments from research, but its informal style also allows Kahneman to take a meta-view in which he can describe problems of reproducible results and intractable debates in his own field (which, not surprisingly, ring true in IP research as well).

I'll have a couple of posts on this topic in the coming weeks, but the first relates to the availability heuristic. This mental shortcut usually manifests itself by giving greater weight, importance, or perceived frequency to events that are more "available" to the memory - that are more easily conjured by the mind. You usually see this trotted out in debates about the relative safety of air versus car travel (people remember big plane crashes, but far more people die in car accidents). I've also seen it raised in gun control debates, as more children die in swimming pools than from accidental gunshots (especially if you consider the denominators: the number of pools versus the number of guns). But pools are a silent killer. (Note that I make no statement on regulation - perhaps pools are underregulated; insurance companies seem to act as if they are.)

Thursday, September 10, 2015

More Evidence on Patent Citations and Measuring Value

For years, researchers have used patent citations as a way to measure various aspects of the innovative ecosystem. They have been linked to value, information diffusion, and technological importance, among other things. Most studies find that more "forward citations" - that is, more future patents citing back to a patent - mean more of all these things: more value, more diffusion, and more importance.

But forward citations are not without their warts. For example, both my longitudinal study of highly litigious NPEs and random patent litigants and Allison, Lemley & Schwartz's cross-sectional study of all patent cases filed in 2008-2009 found that forward citations had no statistically significant impact on patent validity determinations. Additionally, Abrams, et al., found that actual licensing revenue followed an inverted "U" shape with respect to forward citations (Lisa writes about that paper here). That is, revenue grew as citations grew, but after a peak, revenues began to fall as forward citations grew even larger. This implies that the types of things we can measure with forward citations may be limited by just how many there are, and also by the particular thing we are trying to measure.
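For the empirically inclined: an inverted-U is typically detected by adding a squared citation term to the regression and checking for a positive linear coefficient paired with a negative quadratic one. Here is a toy sketch on synthetic data - not the Abrams et al. specification or dataset:

```python
# Toy sketch: detect an inverted-U by regressing revenue on citations and
# citations squared. Synthetic data -- not the Abrams et al. specification.
import numpy as np

rng = np.random.default_rng(2)
citations = rng.poisson(20, size=5_000).astype(float)
# Invented relationship peaking around 25 citations, then declining.
revenue = 10 + 2.0 * citations - 0.04 * citations**2 + rng.normal(0, 5, size=5_000)

X = np.column_stack([np.ones_like(citations), citations, citations**2])
_, b1, b2 = np.linalg.lstsq(X, revenue, rcond=None)[0]
print(f"linear: {b1:.2f}, quadratic: {b2:.3f}, implied peak: {-b1 / (2 * b2):.1f}")
```

The implied peak (where the fitted curve turns over) is the point past which additional citations predict less revenue rather than more.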

This is why it was so great to see a new NBER paper in my SSRN feed yesterday (it's not totally new - for those who can't get NBER papers, a draft was available about a year ago). The paper, by Petra Moser (NYU), Joerg Ohmstedt (Booz & Co.) & Paul W. Rhode (UNC), is called Patent Citations and the Size of the Inventive Step - Evidence from Hybrid Corn. The abstract follows:
Patents are the main source of data on innovation, but there are persistent concerns that patents may be a noisy and biased measure. An important challenge arises from unobservable variation in the size of the inventive step that is covered by a patent. The count of later patents that cite a patent as relevant prior art – so called forward citations – have become the standard measure to control for such variation. Citations may, however, also be a noisy and biased measure for the size of the inventive step. To address this issue, this paper examines field trial data for patented improvements in hybrid corn. Field trials report objective measures for improvements in hybrid corn, which we use to quantify the size of the inventive step. These data show a robust correlation between citations and improvements in yields, as the bottom line measure for improvements in hybrid corn. This correlation is robust to alternative measures for improvements in hybrid corn, and a broad range of other tests. We also investigate the process, by which patents generate citations. This analysis reveals that hybrids that serve as an input for genetically-related follow-on inventions are more likely to receive self-citations (by the same firm), which suggests that self-citations are a good predictor for follow-on invention.
I love this study because it ties something not only measurable, but objective, to the forward citations. This is something that can't really be done with litigation and licensing studies, both of which have a variety of selection effects that limit their random (shall we say, objective) nature. More on this after the jump.

Tuesday, September 8, 2015

Laura Pedraza-Fariña on the Sociology of the Federal Circuit

The Federal Circuit has faced no shortage of criticism in its role as the expert patent court, including frequent Supreme Court reversals and calls for abolition of its exclusive patent jurisdiction (most prominently from Seventh Circuit Chief Judge Diane Wood, though she was far from the first). In Understanding the Federal Circuit: An Expert Community Approach, Laura Pedraza-Fariña (Northwestern Law) argues that the sociology literature on "expert communities" helps explain the Federal Circuit's "puzzling behaviors."

She suggests that "[t]he drive that expert communities exhibit for maximal control and autonomy of their knowledge base . . . explains why the Federal Circuit is less likely to defer to solutions proposed by other expert communities, such as the PTO," as well as "to defy non-expert superior generalists, such as the Supreme Court." Expert communities also engage in codification of their domains to demonstrate their expertise, manage internal dissent, and constrain subordinate communities, and Pedraza-Fariña argues that this tendency explains the Federal Circuit's frequent preference for rules over standards. (As she notes, this is related to Peter Lee's argument that the Federal Circuit adopts formalistic rules to limit the extent to which generalist judges must grapple with complex technologies.) Finally, expert communities seek to frame borderline problems as within their area of control, and to place inadequate weight on competing considerations outside their expertise—qualities that critics might also pin on the Federal Circuit.

Friday, September 4, 2015

Nothing is Patentable

I signed onto two amicus briefs last week, both related to the tightening noose of patentable subject matter. Those familiar with my article Everything is Patentable will know that I generally favor looser subject matter restrictions in favor of stronger patentability restrictions. That ship sailed, however; apparently we can't get our "stronger patentability restrictions" ducks in a row, and so we use subject matter as a coarse filter. It may surprise some to hear that I can generally live with that as a policy matter; for the most part, rejected patents have been terrible patents.

But, now that these weaker patents are falling like dominoes, I wonder whether subject matter rhetoric can stop itself. This has always been my concern more than any other: the notion of unpatentable subjects is fine, but actually defining a rule (or even a standard) that can be applied consistently is impossible.

This leads us to the amicus briefs. The first is in Sequenom, where the inventors discovered that a) fetal DNA might be in maternal blood, and b) the way to find it is to amplify the paternally inherited fetal DNA in the blood. The problem is that the discovery is "natural" and people already knew how to amplify DNA. As Dennis Crouch notes, this seems like a straightforward application of Mayo - a non-inventive application of the natural phenomenon. Kevin Noonan and Adam Mossoff were counsel of record on the brief.

But here's the thing: it's all in the way you abstract it. Every solution is non-inventive once you know the natural processes behind it. This argument is at the heart of a short essay I am publishing in the Florida L. Rev. Forum called Nothing is Patentable. In that essay, I show that many of our greatest inventions are actually rather simple applications of a natural phenomenon or abstract idea. As such, they would be unpatentable today, even though many of them survived subject matter challenges in their own day.

Returning to Sequenom, there were other ways to parse the natural phenomenon. For example, it is natural that there is fetal DNA in the mother's blood, but finding it by seeking out only the paternal DNA is a non-conventional application of that phenomenon. No one else was doing that. Or, it is natural that there is fetal DNA in the mother, but finding it within the blood is a non-conventional application of that phenomenon. After all, no one had been doing it before, and no one had thought to do it before. Either of these two views is different from the type of application in Mayo v. Prometheus, which simply involved giving a drug and then measuring the level of the drug in the system (something you would expect to find after giving the drug). In Mayo, the court commented on the bitter divide over what to do about diagnostics, and punted for another day. That day has come.


The second amicus brief is in Intellectual Ventures v. Symantec; Jay Kesan filed this brief. In the Symantec case, the district court ruled that unique hashes used to identify files were like license plates, and therefore conventional. Further, it noted that the unique IDs could be created by pencil and paper, given enough time. It distinguished virus signatures (an example in PTO guidance of something that is patentable) by saying that file IDs were not really computer-based, while virus signatures were. I mention this case in my Nothing is Patentable essay as well.

I have less to say about this ruling, but I think it is wrong on both counts. First, unique file ID hashes are much more like virus signatures than they are like license plates. There is a rich computer science literature in this field - solving problems by identifying files through codes derived from their content. Of course, computer science folks will say this is not patentable because it's just math. That's a different debate; but it is surely not the same thing as attaching a license plate to a car. Second, this notion that people can do it with a pencil and paper has got to go. As the brief points out, with enough people and enough time, you can simulate a microprocessor. But that can't be how we judge whether a microprocessor can be patented, can it?
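To see why the license plate analogy fails, consider how a content-derived file ID works in practice - a minimal sketch using Python's standard hashlib, my own toy example rather than anything from the claims at issue:

```python
# Minimal sketch of a content-derived file identifier (my toy example, not the
# claimed invention): the ID is computed from the file's bytes, so identical
# content always yields the same ID and any change yields a different one --
# unlike a license plate, an arbitrary label with no relation to the car.
import hashlib

def file_id(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # stream so large files fit in memory
            h.update(chunk)
    return h.hexdigest()

# Two copies of the same file get the same ID wherever they live, e.g.:
# file_id("a/report.pdf") == file_id("b/copy-of-report.pdf")
```

The ID is a function of the content itself, which is what makes it useful for recognizing the same file (or the same virus) across systems - a property no license plate has.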

These two cases show the pendulum swinging - and hard - toward a very restrictive view of patentability. Taken seriously and aggressively applied, they stand for the proposition that many of the fruits of current R&D are outside the patent system -- even though their historical analogues were patentable. Perhaps I'm being a pessimist; I sure hope so.

Friday, August 28, 2015

Dow v. NOVA: Maybe Nautilus Does Matter

In June 2014, the Supreme Court held in Nautilus v. Biosig that the Federal Circuit's "insolubly ambiguous" test for indefiniteness was "more amorphous than the statutory definiteness requirement allows," and that the proper test is whether the claims "fail to inform, with reasonable certainty, those skilled in the art about the scope of the invention." But is this actually a stricter test?

Jason Rantanen (Iowa Law) posted a nice essay this spring, Teva, Nautilus, and Change Without Change (forthcoming Stan. Tech. L. Rev.), arguing that in practice, the answer has been no: "The Federal Circuit continues to routinely reject indefiniteness challenges . . . . Indeed, with one exception, the Federal Circuit has not held a single claim indefinite under the Nautilus standard, and even that one exception would almost certainly have been indefinite [pre-Nautilus]." (Since then, the court also held the Teva v. Sandoz claims indefinite, but it had done the same pre-Nautilus.) Rantanen also noted that the Federal Circuit has failed to grapple with the meaning of Nautilus and has continued to rely on its pre-Nautilus cases when evaluating definiteness. In one case the court even reversed a decision that claims were indefinite, for reconsideration after Nautilus—implying that the Nautilus standard might be less stringent! (I've noticed the Federal Circuit similarly undermine the Supreme Court's change to the law of obviousness in KSR.)

But the Federal Circuit's decision today in Dow Chemical Co. v. NOVA Chemicals Corp. carefully examines the change Nautilus has wrought. Dow's asserted claims cover an improved plastic with "a slope of strain hardening coefficient greater than or equal to 1.3," and NOVA argued that the patents fail to teach a person of ordinary skill how to measure the "slope of strain hardening." In a prior appeal (after a jury trial), the Federal Circuit had held the claims not indefinite under pre-Nautilus precedent. The district court then held a bench trial on supplemental damages, leading to the present appeal. In today's opinion by Judge Dyk, the Federal Circuit holds that Nautilus's change in law "provides an exception to the doctrine of law of the case or issue preclusion," and holds that the claims are indefinite under the new standard.

The Federal Circuit dismisses the hand-wringing over whether Nautilus really meant anything, stating that "there can be no serious question that Nautilus changed the law of indefiniteness." The court notes that "Nautilus emphasizes 'the definiteness requirement's public-notice function,'" and that "the patent and prosecution history must disclose a single known approach or establish that, where multiple known approaches exist, a person having ordinary skill in the art would know which approach to select. . . . Thus, contrary to our earlier approach, under Nautilus, '[t]he claims . . . must provide objective boundaries for those of skill in the art.'"

Examining the claims at issue, the court notes that the patents state that "FIG. 1 shows the various stages of the stress/strain curve used to calculate the slope of strain hardening," but the patents contain no figure showing the stress/strain curve. There were four ways to measure the slope, each of which could produce a different result, but the patents provided no "guidance as to which method should be used or even whether the possible universe of methods is limited to these four methods." The claims thus fail the new test: "Before Nautilus, a claim was not indefinite if someone skilled in the art could arrive at a method and practice that method," but "this is no longer sufficient."
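To see why the multiplicity of methods matters, consider a toy computation (the numbers are invented for illustration; they are not data from the Dow patents). Three plausible ways of extracting a slope from the same stress/strain points give three different answers, so a claim boundary pegged to a slope value could be met under one method and missed under another:

```python
import numpy as np

# Hypothetical stress/strain points in the strain-hardening region
# (invented numbers for illustration, not data from the case).
strain = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
stress = np.array([10.0, 11.5, 14.0, 18.0, 24.0])

# Method 1: least-squares line through all the points.
slope_fit = np.polyfit(strain, stress, 1)[0]                        # ~6.9

# Method 2: secant between the first and last points.
slope_secant = (stress[-1] - stress[0]) / (strain[-1] - strain[0])  # 7.0

# Method 3: slope of the final segment (a "maximum slope" approach).
slope_final = (stress[-1] - stress[-2]) / (strain[-1] - strain[-2]) # 12.0

print(slope_fit, slope_secant, slope_final)  # three different "slopes"
```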

Tuesday, August 25, 2015

Evaluating Patent Markets

I've been interested in patent markets for some time. In addition to several articles studying NPE litigation, I've written two articles discussing secondary markets explicitly: Patent Portfolios as Securities and Licensing Acquired Patents.

Thus, I was very interested in Michael Burstein's (Cardozo) draft article on the subject, called Patent Markets: A Framework for Evaluation, which is now on SSRN and forthcoming in the Arizona State L.J.

What I like about the approach of this article is that it takes a step back from the question of whether certain types of parties create a market, and asks instead whether having a market at all is a good thing.

Here is the abstract:
Patents have become financial assets, in both practice and theory. A nascent market for patents routinely produces headline-grabbing transactions in patent portfolios, and patent assertion entities frequently defend themselves as sources of liquidity essential for a patent market to function. Much of the discourse surrounding these developments assumes that a robust, liquid market for patents would improve the operation of the patent system. In this Essay, I challenge that assumption and systematically assess the cases for and against patent markets. I do so by taking seriously both the underlying innovation promotion goal of the patent system and the lessons of financial economics, and asking what might be the effects of a market for patents that looked roughly like other familiar markets for stocks, real estate, or secondhand goods.

I conclude that, like much in patent law, the effects of robust patent markets are likely to vary with specific technological and business contexts. When there is a close fit between patents and useful technologies, a patent market can support a market for technology that aids in connecting inventors with developers and sources of capital for commercialization. But when that fit breaks down, market pricing could favor litigation over commercialization. Similarly, a liquid patent market might help to allocate the risks of innovation and of patent infringement to the parties best able to bear it, but a kind of moral hazard familiar to the market for subprime mortgages could lead not to more innovation but to more patents, thereby increasing the overall risk in the system. This analysis suggests that we are having the wrong conversation about patent markets. Rather than assuming their utility and asking how to improve them, we should be undertaking empirical research to determine the circumstances in which they will or will not work and exercising caution in invoking the logic of markets in policy debates about the contours of the patent system.
Like other markets, they are good when they are good, and bad when they are bad. Burstein adds a lot of nuance throughout the article, focusing on arguments for why patent markets may or may not be beneficial, without making too many assumptions about any particular technology or patent owner type.

One thing I would add to the article is the importance of timing. Early markets might be better than later markets, even in the same technological contexts. The article would probably put this in the "business context" category, but I think the importance of diffusion, cumulative innovation, and path dependency merits separate consideration.

In all events, I think the essay adds to the literature and may produce some testable hypotheses as well.

Saturday, August 1, 2015

Some Reflections on Localism, Innovation, and Jim Bessen's New Book "Learning by Doing"

I highly recommend Jim Bessen's new book, Learning by Doing: The Real Connection Between Innovation, Wages and Wealth (2015), published by Yale University Press. I was lucky to present alongside Jim at Yale's first Beyond IP Conference, where he discussed his ideas about the importance of education and worker training for a successful innovation economy. As Bessen puts it in his book,
innovation can suffer from two distinct problems: markets can fail to provide strong incentives to invest in R&D, and they can fail to provide strong incentives for learning new skills. Underinvestment in R&D is not the only problem affecting innovation. It might not even be the most important problem. ... There is simply no justification for focusing innovation policy exclusively on remedying underinvestment in R&D, especially since most firms report that patents, which are supposed to correct this underinvestment, are relatively unimportant for obtaining profits on their innovations.
The takeaway is that protecting inventions with patents and copyrights can't be the sole function of an effective innovation policy. Governments need to focus on a much broader range of policies to "encourage broad-based learning of new technical skills, including vocational education, government procurement, employment law, trade secrecy, and patents."

At IP Scholars in Chicago this year, I'll be presenting my new paper Patent Nationally, Innovate Locally.  Like Bessen, I will talk about a broad range of innovation incentives that focus on research and technology commercialization, as well as public investments in STEM education, worker training, and public infrastructure. I'll argue, however, that when intellectual property rights are not the chosen mechanism, many of these incentives should come from sub-national governments like states and cities because they are the smallest jurisdictions that internalize the immediate economic impacts of public investments in innovation.*  While states cannot internalize the benefits of patent and copyright regimes that result in widespread disclosure of easily transferable information, they can internalize the benefits of innovation finance (direct expenditures of taxpayer revenues on innovation) especially when those expenditures go towards improving the education, skills, and knowledge-base of the local labor force.

Innovation finance (IF) is an important new frontier in IP law scholarship. Not only does innovation finance supplement federal IP rights by correcting market failures in technology commercialization and alleviating some of the inefficiencies created by patents and copyrights, but it also takes into account Bessen's point: "markets can fail to provide strong incentives to invest in R&D, and they can fail to provide strong incentives for learning new skills." Both market failures are important, and the latter may be even more important than the former. But if we really want to focus on a broader range of policies like government procurement and support for public education to "encourage broad-based learning of new technical skills," as Bessen suggests, then we need to start looking at state and local governments.

To understand this point, take the example of a government prize for developing a better way to manufacture cars without using as many resources (e.g. 3D printing). If the federal government gives the prize, this makes some sense: assuming the prize hits its mark, national taxpayers will eventually benefit when the innovation is perfected and widely adopted, and the information on how to do it becomes public. But the impacts of the prize are going to be very different for different parts of the country. First off, the prize winner has to locate its research and operations somewhere. Presumably, it's going to choose a state like Michigan or Ohio with the resources, facilities, and human knowledge-base to do this kind of research and experimentation. The immediate benefits for local firms and residents are obvious: jobs, tax revenues, business for local companies. There is also a less perceptible but far more important benefit: easier access to new technical knowledge coming out of the experiments and inside information on emerging market developments. Plentiful research suggests that a lot of knowledge is hard to transfer and that effective exchange requires proximity, especially when science-based research and unfamiliar technology are involved. The implication for local officials seeking to boost the regional economy is clear: the more innovation that happens in your jurisdiction and the more residents who gain skills in an important new field, the better off your state or city will be. (This is the basis for innovation cluster theory and the idea that regions gain competitive advantages from localized knowledge exchange, originally discussed by UC Berkeley's AnnaLee Saxenian.)

Given that the immediate economic impacts of the 3D printing prize, including the tax revenues and most of the spillovers, are geographically localized to certain regions, do we really want federal policymakers designing these types of incentives, and do we really want taxpayers in states like Alaska and Arizona footing the bill? Or do we want significant input – both political and financial – from the places in which the innovation is occurring? I think the answer is the latter. The benefits of decentralizing fiscal policy are numerous. I see at least two major ones in this case: fairer shouldering of tax burdens, and more efficient innovation policies as a result of the better information and stronger incentives of local officials. Not only are local officials aware of the capabilities and needs of the local economy, but they can act swiftly in response to local problems, liberated from the wrangling of "earmark politics" at the national level. The same principles apply to education and incentives for learning new skills – the second prong of Bessen's revitalized innovation policy. For example, would we expect national policymakers, who act in the national interest and are beholden to federal taxpayers, to supply the right amount of vocational training for future workers in the newly invented 3D printing automobile industry of my hypothetical? No: we would expect the main push for this kind of training to come from a state like Michigan with the right mix of interested workers and industry players.

In short, I suggest that innovation policy in the United States is not federal. It is bifurcated: the federal government protects exclusive rights in new inventions and original expression using patents and copyrights; states, cities, and other sub-national governments use innovation finance to capture the geographically localized economic benefits of innovation.

There are several responses to my argument. If innovation finance were all local, then wouldn't there be a major under-supply of research, especially for innovations without a clear market, like research into rare debilitating diseases or (until Elon Musk) space exploration? Wouldn't states compete with each other and end up spending way too much to attract firms into their jurisdictions? Aren't local politicians vulnerable to capture by local industries? I agree that all these risks exist. This is why I discuss a variety of instances where the federal government has an important role to play. Besides protecting copyrights and patents in new inventions, the federal government does a lot of direct financing for innovation too. This money goes towards education, basic research, and mission R&D (mainly in national defense) – all of which produce pervasive national spillovers as well as localized ones. On the flip side, the federal government also has a variety of means for controlling and coordinating the actions of sub-national governments in order to reduce corruption, wasted expenditures and "beggar thy neighbor" competition. Some of these preemptive forces come from discretionary judicial doctrines like the Dormant Commerce Clause (admittedly a weak source of limits on states); others are or perhaps should be statutory (the Patent Act??).

If you have comments or seek a draft of Patent Nationally, Innovate Locally, or my other working paper, Cluster Competition, which argues that the federal government is trying to "manage" state competition to grow innovation clusters through the America Competes Act's regional innovation program, please email me at: cahrdy@gmail.com or chrdy@law.upenn.edu


* The basic principle of fiscal decentralization is "the presumption that the provision of public services should be located at the lowest level of government encompassing, in a spatial sense, the relevant benefits and costs." 

Thursday, July 30, 2015

Kiesling & Silberg on Incentives for Rooftop Solar

I've written about innovation policy experimentation and about incentives beyond IP, so I was interested in a new working paper posted by Lynne Kiesling and Mark Silberg, Regulation, Innovation, and Experimentation: The Case of Residential Rooftop Solar. They are not lawyers, but their description of incentives for the development and commercialization of rooftop solar will be of interest to legal scholars of innovation, as it underscores that the role of the state is far more complex than simply providing IP incentives. (Indeed, the paper never mentions IP.)

These incentives include a 30% federal tax credit (set to expire at the end of 2016), as well as many state-level incentives, such as volumetrically reduced subsidies that benefit first movers, net metering policies requiring credits to consumers who produce excess energy, and financial regulations that allow third-party financing to help consumers avoid upfront capital expenses. As they note, "the details matter," and "[n]ot all renewable portfolio standards are equal." This paper seems to nicely encapsulate many of those details.
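For readers unfamiliar with net metering, the mechanics reduce to simple arithmetic. Here is a toy full-retail-rate example; every number is an assumption of mine for illustration, not a figure from the paper:

```python
# Toy full-retail-rate net metering example (all numbers assumed
# for illustration; none come from the Kiesling-Silberg paper).
retail_rate = 0.13      # $/kWh, the rate the utility charges
consumed_kwh = 800      # household consumption this month
generated_kwh = 950     # rooftop solar production this month

net_kwh = consumed_kwh - generated_kwh  # -150: net excess production
bill = net_kwh * retail_rate            # excess credited at retail rate
print(f"Monthly bill: ${bill:.2f}")     # -$19.50, i.e., a $19.50 credit
```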

Monday, July 27, 2015

Rachel Sachs & Becky Eisenberg on Incentives for Diagnostic Tests

I highly recommend two recently posted articles on declining innovation incentives for diagnostic tests, particularly due to changes in patentable subject matter doctrine. In Innovation Law and Policy: Preserving the Future of Personalized Medicine, Rachel Sachs (Petrie-Flom Fellow at Harvard Law) examines the intersection of IP with FDA regulation and health law, joining a growing body of scholarship that seeks to situate IP in a broader economic context. Here is the abstract:
Personalized medicine is the future of health care, and as such incentives for innovation in personalized technologies have rightly received attention from judges, policymakers, and legal scholars. Yet their attention too often focuses on only one area of law, to the exclusion of other areas that may have an equal or greater effect on real-world conditions. And because patent law, FDA regulation, and health law work together to affect incentives for innovation, they must be considered jointly. This Article will examine these systems together in the area of diagnostic tests, an aspect of personalized medicine which has seen recent developments in all three systems. Over the last five years, the FDA, Congress, Federal Circuit, and Supreme Court have dealt three separate blows to incentives for innovation in diagnostic tests: they have made it more expensive to develop diagnostics, made it more difficult to obtain and enforce patents on them, and reduced the amount innovators can expect to recoup in the market. Each of these changes may have had a marginal effect on its own, but when considered together, the system has likely gone too far in disincentivizing desperately needed innovation in diagnostic technologies. Fortunately, just as each legal system has contributed to the problem, each system can also be used to solve it. This Article suggests specific legal interventions that can be used to restore an appropriate balance in incentives to innovate in diagnostic technologies.
Diagnostics Need Not Apply is a new essay by Rebecca Eisenberg (UMich Law) that was nicely summed up by Nicholson Price: "let's just admit it - diagnostic tests are unpatentable."
Diagnostic testing helps caregivers and patients understand a patient’s condition, predict future outcomes, select appropriate treatments, and determine whether treatment is working. Improvements in diagnostic testing are essential to bring about the long-heralded promise of personalized medicine. Yet it seems increasingly clear that most important advances in this type of medical technology lie outside the boundaries of patent-eligible subject matter.
The clarity of this conclusion has been obscured by ambiguity in the recent decisions of the Supreme Court concerning patent eligibility. Since its 2010 decision in Bilski v. Kappos, the Court has followed a discipline of limiting judicial exclusions from the statutory categories of patentable subject matter to a finite list repeatedly articulated in the Court’s own prior decisions for “laws of nature, physical phenomena, and abstract ideas,” while declining to embrace other judicial exclusions that were never expressed in Supreme Court opinions. The result has been a series of decisions that, while upending a quarter century of lower court decisions and administrative practice, purport to be a straightforward application of ordinary principles of stare decisis. As the implications of these decisions are worked out, the Court’s robust understanding of the exclusions for laws of nature and abstract ideas seems to leave little room for patent protection for diagnostics.
This essay reviews recent decisions on patent-eligibility from the Supreme Court and the Federal Circuit to demonstrate the obstacles to patenting diagnostic methods under emerging law. Although the courts have used different analytical approaches in recent cases, the bottom line is consistent: diagnostic applications are not patent eligible. I then consider what the absence of patents might mean for the future of innovation in diagnostic testing.
As I have written, I think changes to patentable subject matter doctrine are an important problem for medical innovation, and that policymakers should think seriously about whether additional non-patent innovation incentives are needed in this area.

Thursday, July 23, 2015

The Latest on Biosimilars: The Federal Circuit Holds that the "Patent Dance" Is Optional

In a previous post, I discussed a district court decision holding that the process for resolving patent disputes under the Biologics Price Competition and Innovation Act (BPCIA) is optional. That post contains extensive background on the BPCIA and its purpose of providing an abbreviated pathway for “biosimilar” drugs to get to market and compete with their branded analogs, resulting in lower prices for consumers. The bottom line is that, under the BPCIA, makers of biosimilar products can rely on the clinical trial data developed for the branded (or “reference”) product in order to accelerate FDA approval. Nevertheless, the BPCIA provides 12 years of data exclusivity to the manufacturer of the reference product. And beyond that period, even if the biosimilar garners FDA approval, the brand owner can try to keep it out of the market by asserting claims of patent infringement. The BPCIA provides for a procedure involving pre-suit information exchange between the brand and biosimilar makers—the so-called “patent dance”—that is intended to apprise the brand of the biosimilar’s manufacturing process and narrow down the number of patents to be asserted. But the district court, and now the Federal Circuit on appeal, have held that the biosimilar can lawfully refuse to participate in the patent dance.

Wednesday, July 22, 2015

Several Empirical Studies on Injunctions Post-eBay

Chris Seaman recently released a draft of his new paper, Permanent Injunctions in Patent Litigation After eBay: An Empirical Study. In the paper, he presents the results of his empirical study of contested permanent injunction decisions in district courts for a 7½-year period following eBay (May 2006 to December 2013). This post follows up my previous posts on Seaman's WIPIP presentation and on Ryan Holte's paper assessing the effects of eBay. Kirti Gupta and Jay Kesan also just released their own study on eBay's impact.

Heidi Williams on Measuring the Effect of Patent Strength on Innovation

When I was in law school, I was surprised (and fascinated) to learn how little scholars actually know about how patent laws affect innovation. My article Patent Experimentalism explains why this is such a hard empirical question, summarizes a lot of the empirical work that has been done, and analyzes the institutional design options (including policy randomization) to help make more empirical progress. Two of my favorites among the empirical pieces I discuss are by MIT economics professor Heidi Williams—one on IP-like contractual restrictions on human genes (summarized previously on this blog), and one on the skew in cancer drug R&D toward late-stage cancer patients (with Eric Budish and Ben Roin). In June, Williams posted a new paper that reflects on the challenges of measuring the relationship between patent strength and research investments, summarizes these two studies, and discusses directions for future research.

Although "a literal interpretation of the current set of available empirical evidence . . . would be that the patent system generates little social value," Williams explains that "drawing such a conclusion would be premature." The "dearth of empirical evidence" stems from two problems: measuring specific research investments, and "finding any variation (much less 'clean' or quasi-experimental variation) in patent protection." Her recent papers "identified and took advantage of new sources of variation in the effective intellectual property protection provided to different inventions." If you aren't familiar with those two papers, this piece contains a great summary.

Looking for similar kinds of variation seems like a promising avenue for future research, although Williams cautions against jumping quickly from her work to broad conclusions for patent policy. For example, while her work found that contractual restrictions on human genes led to persistent decreases in follow-on research and commercial product development, her preliminary results from a follow-on project with Bhaven Sampat suggest that "on average gene patents have had no effect on follow-on innovation." As Williams notes, the U.S. Supreme Court has been concerned about the effects of patents on follow-on research in its recent forays into patentable subject matter, and perhaps further empirical work along these lines will help inform this muddled area of doctrine.

Thursday, July 16, 2015

Greg Mandel et al. on the Plagiarism Fallacy in IP

Greg Mandel (Temple Law) has done some interesting empirical work on public perceptions of IP. In his latest work, Intellectual Property Law's Plagiarism Fallacy, he has collaborated with two psychologists, Anne Fast and Kristina Olson (University of Washington), on three new studies. They conclude that debates over whether IP should serve incentive or natural rights objectives are "orthogonal" to the most common perception about IP, which is that its function is to prevent plagiarism. They argue that this "plagiarism fallacy . . . . helps explain pervasive illegal infringing activity on the Internet" as stemming from a failure to understand what IP is rather than from indifference toward IP rights.

Monday, July 13, 2015

Roger Ford: The Patent Spiral

I blogged two years ago about a terrific article by Roger Ford (now at UNH Law), and he has done it again: I enjoyed reading The Patent Spiral, forthcoming in the University of Pennsylvania Law Review. Here is the abstract:
Examination—the process of reviewing a patent application and deciding whether to grant the requested patent—improves patent quality in two ways. It acts as a substantive screen, filtering out meritless applications and improving meritorious ones. It also acts as a costly screen, discouraging applicants from seeking low-value patents. Yet despite these dual roles, the patent system has a substantial quality problem: it is both too easy to get a patent (because examiners grant invalid patents that should be filtered out by a substantive screen) and too cheap to do so (because examiners grant low-value nuisance patents that should be filtered out by a costly screen).
This article argues that these flaws in patent screening are both worse, and better, than has been recognized. They are worse because the flaws are not static; they are dynamic, interacting to reinforce each other. This interaction leads to a vicious cycle of more and more patents that should never have been granted. When patents are too easily obtained, that undermines the costly screen, because even a plainly invalid patent has a nuisance value greater than its cost. And when patents are too cheaply obtained, that undermines the substantive screen, because there will be more patent applications, and the examination system cannot scale indefinitely without sacrificing accuracy. The result is a cycle of more and more applications, being screened less and less accurately, to give more and more low-quality patents. And although it is hard to test directly if the quality of patent examination is falling, there is evidence suggesting that this cycle is affecting the patent system.
At the same time, things are better because this cycle may be surprisingly easy to solve. The cycle gives policymakers substantial flexibility in designing patent reforms, because the effect of a reform on one piece of the cycle will propagate to the rest of the cycle. Reformers can concentrate on the easiest places to make reforms (like reforming the litigation system) instead of trying to do the impossible (like eliminating examination errors). Such reforms would not only have local effects, but could help make the entire patent system work better.
Ford provides a refreshingly clear explanation of the two distinct roles that patent examination theoretically plays, and of the feedback loop between them.
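The feedback dynamic is easy to see in a toy simulation. The sketch below is my own illustration with invented parameters, not a model from Ford's article; it simply shows how a small gap between application volume and examination capacity can compound:

```python
# Toy model of the examination feedback loop (invented parameters;
# my own sketch, not a model from The Patent Spiral).
apps = 160.0        # applications filed this period
capacity = 150.0    # applications examiners can screen accurately

for period in range(5):
    accuracy = min(1.0, capacity / apps)
    grant_error = 1.0 - accuracy       # share of bad patents granted
    # Bad-but-granted patents retain nuisance value, inviting more
    # marginal applications next period, which degrades accuracy further.
    apps = apps * (1.0 + grant_error)
    print(f"period {period}: apps={apps:.0f}, error rate={grant_error:.2f}")

# Applications roughly triple in five periods as the error rate climbs
# from 6% past 50% -- the "spiral" in miniature.
```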

Friday, July 10, 2015

Brad Shapiro on the Cost of Strategic Entry Delay in Pharmaceuticals

Pharmaceutical companies sometimes engage in "product hopping," in which they attempt to move patients to a new product with longer patent protection before the generic version of an older drug becomes available. Product hopping was recently in the news with New York state's antitrust suit against Actavis for its decision to withdraw Namenda IR, its 2x/day Alzheimer's drug (with patent protection ending July 2015), to force patients to switch to Namenda XR, a 1x/day version (with patent protection until 2029). In an opinion by Judge Walker, the Second Circuit upheld a preliminary injunction barring withdrawal of Namenda IR prior to generic entry, concluding that the "hard switch crosses the line from persuasion to coercion and is anticompetitive."

The cost to consumers of product hopping that obstructs access to generic drugs is clear. But these marketing strategies raise another potential welfare loss that receives less attention: when a pharmaceutical company delays the introduction of a new drug version until just before patent protection on the old version is set to expire, that delay can harm consumers who prefer the new version. This latter cost is the focus of a new empirical paper by Professor Brad Shapiro (Chicago Booth), Estimating the Cost of Strategic Entry Delay in Pharmaceuticals: The Case of Ambien CR.

Monday, July 6, 2015

Entangled Trade Secrets and Presumptive Misappropriation

Over at Prawfsblawg, Orly Lobel discusses the case of former Goldman Sachs programmer Sergey Aleynikov, who has had an up and down (more like down and up) experience dealing with criminal trade secret prosecutions. I think the case is worthy of discussion for a variety of reasons, but I will focus on how different viewpoints color the facts of this case. Prof. Lobel describes this as a story of "secrecy hysteria," while I view this as a run-of-the-mill "don't copy the source code" case.

I'll discuss my point of view briefly below, but I will admit my priors: I spent my career advising companies and employees on trade secrets: how to protect them, how to exit without getting sued, and how to win lawsuits as plaintiffs and defendants. I probably represented plaintiffs and defendants with the same frequency, and -- of course -- my client was always right.

More facts after the jump. I should make clear that I've got no position on the criminal prosecutions; my views here are more about trade secrecy than about whether the criminal laws should be used to protect trade secrets (or whether they should have applied to this particular case). Prof. Lobel and I may well agree on the latter point.

Friday, July 3, 2015

Janet Freilich vs. Ted Sichelman on Patent Searching

Over at New Private Law Blog, Janet Freilich and Ted Sichelman are having a fun exchange about patent searches. Here's an excerpt from Freilich's original post:
Since there is no easy way to index or search through most patents, it is exceedingly difficult (if not impossible) to know if one is infringing a patent. In some industries, firms simply ignore patents, because it is less expensive to pay damages ex post than to do patent clearance searches ex ante. Larger numbers of patents exacerbate this problem. Christina Mulligan and Timothy Lee provide an excellent description of the problem of patent clearance searches in their article on Scaling the Patent System. One sentence in particular drives the problem home: “In software, for example, patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year.”