
Tuesday, May 13, 2025

Oswald: Do trade secret injunctions last forever?


An injunction in a trade secret case should generally end when the trade secret does. But new empirical research by Professor Lynda Oswald sheds light on how long these injunctions actually last, and the results are surprising. Oswald finds that in the vast majority (~80%) of cases in her dataset, courts simply grant an open-ended injunction without a fixed term. While defendants could in theory move to dissolve the injunction once the trade secret ceased to exist, Oswald found no evidence this happened. In practice, these injunctions appear to remain in effect indefinitely.

Lynda Oswald is the Louis and Myrtle Moskowitz Research Professor of Business and Law at the University of Michigan's School of Business. Professor Oswald's article, An Empirical Analysis of Permanent Injunction Life in Trade Secret Misappropriation Cases, has now been published in the Iowa Law Review.

Tuesday, August 18, 2020

Race and Gender in the USPTO: Schuster’s Hard Data for Hard Issues

[I asked some of my RAs to write guest posts this summer, lightly edited by me.  This one is by Jennifer Black, a 3L at Villanova University Charles Widger School of Law]

Intellectual property rights are just that: rights.

Much like other rights, however, throughout our country's history they have been unequally granted based on factors outside people's control. Intellectual property is a means of upward mobility for individuals who, through their own ingenuity, creativity, or otherwise, contribute something of value to our society. It is this exchange of benefits that the patent system is built upon. However, when certain individuals are less likely to reap the rewards of their inventions, they are disincentivized both from creating and from engaging with the patent system. Although the extent of these biases is as yet unknown, research on the subject has been conducted with the intent of identifying and remedying the inequity.

The scope of this inequity is difficult to grasp without collecting and analyzing the data. Mike Schuster and his coauthors did just that in their article, An Empirical Study of Patent Grant Rates as a Function of Race and Gender (published version in the American Business Law Journal), which examines patent grant rates as a function of inventors' races and genders. As scientists and engineers, patent practitioners and examiners will undoubtedly appreciate the amount and quality of their data.

Tuesday, September 24, 2019

Lucy Xiaolu Wang on the Medicines Patent Pool

Patent pools are agreements by multiple patent owners to license related patents for a fixed price. The net welfare effect of patent pools is theoretically ambiguous: they can reduce numerous transaction costs, but they also can impose anti-competitive costs (due to collusive price-fixing) and costs to future innovation (due to terms requiring pool members to license future technologies back to the pool). In prior posts, I've described work by Ryan Lampe and Petra Moser suggesting that the first U.S. patent pool—on sewing machine technologies—deterred innovation, and work by Rob Merges and Mike Mattioli suggesting that the savings from two high tech pools are enormous, and that those concerned with pools thus have a high burden to show that the costs outweigh these benefits. More recently, Mattioli has reviewed the complex empirical literature on patent pools.

Economics Ph.D. student Lucy Xiaolu Wang has a very interesting new paper to add to this literature, which I believe is the first empirical study of a biomedical patent pool: Global Drug Diffusion and Innovation with a Patent Pool: The Case of HIV Drug Cocktails. Wang examines the Medicines Patent Pool (MPP), a UN-backed nonprofit that bundles patents for HIV drugs and other medicines and licenses these patents for generic sales in developing countries, with rates that are typically no more than 5% of revenues. For many diseases, including HIV/AIDS, the standard treatment requires daily consumption of multiple compounds owned by different firms with numerous patents. Such situations can benefit from a patent pool for the diffusion of drugs and the creation of single-pill once-daily drug cocktails. She uses a difference-in-differences method to study the effect of the MPP on both static and dynamic welfare and finds enormous social benefits.

On static welfare, she concludes that the MPP increases generic drug purchases in developing countries. She uses "the arguably exogenous variation in the timing of when a drug is included in the pool"—which "is not determined by demand side factors such as HIV prevalence and death rates"—to conclude that adding a drug to the MPP for a given country "increases generic drug share by about seven percentage points in that country." She reports that the results are stronger in countries where drugs are patented (with patent thickets) and are robust to alternative specifications or definitions of counterfactual groups.
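To make the empirical design concrete, here is a minimal sketch of a two-way fixed effects difference-in-differences estimation of the kind described above. The tiny panel and all variable names are hypothetical stand-ins; Wang's actual specification, controls, and data are in the paper.

```python
# Hypothetical drug-country-year panel; "in_pool" marks observations after
# the drug enters the MPP for that country. All numbers are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "drug":          ["a"] * 4 + ["b"] * 4,
    "year":          [2010, 2011, 2012, 2013] * 2,
    "in_pool":       [0, 0, 1, 1, 0, 0, 0, 0],
    "generic_share": [0.10, 0.11, 0.19, 0.20, 0.10, 0.11, 0.12, 0.13],
})

# Drug and year fixed effects absorb level differences; the in_pool
# coefficient is the DiD estimate of the effect of pool inclusion.
fit = smf.ols("generic_share ~ in_pool + C(drug) + C(year)", data=df).fit()
print(fit.params["in_pool"])
```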

On dynamic welfare, Wang concludes that the MPP increases follow-on innovation. "Once a compound enters the pool, new clinical trials increase for drugs that include the compound and more firms participate in these trials," resulting in more new drug product approvals, particularly generic versions of single-pill drug cocktails. And this increase in R&D comes from both pool insiders and outsiders. She finds that outsiders primarily increase innovation for new and better uses of existing compounds, and insiders reallocate resources for pre-market trials and new compound development.

Under these estimates, the net social benefit is substantial. Wang uses a simple structural model and estimates that the MPP for licensing HIV drug patents increased consumer surplus by $700–1400 million and producer surplus by up to $181 million over its first seven years, greatly exceeding the pool's $33 million total operating cost over the same period. Of course, estimating counterfactuals from natural experiments is always fraught with challenges. But as an initial effort to understand the net benefits and costs of the MPP, this seems like an important contribution that is worth the attention of legal scholars working in the patent pool area.

Wednesday, July 17, 2019

Pushback on Decreasing Patent Quality Narrative

It's been a while since I've posted, as I've taken on Vice Dean duties at my law school that have kept me busy. I hope to blog more regularly as I get my legs under me. But I did see a paper worth posting about mid-summer.

Wasserman & Frakes have published several papers showing that as examiners gain more seniority, their time spent examining patents decreases and their allowances come more quickly. They (and many others) have taken this to mean a decrease in patent quality.

Charles A. W. deGrazia (University of London, USPTO), Nicholas A. Pairolero (USPTO), and Mike H. M. Teodorescu (Boston College Management, Harvard Business) have released a draft that pushes back on this narrative. The draft is available on SSRN, and the abstract is below:

Prior research argues that USPTO first-action allowance rates increase with examiner seniority and experience, suggesting lower patent quality. However, we show that the increased use of examiner's amendments account for this prior empirical finding. Further, the mechanism reduces patent pendency by up to fifty percent while having no impact on patent quality, and therefore likely benefits innovators and firms. Our analysis suggests that the policy prescriptions in the literature regarding modifying examiner time allocations should be reconsidered. In particular, rather than re-configuring time allocations for every examination promotion level, researchers and stakeholders should focus on the variation in outcomes between junior and senior examiners and on increasing training for examiner's amendment use as a solution for patent grant delay.
In short, they hypothesize (and then empirically show with 4.6 million applications) that as seniority increases, the likelihood of examiner amendments goes up, and it goes up on the first office action. They measure how different the amended claims are, and they use measures of patent scope to show that the amended applications are no broader than those that junior examiners take longer to prosecute.
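For a concrete sense of what a scope measure can look like, here is a toy version of the word-count proxy common in the empirical scope literature (more words generally means more limitations, and thus a narrower claim). This is only an illustration of the general technique, not necessarily the measures the draft uses.

```python
# Toy claim-scope proxy: word count of the first independent claim.
# More words -> more limitations -> generally a narrower claim.
def scope_proxy(claim_text: str) -> int:
    return len(claim_text.split())

original = "A method comprising transmitting a signal to a receiver."
amended = ("A method comprising transmitting an encrypted signal over a "
           "wireless network to an authenticated receiver.")

# A higher count after amendment suggests the claim was narrowed, not broadened.
print(scope_proxy(original), scope_proxy(amended))
```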

Their conclusion is that to the extent seniority leads to a time crunch through heavier loads, it is handled by more efficient claim amendment through the examiner amendment procedures, and quality is not reduced.

As with any new study like this one, it will take time to parse the methodology and hear critiques. I, for one, am glad to hear of the rising use of examiner amendments, as I long ago suggested this as a way to improve patent clarity.

Tuesday, May 14, 2019

The Stanford NPE Litigation Database

I've been busy with grading and end of year activities, which has limited blogging time. I did want to drop a brief note that the Stanford NPE Litigation Database appears to be live now and fully populated with 11 years of data from 2007-2017. They've been working on this database for a long while. It provides limited but important data: Case name and number, district, filing date, patent numbers, plaintiff, defendants, and plaintiff type. The database also includes a link to Lex Machina's data if you have access.

The plaintiff type, especially, is something not available anywhere else, and is the key value of the database (hence the name). There are surely some quibbles about how some are coded (I know of one where I disagree), but on the whole, the coding is much more useful than the "highly active" plaintiff designations in other databases.

I think this database is also useful as a check on other services, as it is hand coded and may correct errors in patent numbers, etc., that I've periodically found. I see the value as threefold:

  1. As a supplement to other data, adding plaintiff type
  2. As a quick, free guide to which patents were litigated in each case, or which cases involved a particular patent, etc. (see the sketch after this list)
  3. As a bulk data source showing trends in location, patent counts, etc., useful in its own right.
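As an illustration of the second use, here is the kind of query a local export of the data would support. The file name and all column names below are my own guesses, not the database's actual schema.

```python
# Hypothetical query against a local CSV export of the Stanford NPE data.
# All column names here are assumed for illustration.
import pandas as pd

cases = pd.read_csv("stanford_npe_cases.csv")

# Which cases involved a particular (hypothetical) patent?
hits = cases[cases["patent_numbers"].str.contains("7654321", na=False)]
print(hits[["case_name", "district", "filing_date"]])

# How do filings break down by plaintiff type?
print(cases["plaintiff_type"].value_counts())
```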

The database is here: http://npe.law.stanford.edu/. Kudos to Shawn Miller for all his hard work on this, and to Mark Lemley for having the vision to create it and get it funded and completed.

Wednesday, May 1, 2019

Measuring Patent Thickets

Measuring the effect of patenting on industry R&D is an age-old pursuit in innovation economics. It's hard. The latest interesting attempt comes from Greg Day (Georgia Business) and Michael Schuster (OK State, but soon to be Georgia Business). They look at more than one million patents to determine that large portfolios tend to crowd out startups. I'm totally with them on that. As I wrote extensively during the patent troll hysteria, patent portfolios and assertion by active companies can be harmful to innovation.

The question is how much, and what to do about it. Day and Schuster argue in their paper that the issue is patent thickets, as their abstract shows. The draft article, Patent Inequality, is on SSRN:
Using an original dataset of over 1,000,000 patents and empirical methods, we find that the patent system perpetuates inequalities between powerful and upstart firms. When faced with growing numbers of patents in a field, upstart inventors reduce research and development expenditures, while those already holding many patents increase their innovation efforts. This phenomenon affords entrenched firms disproportionate opportunities to innovate as well as utilize the resulting patents to create barriers to entry (e.g., licensing costs or potential litigation).
A hallmark of this type of behavior is securing large patent holdings to create competitive advantages associated with the size of the portfolio, regardless of the value of the underlying patents. Indeed, this strategy relies on quantity, not quality. Using a variety of models, we first find evidence that this strategy is commonplace in innovative markets. Our analysis then determines that innovation suffers when firms amass many low-value patents to exclude upstart inventors. From these results, we not only provide answers to a contentious debate about the effects of strategic patenting, but also suggest remedial policies to foster competition and innovation.
The article uses portfolio sizes and maintenance renewals to find correlations with investment. They find, unsurprisingly, that the more patents there are in an industry's portfolios, the lower the R&D investment. However, the causal takeaways seem to me to be ambiguous. It could be that patent thickets cause the reduced investment, or it could simply be that industries dominated by large players are less competitive and drive out startups. There are plenty of (non-patent) theorists who would predict such outcomes.

They also find that firms with large portfolios are more likely to renew their patents, holding other indicia of patent quality (and firm assets) equal. Even if we assume that their indicia of patent quality are complete (they use forward cites, number of inventors, and number of claims), the effect they find is really, really small. For the one reported industry (biology), the effect is a drop of about 0.00000982 in the likelihood of lapse for each additional patent. This is statistically significant, I assume, because of the very large sample size and relatively small variation, but it seems barely economically significant. If you multiply it out, it means that each patent is about one percentage point less likely to lapse for every 1,000 patents in the portfolio (that is, from a 50% chance of lapse to a 49% chance). For IBM, the largest patentee of the time with about 25,000 patents during the relevant period, it's still only a 25 percentage point change. Most patentees, even with portfolios, would be nowhere near that. I'm just not sure what we can read into those numbers, and certainly not the broad policy prescriptions suggested in the paper, in my view.
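A quick back-of-the-envelope check of those magnitudes, reading the reported coefficient as a change in lapse probability per additional portfolio patent (the reading under which the IBM figure works out):

```python
# Reported per-patent effect, read as a drop in the probability of lapse
# for each additional patent in the portfolio.
per_patent = 0.00000982

print(per_patent * 1_000)   # ~0.0098: about 1 percentage point per 1,000 patents
print(per_patent * 25_000)  # ~0.2455: about 25 percentage points for an IBM-sized portfolio
```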

That said, this paper provides a lot of useful information about what drives portfolio patenting, as well as a comprehensive look at what drives maintenance rates. I would have liked to see litigation data mixed in, as that will certainly affect renewals one way or the other, but even as is, this paper is an interesting read.

Tuesday, April 23, 2019

How Does Patent Eligibility Affect Investment?

David Taylor (SMU) was interested in how patent eligibility decisions at the Supreme Court affected venture investment decisions, so he thought he would ask. He put together an ambitious survey of 14,000 investors at 3,000 firms, and obtained some grant money to provide incentives. As a result, he got responses from 475 people at 422 firms. The response rate by individual is really low, but by firm it's 12% - not too bad. He performs some analysis of non-respondents, and while there's a bit of an oversample on IT and on early funding, the sample appears to be reasonably representative.

The result is a draft on SSRN and forthcoming in Cardozo L. Rev. called Patent Eligibility and Investment. Here is the abstract:
Have the Supreme Court’s recent patent eligibility cases changed the behavior of venture capital and private equity investment firms, and if so how? This Article provides empirical data about investors’ answers to those important questions. Analyzing responses to a survey of 475 investors at firms investing in various industries and at various stages of funding, this Article explores how the Court’s recent cases have influenced these firms’ decisions to invest in companies developing technology. The survey results reveal investors’ overwhelming belief that patent eligibility is an important consideration in investment decisionmaking, and that reduced patent eligibility makes it less likely their firms will invest in companies developing technology. According to investors, however, the impact differs between industries. For example, investors predominantly indicated no impact or only slightly decreased investments in the software and Internet industry, but somewhat or strongly decreased investments in the biotechnology, medical device, and pharmaceutical industries. The data and these findings (as well as others described in the Article) provide critical insight, enabling evidence-based evaluation of competing arguments in the ongoing debate about the need for congressional intervention in the law of patent eligibility. And, in particular, they indicate reform is most crucial to ensure continued robust investment in the development of life science technologies.
The survey has some interesting results. Most interesting to me was that fewer than 40% of respondents were aware of any of the key eligibility decisions, though they may have been vaguely aware of reduced ability to patent. More on this in a minute.

There are several findings on the importance of patents, and these are consistent with the rest of the literature - that patents are important for investment decisions, but not first on the list (or second or third). Further, the survey finds that firms would invest less in areas where there are fewer patents - but this is much more pronounced for biotech and pharma than it is for IT. This, too, seems to comport with anecdotal evidence.

But I've always been skeptical of surveys that ask what people would do - stated preferences are different from revealed preferences. The best way to measure revealed preferences would be through some sort of empirical look at the numbers, for example a differences-in-differences approach before and after these cases (though having 60% of respondents say they haven't heard of the cases would certainly affect whether they constitute a "shock" - a requirement of such a study).

Another way, which this survey attempts, is to ask not what investors would do but rather what they have done. This amounts to the most interesting part of the survey - investors who know about the key court opinions say they have moved out of biotech and pharma, and into IT. So much for Alice destroying IT investment, as some claim (though we might still see a shift in the type of projects and/or the type of protection - such as trade secrets). But more interesting to me was that there was a similar shift among those who claimed not to know much about patent eligibility or to think it had anything to do with their investments. In other words, even the group that didn't actively blame the Supreme Court was shifting investments out of biotech and pharma and into IT.

You can, of course, come up with other explanations - perhaps biotech is just less valuable now for other reasons. But this survey is an important first step in teasing out those issues.

There are a lot more questions on the survey and some interesting answers. It's a relatively quick and useful read.



Tuesday, April 9, 2019

Making Sense of Unequal Returns to Copyright

Typically, describing an article as polarizing means that two different groups have very different views of it. But I read an article this week that had a polarizing effect within me. Indeed, it took me so long to get my thoughts together that I couldn't even get a post up last week. That article is Glynn Lunney's draft Copyright's L Curve Problem, which is now on SSRN. The article is a study of user distribution on the video game platform Steam, and the results are really interesting.

The part that has me torn is the takeaway. I agree with Prof. Lunney's view that copyright need not be extended, and that current protection (especially duration) is overkill for what is needed in the industry. I disagree with his view that you could probably dial back copyright protection all the way with little welfare loss. And I'm scratching my head over whether the data in his paper actually supports one argument or the other. Here's the abstract:
No one ever argues for copyright on the grounds that superstar artists and authors need more money, but what if that is all, or mostly all, that copyright does? This article presents newly available data on the distribution of players across the PC videogame market. This data reveals an L-shaped distribution of demand. A relative handful of games are extremely popular. The vast majority are not. In the face of an L curve, copyright overpays superstars, but does very little for the average author and for works at the margins of profitability. This makes copyright difficult to justify on either efficiency or fairness grounds. To remedy this, I propose two approaches. First, we should incorporate cost recoupment into the fourth fair use factor. Once a work has recouped its costs, any further use, whether for follow-on creativity or mere duplication, would be fair and non-infringing. Through such an interpretation of fair use, copyright would ensure every socially valuable work a reasonable opportunity to recoup its costs without lavishing socially costly excess incentives on the most popular. Second and alternatively, Congress can make copyright short, narrow, and relatively ineffective at preventing unauthorized copying. If we refuse to use fair use or other doctrines to tailor copyright’s protection on a work-by-work basis and insist that copyright provide generally uniform protection, then efficiency and fairness both require that that uniform protection be far shorter, much narrower, and generally less effective than it presently is.
The paper is really an extension of Prof. Lunney's book, Copyright's Excess, which is a good read even if you disagree with it. As Chris Sprigman's JOTWELL review noted, you either buy into his methodology or you don't. I discuss below why I'm a bit troubled.

Tuesday, March 26, 2019

Trademarking the Seven Dirty Words

With the Supreme Court agreeing to hear the Brunetti case on the registration of scandalous trademarks, one might wonder whether allowing such marks will open the floodgates of registrations. My former colleague Vicenç Feliú (Nova Southeastern) wondered as well. So he looked at the trademark database to find out. One nice thing about trademarks is that all applications show up, whether granted or not, abandoned or not. He's posted a draft of his findings, called FUCT® – An Early Empirical Study of Trademark Registration of Scandalous and Immoral Marks Aftermath of the In re Brunetti Decision, on SSRN:
This article seeks to create an early empirical benchmark on registrations of marks that would have failed registration as “scandalous” or “immoral” under Lanham Act Section 2(a) before the Court of Appeals for the Federal Circuit’s In re Brunetti decision of December, 2017. The Brunetti decision followed closely behind the Supreme Court’s Matal v. Tam and put an end to examiners denying registration on the basis of Section 2(a). In Tam, the Supreme Court reasoned that Section 2(a) embodied restrictions on free speech, in the case of “disparaging” marks, which were clearly unconstitutional. The Federal circuit followed that same logic and labeled those same Section 2(a) restrictions as unconstitutional in the case of “scandalous” and “immoral” marks. Before the ink was dry in Brunetti, commentators wondered how lifting the Section 2(a) restrictions would affect the volume of registrations of marks previously made unregistrable by that same section. Predictions ran the gamut from “business as usual” to scenarios where those marks would proliferate to astronomical levels. Eleven months out from Brunetti, it is hard to say with certainty what could happen, but this study has gathered the number of registrations as of October 2018 and the early signs seem to indicate a future not much altered, despite early concerns to the contrary.
The study focuses not on the Supreme Court, but on the Federal Circuit, which already allowed Brunetti to register FUCT. Did this lead to a stampede of scandalous marks? It's hard to define such marks, so he started with a close proxy: George Carlin's Seven Dirty Words. This classic comedy bit (really, truly classic) nailed the dirty words so well that a radio station that played it was sanctioned, and the case wound up in the Supreme Court, which ruled that the FCC could, in fact, restrict these seven words as indecent. So, this study's assumption is that filings of these words as trademarks are the tip of the spear. That said, his findings about prior registrations of such words (with claimed dual meaning) are interesting, and show some of the problems that the Court was trying to avoid in Matal v. Tam.

It turns out, not so much. No huge jump in filings or registrations after Brunetti. More interesting, I thought, was the choice of words. Turns out (thankfully, I think) that some dirty words are way more acceptable than others in terms of popularity in trademark filings. You'll have to read the paper to find out which.

Saturday, March 23, 2019

Jotwell Review of Frakes & Wasserman's Irrational Ignorance at the Patent Office

I've previously recommended subscribing to Jotwell to keep up with interesting recent IP scholarship, but for anyone who doesn't, my latest Jotwell post highlighted a terrific forthcoming article by Michael Frakes and Melissa Wasserman. Here are the first two paragraphs:
How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?
In a highly cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.
Read more at Jotwell.

Tuesday, March 19, 2019

The Rise and Rise of Transformative Use

I'm a big fan of transformative use analysis in fair use law, except when I'm not. I think that it is a helpful guide for determining if the type of use is one that we'd like to allow. But I also think that it can be overused - especially when it is applied to a different message but little else.

The big question is whether transformative use is used too much...or not enough. Clark Asay (BYU) has done the research on this so you don't have to. In his forthcoming Boston College Law Review article, Is Transformative Use Eating the World?, Asay collects and analyzes 400+ fair use decisions since 1991. The draft is on SSRN, and the abstract is here:
Fair use is copyright law’s most important defense to claims of copyright infringement. This defense allows courts to relax copyright law’s application when courts believe doing so will promote creativity more than harm it. As the Supreme Court has said, without the fair use defense, copyright law would often “stifle the very creativity [it] is designed to foster.”
In today’s world, whether use of a copyrighted work is “transformative” has become a central question within the fair use test. The U.S. Supreme Court first endorsed the transformative use term in its 1994 Campbell decision. Since then, lower courts have increasingly made use of the transformative use doctrine in fair use case law. In fact, in response to the transformative use doctrine’s seeming hegemony, commentators and some courts have recently called for a scaling back of the transformative use concept. So far, the Supreme Court has yet to respond. But growing divergences in transformative use approaches may eventually attract its attention.
But what is the actual state of the transformative use doctrine? Some previous scholars have empirically examined the fair use defense, including the transformative use doctrine’s role in fair use case law. But none has focused specifically on empirically assessing the transformative use doctrine in as much depth as is warranted. This Article does so by collecting a number of data from all district and appellate court fair use opinions between 1991, when the transformative use term first made its appearance in the case law, and 2017. These data include how frequently courts apply the doctrine, how often they deem a use transformative, and win rates for transformative users. The data also cover which types of uses courts are most likely to find transformative, what sources courts rely on in defining and applying the doctrine, and how frequently the transformative use doctrine bleeds into and influences other parts of the fair use test. Overall, the data suggest that the transformative use doctrine is, in fact, eating the world of fair use.
The Article concludes by analyzing some possible implications of the findings, including the controversial argument that, going forward, courts should rely even more on the transformative use doctrine in their fair use opinions, not less.
In the last six years of the study, some 90% of fair use opinions consider transformative use. This doesn't mean that the reuser won every time - quite often, courts found the use not to be transformative. Indeed, while the transformativeness finding is not 100% dispositive, it is highly predictive. This supports Asay's finding that transformativeness does indeed seem to be taking over fair use.
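For the curious, the predictiveness point boils down to a tabulation like the one below. The coded data here are hypothetical stand-ins for Asay's dataset; the point is just the computation.

```python
# Hypothetical coding of fair use opinions: did the court find the use
# transformative, and did the fair use defense ultimately prevail?
import pandas as pd

ops = pd.DataFrame({
    "found_transformative": [True, True, True, False, False, False],
    "fair_use":             [True, True, False, False, False, True],
})

# Fair use win rates conditional on the transformativeness finding.
print(ops.groupby("found_transformative")["fair_use"].mean())
```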

Thursday, February 28, 2019

Sue First, Negotiate Later

Just a brief post this week, as I have a perfect storm of non-work-related happenings. So, I'll just say that I'm pleased to announce that my draft article Sue First, Negotiate Later will be published by the Arizona Law Review. The draft is on SSRN, and the longish abstract is below. I may blog about this in more detail in the future, but this is an introduction:
One of the more curious features of patent law is that patents can be challenged by anyone worried about being sued. This challenge right allows potential defendants to file a declaratory relief lawsuit in their local federal district court, seeking a judgment that a patent is invalid or noninfringed. To avoid this home-court advantage, patent owners may file a patent infringement lawsuit first and, by doing so, retain the case in the patent owner’s venue of choice. But there is an unfortunate side effect to such preemptive lawsuits: they escalate the dispute when the parties may want to instead settle for a license. Thus, policies that allow challenges are favored, but they are tempered by escalation caused by preemptive lawsuits. To the extent a particular challenge rule leads to more preemptive lawsuits, it might be disfavored.
This article tests one such important challenge rule. In MedImmune v. Genentech, the U.S. Supreme Court made it easier for a potential defendant to sue first. Whereas the prior rule required threat of immediate injury, the Supreme Court made clear that any case or controversy would allow a challenger to file a declaratory relief action. This ruling had a real practical effect, allowing recipients of letters that boiled down to, “Let’s discuss my patent,” to file a lawsuit when they could not before.
This was supposed to help alleged infringers, but not everyone was convinced. Many observers at the time predicted that the new rule would lead to more preemptive infringement lawsuits filed by patent holders. They would sue first and negotiate later rather than open themselves up to a challenge by sending a demand letter. Further, most who predicted this behavior—including parties to lawsuits themselves—thought that non-practicing entities would lead the charge. Indeed, as time passed, most reports were that this is what happened: that patent trolls uniquely were suing first and negotiating later. But to date, no study has empirically considered the effect of the MedImmune ruling to determine who filed preemptive lawsuits. This Article tests MedImmune’s unintended consequences. The answer matters: lawsuits are costly, and while “quickie” settlements may be relatively inexpensive, increased incentive to file challenges and preemptive infringement suits can lead to entrenchment instead of settlement.
Using a novel longitudinal dataset, this article considers whether MedImmune led to more preemptive infringement lawsuits by NPEs. It does so in three ways. First, it performs a differences-in-differences analysis to test whether case duration for the most active NPEs grew shorter after MedImmune. One would expect that preemptive suits would settle more quickly because they are proxies for quick settlement cases rather than signals of drawn out litigation. Second, it considers whether, other factors equal, the rate of short-lived case filings increased after MedImmune. That is, even if cases grew longer on average, the share of shorter cases should grow if there are more placeholders. Third, it considers whether plaintiffs themselves disclosed sending a demand letter prior to suing.
It turns out that the conventional wisdom is wrong. Not only did cases not grow shorter – cases with similar characteristics grew longer after MedImmune. Furthermore, NPEs were not the only ones who sued first and negotiated later. Instead, every type of plaintiff sent fewer demand letters, NPEs and product companies alike. If anything, the MedImmune experience shows that everyone likes to sue in their preferred venue. As a matter of policy, it means that efforts to dissuade filing lawsuits should be broadly targeted, because all may be susceptible.

Monday, February 25, 2019

Jiarui Liu on the Dominance and Ambiguity of Transformative Use

The Stanford Technology Law Review just published an interesting new copyright article, An Empirical Study of Transformative Use in Copyright Law by Prof. Jiarui Liu. Here is the abstract:
This article presents an empirical study based on all reported transformative use decisions in U.S. copyright history through January 1, 2017. Since Judge Leval coined the doctrine of transformative use in 1990, it has been gradually approaching total dominance in fair use jurisprudence, involved in 90% of all fair use decisions in recent years. More importantly, of all the dispositive decisions that upheld transformative use, 94% eventually led to a finding of fair use. The controlling effect is nowhere more evident than in the context of the four-factor test: A finding of transformative use overrides findings of commercial purpose and bad faith under factor one, renders irrelevant the issue of whether the original work is unpublished or creative under factor two, stretches the extent of copying permitted under factor three towards 100% verbatim reproduction, and precludes the evidence on damage to the primary or derivative market under factor four even though there exists a well-functioning market for the use.
Although transformative use has harmonized fair use rhetoric, it falls short of streamlining fair use practice or increasing its predictability. Courts diverge widely on the meaning of transformative use. They have upheld the doctrine in favor of defendants upon a finding of physical transformation, purposive transformation, or neither. Transformative use is also prone to the problem of the slippery slope: courts start conservatively on uncontroversial cases and then extend the doctrine bit by bit to fact patterns increasingly remote from the original context.
This article, albeit being descriptive in nature, does have a normative connotation. Courts welcome transformative use not despite, but due to, its ambiguity, which is a flexible way to implement their intuitive judgments yet maintain the impression of stare decisis. However, the rhetorical harmony conceals the differences between a wide variety of policy concerns in dissimilar cases, invites casual references to precedents from factually unrelated contexts, and substitutes a mechanical exercise of physical or purposive transformation for an in-depth policy analysis that may provide clearer guidance for future cases.
This article builds on and extends prior empirical work in this area, such as Barton Beebe's study of fair use decisions from 1978 to 2005. And it provides a nice mix of interesting new empirical results and normative analysis that illustrates why fair use doctrine is (at least for me) quite challenging to teach. For example, Figure 1 illustrates how transformative use has cannibalized fair use doctrine since the 1994 Campbell v. Acuff-Rose decision endorsed its use:
[Figure 1 omitted]

Liu also examines data such as the win rate for transformative use over time, by circuit, and by subject matter. But I particularly like that Liu is not just counting cases, but also arguing that courts are using this doctrine as a substitute for in-depth policy analysis.

Friday, February 22, 2019

Does Administrative Patent Law Promote Innovation About Innovation?

I am at Texas Law today for a symposium on The Intersection of Administrative & IP Law, and my panel was asked to address the question: Does Administrative Patent Law Promote Innovation? I focused my remarks on a specific aspect of this: Does Administrative Patent Law Promote Innovation About Innovation? I think the short answer, at least right now, is "no."

There is a lot we don't know about the patent system. USPTO Regional Director Hope Shimabuku started her remarks today by saying that we know IP creates nearly 30 million jobs and adds $6.6 trillion to the U.S. economy each year, citing this USPTO report. But that's not what the report says. It looks at jobs and value from "IP-intensive industries," defined as ones whose "IP-count to employment ratio is higher than the average for all industries considered." As the report acknowledges, it is unable to determine how much of these firms' performance is attributable to IP.

And the real answer is: we don't know. In an article I reviewed for Jotwell, economist Heidi Williams recently summarized: "we still have essentially no credible empirical evidence on the seemingly simple question of whether stronger patent rights—either longer patent terms or broader patent rights—encourage research investments." And even on smaller questions, the existing evidence base is weak.

As I explained in Patent Experimentalism, to make empirical progress we need some source of empirical variation. Economists often look for "natural experiments" with variation across time, across jurisdictions, or across similar technologies, and the closer that variation is to random, the easier it is to draw causal inferences. Of course, it's even better to have variation that is actually random, which is why I have joined other scholars in arguing for more use of randomized policy experiments.

The USPTO has a huge opportunity here to both improve the patent system and help address the key administrative law challenge of encouraging accurate and consistent decisions by a decentralized bureaucracy. There are many questions the agency could help answer using more randomization, as I discuss in Patent Experimentalism. During the panel today, I noted two potential areas: experimenting with the time spent examining a given patent (see this great forthcoming article by Michael Frakes and Melissa Wasserman) and with the possibility that examiner bias affects the gender gap in patenting (which fits within the agency's recent mandate from Congress). I noted ways that each could be designed as an opt-in program to encourage buy-in from applicants and from examiners.
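To see why randomization is so attractive here, consider a stylized sketch: with random assignment, a bare difference in group means recovers the causal effect, with no need to hunt for a natural experiment. Everything below is simulated and purely illustrative.

```python
# Simulated opt-in experiment: applications randomly assigned to extra
# examination time, then mean outcomes compared. All numbers are made up.
import random

random.seed(0)
TRUE_EFFECT = 0.05   # hypothetical effect of extra time on some quality score

outcomes = {True: [], False: []}
for _ in range(20_000):
    treated = random.random() < 0.5
    score = random.gauss(0.70, 0.10) + (TRUE_EFFECT if treated else 0.0)
    outcomes[treated].append(score)

mean = lambda xs: sum(xs) / len(xs)
# The difference in means recovers the true effect, ~0.05.
print(round(mean(outcomes[True]) - mean(outcomes[False]), 3))
```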

But my main point was not that the USPTO should adopt one of these particular experiments—it was that the agency should study something in a way that allows us to draw rigorous inferences. Failing to do so seems like a tremendous missed opportunity.

Tuesday, February 19, 2019

Using Insurance to Deter Lawsuits

The conventional wisdom (my anecdotal experience, anyway) is that the availability of insurance fuels lawsuits. People who otherwise might not sue would use litigation to access insurance funds. I'm sure there's a literature on this. But most insurance covers both defense and indemnity - that is, litigation costs and settlements. What if the insurance covered the defense but not any settlement costs? Would that serve as a disincentive to bring suit? It surely would change the litigation dynamic.

In The Effect of Patent Litigation Insurance: Theory and Evidence from NPEs, Bernhard Ganglmair (University of Mannheim - Economics), Christian Helmers (Santa Clara - Economics), and Brian J. Love (Santa Clara - Law) explore this question with respect to NPE patent litigation insurance. The draft is on SSRN, and the abstract is here:
We analyze the extent to which private defensive litigation insurance deters patent assertion by non-practicing entities (NPEs). We do so by studying the effect that a patent-specific defensive insurance product, offered by a leading litigation insurer, had on the litigation behavior of insured patents’ owners, all of which are NPEs. We first model the impact of defensive litigation insurance on the behavior of patent enforcers and accused infringers. Assuming that a firm’s purchase of insurance is not observed by patent enforcers, we show that the mere availability of defense litigation insurance can have an effect on how often patent enforcers will assert their patents. Next, we empirically evaluate the insurance policy’s effect on the behavior of owners of insured patents by comparing their subsequent assertion of insured patents with their subsequent assertion of their other patents not included in the policy. We additionally compare the assertion of insured patents with patents held by other NPEs with portfolios that were entirely excluded from the insurance product. Our findings suggest that the introduction of this insurance policy had a large, negative effect on the likelihood that a patent included in the policy was subsequently asserted, and our results are robust across different control groups. Our findings also have importance for ongoing debates on the need to reform the U.S. and European patent systems, and suggest that market-based mechanisms can deter so-called “patent trolling.”
On reading the abstract, I was skeptical. After all, there are a bunch of reasons why more firms would defend against NPEs, why NPEs would be less likely to assert, and so forth. But the interesting dynamics of the patent litigation insurance market have me more convinced. Apparently, the insurance didn't cover any old lawsuit; instead, only specific patents were covered. So, the authors were able to look at the differences between firms asserting covered patents, firms that held both covered and non-covered patents, and firms that had no covered patents. Because each of these firms should be equally affected by background changes in the law, the differences should be attributable to the insurance.

And that's what they find, unsurprisingly. Assertions of insured patents went down as compared to uninsured patents, and those cases were less likely to settle -- even with the same plaintiff. My one concern about this finding is that patents targeted for insurance may have been weaker in the first place (hence the willingness to insure), and thus there is self-selection. The paper presents some data on the different patents in order to quell this concern, but if there is a methodological challenge, it is here.

This is a longish paper for an empirical paper, in part because the authors develop a complex game-theoretic model of insurance purchasing, patent assertion, and patent defense. It is interesting and worth a read.

Sunday, February 3, 2019

AOC on Pharma & Public Funding

Congresswoman Alexandria Ocasio-Cortez has already gotten Americans to start teaching each other about marginal taxation, and now she has started a dialog about the role of public funding in drug research:
In these short videos (which email subscribers to this blog need to click through to see), Ocasio-Cortez and Ro Khanna are seen asking questions during a Jan. 29 House Oversight and Reform Committee hearing, "Examining the Actions of Drug Companies in Raising Prescription Drug Prices." So far, @AOC's three tweets about this issue have generated over 7,000 comments, 58,000 retweets, and 190,000 likes.

Privatization of publicly funded research through patents is one of my main areas of research, so I love to see it in the spotlight. There are enough concerns with the current system that the government should be paying attention. But as I explain below, condensing Ocasio-Cortez and Khanna's questions into a headline like "The Public, Not Pharma, Funds Drug Research" is misleading. Highlighting the role of public R&D funding is important, but I hope this attention will spur more people to learn about how that public funding interacts with private funding, and why improving the drug development ecosystem involves a lot of difficult and uncertain policy questions. This post attempts to explain some key points that I hope will be part of this conversation.

Tuesday, January 29, 2019

It's Hard Out There for a Commons

I just finished reading a fascinating draft article about the Eco-Patent Commons, a commons in which about a dozen companies pledged roughly 250 patents, covering a little fewer than 100 "green" inventions, for use by any third party. A commons differs from cross-licensing or other pools in a couple of important ways. First, the owner must still maintain the patent (OK, that's common to licensing, but different from the public domain). Second, anyone, not just members of the commons, can use the patents (which is common to the public domain, but different from licensing).

The hope for the commons was that it would aid in diffusion of green patents, but it was not to be. The draft by Jorge Contreras (Utah Law), Bronwyn Hall (Berkeley Econ), and Christian Helmers (Santa Clara Econ) is called Green Technology Diffusion: A Post-Mortem Analysis of the Eco-Patent Commons. A draft is on SSRN. Here is the abstract:
We revisit the effect of the “Eco-Patent Commons” (EcoPC) on the diffusion of patented environmentally friendly technologies following its discontinuation in 2016, using both participant survey and data analytic evidence. Established in January 2008 by several large multinational companies, the not-for-profit initiative provided royalty-free access to 248 patents covering 94 “green” inventions. Hall and Helmers (2013) suggested that the patents pledged to the commons had the potential to encourage the diffusion of valuable environmentally friendly technologies. Our updated results now show that the commons did not increase the diffusion of pledged inventions, and that the EcoPC suffered from several structural and organizational issues. Our findings have implications for the effectiveness of patent commons in enabling the diffusion of patented technologies more broadly.
The findings were pretty bleak. In short, the pledged patents were cited less than a matched set of patents, and many of them were allowed to lapse (which implies a lack of value). Their survey-type data also showed a lack of importance and diffusion.

What I really love about this paper, though, is that there's an interpretation for everybody in it. For the "we need strong rights" group, this failure is evidence of the tragedy of the commons. If nobody has the right to profit fully from the inventions, then nobody will do so, and the commons will go fallow.

But for the "we don't need strong rights" group, this failure is evidence that the supposedly important patents were weak, and that it was better to essentially place them in the public domain than to have after-the-fact lawsuits.

For the "patents are useless" group, this failure shows that nobody reads patents anyway, and so they fail in their essential purpose: providing information as a quid pro quo for exclusivity.

And for the middle ground folks, you have the conclusions in the study. Maybe some commons can work, but you have to be careful about how you set them up, and this one had procedural and substantive failings that doomed the patents to go unused.

I don't know the answer, but I think case studies like this are helpful for better understanding how patents do and do not disseminate information, as well as for learning how to better structure patent pools.

Wednesday, January 2, 2019

Erin McGuire: Can Equity Crowdfunding Close the Gender Gap in Startup Finance?

As I have previously explained, there is growing interest in gender and racial gaps in patenting from both scholars and Congress—which charged the USPTO with studying these gaps. But I don't think it makes sense to study these inequalities in isolation: patent law is embedded in a larger innovation ecosystem, and patents' benefit at providing a strong ex post reward for success comes at the cost of needing to attract funding to cover R&D expenses until patent profits become available. It may be difficult to address the patenting gap without also addressing inequalities in capital markets.

In particular, there is a large and well-documented gender gap in the market for early-stage capital. For example, this Harvard Business Review article notes that women receive 2% of venture funding despite owning 38% of U.S. businesses, and that even as the percentage of female venture capitalists has crept up from 3% in 2014 to 7% in 2017, the funding gap has only widened. Part of the explanation—explored in the fascinating study summarized in the HBR piece—may be that both male and female VCs ask different kinds of questions of male and female entrepreneurs: in actual Q&A sessions, VCs tended to ask men questions about the potential for gains and women about the potential for losses, with significant impacts on funding decisions.

Economist Erin McGuire, currently an NBER postdoc, has an interesting working paper on one partial solution to this problem: Can Equity Crowdfunding Close the Gender Gap in Startup Finance? Non-equity crowdfunding through sites like Kickstarter and Indiegogo has grown in popularity over the past two decades; equity crowdfunding differs in that funders receive shares in the company in exchange for their investments. The average equity crowdfunding investment is $810—over ten times the average investment on Kickstarter. Equity crowdfunding was illegal in the United States before the JOBS Act of 2012, which allowed equity crowdfunding by accredited investors beginning in September 2013. McGuire hypothesized that the introduction of this financing channel—with a more gender-diverse pool of potential investors—as an alternative to professional network connections would have a greater benefit for female entrepreneurs.

Tuesday, December 11, 2018

The Value of Patent Applications in Valuing Firms

It's an age-old question that we've blogged about here before - what effect do patents have on firm value? And is any effect due to signaling or exclusivity? Does the disclosure in the patent have any value? Does anybody read patents?

These are all good questions that are difficult to measure, and so scholars try to use natural experiments or other empirical methods to divine the answer. In a recent draft, Deepak Hegde, Baruch Lev, and Chenqi Zhu (all NYU Stern Business) use the AIPA to provide some useful answers. For those unaware, the AIPA mandated that patent applications be published 18 months after filing by default, rather than kept secret until the patent granted. The AIPA is the law that keeps on giving; several studies have used the "shock" of the AIPA to measure the effect of patent publication on a variety of dependent variables.

So, too, in Patent Disclosure and Price Discovery. A draft is available on SSRN, and the abstract is here:
We focus in this study on the exogenous event of the enactment of American Inventor’s Protection Act of 1999 (AIPA), which disseminates timely, detailed, and credible public information on R&D activities through pre-grant patent disclosures. Exploiting the staggered timing of patent disclosures, we identify a significant improvement in the efficiency of stock price discovery. This improvement is stronger when patent disclosures reveal firms’ successful, new, or technologically valuable inventions. This improvement is more pronounced for firms in high-tech or fast-moving industries, or with a large institutional ownership or analyst coverage. We also find stock liquidity rises and investors’ risk perception of R&D drops after the enactment of AIPA. Our results highlight the importance of timely, detailed, and credible disclosures of R&D activities in alleviating the information problems faced by R&D-intensive firms.
This is a short abstract, so I'll fill in a few details. The authors measure the effect on intra-period timeliness, a standard measure used to proxy for "price discovery," or how quickly information enters the market and settles the price of a stock. There are a lot of articles on this, but here's one for those interested (paywall, sorry).
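For readers unfamiliar with the measure, here is one common way intra-period timeliness (IPT) is operationalized: the area under the curve of cumulative returns expressed as a fraction of the total event-window return, so faster incorporation of information yields a larger value. This is a generic formulation, and the paper's exact implementation may differ.

```python
# Intra-period timeliness, sketched: area under the curve of cumulative
# returns scaled by the total return over the event window.
import numpy as np

def ipt(daily_returns):
    car = np.cumsum(daily_returns)   # cumulative return over the window
    frac = car / car[-1]             # fraction of total return absorbed so far
    return frac[:-1].sum() + 0.5     # trapezoid-style area under the curve

print(ipt([0.05, 0.0, 0.0, 0.0]))   # 3.5: information impounded immediately
print(ipt([0.0, 0.0, 0.0, 0.05]))   # 0.5: information impounded on the last day
```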

In short, the authors look at how quickly price discovery occurred before and after the AIPA, controlling for firm fixed effects and other variables. One of the nice features of their design is that patent applications were filed over a period of years, so the "shock" of publication was not concentrated in a single year (which could have been affected by something other than the AIPA happening in that same year).

They find that price discovery is faster after the AIPA. Interestingly, they also find that the effect is more pronounced in high-tech and fast-moving fields -- that is, industries where new R&D information is critically important.

Finally, their results say something about the nature of the patent disclosure itself - the effects come from disclosure of the information, and not necessarily the patent grant. Thus, the signaling effect may really relate to information, and (some) people may well read patents after all.

Tuesday, December 4, 2018

How Important is Helsinn?

In honor of the oral argument in Helsinn today, I thought I would blog about a study that questions its importance. For those unaware, the question the Supreme Court is considering is whether the AIA's new listing of prior art in 35 U.S.C. §102(a)(1): "the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public..." changed the law.

Since forever, "on sale" meant any offer or actual sale, regardless of who knew about it. Some have argued that the addition of "or otherwise available to the public" means that only offers that are publicly accessible count as prior art. I think this is wrong, and I signed on to an amicus brief saying so. We'll see what the Court says. Note that non-public does not mean "secret." Truly secret activity is often considered non-prior art, but the courts have defined "public" to mean "not secret." The question is whether that should change to "publicly accessible."

But how big a deal is this case? How many offers for sale would be affected? Steve Yelderman (Notre Dame, and soon to be Gorsuch clerk) wanted to know as well, so he did the hard work of finding out. In a draft paper on SSRN that he blogged about at Patently-O, he looked at all invalidity decisions to see exactly where the prior art was coming from. Here is the abstract for Prior Art in the District Court:
This article is an empirical study of the evidence district courts rely upon when invalidating patents. To construct our dataset, we collected every district court ruling, verdict form, and opinion (whether reported or unreported) invalidating a patent claim over a six-and-a-half-year period. We then coded individual invalidation events based on the prior art supporting the court’s analysis. In the end, we observed 3,320 invalidation events based on 817 distinct prior art references.
The nature of the prior art relied upon to invalidate patents informs the value of district court litigation as an error correction tool. The public interest in revoking erroneous patent grants depends significantly on the reason those grants were undeserved. Distinguishing between revocations that incentivize future inventors and those that do not requires understanding the reason individual patents are invalidated. While prior studies have explored patent invalidity in general, no study has reported data at the level of detail necessary to address these questions.
The conclusions here are mixed. On one hand, invalidations for lack of novelty bear many indicia of publicly beneficial error correction. Anticipation based on obscure prior art appears to be quite rare. When it comes to obviousness, however, a significant number of invalidations rely on prior art that would have been difficult or impossible to find at the time of invention. This complicates — though does not necessarily refute — the traditional view that obviousness challenges ought to be proactively encouraged.
So, let's get right to the point. The data seem to show that "activity" prior art (that is, a sale or use) is much more prevalent in anticipation than in obviousness. This is not surprising, given that this category often consists of the patentee's own activities.

With respect to non-public sales, they estimate that at most 14% of anticipation invalidations and 2% of obviousness invalidations based on activity rested on plausibly non-public sales. This translates to about 8% of all anticipation invalidations and 1% of all obviousness invalidations. Because there are about as many obviousness cases as anticipation cases, this averages to 4.25% of all invalidations. They note that under a different rule, some of these might have been shown to be "public" sales had more attention been paid to providing such evidence.

A related question is whether the inventor's own actions can invalidate, or whether the AIA overruled Metallizing Engineering, which held that an inventor's secret use can invalidate even if a third party's secret use does not. The study found that the plaintiff's actions were relevant in 27% of anticipation invalidations and 13% of obviousness invalidations. Furthermore, they found that most of the secret activity was associated with either the plaintiff or the defendant - this makes sense, as those parties have access to such secret information.

So, what's the takeaway from this? I suppose where you stand depends on where you sit. I think that wiping out 4% of the invalidations, especially when they are based on the actions of one of the two parties, is not a good thing. It's bad to allow the patentee to sell non-publicly and still keep the patent, and it's bad to hold the defendant liable even if it has been selling the invention in a non-public (though non-secret) way. We're talking about 20 claims per year that go the other way - too high for my taste, especially when it means we have to start defining new ways to determine whether something is truly public.

Furthermore, the stakes of reversing Metallizing are much higher. I freely admit that the "plaintiff's secret actions only" rule has a tenuous basis in the text of the statute, but it has been the law for a long time and has survived two subsequent statutory revisions without being expressly overruled. Given that more than 25% of the invalidations were based on the plaintiff's actions, I think it would be difficult to reverse course.