Tuesday, March 26, 2019

Trademarking the Seven Dirty Words

With the Supreme Court agreeing to hear the Brunetti case on the registration of scandalous trademarks, one might wonder whether allowing such scandalous marks will open the floodgates of registrations. My former colleague Vicenç Feliú (Nova Southeastern) wondered as well, so he looked at the trademark database to find out. One nice thing about trademarks is that all applications show up, whether granted or denied, live or abandoned. He's posted a draft of his findings, called FUCT® – An Early Empirical Study of Trademark Registration of Scandalous and Immoral Marks in the Aftermath of the In re Brunetti Decision, on SSRN:
This article seeks to create an early empirical benchmark on registrations of marks that would have failed registration as “scandalous” or “immoral” under Lanham Act Section 2(a) before the Court of Appeals for the Federal Circuit’s In re Brunetti decision of December 2017. The Brunetti decision followed closely behind the Supreme Court’s Matal v. Tam and put an end to examiners denying registration on the basis of Section 2(a). In Tam, the Supreme Court reasoned that Section 2(a) embodied restrictions on free speech, in the case of “disparaging” marks, which were clearly unconstitutional. The Federal Circuit followed that same logic and labeled those same Section 2(a) restrictions as unconstitutional in the case of “scandalous” and “immoral” marks. Before the ink was dry in Brunetti, commentators wondered how lifting the Section 2(a) restrictions would affect the volume of registrations of marks previously made unregistrable by that same section. Predictions ran the gamut from “business as usual” to scenarios where those marks would proliferate to astronomical levels. Eleven months out from Brunetti, it is hard to say with certainty what could happen, but this study has gathered the number of registrations as of October 2018 and the early signs seem to indicate a future not much altered, despite early concerns to the contrary.
The study focuses not on the Supreme Court but on the Federal Circuit, which already allowed Brunetti to register FUCT. Did this lead to a stampede of scandalous marks? Such marks are hard to define, so he started with a close proxy: George Carlin's Seven Dirty Words. This classic comedy bit (really, truly classic) nailed the dirty words so well that a radio broadcast of the bit drew an FCC sanction, and the case wound up in the Supreme Court, which ruled that the FCC could, in fact, restrict these seven words as indecent. So the study's assumption is that filings of these words as trademarks are the tip of the spear. That said, his findings about prior registrations of such words (with claimed dual meaning) are interesting, and they show some of the problems the Court was trying to avoid in Matal v. Tam.

It turns out: not so much. There was no huge jump in filings or registrations after Brunetti. More interesting, I thought, was the choice of words. It turns out (thankfully, I think) that some dirty words are far more acceptable than others, as measured by their popularity in trademark filings. You'll have to read the paper to find out which.

Saturday, March 23, 2019

Jotwell Review of Frakes & Wasserman's Irrational Ignorance at the Patent Office

I've previously recommended subscribing to Jotwell to keep up with interesting recent IP scholarship, but for anyone who doesn't, my latest Jotwell post highlighted a terrific forthcoming article by Michael Frakes and Melissa Wasserman. Here are the first two paragraphs:
How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?
In a highly cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.
Read more at Jotwell.
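A quick back-of-envelope check, using only the figures quoted above (my arithmetic, not the paper's own, more careful accounting): doubling examination time at the estimated cost works out to

\[
\frac{\$660\ \text{million/year}}{\sim 600{,}000\ \text{applications/year}} \approx \$1{,}100\ \text{per application},
\]

or roughly \$58 per marginal examiner-hour if doubling adds about 19 hours per application. Against savings of over \$900 million, that implies a net gain of at least \$240 million per year from reduced prosecution and litigation costs alone.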

Tuesday, March 19, 2019

The Rise and Rise of Transformative Use

I'm a big fan of transformative use analysis in fair use law, except when I'm not. I think it is a helpful guide for determining whether a given type of use is one we'd like to allow. But I also think it can be overused - especially when it is applied to a use that conveys a different message but transforms little else.

The big question is whether transformative use is used too much...or not enough. Clark Asay (BYU) has done the research on this so you don't have to. In his forthcoming article in the Boston College Law Review, Is Transformative Use Eating the World?, Asay collects and analyzes more than 400 fair use decisions issued since 1991. The draft is on SSRN, and the abstract is here:
Fair use is copyright law’s most important defense to claims of copyright infringement. This defense allows courts to relax copyright law’s application when courts believe doing so will promote creativity more than harm it. As the Supreme Court has said, without the fair use defense, copyright law would often “stifle the very creativity [it] is designed to foster.”
In today’s world, whether use of a copyrighted work is “transformative” has become a central question within the fair use test. The U.S. Supreme Court first endorsed the transformative use term in its 1994 Campbell decision. Since then, lower courts have increasingly made use of the transformative use doctrine in fair use case law. In fact, in response to the transformative use doctrine’s seeming hegemony, commentators and some courts have recently called for a scaling back of the transformative use concept. So far, the Supreme Court has yet to respond. But growing divergences in transformative use approaches may eventually attract its attention.
But what is the actual state of the transformative use doctrine? Some previous scholars have empirically examined the fair use defense, including the transformative use doctrine’s role in fair use case law. But none has focused specifically on empirically assessing the transformative use doctrine in as much depth as is warranted. This Article does so by collecting a number of data from all district and appellate court fair use opinions between 1991, when the transformative use term first made its appearance in the case law, and 2017. These data include how frequently courts apply the doctrine, how often they deem a use transformative, and win rates for transformative users. The data also cover which types of uses courts are most likely to find transformative, what sources courts rely on in defining and applying the doctrine, and how frequently the transformative use doctrine bleeds into and influences other parts of the fair use test. Overall, the data suggest that the transformative use doctrine is, in fact, eating the world of fair use.
The Article concludes by analyzing some possible implications of the findings, including the controversial argument that, going forward, courts should rely even more on the transformative use doctrine in their fair use opinions, not less.
In the last six years of the study, some 90% of fair use opinions consider transformative use. This doesn't mean that the reuser won every time - quite often, courts found the use not to be transformative. But while the transformativeness finding is not 100% dispositive, it is highly predictive of the outcome. This supports Asay's conclusion that transformative use does indeed seem to be taking over fair use.

Tuesday, March 12, 2019

Cicero Cares What Thomas Jefferson Thought About Patents

One of my favorite article titles (and also an article I like a lot) is Who Cares What Thomas Jefferson Thought About Patents? Reevaluating the Patent 'Privilege' in Historical Context, by Adam Mossoff. The article takes on the view that Jefferson's utilitarian view of patents should somehow reign, when plenty of his contemporaries held different, natural law views of patenting.

And so I read with great interest Jeremy Sheff's latest article, Jefferson's Taper, which challenges just about everyone's understanding of Jefferson. The draft is on SSRN, and the abstract is here:
This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s often-cited letter to Isaac McPherson regarding the absence of a natural right of property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson likely copied this Parable of the Taper from a nearly identical passage in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law theory that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law theory rests on a pre-Enlightenment Classical Tradition of distributive justice in which distribution of resources is a matter of private judgment guided by a principle of proportionality to the merit of the recipient — a view that is at odds with the post-Enlightenment Modern Tradition of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with the historical pivot in the intellectual history of the West from the Classical Tradition to the Modern Tradition, but modern readings of the Parable of the Taper, being grounded in the Modern Tradition, ignore this historical context. Such readings cast Jefferson as a proto-utilitarian at odds with his Lockean contemporaries, who supposedly recognized property as a pre-political right. I argue that, to the contrary, Jefferson’s Taper should be read from the viewpoint of the Classical Tradition, in which case it not only fits comfortably within a natural law framework, but points the way toward a novel natural-law-based argument that inventors and other knowledge-creators actually have moral duties to share their knowledge with their fellow human beings.
I don't have much more to say about the article, other than that it is a great and interesting read. I'm a big fan of papers like this, and I think this one is done well.

Tuesday, March 5, 2019

Defining Patent Holdup

Few patent law topics are as heatedly debated as patent holdup. Those who believe in it really believe in it. Those who don't, well, don't. I was once at a conference where a professor on one side of this divide just...couldn't...even, and walked out of a presentation taking the opposite viewpoint.

The debate is simply the following. The patent holdup story is that patent holders can extract more than they otherwise would by asserting patents after the targeted infringer has invested in development and manufacturing. The "classic" holdup story in the economics literature relates to incomplete contracts or other partial relationships that allow one party to take advantage of an investment by the other to extract rents.

You can see the overlap, but the "classic" folks think the patent holdup story doesn't count because there is no prior negotiation - the investing party has the opportunity to research patents, negotiate beforehand, plan its affairs, and so on.

In their new article, forthcoming in the Washington & Lee Law Review, Tom Cotter (Minnesota), Erik Hovenkamp (Harvard Law post-doc), and Norman Siebrasse (New Brunswick Law) try to resolve this debate. They have posted Demystifying Patent Holdup on SSRN. The abstract is here:
Patent holdup can arise when circumstances enable a patent owner to extract a larger royalty ex post than it could have obtained in an arm's length transaction ex ante. While the concept of patent holdup is familiar to scholars and practitioners—particularly in the context of standard-essential patent (SEP) disputes—the economic details are frequently misunderstood. For example, the popular assumption that switching costs (those required to switch from the infringing technology to an alternative) necessarily contribute to holdup is false in general, and will tend to overstate the potential for extracting excessive royalties. On the other hand, some commentaries mistakenly presume that large fixed costs are an essential ingredient of patent holdup, which understates the scope of the problem.
In this article, we clarify and distinguish the most basic economic factors that contribute to patent holdup. This casts light on various points of confusion arising in many commentaries on the subject. Path dependence—which can act to inflate the value of a technology simply because it was adopted first—is a useful concept for understanding the problem. In particular, patent holdup can be viewed as opportunistic exploitation of path dependence effects serving to inflate the value of a patented technology (relative to the alternatives) after it is adopted. This clarifies that factors contributing to holdup are not static, but rather consist in changes in economic circumstances over time. By breaking down the problem into its most basic parts, our analysis provides a useful blueprint for applying patent holdup theory in complex cases.
The core of their descriptive argument is that both "classic" holdup and patent holdup rest on path dependence: one party makes sunk investments and is thus at the mercy of the other. In this sense, they are surely correct (if we don't ask why the party invested). And the payoff is nice, because it allows them to build a model that critically distinguishes sunk costs (holdup) from switching costs (not holdup). The irony, of course, is that it is theoretically irrational to worry about sunk costs when making future decisions.
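To make that distinction concrete, here is a deliberately stylized sketch - my illustration under simplifying assumptions, not the authors' model. Let $v$ be the patented technology's incremental value over the best alternative, and let $S$ be the implementer's cost of switching to that alternative after adoption. The maximum royalty the patentee can extract is then

\[
r_{\text{ex ante}} \le v \quad \text{(before adoption)}, \qquad r_{\text{ex post}} \le v + S \quad \text{(after adoption)},
\]

so the ex post premium is at most $S$. The naive view treats this whole wedge as holdup; on the authors' account, only the portion traceable to path dependence (such as re-incurring investments already sunk in the adopted technology) should count, which is why counting switching costs as such tends to overstate the problem.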

But I guess I'm not entirely convinced by the normative parallel. The key in all of these cases is transaction costs. So the question is whether the transaction costs of finding patents are high enough that it makes sense to invest without expending them. The authors recognize the problem, and they note that when injunctions are not available, parties will refuse to take a license because it is more profitable to hold out (holdout). But their answer is that the existence of holdout doesn't mean that holdup isn't real and sometimes a problem. Well, sure, but holdout merely shifts the transaction costs, and if it is cheaper never to make an ex ante agreement (which is typical these days), then it's hard for me to say that being hit with a patent lawsuit after investment is the sort of path dependence we should be worried about.

I think this is an interesting and thoughtful paper, and there is a lot more to it than my brief concerns suggest. It responds to other critiques of patent holdup, and it provides a framework for debating these questions, even if I remain unconvinced.

Monday, March 4, 2019

Recent Advances in Biologics Manufacturing Diminish the Importance of Trade Secrets: A Response to Price and Rai

Guest post by Rebecca Weires, a 2L in the J.D./M.S. Bioengineering program at Stanford

In their 2016 paper, Manufacturing Barriers to Biologics Competition and Innovation, Price and Rai argue that the use of trade secrets to protect biologics manufacturing processes is a social detriment. They go on to argue that policymakers should demand more enabling disclosure of biologics manufacturing processes, either in patents or in biologics license applications (BLAs). The authors premise their arguments on an assessment that (1) variations in the synthesis process can unpredictably affect the structure of a biological product; (2) variations in the structure of a biological product can unpredictably affect its physiological effects, including immunogenicity; and (3) analytical techniques are inadequate to characterize the structure of a biological product. I am more optimistic than Price and Rai that researchers will soon overcome all three challenges. Where private-sector funding may fall short, grant-funded research has already led to tremendous advances in biologics development technology. Rather than requiring more specific disclosure of synthesis processes, as Price and Rai recommend, FDA could and should require more specific disclosure of structure, harmonizing biologics regulation with small-molecule regulation. FDA should also incentivize the development of industrial-scale cell-free protein synthesis processes.