Tuesday, March 19, 2019

The Rise and Rise of Transformative Use

I'm a big fan of transformative use analysis in fair use law, except when I'm not. I think that it is a helpful guide for determining whether the type of use is one that we'd like to allow. But I also think that it can be overused - especially when it is applied to uses that change the message but little else.

The big question is whether transformative use is used too much...or not enough. Clark Asay (BYU) has done the research on this so you don't have to. In his article Is Transformative Use Eating the World?, forthcoming in the Boston College Law Review, Asay collects and analyzes 400+ fair use decisions issued since 1991. The draft is on SSRN, and the abstract is here:
Fair use is copyright law’s most important defense to claims of copyright infringement. This defense allows courts to relax copyright law’s application when courts believe doing so will promote creativity more than harm it. As the Supreme Court has said, without the fair use defense, copyright law would often “stifle the very creativity [it] is designed to foster.”
In today’s world, whether use of a copyrighted work is “transformative” has become a central question within the fair use test. The U.S. Supreme Court first endorsed the transformative use term in its 1994 Campbell decision. Since then, lower courts have increasingly made use of the transformative use doctrine in fair use case law. In fact, in response to the transformative use doctrine’s seeming hegemony, commentators and some courts have recently called for a scaling back of the transformative use concept. So far, the Supreme Court has yet to respond. But growing divergences in transformative use approaches may eventually attract its attention.
But what is the actual state of the transformative use doctrine? Some previous scholars have empirically examined the fair use defense, including the transformative use doctrine’s role in fair use case law. But none has focused specifically on empirically assessing the transformative use doctrine in as much depth as is warranted. This Article does so by collecting a number of data from all district and appellate court fair use opinions between 1991, when the transformative use term first made its appearance in the case law, and 2017. These data include how frequently courts apply the doctrine, how often they deem a use transformative, and win rates for transformative users. The data also cover which types of uses courts are most likely to find transformative, what sources courts rely on in defining and applying the doctrine, and how frequently the transformative use doctrine bleeds into and influences other parts of the fair use test. Overall, the data suggest that the transformative use doctrine is, in fact, eating the world of fair use.
The Article concludes by analyzing some possible implications of the findings, including the controversial argument that, going forward, courts should rely even more on the transformative use doctrine in their fair use opinions, not less.
In the last six years of the study, some 90% of the fair use opinions consider transformative use. This doesn't mean that the reuser won every time - quite often, courts found the use not to be transformative. Indeed, while the transformativeness finding is not 100% dispositive, it is highly predictive. This supports Asay's finding that transformativeness does indeed seem to be taking over fair use.

Tuesday, March 12, 2019

Cicero Cares what Thomas Jefferson Thought about Patents

One of my favorite article titles (and also an article I like a lot) is Who Cares What Thomas Jefferson Thought About Patents? Reevaluating the Patent 'Privilege' in Historical Context, by Adam Mossoff. The article takes on the view that Jefferson's utilitarian view of patents should somehow reign, when there were plenty of others who had different, natural law views of patenting.

And so I read with great interest Jeremy Sheff's latest article, Jefferson's Taper. This article challenges everyone's understanding of Jefferson. The draft is on SSRN, and the abstract is here:
This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s often-cited letter to Isaac McPherson regarding the absence of a natural right of property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson likely copied this Parable of the Taper from a nearly identical passage in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law theory that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law theory rests on a pre-Enlightenment Classical Tradition of distributive justice in which distribution of resources is a matter of private judgment guided by a principle of proportionality to the merit of the recipient — a view that is at odds with the post-Enlightenment Modern Tradition of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with the historical pivot in the intellectual history of the West from the Classical Tradition to the Modern Tradition, but modern readings of the Parable of the Taper, being grounded in the Modern Tradition, ignore this historical context. Such readings cast Jefferson as a proto-utilitarian at odds with his Lockean contemporaries, who supposedly recognized property as a pre-political right. I argue that, to the contrary, Jefferson’s Taper should be read from the viewpoint of the Classical Tradition, in which case it not only fits comfortably within a natural law framework, but points the way toward a novel natural-law-based argument that inventors and other knowledge-creators actually have moral duties to share their knowledge with their fellow human beings.
I don't have much more to say about the article, other than that it is a great and interesting read. I'm a big fan of papers like this, and I think this one is done well.

Tuesday, March 5, 2019

Defining Patent Holdup

There are few patent law topics that are so heatedly debated as patent holdup. Those who believe in it, really believe in it. Those who don't, well, don't. I was at a conference once where a professor on one side of this divide just... couldn't... even, and walked out of a presentation taking the opposite viewpoint.

The debate is simply the following. The patent holdup story is that patent holders can extract more than they otherwise would by asserting patents after the targeted infringer has invested in development and manufacturing. The "classic" holdup story in the economics literature relates to incomplete contracts or other partial relationships that allow one party to take advantage of an investment by the other to extract rents.

You can see the overlap, but the "classic" folks think that the patent holdup story doesn't count, because there's no prior negotiation - the party investing has the opportunity to research patents, negotiate beforehand, plan their affairs, etc.

In their new article forthcoming in the Washington & Lee Law Review, Tom Cotter (Minnesota), Erik Hovenkamp (Harvard Law Post-doc), and Norman Siebrasse (New Brunswick Law) try to resolve this debate. They have put Demystifying Patent Holdup on SSRN. The abstract is here:
Patent holdup can arise when circumstances enable a patent owner to extract a larger royalty ex post than it could have obtained in an arm's length transaction ex ante. While the concept of patent holdup is familiar to scholars and practitioners—particularly in the context of standard-essential patent (SEP) disputes—the economic details are frequently misunderstood. For example, the popular assumption that switching costs (those required to switch from the infringing technology to an alternative) necessarily contribute to holdup is false in general, and will tend to overstate the potential for extracting excessive royalties. On the other hand, some commentaries mistakenly presume that large fixed costs are an essential ingredient of patent holdup, which understates the scope of the problem.
In this article, we clarify and distinguish the most basic economic factors that contribute to patent holdup. This casts light on various points of confusion arising in many commentaries on the subject. Path dependence—which can act to inflate the value of a technology simply because it was adopted first—is a useful concept for understanding the problem. In particular, patent holdup can be viewed as opportunistic exploitation of path dependence effects serving to inflate the value of a patented technology (relative to the alternatives) after it is adopted. This clarifies that factors contributing to holdup are not static, but rather consist in changes in economic circumstances over time. By breaking down the problem into its most basic parts, our analysis provides a useful blueprint for applying patent holdup theory in complex cases.
The core of their descriptive argument is that both "classic" and patent holdup are based on a path dependence: one party invests sunk costs and thus is at the mercy of the other party. In this sense, they are surely correct (if we don't ask why the party invested). And the payoff from this is nice, because it allows them to build a model that critically examines sunk costs (holdup) v. switching costs (not holdup). The irony of this, of course, is that it's theoretically irrational to worry about sunk costs when making future decisions.
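To make the distinction concrete, here is a stylized numeric sketch of the simplest version of the holdup story - the version the paper goes on to complicate - with numbers invented for illustration rather than taken from the article:

# Stylized holdup arithmetic (illustrative numbers only, not the paper's).
# Ex ante: the implementer has not yet adopted, so the patentee can charge
# at most the technology's value over the next-best alternative.
value_patented_tech = 10.0   # value of the patented technology
value_alternative = 8.0      # value of the best non-infringing alternative
max_royalty_ex_ante = value_patented_tech - value_alternative  # 2.0

# Ex post: the implementer has adopted and would now incur switching costs
# to move to the alternative, so the royalty ceiling rises.
switching_costs = 5.0        # cost to redesign around the patent today
max_royalty_ex_post = (value_patented_tech - value_alternative
                       + switching_costs)  # 7.0

# The gap is the holdup premium: value created not by the invention itself
# but by path dependence after adoption.
print(f"holdup premium: {max_royalty_ex_post - max_royalty_ex_ante}")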

But I guess I'm not entirely convinced by the normative parallel. The key in all of these cases is transactions costs. So, the question is whether the transactions costs of finding patents are high enough to warrant the investment without expending them. The authors recognize the problem, and note that when injunctions are not possible parties will refuse to pay a license because it is more profitable to do so (holdout). But their answer is that just because there is holdout doesn't mean that holdup isn't real and a problem sometimes. Well, sure, but holdout merely shifts the transactions costs, and if it is cheaper to never make an ex ante agreement (which is typical these days), then it's hard for me to say that being hit with a patent lawsuit after investment is the sort of path dependence that we should be worried about.

I think this is an interesting and thoughtful paper. There's a lot more to it than my brief concerns. It attempts to respond to other critiques of patent holdup, and it provides a framework to debate these questions, even if I'm not convinced by the debate.

Monday, March 4, 2019

Recent Advances in Biologics Manufacturing Diminish the Importance of Trade Secrets: A Response to Price and Rai

Guest post by Rebecca Weires, a 2L in the J.D./M.S. Bioengineering program at Stanford

In their 2016 paper, Manufacturing Barriers to Biologics Competition and Innovation, Price and Rai argue the use of trade secrets to protect biologics manufacturing processes is a social detriment. They go on to argue policymakers should demand more enabling disclosure of biologics manufacturing processes, either in patents or biologics license applications (BLAs). The authors premise their arguments on an assessment that (1) variations in the synthesis process can unpredictably affect the structure of a biological product; (2) variations in the structure of a biological product can unpredictably affect the physiological effects of the product, including immunogenicity; and (3) analytical techniques are inadequate to characterize the structure of a biological product. I am more optimistic than Price and Rai that researchers will soon overcome all three challenges. Where private-sector funding may fall short, grant-funded research has already led to tremendous advances in biologics development technology. Rather than requiring more specific disclosure of synthesis processes, as Price and Rai recommend, FDA could and should require more specific disclosure of structure, harmonizing biologics regulation with small molecule regulation. FDA should also incentivize development of industrial scale cell-free protein synthesis processes.

Thursday, February 28, 2019

Sue First, Negotiate Later

Just a brief post this week, as I have a perfect storm of non-work related happenings. So, I'll just say that I'm pleased to announce that my draft article Sue First, Negotiate Later will be published by the Arizona Law Review. The draft is on SSRN, and the longish abstract is below. I may blog about this in more detail in the future, but this is an introduction:
One of the more curious features of patent law is that patents can be challenged by anyone worried about being sued. This challenge right allows potential defendants to file a declaratory relief lawsuit in their local federal district court, seeking a judgment that a patent is invalid or noninfringed. To avoid this home-court advantage, patent owners may file a patent infringement lawsuit first and, by doing so, retain the case in the patent owner’s venue of choice. But there is an unfortunate side effect to such preemptive lawsuits: they escalate the dispute when the parties may want to instead settle for a license. Thus, policies that allow challenges are favored, but they are tempered by escalation caused by preemptive lawsuits. To the extent a particular challenge rule leads to more preemptive lawsuits, it might be disfavored.
This article tests one such important challenge rule. In MedImmune v. Genentech, the U.S. Supreme Court made it easier for a potential defendant to sue first. Whereas the prior rule required threat of immediate injury, the Supreme Court made clear that any case or controversy would allow a challenger to file a declaratory relief action. This ruling had a real practical effect, allowing recipients of letters that boiled down to, “Let’s discuss my patent,” to file a lawsuit when they could not before.
This was supposed to help alleged infringers, but not everyone was convinced. Many observers at the time predicted that the new rule would lead to more preemptive infringement lawsuits filed by patent holders. They would sue first and negotiate later rather than open themselves up to a challenge by sending a demand letter. Further, most who predicted this behavior—including parties to lawsuits themselves—thought that non-practicing entities would lead the charge. Indeed, as time passed, most reports were that this is what happened: that patent trolls uniquely were suing first and negotiating later. But to date, no study has empirically considered the effect of the MedImmune ruling to determine who filed preemptive lawsuits. This Article tests MedImmune’s unintended consequences. The answer matters: lawsuits are costly, and while “quickie” settlements may be relatively inexpensive, increased incentive to file challenges and preemptive infringement suits can lead to entrenchment instead of settlement.
Using a novel longitudinal dataset, this article considers whether MedImmune led to more preemptive infringement lawsuits by NPEs. It does so in three ways. First, it performs a differences-in-differences analysis to test whether case duration for the most active NPEs grew shorter after MedImmune. One would expect that preemptive suits would settle more quickly because they are proxies for quick settlement cases rather than signals of drawn out litigation. Second, it considers whether, other factors equal, the rate of short-lived case filings increased after MedImmune. That is, even if cases grew longer on average, the share of shorter cases should grow if there are more placeholders. Third, it considers whether plaintiffs themselves disclosed sending a demand letter prior to suing.
It turns out that the conventional wisdom is wrong. Not only did cases not grow shorter – cases with similar characteristics grew longer after MedImmune. Furthermore, NPEs were not the only ones who sued first and negotiated later. Instead, every type of plaintiff sent fewer demand letters, NPEs and product companies alike. If anything, the MedImmune experience shows that everyone likes to sue in their preferred venue. As a matter of policy, it means that efforts to dissuade filing lawsuits should be broadly targeted, because all may be susceptible.
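For readers unfamiliar with the differences-in-differences design mentioned in the abstract, here is a minimal sketch of the logic with invented case durations - nothing below comes from the article's actual data:

# Differences-in-differences sketch with made-up case durations (days).
import statistics

durations = {
    ("npe", "pre"):      [400, 420, 380],   # NPE cases before MedImmune
    ("npe", "post"):     [460, 480, 440],   # NPE cases after MedImmune
    ("control", "pre"):  [390, 410, 370],   # other plaintiffs before
    ("control", "post"): [400, 420, 380],   # other plaintiffs after
}
mean = {group: statistics.mean(vals) for group, vals in durations.items()}

# The NPE change minus the control change isolates the post-MedImmune
# shift specific to NPEs, netting out trends common to all plaintiffs.
did = ((mean[("npe", "post")] - mean[("npe", "pre")])
       - (mean[("control", "post")] - mean[("control", "pre")]))
print(f"DiD estimate: {did:+.0f} days")  # positive => NPE cases grew longer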

Monday, February 25, 2019

Jiarui Liu on the Dominance and Ambiguity of Transformative Use

The Stanford Technology Law Review just published an interesting new copyright article, An Empirical Study of Transformative Use in Copyright Law by Prof. Jiarui Liu. Here is the abstract:
This article presents an empirical study based on all reported transformative use decisions in U.S. copyright history through January 1, 2017. Since Judge Leval coined the doctrine of transformative use in 1990, it has been gradually approaching total dominance in fair use jurisprudence, involved in 90% of all fair use decisions in recent years. More importantly, of all the dispositive decisions that upheld transformative use, 94% eventually led to a finding of fair use. The controlling effect is nowhere more evident than in the context of the four-factor test: A finding of transformative use overrides findings of commercial purpose and bad faith under factor one, renders irrelevant the issue of whether the original work is unpublished or creative under factor two, stretches the extent of copying permitted under factor three towards 100% verbatim reproduction, and precludes the evidence on damage to the primary or derivative market under factor four even though there exists a well-functioning market for the use.
Although transformative use has harmonized fair use rhetoric, it falls short of streamlining fair use practice or increasing its predictability. Courts diverge widely on the meaning of transformative use. They have upheld the doctrine in favor of defendants upon a finding of physical transformation, purposive transformation, or neither. Transformative use is also prone to the problem of the slippery slope: courts start conservatively on uncontroversial cases and then extend the doctrine bit by bit to fact patterns increasingly remote from the original context.
This article, albeit being descriptive in nature, does have a normative connotation. Courts welcome transformative use not despite, but due to, its ambiguity, which is a flexible way to implement their intuitive judgments yet maintain the impression of stare decisis. However, the rhetorical harmony conceals the differences between a wide variety of policy concerns in dissimilar cases, invites casual references to precedents from factually unrelated contexts, and substitutes a mechanical exercise of physical or purposive transformation for an in-depth policy analysis that may provide clearer guidance for future cases.
This article builds on and extends prior empirical work in this area, such as Barton Beebe's study of fair use decisions from 1978 to 2005. And it provides a nice mix of interesting new empirical results and normative analysis that illustrates why fair use doctrine is (at least for me) quite challenging to teach. For example, Figure 1 of the article illustrates how transformative use has cannibalized fair use doctrine since the 1994 Campbell v. Acuff-Rose decision endorsed its use.


Liu also examines data such as the win rate for transformative use over time, by circuit, and by subject matter. But I particularly like that Liu is not just counting cases, but also arguing that courts are using this doctrine as a substitute for in-depth policy analysis.

Friday, February 22, 2019

Does Administrative Patent Law Promote Innovation About Innovation?

I am at Texas Law today for a symposium on The Intersection of Administrative & IP Law, and my panel was asked to address the question: Does Administrative Patent Law Promote Innovation? I focused my remarks on a specific aspect of this: Does Administrative Patent Law Promote Innovation About Innovation? I think the short answer, at least right now, is "no."

There is a lot we don't know about the patent system. USPTO Regional Director Hope Shimabuku started her remarks today by saying that we know IP creates nearly 30 million jobs and adds $6.6 trillion to the U.S. economy each year, citing this USPTO report. But that's not what the report says. It looks at jobs and value from "IP-intensive industries," defined as ones whose "IP-count to employment ratio is higher than the average for all industries considered." As the report acknowledges, it is unable to determine how much of these firms' performance is attributable to IP.

And the real answer is: we don't know. In an article I reviewed for Jotwell, economist Heidi Williams recently summarized: "we still have essentially no credible empirical evidence on the seemingly simple question of whether stronger patent rights—either longer patent terms or broader patent rights—encourage research investments." And even on smaller questions, the existing evidence base is weak.

As I explained in Patent Experimentalism, to make empirical progress we need some source of empirical variation. Economists often look for "natural experiments" with variation across time, across jurisdictions, or across similar technologies, and the closer that variation is to random, the easier it is to draw causal inferences. Of course, it's even better to have variation that is actually random, which is why I have joined other scholars in arguing for more use of randomized policy experiments.

The USPTO has a huge opportunity here to both improve the patent system and help address the key administrative law challenge of encouraging accurate and consistent decisions by a decentralized bureaucracy. There are many questions the agency could help answer using more randomization, as I discuss in Patent Experimentalism. During the panel today, I noted two potential areas: experimenting with the time spent examining a given patent (see this great forthcoming article by Michael Frakes and Melissa Wasserman) and with the possibility that examiner bias affects the gender gap in patenting (which fits within the agency's recent mandate from Congress). I noted ways that each could be designed as opt-in programs to encourage buy-in from applicants and from examiners.
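Here is a hypothetical sketch of the randomization step in such an opt-in pilot - all names and identifiers are invented, and the design is much simplified:

# Hypothetical opt-in pilot: randomly assign participating applications
# to an extra-examination-time arm or a standard arm, so outcomes (grant
# rates, amendments, later litigation) can be compared across arms.
import random

random.seed(42)  # reproducible assignment for the sketch

applications = ["app-001", "app-002", "app-003", "app-004"]  # opt-in pool
arms = ["extra-examination-time", "standard-time"]

assignment = {app: random.choice(arms) for app in applications}
print(assignment)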

But my main point was not that the USPTO should adopt one of these particular experiments—it was that the agency should study something in a way that allows us to draw rigorous inferences. Failing to do so seems like a tremendous missed opportunity.

Tuesday, February 19, 2019

Using Insurance to Deter Lawsuits

The conventional wisdom (my anecdotal experience, anyway) is that the availability of insurance fuels lawsuits. People who otherwise might not sue would use litigation to access insurance funds. I'm sure there's a literature on this. But most insurance covers both defense and indemnity - that is, litigation costs and settlements. But what if the insurance covered the defense and not any settlement costs? Would that serve as a disincentive to bring suit? It surely would change the litigation dynamic.
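A back-of-the-envelope sketch of that intuition, with numbers I made up rather than anything from the paper discussed below:

# Why defense-only coverage could deter nuisance suits (invented numbers).
defense_cost = 500_000   # defendant's expected cost to litigate fully
suit_cost = 50_000       # plaintiff's cost to file and press the case
win_prob = 0.05          # plaintiff's chance of winning on the merits
judgment = 1_000_000     # expected judgment if the plaintiff wins

# Uninsured: a rational defendant pays up to its defense cost to settle,
# so even a weak claim can extract roughly that amount.
settlement_uninsured = defense_cost

# Insured (defense covered, settlement not): litigating is nearly free to
# the defendant, so it won't pay more than the expected judgment.
settlement_insured = win_prob * judgment

for label, s in [("uninsured", settlement_uninsured),
                 ("insured", settlement_insured)]:
    verdict = "worth suing" if s > suit_cost else "marginal/no suit"
    print(f"{label}: expected settlement {s:,.0f} vs. "
          f"suit cost {suit_cost:,} -> {verdict}")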

In The Effect of Patent Litigation Insurance: Theory and Evidence from NPEs, Bernhard Ganglmair (University of Mannheim - Economics), Christian Helmers (Santa Clara - Economics), Brian J. Love (Santa Clara - Law) explore this question with respect to NPE patent litigation insurance. The draft is on SSRN, and the abstract is here:
We analyze the extent to which private defensive litigation insurance deters patent assertion by non-practicing entities (NPEs). We do so by studying the effect that a patent-specific defensive insurance product, offered by a leading litigation insurer, had on the litigation behavior of insured patents’ owners, all of which are NPEs. We first model the impact of defensive litigation insurance on the behavior of patent enforcers and accused infringers. Assuming that a firm’s purchase of insurance is not observed by patent enforcers, we show that the mere availability of defense litigation insurance can have an effect on how often patent enforcers will assert their patents. Next, we empirically evaluate the insurance policy’s effect on the behavior of owners of insured patents by comparing their subsequent assertion of insured patents with their subsequent assertion of their other patents not included in the policy. We additionally compare the assertion of insured patents with patents held by other NPEs with portfolios that were entirely excluded from the insurance product. Our findings suggest that the introduction of this insurance policy had a large, negative effect on the likelihood that a patent included in the policy was subsequently asserted, and our results are robust across different control groups. Our findings also have importance for ongoing debates on the need to reform the U.S. and European patent systems, and suggest that market-based mechanisms can deter so-called “patent trolling.”
On reading the abstract, I was skeptical. After all, there are a bunch of reasons why more firms would defend against NPEs, why NPEs would be less likely to assert, and so forth. But the interesting dynamics of the patent litigation insurance market have me more convinced. Apparently, the insurance didn't cover any old lawsuit; instead, only specific patents were covered. So, the authors were able to look at the differences between firms asserting covered patents, firms that held both covered and non-covered patents, and firms that had no covered patents. Because each of these firms should be equally affected by background law changes, the differences should be limited to the role of insurance.

And that's what they find, unsurprisingly. Assertions of insured patents went down as compared to uninsured patents, and those cases were less likely to settle -- even with the same plaintiff. My one concern about this finding is that patents targeted for insurance may have been weaker in the first place (hence the willingness to insure), and thus there is self-selection. The paper presents some data on the different patents in order to quell this concern, but if there is a methodological challenge, it is here.

This is a longish paper for an empirical one, in part because the authors develop a complex game theory model of insurance purchasing, patent assertion, and patent defense. It is interesting and worth a read.

Sunday, February 17, 2019

Foreign Meaning Matters: Brauneis and Moerland on Trademark's Doctrine of Foreign Equivalents

I was enjoying some siggi's® yogurt, and noticed, just below the trademark name siggi's®, an interesting piece of trivia: "skyr, that's Icelandic for thick yogurt!" You learn something new every day.

Robert Brauneis and Anke Moerland's recent article argues that it would not be good policy to allow the company that distributes siggi's ® yogurt to trademark the name SKYR for yogurt in the United States, even though most people in the United States do not currently know what the word "skyr" means. In short, they argue that when reviewing trademarks for purposes of distinctiveness, the U.S. Patent & Trademark Office (USPTO) and the courts should translate foreign terms that are generic or merely descriptive in their home country, because allowing such marks would cause unexpected harms for competition.

This is a fascinating paper that warrants serious thinking, and perhaps re-thinking, of how trademark law currently treats foreign terms.

Tuesday, February 12, 2019

IP and the Right to Repair

I ran across an interesting article last week that I thought I would share. It's called Intellectual Property Law and the Right to Repair, by Leah Chan Grinvald (Suffolk Law) and Ofer Tur-Sinai (Ono Academic College). A draft is on SSRN and the abstract is here:
In recent years, there has been a growing push in different U.S. states towards legislation that would provide consumers with a “right to repair” their products. Currently 18 states have pending legislation that would require product manufacturers to make available replacement parts and repair manuals. This grassroots movement has been triggered by a combination of related factors. One such factor is the ubiquity of microchips and software in an increasing number of consumer products, from smartphones to cars, which makes the repair of such products more complicated and dependent upon the availability of information supplied by the manufacturers. Another factor is the unscrupulous practices of large, multinational corporations designed to force consumers to repair their products only through their own offered services, and ultimately, to manipulate consumers into buying newer products instead of repairing them. These factors have rallied repair shops, e-recyclers, and other do-it-yourselfers to push forward, demanding a right to repair.
Unfortunately, though, this legislation has stalled in many of the states. Manufacturers have been lobbying the legislatures to stop the enactment of the right to repair laws based on different concerns, including how these laws may impinge on their intellectual property rights. Indeed, a right to repair may not be easily reconcilable with the United States’ far-reaching intellectual property rights regime. For example, requiring manufacturers to release repair manuals could implicate a whole host of intellectual property laws, including trade secret. Similarly, employing measures undercutting a manufacturer's control of the market for replacement parts might conflict with patent exclusivity. Nonetheless, this Article’s thesis holds that intellectual property laws should not be used to inhibit the right to repair from being fully implemented.
In support of this claim, this Article develops a theoretical framework that enables justifying the right to repair in a manner that is consistent with intellectual property protection. In short, the analysis demonstrates that a right to repair can be justified by the very same rationales that have been used traditionally to justify intellectual property rights. Based on this theoretical foundation, this Article then explores, for the first time, the various intellectual property rules and doctrines that may be implicated in the context of the current repair movement. As part of this overview, this Article identifies those areas where intellectual property rights could prevent repair laws from being fully realized, even if some of the states pass the legislation, and recommends certain reforms that are necessary to accommodate the need for a right to repair and enable it to take hold.
I thought this was an interesting and provocative paper, even if I am skeptical of the central thesis. I should note that the first half of the paper or so makes the normative case, and the authors do a good job of laying out the case.

Many of the topics are those you see in the news, like how laws that forbid breaking DRM stop others from repairing their stuff (which now all has a computer) or how patent law can make it difficult to make patented repair parts.

The treatment of trade secrets, in particular, was a useful addition to the literature. As I wrote on the economics of trade secrets many years ago, my view is that trade secrecy doesn't serve as an independent driver of innovation because people will keep their information secret anyway. Thus, any innovation effects are secondary, in the sense that savings made from not having to protect secrets so carefully can be channeled to R&D. But there was always a big caveat: this assumes that firms can "keep their information secret anyway," and that there's no forced disclosure rule.

So, when this article's hypothesized right to repair extended to disclosure of manuals, schematics, and other information necessary to repair, it caught my eye. On the one hand, as someone who has been frustrated by lack of manuals and reverse engineered repair of certain things, I love it. On the other hand, I wonder how requiring disclosure of such information would change the incentive dynamics. With respect to schematics, companies would probably continue to create them, but perhaps they might make a second, less detailed schematic. Or, maybe nothing would happen because that information is required anyway. But with respect to manuals, I wonder whether companies would lose the incentive to keep detailed records of customer service incidents if they could not profit from it. Keeping such records is costly, and if repairs are charged to customers, it might be better to reinvent the wheel every time than to pay to maintain an information system that others will use. I doubt it, though, as there is still value in having others repair your goods, and if people can repair their own, then the market becomes even more competitive.

While the paper discusses the effect on the incentive to innovate with respect to other forms of IP, it does not do so for trade secrets.

With respect to other IP, the paper seems to take two primary positions on the effect of immunizing IP infringement for repair. The first is that the right to repair can also promote the progress, and thus it should be considered as part of the entire system. While I agree with the premise from a utilitarian point of view, I was not terribly convinced that the right to repair would somehow create incentives for more development that would outweigh initial design IP rights. It might, of course, but there's not a lot of nuanced argument (or evidence) in either direction.

The second position is that loosening IP rights will not weaken "core" incentives to develop the product in the first place, because manufacturers will still want to make the best/most innovative products possible. I think this argument is incomplete in two ways. Primarily, it assumes that manufacturers are monolithic. But the reality is that multiple companies design parts, and their incentive to do so (and frankly their ability to stay in business) may well depend on the ability to protect designs/copyright/etc. At the very least, it will affect pricing. For example, if a company charged for manuals, it may be because it had to pay a third party for each copy distributed. Knowing that such fees are not going to be paid, the original manual author will charge more up front, increasing the price of the product (indeed, the paper seems to assume very little effect on original prices to make up for lost repair revenue). Secondarily, downstream repairs may drive innovation in component parts. For example, how repairs are done might cause manufacturers to not improve parts for easy repair. The paper doesn't seem to grapple with this nuance.

This was an interesting paper, and worth a read. It's a long article - the authors worked hard to cover a large number of bases, and it certainly made me think harder about the right to repair.

Wednesday, February 6, 2019

Using Antitrust to Fix Broken Markets

The prescription drug market is a real mess, in my view. At the very least, it is complicated, and prices for drugs seem to be higher in the U.S. than elsewhere (though teasing that out is hard, since very few pay sticker price and insurance companies negotiate deals). But I don't know what the solution is, and I'm not convinced anyone else does, either. The recent article that triggers my thoughts on this is: A Non-Coercive Approach to Product Hopping, 33 Antitrust 102 (2018), by Michael Carrier (Rutgers) and Steve Shadowen (Hilliard & Shadowen LLP). Their paper is on SSRN, and the abstract is here:
The antitrust analysis of product hopping is nuanced. The conduct, which consists of a drug company’s reformulation of its product and encouragement of doctors to switch prescriptions to the reformulated product, sits at the intersection of antitrust law, patent law, the Hatch-Waxman Act, and state substitution laws, and involves uniquely complicated markets with different buyers (insurance companies, patients) and decision-makers (physicians).
In Doryx, Namenda, and Coercion, Jack E. Pace III and Kevin C. Adam applaud some courts’ use of a product-hopping analysis that finds liability only where there is an element of coercion. In this response, we explain that the unique characteristics of pharmaceutical markets render such a coercion-based approach misguided. We also show that excessively deferential analyses would give brand-name drug firms free rein to evade generic-promoting regulatory regimes. Finally, we offer a conservative framework for analyzing product hopping rooted in the economics and realities of the pharmaceutical industry.
This very brief response essay does a good job of highlighting some of the difficult nuances associated with product hopping (something I'll describe more in a minute) in the prescription drug market. I don't necessarily agree with it - I actually disagree with the final proposal. I share this here because it prompted me to think more closely about an issue I had mostly ignored, and I respect Mike Carrier and think that he does about as good a job at presenting this particular viewpoint as anyone.

Still, I'm troubled by the use of the blunt weapon of antitrust against product hopping, even as I have misgivings about other areas of this market. Product hopping occurs when a name brand (read, patented) drug is pulled from the market in favor of a "new and improved" product (that is also patented). The concern is that the removal of the old brand from the market just as generics are arriving will fail to trigger mandatory generic substitution rules, because doctors can't prescribe the old name brand. Instead, the fear is that doctors will only fill prescriptions for the new, improved (though the parties debate the improved point) drug, for which there is no generic to substitute. Further, they won't ever prescribe the generic once the name brand is off the market.

Here's what gives me a bit of trouble about this. While I am a big fan of automatic generic substitution laws, I am skeptical that they should be used as innovation policy, to the point where the inability of a generic to take advantage of it creates an antitrust injury and also forces a company to make a product that it may or may not want to make for profit or other reasons. In other words, a system that requires a company to make a product so that other companies can sell a substitute product that sellers are required by law to sell is a broken system indeed.

The question is, where is the system broken? Some will, no doubt say that it is the patent system and exclusivity. Surely that plays a role. But I'd like to point to three other points of market failure that drug makers point to. None of these are really new, but I'm taking the blogger's prerogative to talk about them.

1. I think it is a huge assumption that doctors will only prescribe the new drug and won't prescribe the generic once it hits the market. The argument in the article above is that because doctors don't have to pay for it, they have no incentive to cost minimize. While the incentive is certainly diminished in theory, this is not my experience with (many) doctors at all. I have had many doctors prescribe me (or my wife) "older" generic versions of drugs because they were cheaper. One funny thing is that oftentimes those older versions were awful - like a drug my wife had to take as an awful-tasting liquid because the pill version cost 3 or 4 times as much and wasn't covered by the (generally good) insurer, or a blood pressure medication I had to take because it was the standard - until I could show that I had a rare side effect.

2. I think insurance companies can regulate much of this through formularies. If you make the new drug non-formulary (or even brand cost), people will migrate to the cheaper generic substitute, even if there is no prescription to trigger automatic substitution. Even if doctors don't pay, insurers sure do, and they have every incentive to make sure that generic substitutes are used if the new drug is not really an improvement.

3. I bristle a bit at the notion that generic companies must rely on substitution laws to get doctors to prescribe. I realize that's how it is done now, and while substitution laws are a good thing, there is nothing stopping generics from telling doctors why the new-fangled drug is no better than the one that was just removed from the market for the same money. We make every other industry do this. I suspect that litigation is cheaper than advertising, and while I am happy to give the industry a leg up, I am wary of giving it a pass on basic business requirements, and I am wary of taking something that's a regulatory windfall to generics (even if it is good for consumers) and making it an affirmative innovation policy.

4. Taking this last point further, from an innovation standpoint, is there any reason why generics can't innovate their own product hopping drug? Knowing that a deadline is coming, why can't they innovate (or license) their own extended relief version in addition? Indeed, I take an extended release version of a generic drug. It costs a fortune (before insurance), and the generic is benefiting from having created/licensed it. I suppose a more salient example is Mylan's Epi-Pen, which uses a protected delivery system for a generic drug. While most people are not thrilled with Mylan's pricing strategy, it illustrates the basic point I'm trying to make: generic manufacturers are not helpless victims of the system stacked against them, they are active, rent-seeking participants willing to exploit systematic flaws in their favor.


Finally, I will say that all of my arguments are empirically testable, as are the arguments on the other side. If folks can point to studies that have demonstrated actual physician, consumer, and insurer behavior in product hopping cases, I will be happy to post here and assess!

Sunday, February 3, 2019

AOC on Pharma & Public Funding

Congresswoman Alexandria Ocasio-Cortez has already gotten Americans to start teaching each other about marginal taxation, and now she has started a dialog about the role of public funding in public sector research:
In these short videos (which email subscribers to this blog need to click through to see), Ocasio-Cortez and Ro Khanna are seen asking questions during a Jan. 29 House Oversight and Reform Committee hearing, "Examining the Actions of Drug Companies in Raising Prescription Drug Prices." So far, @AOC's three tweets about this issue have generated over 7,000 comments, 58,000 retweets, and 190,000 likes.

Privatization of publicly funded research through patents is one of my main areas of research, so I love to see it in the spotlight. There are enough concerns with the current system that the government should be paying attention. But as I explain below, condensing Ocasio-Cortez and Khanna's questions into a headline like "The Public, Not Pharma, Funds Drug Research" is misleading. Highlighting the role of public R&D funding is important, but I hope this attention will spur more people to learn about how that public funding interacts with private funding, and why improving the drug development ecosystem involves a lot of difficult and uncertain policy questions. This post attempts to explain some key points that I hope will be part of this conversation.

Tuesday, January 29, 2019

It's Hard Out There for a Commons

I just finished reading a fascinating draft article about the Eco-Patent Commons, a commons where about 13 companies put in patents covering a little fewer than 100 inventions that could be used by any third party. A commons differs from cross-licensing or other pools in a couple of important ways. First, the owner must still maintain the patent (OK, that's common to licensing, but different from the public domain). Second, anyone, not just members of the commons, can use the patents (which is common to the public domain, but different from licensing).

The hope for the commons was that it would aid in diffusion of green patents, but it was not to be. The draft by Jorge Contreras (Utah Law), Bronwyn Hall (Berkeley Econ), and Christian Helmers (Santa Clara Econ) is called Green Technology Diffusion: A Post-Mortem Analysis of the Eco-Patent Commons. A draft is on SSRN. Here is the abstract:
We revisit the effect of the “Eco-Patent Commons” (EcoPC) on the diffusion of patented environmentally friendly technologies following its discontinuation in 2016, using both participant survey and data analytic evidence. Established in January 2008 by several large multinational companies, the not-for-profit initiative provided royalty-free access to 248 patents covering 94 “green” inventions. Hall and Helmers (2013) suggested that the patents pledged to the commons had the potential to encourage the diffusion of valuable environmentally friendly technologies. Our updated results now show that the commons did not increase the diffusion of pledged inventions, and that the EcoPC suffered from several structural and organizational issues. Our findings have implications for the effectiveness of patent commons in enabling the diffusion of patented technologies more broadly.
The findings were pretty bleak. In short, the patents were cited less than a set of matching patents, and many of them were allowed to lapse (which implies lack of value). Their survey-type data also showed a lack of importance/diffusion.

What I really love about this paper, though, is that there's an interpretation for everybody in it. For the "we need strong rights" group, this failure is evidence of the tragedy of the commons. If nobody has the right to fully profit on the inventions, then nobody will do so, and the commons will go fallow.

But for the "we don't need strong rights" group, this failure is evidence that the supposedly important patents were weak, and that it was better to essentially make these public domain than to have after the fact lawsuits.

For the "patents are useless" group, this failure shows that nobody reads patents anyway, and so they fail in their essential purpose: providing information as a quid pro quo for exclusivity.

And for the middle ground folks, you have the conclusions in the study. Maybe some commons can work, but you have to be careful about how you set them up, and this one had procedural and substantive failings that doomed the patents to go unused.

I don't know the answer, but I think case studies like this are helpful for better understanding how patents do and do not disseminate information, as well as learning how to better structure patent pools.

Tuesday, January 22, 2019

The Name's the Thing

Much to my chagrin, my kids like to waste their time not just playing video games, but also watching videos of others playing video games. This is a big business. Apparently the top Fortnite streamer made some $10 million last year. Whaaaaat? But these services aren't interchangeable. The person doing the streaming is important to the viewer.

But what if two streamers have the same name, say Fred, or Joan, or...Kardashian. Should we allow someone to lock others with the same name out? Under what circumstances? And what if the "service" is simply being famous - for endorsements, etc.?

Bill McGeveran (Minnesota) has posted an article that discusses these issues called Selfmarks, now published in the Houston Law Review. It is on SSRN, and the abstract is here:
“Selfmarks” are branded personal identifiers that can be protected as trademarks. From Kim Kardashian West to BeyoncĂ©’s daughter, attempts to propertize persona through trademark protection are on the rise. But should they be? The holder of a selfmark may use it to send a signal about products, just like the routine types of brand extension, cross-branding, and merchandising arrangements fully embraced under modern trademark law. Yet traditional trademark doctrine has adjusted to selfmarks slowly and unevenly. Instead, the law has evolved to protect selfmarks through mechanisms other than trademarks. In an age where brands have personalities and people nurture their individual brands, it is time to ask what principled reasons we have not to protect the individual persona as a trademark.
I liked this article a lot--especially its straightforward approach. It looks at these marks through the lens of trademark law (as it should), considering use (that is, what goods and services) and distinctiveness. In doing so, it provides several useful hypotheticals that illustrate the problems of using names as trademarks. The paper also considers Lanham Act sections that specifically deal with names.

Finally, the paper discusses a couple of concerns. First is endorsement confusion. As the "service" of a celebrity becomes endorsement, then everything the celebrity does is potentially an endorsement, even though that may not be the intention. McGeveran discusses this concern. Second is the ever-present speech concern. If names are protected as marks, then it is harder to use that name in speech.

This article is a really good primer on names as marks. I think a good extension for the next one would be a topic that a student of mine wrote about last year: joint marks. That is, when multiple people have the same name - together even - then the mark can cease being distinctive of their individual goods. My student did a great case study of the Kardashian marks, showing that several of them may well be invalid, but I think this could be extended to a longer theoretical piece if it hasn't been done already.

Tuesday, January 15, 2019

The Copyright Law of Interfaces

Winter break has ended and so, too, has my brief blogging break. I've blogged before (many times) about the ongoing Oracle v. Google case. My opinion has been and continues to be that nobody is getting the law exactly right here, to the point where I may draft my own amicus brief supporting grant of certiorari. But to the extent I do agree with one of the sides, it is the side that says API (Application Programming Interface) developers must be allowed to reuse the command and parameter structure of the original API without infringing copyright. My disagreement is merely with the way you get there. Some believe that APIs are not copyrightable at all. I've blogged before that I'm not so sure about this. Some believe that this should be fair use. I think this is probably true but the factors don't cleanly line up. My view is that this should be handled on the infringement side: that APIs, even if copyrightable, are not infringing when used in a particular way (that is, they are filtered out of an infringement analysis). It's the same result, but (for me, at least) much cleaner theoretically and doctrinally.

But make no mistake, this sort of reuse is critically important, as Charles Duan (R Street Institute) points out in his latest draft: Internet of Infringing Things: The Effect of Computer Interface Copyrights on Technology Standards (forthcoming in Rutgers Computer and Technology Law Journal). The draft is on SSRN and an abstract is here:
This article aims to explain how copyright in computer interfaces implicates the operation of common technologies. An interface, as used in industry and in this article, is a means by which a computer system communicates with other entities, either human programmers or other computers, to transmit information and receive instructions. Accordingly, if it is copyright infringement to implement an interface (a technical term referring to using the interface in its expected manner), then common technologies such as Wi-Fi, web pages, email, USB, and digital TV all infringe copyright.
By reviewing the intellectual property practices of the standard-setting organizations that devise and promulgate standards for these and other communications technologies, the article demonstrates that, at least in the eyes of standard-setting organizations and by extension in the eyes of technology industry members, implementation of computer interfaces is not an infringement of copyright. It concludes that courts should act consistent with these industry expectations rather than upending those expectations and leaving the copyright infringement status of all sorts of modern technologies in limbo.
As noted, I agree with the end result, so any critique here should be taken as one of the paper, and not of the final position. I think Duan does a very nice job of explaining what an interface is: namely, the set of commands that third-party programmers send to a server/system to make it operate. There is value in standardization of these interfaces - it allows people to write one program that will work with multiple systems. Duan uses two good examples. The first is HTML/CSS programming, which allows people to write a single web document and have it run in any browser and/or server that supports the same language. The second is SMTP, which allows email clients to communicate with any email server. The internet was built on these sorts of interfaces, specified in documents called RFCs.
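To illustrate the point about one program working with multiple systems, here is a minimal Python sketch using the standard library's SMTP client - the hostnames and addresses are placeholders, not real servers:

# Because every compliant mail server implements the same SMTP commands,
# the same client code works against any of them.
import smtplib

MESSAGE = "Subject: hello\r\n\r\nSame client code, any compliant server."

for host in ["smtp.example.com", "mail.example.org"]:   # placeholder hosts
    with smtplib.SMTP(host) as server:   # handshake (EHLO/HELO) handled for us
        server.sendmail("alice@example.org",   # issues MAIL FROM:
                        ["bob@example.com"],   # issues RCPT TO:
                        MESSAGE)               # issues DATA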

Duan then does a nice job of showing the creativity that goes into selecting the commands - as with Java, there were choices (though limited) to make about each command. Because the set of functions is limited, the number of ways to describe the function is limited, but there are some choices to be made. The article then shows how those commands are grouped together in functional ways.

Finally, Duan nicely shows how many important standards are out there that follow this same pattern, and shows how standards organizations handle any copyright--they don't. In short, allowing contributors to claim copyright ownership would destroy systems, because there is no requirement that contributors allow others to use the interface. Duan's concern is that if individual authors owned the IP in their interface contributions to standards (a potential extension of Oracle v. Google) then holdup might occur that harms adoption. This, of course, is hotly debated, as it is in the patent area.

I think it's a really interesting and well-written paper. Before I get to a couple critiques, I should note that Duan is focused more on how the current legal rulings might affect standards than critiquing the rulings themselves (as I have done here). Thus, my comments here may simply not have been on his radar.

My primary thought reading this is that the paper doesn't deal with the declaring code. That is, in order to implement the Java commands, Google created short code blocks that defined the functions, the parameters, etc.  Here is an example from the original district court opinion:

package java.lang;
public class Math {
    public static int max (int x, int y) {

This code is what the jury found to be copied (though presumably Google wrote it in some other language). But the standards interfaces don't provide any code, per se. They only provide explanations. Here is an example definition from the RFC for the SMTP protocol discussed in the paper:
mail = "MAIL FROM:" Reverse-path
In other words, standards will define the commands that must be sent, but there's no language-based implementation (e.g., public, static, integer, etc.), as with the sample line above. Most say: send x command to do y. And people writing software are on their own to figure out how to do that. And you can bet the implementing code looks very similar, but there's something different about how it is specified at the outset (a full header declaration v. a looser description). So, the questions this raises are a) does this make standards less likely to infringe, even under the Federal Circuit's rules (I think yes), and b) does this change how we think about declaring code? (I think no, because the code is still minimal and functional, but Oracle presumably disagrees).
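To make the contrast concrete, here is a sketch of the two styles of specification - the function name, signature, and reply string below are my own hypothetical choices, exactly the sort of thing an RFC leaves to the implementer:

# Declaring-code style (the Java approach): the exact header is part of the
# API, so a compatible implementation reproduces it verbatim:
#     public static int max (int x, int y) { ... }

# Standards/RFC style: the spec says to accept "MAIL FROM:" followed by a
# reverse-path; the declaration below is the implementer's own choice of
# name, parameters, and types, not dictated by the standard.
def handle_mail_from(reverse_path: str) -> str:
    """Record the envelope sender and acknowledge the MAIL command."""
    # ... each server author implements this however they like ...
    return "250 OK"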

Secondarily, I don't think the article considers the differences between Oracle's position (now - it changed, which is one of the problems) and that of a contribution to standards. Contribution to a standard is made so that others will adopt it, presumably because it gives you a competitive advantage of some sort. By not being part of the standard, you risk having a fragmented (smaller) set of users. But if Oracle doesn't want others adopting the Java language and would rather be limited, then that makes the analogy inapt. If Google had known this was not allowed and gone another way, it may well be that Java is dead today (figure that into damages calculations). But a fear of companies submitting to standards and then taking them back is to me different in kind from companies that never want to be part of the standard. (Of course, as noted above, there is some dispute about this, as Sun apparently did act as if they wanted this language to be an open standard.)

A final point: two sentences in the article caught my eye, because they support my view of the world (confirmation bias, of course). When speaking of standard-setting organization policies, Duan writes: "To the extent that a copyright license is sought from contributors to standards, the license is solely directed to distributing the text of the standard. This suggests that copyright is simply not an issue with regard to implementing interfaces." Roughly interpreted, this means that these organizations think that maybe you can copyright your API, but that copyright only applies to slavish copying of the entire textual document. But when it comes to reuse of the technical requirements of the standard, we filter out the functionality and allow the reuse. This has always been my position, but nobody has argued it in this case.