Tuesday, February 12, 2019

IP and the Right to Repair

I ran across an interesting article last week that I thought I would share. It's called Intellectual Property Law and the Right to Repair, by Leah Chan Grinvald (Suffolk Law) and Ofer Tur-Sinai (Ono Academic College). A draft is on SSRN and the abstract is here:
In recent years, there has been a growing push in different U.S. states towards legislation that would provide consumers with a “right to repair” their products. Currently 18 states have pending legislation that would require product manufacturers to make available replacement parts and repair manuals. This grassroots movement has been triggered by a combination of related factors. One such factor is the ubiquity of microchips and software in an increasing number of consumer products, from smartphones to cars, which makes the repair of such products more complicated and dependent upon the availability of information supplied by the manufacturers. Another factor is the unscrupulous practices of large, multinational corporations designed to force consumers to repair their products only through their own offered services, and ultimately, to manipulate consumers into buying newer products instead of repairing them. These factors have rallied repair shops, e-recyclers, and other do-it-yourselfers to push forward, demanding a right to repair.
Unfortunately, though, this legislation has stalled in many of the states. Manufacturers have been lobbying the legislatures to stop the enactment of the right to repair laws based on different concerns, including how these laws may impinge on their intellectual property rights. Indeed, a right to repair may not be easily reconcilable with the United States’ far-reaching intellectual property rights regime. For example, requiring manufacturers to release repair manuals could implicate a whole host of intellectual property laws, including trade secret. Similarly, employing measures undercutting a manufacturer's control of the market for replacement parts might conflict with patent exclusivity. Nonetheless, this Article’s thesis holds that intellectual property laws should not be used to inhibit the right to repair from being fully implemented.
In support of this claim, this Article develops a theoretical framework that enables justifying the right to repair in a manner that is consistent with intellectual property protection. In short, the analysis demonstrates that a right to repair can be justified by the very same rationales that have been used traditionally to justify intellectual property rights. Based on this theoretical foundation, this Article then explores, for the first time, the various intellectual property rules and doctrines that may be implicated in the context of the current repair movement. As part of this overview, this Article identifies those areas where intellectual property rights could prevent repair laws from being fully realized, even if some of the states pass the legislation, and recommends certain reforms that are necessary to accommodate the need for a right to repair and enable it to take hold.
I thought this was an interesting and provocative paper, even if I am skeptical of the central thesis. I should note that roughly the first half of the paper makes the normative case for a right to repair, and the authors lay that case out well.

Many of the topics are those you see in the news, like how laws that forbid breaking DRM stop others from repairing their stuff (which now all has a computer) or how patent law can make it difficult to make patented repair parts.

The treatment of trade secrets, in particular, was a useful addition to the literature. As I wrote about the economics of trade secrecy many years ago, my view is that trade secrecy doesn't serve as an independent driver of innovation because people will keep their information secret anyway. Thus, any innovation effects are secondary, in the sense that savings from not having to protect secrets so carefully can be channeled to R&D. But there was always a big caveat: this assumes that firms can "keep their information secret anyway," and that there's no forced disclosure rule.

So, when this article's hypothesized right to repair extended to disclosure of manuals, schematics, and other information necessary to repair, it caught my eye. On the one hand, as someone who has been frustrated by the lack of manuals and by reverse-engineered repair of certain things, I love it. On the other hand, I wonder how requiring disclosure of such information would change the incentive dynamics. With respect to schematics, companies would probably continue to create them, but perhaps they might make a second, less detailed schematic. Or maybe nothing would happen, because that information is required anyway. But with respect to manuals, I wonder whether companies would lose the incentive to keep detailed records of customer service incidents if they could not profit from them. Keeping such records is costly, and if repairs are charged to customers, it might be better to reinvent the wheel every time than to pay to maintain an information system that others will use. I doubt it, though, as there is still value in having others repair your goods, and if people can repair their own, then the market becomes even more competitive.

While the paper discusses the effect on the incentive to innovate with respect to other forms of IP, it does not do so for trade secrets.

With respect to other IP, the paper seems to take two primary positions on the effect of immunizing IP infringement for repair. The first is that the right to repair can also promote the progress, and thus it should be considered as part of the entire system. While I agree with the premise from a utilitarian point of view, I was not terribly convinced that the right to repair would somehow create incentives for more development that would outweigh initial design IP rights. It might, of course, but there's not a lot of nuanced argument (or evidence) in either direction.

The second position is that loosening IP rights will not weaken "core" incentives to develop the product in the first place, because manufacturers will still want to make the best/most innovative products possible. I think this argument is incomplete in two ways. Primarily, it assumes that manufacturers are monolithic. But the reality is that multiple companies design parts, and their incentive to do so (and frankly their ability to stay in business) may well depend on the ability to protect designs/copyrights/etc. At the very least, it will affect pricing. For example, if a company charged for manuals, it may be because it had to pay a third party for each copy distributed. Knowing that such fees are not going to be paid, the original manual author will charge more up front, increasing the price of the product (indeed, the paper seems to assume very little effect on original prices to make up for lost repair revenue). Secondarily, downstream repairs may drive innovation in component parts. For example, how repairs are done might affect whether manufacturers improve parts for easy repair. The paper doesn't seem to grapple with this nuance.

This was an interesting paper, and worth a read. It's a long article - the authors worked hard to cover a large number of bases, and it certainly made me think harder about the right to repair.

Wednesday, February 6, 2019

Using Antitrust to Fix Broken Markets

The prescription drug market is a real mess, in my view. At the very least, it is complicated, and prices for drugs seem to be higher in the U.S. than elsewhere (though teasing that out is hard, since very few pay sticker price and insurance companies negotiate deals). But I don't know what the solution is, and I'm not convinced anyone else does, either. The recent article that triggers my thoughts on this is: A Non-Coercive Approach to Product Hopping, 33 Antitrust 102 (2018), by Michael Carrier (Rutgers) and Steve Shadowen (Hilliard & Shadowen LLP). Their paper is on SSRN, and the abstract is here:
The antitrust analysis of product hopping is nuanced. The conduct, which consists of a drug company’s reformulation of its product and encouragement of doctors to switch prescriptions to the reformulated product, sits at the intersection of antitrust law, patent law, the Hatch-Waxman Act, and state substitution laws, and involves uniquely complicated markets with different buyers (insurance companies, patients) and decision-makers (physicians).
In Doryx, Namenda, and Coercion, Jack E. Pace III and Kevin C. Adam applaud some courts’ use of a product-hopping analysis that finds liability only where there is an element of coercion. In this response, we explain that the unique characteristics of pharmaceutical markets render such a coercion-based approach misguided. We also show that excessively deferential analyses would give brand-name drug firms free rein to evade generic-promoting regulatory regimes. Finally, we offer a conservative framework for analyzing product hopping rooted in the economics and realities of the pharmaceutical industry.
This very brief response essay does a good job of highlighting some of the difficult nuances associated with product hopping (something I'll describe more in a minute) in the prescription drug market. I don't necessarily agree with it - I actually disagree with the final proposal. I share this here because it prompted me to think more closely about an issue I had mostly ignored, and I respect Mike Carrier and think that he does about as good a job at presenting this particular viewpoint as anyone.

Still, I'm troubled by the use of the blunt weapon of antitrust against product hopping, even as I have misgivings about other areas of this market. Product hopping occurs when a name-brand (read, patented) drug is pulled from the market in favor of a "new and improved" product (that is also patented). The concern is that the removal of the old brand from the market just as generics are arriving will fail to trigger mandatory generic substitution rules, because doctors can't prescribe the old name brand. Instead, the fear is that doctors will only write prescriptions for the new, improved (though the parties debate the improved point) drug, for which there is no generic to substitute. Further, they won't ever prescribe the generic once the name brand is off the market.

Here's what gives me a bit of trouble about this. While I am a big fan of automatic generic substitution laws, I am skeptical that they should be used as innovation policy, to the point where the inability of a generic to take advantage of it creates an antitrust injury and also forces a company to make a product that it may or may not want to make for profit or other reasons. In other words, a system that requires a company to make a product so that other companies can sell a substitute product that sellers are required by law to sell is a broken system indeed.

The question is, where is the system broken? Some will, no doubt, say that it is the patent system and exclusivity. Surely that plays a role. But I'd like to highlight a few other potential points of market failure that drug makers point to. None of these are really new, but I'm taking the blogger's prerogative to talk about them.

1. I think it is a huge assumption that doctors will only prescribe the new drug and won't prescribe the generic once it hits the market. The argument in the article above is that because doctors don't have to pay for the drug, they have no incentive to minimize cost. While the incentive is certainly diminished in theory, this is not my experience with (many) doctors at all. I have had many doctors prescribe me (or my wife) "older" generic versions of drugs because they were cheaper. One funny thing is that oftentimes those older versions were awful - like a drug my wife had to take as an awful-tasting liquid because the pill version cost 3 or 4 times as much and wasn't covered by the (generally good) insurer, or a blood pressure medication I had to take because it was the standard--until I could show that I had a rare side effect.

2. I think insurance companies can regulate much of this through formularies. If you make the new drug non-formulary (or cover it only at brand-level cost), people will migrate to the cheaper generic substitute, even if automatic substitution is not available. Even if doctors don't pay, insurers sure do, and they have every incentive to make sure that generic substitutes are used if the new drug is not really an improvement.

3. I bristle a bit at the notion that generic companies must rely on substitution laws to get doctors to prescribe. I realize that's how it is done now, and while substitution laws are a good thing, there is nothing stopping generics from telling doctors why the new-fangled drug is no better, for the money, than the one that was just removed from the market. We make every other industry do this. I suspect that litigation is cheaper than advertising, and while I am happy to give the industry a leg up, I am wary of giving it a pass on basic business requirements, and I am wary of taking something that's a regulatory windfall to generics (even if it is good for consumers) and making it an affirmative innovation policy.

4. Taking this last point further, from an innovation standpoint, is there any reason why generics can't innovate their own product-hopping drug? Knowing that a deadline is coming, why can't they innovate (or license) their own extended release version in addition? Indeed, I take an extended release version of a generic drug. It costs a fortune (before insurance), and the generic is benefiting from having created/licensed it. I suppose a more salient example is Mylan's EpiPen, which uses a protected delivery system for a generic drug. While most people are not thrilled with Mylan's pricing strategy, it illustrates the basic point I'm trying to make: generic manufacturers are not helpless victims of a system stacked against them; they are active, rent-seeking participants willing to exploit systemic flaws in their favor.


Finally, I will say that all of my arguments are empirically testable, as are the arguments on the other side. If folks can point to studies that have demonstrated actual physician, consumer, and insurer behavior in product hopping cases, I will be happy to post here and assess!

Sunday, February 3, 2019

AOC on Pharma & Public Funding

Congresswoman Alexandria Ocasio-Cortez has already gotten Americans to start teaching each other about marginal taxation, and now she has started a dialog about the role of public funding in public sector research:
In these short videos (which email subscribers to this blog need to click through to see), Ocasio-Cortez and Ro Khanna are seen asking questions during a Jan. 29 House Oversight and Reform Committee hearing, "Examining the Actions of Drug Companies in Raising Prescription Drug Prices." So far, @AOC's three tweets about this issue have generated over 7,000 comments, 58,000 retweets, and 190,000 likes.

Privatization of publicly funded research through patents is one of my main areas of research, so I love to see it in the spotlight. There are enough concerns with the current system that the government should be paying attention. But as I explain below, condensing Ocasio-Cortez and Khanna's questions into a headline like "The Public, Not Pharma, Funds Drug Research" is misleading. Highlighting the role of public R&D funding is important, but I hope this attention will spur more people to learn about how that public funding interacts with private funding, and why improving the drug development ecosystem involves a lot of difficult and uncertain policy questions. This post attempts to explain some key points that I hope will be part of this conversation.

Tuesday, January 29, 2019

It's Hard Out There for a Commons

I just finished reading a fascinating draft article about the Eco-Patent Commons, a commons where about 13 companies put in a little fewer than 100 patents that could be used by any third party. A commons differs from cross-licensing or other pools in a couple of important ways. First, the owner must still maintain the patent (OK, that's common to licensing, but different from the public domain). Second, anyone, not just members of the commons, can use the patents (which is common to the public domain, but different from licensing).

The hope for the commons was that it would aid in diffusion of green patents, but it was not to be. The draft by Jorge Contreras (Utah Law), Bronwyn Hall (Berkeley Econ), and Christian Helmers (Santa Clara Econ) is called Green Technology Diffusion: A Post-Mortem Analysis of the Eco-Patent Commons. A draft is on SSRN. Here is the abstract:
We revisit the effect of the “Eco-Patent Commons” (EcoPC) on the diffusion of patented environmentally friendly technologies following its discontinuation in 2016, using both participant survey and data analytic evidence. Established in January 2008 by several large multinational companies, the not-for-profit initiative provided royalty-free access to 248 patents covering 94 “green” inventions. Hall and Helmers (2013) suggested that the patents pledged to the commons had the potential to encourage the diffusion of valuable environmentally friendly technologies. Our updated results now show that the commons did not increase the diffusion of pledged inventions, and that the EcoPC suffered from several structural and organizational issues. Our findings have implications for the effectiveness of patent commons in enabling the diffusion of patented technologies more broadly.
The findings were pretty bleak. In short, the patents were cited less than a set of matching patents, and many of them were allowed to lapse (which implies lack of value). Their survey-type data also showed a lack of importance/diffusion.

What I really love about this paper, though, is that there's an interpretation for everybody in it. For the "we need strong rights" group, this failure is evidence of the tragedy of the commons. If nobody has the right to fully profit on the inventions, then nobody will do so, and the commons will go fallow.

But for the "we don't need strong rights" group, this failure is evidence that the supposedly important patents were weak, and that it was better to essentially make these public domain than to have after the fact lawsuits.

For the "patents are useless" group, this failure shows that nobody reads patents anyway, and so they fail in their essential purpose: providing information as a quid pro quo for exclusivity.

And for the middle ground folks, you have the conclusions in the study. Maybe some commons can work, but you have to be careful about how you set them up, and this one had procedural and substantive failings that doomed the patents to go unused.

I don't know the answer, but I think case studies like this are helpful for better understanding how patents do and do not disseminate information, as well as for learning how to better structure patent pools.

Tuesday, January 22, 2019

The Name's the Thing

Much to my chagrin, my kids like to waste their time not just playing video games, but also watching videos of others playing video games. This is a big business. Apparently the top Fortnite streamer made some $10 million last year. Whaaaaat? But these services aren't interchangeable. The person doing the streaming is important to the viewer.

But what if two streamers have the same name, say Fred, or Joan, or...Kardashian. Should we allow someone to lock others with the same name out? Under what circumstances? And what if the service is simply being famous, for endorsements and the like?

Bill McGeveran (Minnesota) has posted an article that discusses these issues called Selfmarks, now published in the Houston Law Review. It is on SSRN, and the abstract is here:
“Selfmarks” are branded personal identifiers that can be protected as trademarks. From Kim Kardashian West to BeyoncĂ©’s daughter, attempts to propertize persona through trademark protection are on the rise. But should they be? The holder of a selfmark may use it to send a signal about products, just like the routine types of brand extension, cross-branding, and merchandising arrangements fully embraced under modern trademark law. Yet traditional trademark doctrine has adjusted to selfmarks slowly and unevenly. Instead, the law has evolved to protect selfmarks through mechanisms other than trademarks. In an age where brands have personalities and people nurture their individual brands, it is time to ask what principled reasons we have not to protect the individual persona as a trademark.
I liked this article a lot--especially its straightforward approach. It looks at these marks through the lens of trademark law (as it should), considering use (that is, for what goods and services) and distinctiveness. In doing so, it provides several useful hypotheticals that illustrate the problems of using names as trademarks. The paper also considers the Lanham Act sections that specifically deal with names.

Finally, the paper discusses a couple of concerns. First is endorsement confusion. As the "service" of a celebrity becomes endorsement, then everything the celebrity does is potentially an endorsement, even though that may not be the intention. McGeveran discusses this concern. Second is the ever-present speech concern. If names are protected as marks, then it is harder to use that name in speech.

This article is a really good primer on names as marks. I think a good extension for the next one would be a topic that a student of mine wrote about last year: joint marks. That is, when multiple people have the same name - together, even - the mark can cease being distinctive of their individual goods. My student did a great case study of the Kardashian marks, showing that several of them may well be invalid, but I think this could be extended to a longer theoretical piece if it hasn't been done already.

Tuesday, January 15, 2019

The Copyright Law of Interfaces

Winter break has ended and so, too, has my brief blogging break. I've blogged before (many times) about the ongoing Oracle v. Google case. My opinion has been and continues to be that nobody is getting the law exactly right here, to the point where I may draft my own amicus brief supporting grant of certiorari. But to the extent I do agree with one of the sides, it is the side that says developers must be allowed to reuse the command and parameter structure of an original API (Application Programming Interface) without infringing copyright. My disagreement is merely with the way you get there. Some believe that APIs are not copyrightable at all. I've blogged before that I'm not so sure about this. Some believe that this should be fair use. I think this is probably true, but the factors don't cleanly line up. My view is that this should be handled on the infringement side: that APIs, even if copyrightable, are not infringed by this kind of reuse (that is, they are filtered out of an infringement analysis). It's the same result, but (for me, at least) much cleaner theoretically and doctrinally.

But make no mistake, this sort of reuse is critically important, as Charles Duan (R Street Institute) points out in his latest draft: Internet of Infringing Things: The Effect of Computer Interface Copyrights on Technology Standards (forthcoming in Rutgers Computer and Technology Law Journal). The draft is on SSRN and an abstract is here:
This article aims to explain how copyright in computer interfaces implicates the operation of common technologies. An interface, as used in industry and in this article, is a means by which a computer system communicates with other entities, either human programmers or other computers, to transmit information and receive instructions. Accordingly, if it is copyright infringement to implement an interface (a technical term referring to using the interface in its expected manner), then common technologies such as Wi-Fi, web pages, email, USB, and digital TV all infringe copyright.
By reviewing the intellectual property practices of the standard-setting organizations that devise and promulgate standards for these and other communications technologies, the article demonstrates that, at least in the eyes of standard-setting organizations and by extension in the eyes of technology industry members, implementation of computer interfaces is not an infringement of copyright. It concludes that courts should act consistent with these industry expectations rather than upending those expectations and leaving the copyright infringement status of all sorts of modern technologies in limbo.
As noted, I agree with the end result, so any critique here should be taken as one of the paper, and not of the final position. I think Duan does a very nice job of explaining what an interface is: namely, the set of commands that third-party programmers send to a server/system to make it operate. There is value in standardization of these interfaces - it allows people to write one program that will work with multiple systems. Duan uses two good examples. The first is HTML/CSS programming, which allows people to write a single web document and have it run in any browser and/or server that supports the same language. The second is SMTP, which allows email clients to communicate with any email server. The internet was built on these sorts of interfaces, specified in documents called RFCs.
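To make concrete what "implementing an interface" means here, consider a minimal sketch. The class and method names below are mine, purely illustrative, and not drawn from any standard or from Duan's paper; the point is only that a protocol interface like SMTP's is an agreed set of textual commands, so any client that emits them can talk to any conforming server.

```java
// Illustrative sketch only: the "interface" of a protocol like SMTP is
// the agreed set of textual commands, not any particular implementation.
// Class and helper names here are hypothetical, not from RFC 5321.
public class SmtpCommands {
    // Build the MAIL command from a reverse-path (the sender address).
    public static String mailFrom(String reversePath) {
        return "MAIL FROM:<" + reversePath + ">";
    }

    // Build the RCPT command from a forward-path (the recipient address).
    public static String rcptTo(String forwardPath) {
        return "RCPT TO:<" + forwardPath + ">";
    }

    public static void main(String[] args) {
        // Any server implementing the same command set understands these.
        System.out.println(mailFrom("alice@example.com"));
        System.out.println(rcptTo("bob@example.org"));
    }
}
```

Two independently written mail clients that emit the same command strings are interoperable, and that shared command vocabulary is exactly the kind of reuse at stake in the copyright question.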

Duan then does a nice job of showing the creativity that goes into selecting the commands - as with Java, there were choices (though limited) to make about each command. Because the set of functions is limited, the number of ways to describe each function is limited, but there are still choices to be made. The article then shows how those commands are grouped together in functional ways.

Finally, Duan nicely shows how many important standards are out there that follow this same pattern, and shows how standards organizations handle any copyright--they don't. In short, allowing contributors to claim copyright ownership would destroy systems, because there is no requirement that contributors allow others to use the interface. Duan's concern is that if individual authors owned the IP in their interface contributions to standards (a potential extension of Oracle v. Google) then holdup might occur that harms adoption. This, of course, is hotly debated, as it is in the patent area.

I think it's a really interesting and well-written paper. Before I get to a couple of critiques, I should note that Duan is focused more on how the current legal rulings might affect standards than on critiquing the rulings themselves (as I have done here). Thus, my comments here may simply not have been on his radar.

My primary thought reading this is that the paper doesn't deal with the declaring code. That is, in order to implement the Java commands, Google created short code blocks that defined the functions, the parameters, etc.  Here is an example from the original district court opinion:

package java.lang;
public class Math {
    public static int max (int x, int y) {
This code is what the jury found to be copied (though presumably Google wrote it in some other language). But the standards interfaces don't provide any code, per se. They only provide explanations. Here is an example definition from the RFC for the SMTP protocol discussed in the paper:
mail = "MAIL FROM:" Reverse-path
In other words, standards will define the commands that must be sent, but there's no language-based implementation (e.g., public, static, integer, etc.), as with the sample line above. Most say: send x command to do y, and people writing software are on their own to figure out how to do that. And you can bet the implementing code looks very similar, but there's something different about how it is specified at the outset (a full header declaration v. a looser description). So, the questions this raises are a) does this make standards less likely to infringe, even under the Federal Circuit's rules (I think yes), and b) does this change how we think about declaring code? (I think no, because the code is still minimal and functional, but Oracle presumably disagrees.)
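The distinction I am drawing can be sketched in code. This is a simplified, hypothetical example (the class name is mine; only its shape mirrors the java.lang.Math snippet quoted above) separating the declaring line, which fixes the names and parameter types at issue in the case, from the implementing code, which each vendor writes independently:

```java
// Simplified illustration of declaring vs. implementing code.
// The class name is hypothetical; it is not the real java.lang.Math.
public class MiniMath {
    // Declaring code: the method name, parameter types, and return
    // type that any compatible implementation must reproduce exactly.
    public static int max(int x, int y) {
        // Implementing code: written independently by each implementer.
        // A standards document would merely say, in prose,
        // "return the larger of x and y."
        return (x > y) ? x : y;
    }
}
```

A standard specifies only the behavior in looser prose; the declaring line is the part Oracle claimed, while the implementing line is the part Google wrote itself.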

Secondarily, I don't think the article considers the differences between Oracle's position (now - it changed, which is one of the problems) and that of a contributor to standards. Contribution to a standard is made so that others will adopt it, presumably because adoption gives you a competitive advantage of some sort. By not being part of the standard, you risk having a fragmented (smaller) set of users. But if Oracle doesn't want others adopting the Java language and would rather be limited, then that makes the analogy inapt. If Google had known this was not allowed and gone another way, it may well be that Java would be dead today (figure that into damages calculations). But a fear of companies submitting technology to standards and then taking it back is, to me, different in kind from companies that never want to be part of the standard. (Of course, as noted above, there is some dispute about this, as Sun apparently did act as if it wanted the language to be an open standard.)

A final point: two sentences in the article caught my eye, because they support my view of the world (confirmation bias, of course). When speaking of standard setting organization policies, Duan writes: "To the extent that a copyright license is sought from contributors to standards, the license is solely directed to distributing the text of the standard. This suggests that copyright is simply not an issue with regard to implementing interfaces." Roughly interpreted, this means that these organizations think that maybe you can copyright your API, but that copyright only applies to slavish copying of the entire textual document. But when it comes to reuse of the technical requirements of the standard, we filter out the functionality and allow the reuse. This has always been my position, but nobody has argued it in this case.

Monday, January 14, 2019

Bruno Latour, Mario Biagioli, and the Rhetoric of "Balance" in IP Law (and Climate Change)

I just read Jennifer Szalai's fascinating review in the New York Times of the French anthropologist and philosopher Bruno Latour's new book on politics and the debate over climate change.  As I recall from my history of science days, the whole point of Latour's body of work was that "facts," in science, are not really facts. They are the social constructions of scientists who are real people with childhoods, values, and careers, whose conclusions cannot be divorced from the environment in which they were produced. "[T]he essential point," Latour wrote,
is that the facts, contrary to the old adage, obviously do not 'speak for themselves’: to claim that they do would be to overlook scientists, their controversies, their laboratories, their instruments, their articles, and their hesitant, interrupted, and occasionally deictic speech...
Thus, we might think Latour would be sympathetic to so-called climate change deniers, who greet with skepticism the science community's conclusions about humans' impact on global warming. As Szalai puts it in her review, Latour "has spent a career studying how knowledge is socially constructed." So, surely, "[the] kind of postmodernism" that lies behind the "conservative tradition" of "performing a skepticism so extreme that it makes the ancient Greek skeptics look like babes in the woods[]" would appeal to him.

But it's not so, Szalai writes. To the contrary, Latour sees "[s]uch pretensions to reality-creating grandeur" as "amount[ing] to little more than a vulgar, self-defeating cynicism." Perhaps even Bruno Latour, in the end, was a "realist," at least when it comes to some things.

Revisiting Latour's skepticism of facts, I can't help but wonder (although I think I know) what Latour would say about patents. This brings me to a gem that I was lucky to get ahold of over break: an article by Mario Biagioli, an esteemed historian of science and expert on the Scientific Revolution, and now a law professor at the University of California, Davis School of Law. Adding another layer of irony, everything in this post will be colored by the fact that Mario is a long-time mentor who supervised my undergraduate thesis in the Department of History of Science at Harvard. His paper Patent Republic, tracing the development of the patent system from the Venetian Republic to early America, inspired me to study IP.

Wednesday, January 2, 2019

Erin McGuire: Can Equity Crowdfunding Close the Gender Gap in Startup Finance?

As I have previously explained, there is growing interest in gender and racial gaps in patenting from both scholars and Congress—which charged the USPTO with studying these gaps. But I don't think it makes sense to study these inequalities in isolation: patent law is embedded in a larger innovation ecosystem, and patents' benefit of providing a strong ex post reward for success comes at the cost of needing to attract funding to cover R&D expenses until patent profits become available. It may be difficult to address the patenting gap without also addressing inequalities in capital markets.

In particular, there is a large and well-documented gender gap in the market for early-stage capital. For example, this Harvard Business Review article notes that women receive 2% of venture funding despite owning 38% of U.S. businesses, and that even as the percentage of female venture capitalists has crept up from 3% in 2014 to 7% in 2017, the funding gap has only widened. Part of the explanation—explored in the fascinating study summarized in the HBR piece—may be that both male and female VCs ask male and female entrepreneurs different kinds of questions: in actual Q&A sessions, VCs tended to ask men about the potential for gains and women about the potential for losses, with significant impacts on funding decisions.

Economist Erin McGuire, currently an NBER postdoc, has an interesting working paper on one partial solution to this problem: Can Equity Crowdfunding Close the Gender Gap in Startup Finance? Non-equity crowdfunding through sites like Kickstarter and Indiegogo has grown in popularity in the past two decades; equity crowdfunding differs in that funders receive shares in the company in exchange for their investments. The average equity crowdfunding investment is $810—over ten times the average investment on Kickstarter. Equity crowdfunding was illegal in the United States before the JOBS Act of 2012, which allowed equity crowdfunding by accredited investors starting in September 2013. McGuire hypothesized that the introduction of this financing channel—with a more gender-diverse pool of potential investors—as an alternative to professional network connections would particularly benefit female entrepreneurs.

Wednesday, December 19, 2018

All about IP & Price Discrimination

It's a grading/break week, so just a short post. A recent article that I enjoyed a lot, but that hasn't found much love on SSRN is Price Discrimination & Intellectual Property, by Ben Depoorter (Hastings) and Mike Meurer (Boston University). The paper has the following abstract:
This chapter reviews the law and economics literature on intellectual property law and price discrimination. We introduce legal scholars to the wide range of techniques used by intellectual property owners to practice price discrimination; in many cases the link between commercial practice and price discrimination may not be apparent to non-economists. We introduce economists to the many facets of intellectual property law that influence the profitability and practice of price discrimination. The law in this area has complex effects on customer sorting and arbitrage. Intellectual property law offers fertile ground for analysis of policies that facilitate or discourage price discrimination. We conjecture that new technologies are expanding the range of techniques used for price discrimination while inducing new wrinkles in intellectual property law regimes. We anticipate growing commentary on copyright and trademark liability of e-commerce platforms and how that connects to arbitrage and price discrimination. Further, we expect to see increasing discussion of the connection between intellectual property, privacy, and antitrust laws and the incentives to build and use databases and algorithms in support of price discrimination.
They call it a chapter, but they don't identify the book that the chapter will appear in. It's probably an interesting book.

In any event, the chapter is a really interesting, thorough look at price discrimination generally, in addition to price discrimination as it relates to IP. It discusses the pros and cons as well as the assumptions that underlie each. If you are interested in a better understanding of the economics of IP (and secondarily, the internet), this is a good read.

Tuesday, December 11, 2018

The Value of Patent Applications in Valuing Firms

It's an age-old question that we've blogged about here before - what effect do patents have on firm value? And is any effect due to signaling or exclusivity? Does the disclosure in the patent have any value? Does anybody read patents?

These are all good questions that are difficult to measure, and so scholars try to use natural experiments or other empirical methods to divine the answer. In a recent draft, Deepak Hegde, Baruch Lev, and Chenqi Zhu (all of NYU Stern) use the AIPA to provide some useful answers. For those unaware, the AIPA mandated that patent applications be published 18 months after filing by default, rather than kept secret until the patent is granted. The AIPA is the law that keeps on giving; several studies have used the "shock" of the AIPA to measure the effect patent publication had on a variety of dependent variables.

So, too, in Patent Disclosure and Price Discovery. A draft is available on SSRN, and the abstract is here:
We focus in this study on the exogenous event of the enactment of American Inventor’s Protection Act of 1999 (AIPA), which disseminates timely, detailed, and credible public information on R&D activities through pre-grant patent disclosures. Exploiting the staggered timing of patent disclosures, we identify a significant improvement in the efficiency of stock price discovery. This improvement is stronger when patent disclosures reveal firms’ successful, new, or technologically valuable inventions. This improvement is more pronounced for firms in high-tech or fast-moving industries, or with a large institutional ownership or analyst coverage. We also find stock liquidity rises and investors’ risk perception of R&D drops after the enactment of AIPA. Our results highlight the importance of timely, detailed, and credible disclosures of R&D activities in alleviating the information problems faced by R&D-intensive firms.
This is a short abstract, so I'll fill in a few details. The authors measure the effect on intra-period timeliness, a standard measure used to proxy for "price discovery," or how quickly information enters the market and settles the price of a stock. There are a lot of articles on this, but here's one for those interested (paywall, sorry).
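For readers unfamiliar with the measure, here is a minimal sketch of one common intra-period timeliness (IPT) formulation: the average fraction of the window's total price move that is already impounded at each interim point. This is an illustration with made-up numbers, not necessarily the authors' exact specification.

```python
def intra_period_timeliness(cum_returns):
    """IPT for a window of cumulative returns ending at the final value.

    Scores near 1 when most of the move happens immediately (fast price
    discovery); lower when information trickles into the price late.
    """
    total = cum_returns[-1]
    if total == 0:
        raise ValueError("total return over the window is zero")
    interim = cum_returns[:-1]
    # Average share of the total move already reflected at each interim point.
    return sum(r / total for r in interim) / len(interim)

fast = [0.9, 1.0, 1.0, 1.0, 1.0]   # most of the move happens right away
slow = [0.2, 0.4, 0.6, 0.8, 1.0]   # information enters the price gradually
print(intra_period_timeliness(fast) > intra_period_timeliness(slow))  # True
```

On these toy paths, the "fast" series scores 0.975 and the "slow" (linear) series 0.5, so a post-AIPA rise in IPT is read as faster price discovery.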

In short, the authors look at how quickly price discovery occurred before and after the AIPA, controlling for firm fixed effects and other variables. One of the nice features of their design is that patent publications occurred over a period of years, so the "shock" of publication was not concentrated in a single year (which could have been affected by something other than the AIPA that happened in that same year).
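To make that design concrete, here is a hedged toy simulation of a staggered-shock panel with firm and year fixed effects. Everything here is hypothetical (the firm counts, shock years, and effect size are invented for illustration, and this is not the authors' actual data or specification); it just shows why staggering the shock lets the estimator separate the publication effect from any single year's confounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 200, 10

firm_fe = rng.normal(0.0, 1.0, n_firms)   # unobserved firm characteristics
year_fe = rng.normal(0.0, 0.5, n_years)   # market-wide year shocks

# Staggered "shock": each firm's applications begin publishing in a
# different (here randomly chosen) year, so treatment is not bunched
# into one calendar year.
shock_year = rng.integers(3, 8, n_firms)

true_effect = 0.4  # hypothetical gain in price-discovery speed

firm = np.repeat(np.arange(n_firms), n_years)
year = np.tile(np.arange(n_years), n_firms)
treated = (year >= shock_year[firm]).astype(float)

y = (firm_fe[firm] + year_fe[year] + true_effect * treated
     + rng.normal(0.0, 0.3, n_firms * n_years))

def within(v):
    """Two-way demeaning for a balanced panel:
    v - firm mean - year mean + grand mean."""
    m = v.reshape(n_firms, n_years)
    m = m - m.mean(axis=1, keepdims=True) - m.mean(axis=0, keepdims=True) + m.mean()
    return m.ravel()

# OLS on the demeaned data recovers the treatment effect despite the
# firm and year confounds.
y_w, x_w = within(y), within(treated)
beta = (x_w @ y_w) / (x_w @ x_w)
print(round(beta, 2))  # close to the true effect of 0.4
```

Because different firms are "shocked" in different years, the year fixed effects soak up anything else that happened in any one year, which is exactly the virtue of the AIPA setting described above.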

They find that price discovery is faster after the AIPA. Interestingly, they also find that the effect is more pronounced in high-tech and fast-moving fields -- that is, industries where new R&D information is critically important.

Finally, their results say something about the nature of the patent disclosure itself - the effects come from disclosure of the information, and not necessarily the patent grant. Thus, the signaling effect may really relate to information, and (some) people may well read patents after all.

Monday, December 10, 2018

Adam Mossoff: Are Property Rights Created By Statute "Public Rights"?

I greatly enjoyed Professor Adam Mossoff's new article, Statutes, Common-Law Rights, and the Mistaken Classification of Patents as Public Rights, forthcoming in the Iowa Law Review. Mossoff's article is written in the wake of Oil States Energy Services v. Greene's Energy Group, where the Supreme Court held that it is not unconstitutional for the Patent Trial and Appeal Board (PTAB), an administrative tribunal within the Department of Commerce, to hear post-issuance challenges to patents without the process and protections of an Article III court. Justice Thomas' opinion concluded that patents are "public rights" for purposes of Article III; therefore, unlike, say, property rights in land, patents can be revoked without going through an Article III court.

Mossoff's article objecting to this conclusion is a logical follow-on to his prior work, while also providing new insights about the nature of patents, property, and the public rights doctrine. He does so quite concisely, too, with the article coming in at only 21 pages.

Wednesday, December 5, 2018

Helsinn Argument Recap: Did the AIA Change the Meaning of Patent Law's "On Sale" Bar?

As Michael previewed this morning, the Supreme Court heard argument today in Helsinn v. Teva, which is focused on the post-America Invents Act § 102(a)(1) bar on patents if "the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public" before the relevant critical date. The Federal Circuit held that Helsinn's patents were invalid because Helsinn had sold the claimed invention to a distributor more than one year before filing for a patent, but Helsinn (supported by the United States as amicus) argues that the "on sale" bar is triggered only by sales that make the invention "available to the public" under a broad reading of "public."

During argument, none of the Justices seemed inclined to favor Helsinn's attempt to argue that "on sale" clearly means on sale to everybody—Justice Kavanaugh said "it's pretty hard to say something that has been sold was not on sale," and Chief Justice Roberts noted that Helsinn's interpretation "might not be consistent with the actual meaning of the word 'sale'" because "if something's on sale, it doesn't have to be on sale to everybody." Nor did they jump at the government's argument that "on sale" means a product can be purchased by its ultimate consumers—Justice Sotomayor said: "This definition of 'on sale,' to be frank with you, I've looked at the history cited in the briefs, I looked at the cases, I don't find it anywhere."

Helsinn's better statutory argument is that the meaning of "on sale" is modified by "or otherwise available to the public" to require that the sale be publicly available. Indeed, for a reader with no background in patent law, this might seem like the most natural reading of the statute. Justice Alito said that "the most serious argument" against the Federal Circuit's position is "the fairly plain meaning of the new statutory language," and that he "find[s] it very difficult to get over the idea that this means that all of the things that went before are public." And Justice Gorsuch suggested, at least for hypothetical purposes, that "the introduction of the 'otherwise' clause introduced some ambiguity about what 'on sale' means now." But if there was more support to reverse the Federal Circuit, it was not apparent from the argument.

Much of the statutory language used in the Patent Act—including "on sale"—has developed a technical legal meaning over time, generally due to courts' attention to the law's utilitarian focus. For example, patentable subject matter caselaw is "implicit" in § 101, courts have put a highly specialized gloss on the word "obvious" in § 103, and—relevant here—the § 102 categories of prior art have long been interpreted to include relatively obscure and private uses. Although this expansive definition of prior art might seem unfair to patentees, there are also strong policy arguments in its favor, including (1) encouraging patentees to get to the patent office early (leading to earlier disclosure and patent expiration) and (2) avoiding patents when their costs (including higher prices for consumers and subsequent innovators) aren't likely to be outweighed by their innovation-incentivizing benefits, such as when there is independent invention—even when evidence of that invention is relatively obscure.

As Justice Kavanaugh noted at argument today, Mark Lemley's amicus brief on behalf of forty-five IP professors describes the long history of treating relatively non-public disclosures as prior art, including (1) "noninforming public use" cases, (2) "output of a patented machine or process" cases, and (3) cases involving secret, confidential, and nonpublic sales transactions. Justice Breyer also mentioned the Lemley brief, and he said it "seems right" to have the on-sale bar include private sales "to prevent people from benefitting from their invention prior to and beyond the 20 years that they're allowed." The legislative history of the AIA does not suggest that Congress intended to sweep away all of these cases—Justice Kavanaugh said that he thinks "the legislative history, read as a whole, goes exactly contrary" to Helsinn's contention because "there were a lot of efforts … to actually change the 'on sale' language, and those all failed," leaving the losers "trying to snatch victory from defeat" with "a couple statements said on the floor."

It is perhaps because of this history that Helsinn and the government seemed more focused on the argument that "on sale" has always excluded nonpublic sales than on the argument that the AIA changed the law. Justice Ginsburg's only comment during argument was to ask Helsinn to clarify this: "I thought that one argument was that the AIA changed the way it was. But … you seem to say there was no change; 'on sale' never included the secret sale." Arguing for the government, Malcolm Stewart even conceded—in response to questioning from Justice Kagan—that if the law was settled pre-AIA such that "on sale" included nonpublic sales, then the new AIA language ("or otherwise available to the public") "would be a fairly oblique way of attempting to overturn" the law. But based on my reading of the transcript, it doesn't seem likely that the argument that "on sale" has always meant "on sale publicly" will get five votes.

I waited until after writing the above to get Ronald Mann's take at SCOTUSblog, but I very much agree with his bottom-line conclusion: while this isn't "a case in which the argument clearly presages the result," the overall transcript "suggests that the most likely outcome will be an affirmance."

Tuesday, December 4, 2018

How Important is Helsinn?

In honor of the oral argument in Helsinn today, I thought I would blog about a study that questions its importance. For those unaware, the question the Supreme Court is considering is whether the AIA's new listing of prior art in 35 U.S.C. §102(a)(1): "the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public..." changed the law.

Since forever, "on sale" has meant any offer or actual sale, regardless of who knew about it. Some have argued that the addition of "or otherwise available to the public" means that only offers that are publicly accessible count as prior art. I think this is wrong, and I signed on to an amicus brief saying so. We'll see what the Court says. Note that non-public does not mean "secret." Truly secret activity is often considered non-prior art, but the courts have defined "public" to mean "not secret." The question is whether that should change to "publicly accessible."

But how big a deal is this case? How many offers for sale would be affected? Steve Yelderman (Notre Dame, and soon to be Gorsuch clerk) wanted to know as well, so he did the hard work of finding out. In a draft paper on SSRN that he blogged about at Patently-O, he looked at all invalidity decisions to see exactly where the prior art was coming from. Here is the abstract for Prior Art in the District Court:
This article is an empirical study of the evidence district courts rely upon when invalidating patents. To construct our dataset, we collected every district court ruling, verdict form, and opinion (whether reported or unreported) invalidating a patent claim over a six-and-a-half-year period. We then coded individual invalidation events based on the prior art supporting the court’s analysis. In the end, we observed 3,320 invalidation events based on 817 distinct prior art references.
The nature of the prior art relied upon to invalidate patents informs the value of district court litigation as an error correction tool. The public interest in revoking erroneous patent grants depends significantly on the reason those grants were undeserved. Distinguishing between revocations that incentivize future inventors and those that do not requires understanding the reason individual patents are invalidated. While prior studies have explored patent invalidity in general, no study has reported data at the level of detail necessary to address these questions.
The conclusions here are mixed. On one hand, invalidations for lack of novelty bear many indicia of publicly beneficial error correction. Anticipation based on obscure prior art appears to be quite rare. When it comes to obviousness, however, a significant number of invalidations rely on prior art that would have been difficult or impossible to find at the time of invention. This complicates — though does not necessarily refute — the traditional view that obviousness challenges ought to be proactively encouraged.
So, let's get right to the point. The data seem to show that "activity" type prior art (that is, sales or uses) is much more prevalent in anticipation than in obviousness. This is not surprising, given that this category often consists of the patentee's own activities.

With respect to non-public sales, they estimate that at most 14% of anticipation invalidations and 2% of obviousness invalidations based on activity rested on plausibly non-public sales. This translates to about 8% of all anticipation invalidations and 1% of all obviousness invalidations; because there are about as many obviousness cases as anticipation cases, this averages out to about 4.25% of all invalidations. They note that under a different rule, some of these might have been shown to be "public" sales with more attention paid to providing such evidence.

A related question is whether the inventor's own actions can invalidate, or whether the AIA overruled Metallizing Engineering, which held that an inventor's secret use can invalidate even though a third party's secret use does not. The study found that the plaintiff's actions were relevant in 27% of anticipation invalidations and 13% of obviousness invalidations. Furthermore, they found that most of the secret activity was associated with either the plaintiff or the defendant--this makes sense, as those parties have access to such secret information.

So, what's the takeaway from this? I suppose where you stand depends on where you sit. I think that wiping out 4% of the invalidations, especially when they are based on the actions of one of the two parties, is not a good thing. It's bad to allow the patentee to sell non-publicly and still keep the patent, and it's bad to hold the defendant liable even if it has been selling the invention in a non-public (though non-secret) way. We're talking about 20 claims per year that go the other way - too high for my taste, especially when it means we have to start defining new ways to determine whether something is truly public.

Furthermore, the stakes of reversing Metallizing are much higher. I freely admit that the "plaintiff's secret actions only" rule has a tenuous basis in the text of the statute, but it has been the law for a long time without being expressly overruled by two subsequent statutory revisions. Given that more than 25% of the invalidations were based on the plaintiff's actions, I think it would be difficult to reverse course.

Tuesday, November 27, 2018

Judging Patents by their Rejection Use

The quest for an objective measure of patent quality continues. Scholars have attempted many, many ways to calculate such value, including citations, maintenance fee payments, number of claims, length of claims, and so forth. As each new data source has become available, more creative ways of measuring value have been developed (and old ways of measuring value have been validated/questioned).

Today, I'd like to briefly introduce a new one: the use of patents rejecting other patents. Chris Cotropia (Richmond) and David Schwartz (Northwestern) have posted a short essay on SSRN introducing their methodology.* The abstract for the cleverly named Patents Used in Patent Office Rejections as Indicators of Value is here:
The economic literature emphasizes the importance of patent citations, particularly forward citations, as an indicator of a cited patent’s value. Studies have refined which forward citations are better indicators of value, focusing on examiner citations for example. We test a metric that arguably is closer tied to private value—the substantive use of a patent by an examiner in a patent office rejection of another pending patent application. This paper assesses how patents used in 102 and 103 rejections relate to common measures of private value—specifically patent renewal, the assertion of a patent in litigation, and the number of patent claims. We examine rejection data from U.S. patent applications pending from 2008 to 2017 and then link value data to rejection citations to patents issued from 1999 to 2007. Our findings show that rejection patents are independently, positively correlated with many of the value measurements above and beyond forward citations and examiner citations.

The essay is a short, easy read, and I recommend it. They examine nearly 700,000 patents used in anticipation and obviousness rejections and find that not all patent citations are equal: citations that were used in a rejection have additional power to explain value, even when other predictors, such as forward citations and examiner citations, are included in the model. The only value measure that had no statistically significant relationship to rejection patents was use in litigation (even though forward citations did). This may say something about the types of patents that are litigated or about the role of rejection patents in litigation.

That's about all I'll say about this essay. The paper is a brief introduction to the way this new data set might be used, and this blog post is a brief introduction to the paper.

*At least, I think it's theirs. If you know of an earlier article that measures this on any kind of scale, please let me know!

Tuesday, November 20, 2018

The Role of IP in Industry Structure

I've long been a fan of Peter Lee's (UC Davis) work at the intersection of IP and organizational theory. His latest article is another in a long line of interesting takes on how IP affects and is affected by the structure and culture of its creators. The latest draft, forthcoming in Vanderbilt Law Review, is titled Retheorizing the Impact of Intellectual Property Rights on Industry Structure. The draft is on SSRN, and the abstract is here:
Technological and creative industries are critical to economic and social welfare, and the forces that shape such industries are important subjects of legal and policy examination. These industries depend on patents and copyrights, and scholars have long debated whether exclusive rights promote industry consolidation (through shoring up barriers to entry) or fragmentation (by promoting entry of new firms). Much hangs in the balance, for the structure of these IP-intensive industries can determine the amount, variety, and quality of drugs, food, software, movies, music, and books available to society. This Article retheorizes the role of patents and copyrights in shaping industry structure by examining empirical profiles of six IP-intensive industries: biopharmaceuticals; agricultural biotechnology, seeds, and agrochemicals; software; film production and distribution; music recording; and book publishing. It makes two novel arguments that illuminate the impacts of patents and copyrights on industry structure. First, it distinguishes along time, arguing that patents and copyrights promote the initial entry of new firms and early-stage viability, but that over time industry incumbents wielding substantial IP portfolios often absorb such entrants, thus reconsolidating those industries. It also distinguishes along the value chain, arguing that exclusive rights most prominently promote entry in “upstream” creative functions—from creating biologic compounds to coordinating movie production—while tending to promote concentration in downstream functions related to commercialization, such as marketing and distribution of drugs and movies. This Article provides legal and policy decision makers with a more robust understanding of how patents and copyrights promote both fragmentation and concentration, depending on context. Drawing on these insights, it proposes calibrating the acquisition of exclusive rights based on the size and market position of a rights holder.
Professor Lee surveys six industries, looking for commonalities in how they are structured and how IP fits in with entry and consolidation. This is not an empirical paper in the sense of, say, Cockburn & MacGarvie, who found that patents reduced entry into the software industry unless the entrant had patent applications. Instead, it looks at the history of entry and consolidation in the different industries as a whole, using studies like Cockburn & MacGarvie (which is discussed in some detail) as the foundation for a theoretical view that puts all the empirical findings together.

The result is a sort of two-dimensional axis (though Prof. Lee provides no chart; one wouldn't have added much). He finds that, in general, IP leads to entry early in time, but as the industry (or product area) matures, IP instead leads to consolidation, as companies find it easier to acquire IP than to create it on their own in crowded areas. He also finds, however (and I think this is a key insight of the paper), that IP leads to more entry upstream (the early creation stage) and more consolidation downstream (commercialization and marketing).

This second axis is the more interesting one (there are lots of articles about the development of thickets over time), but it is also the harder one to prove, and it depends a lot on your definitions. For example, Professor Lee discusses video streaming services such as Netflix and Hulu but doesn't discuss whether he views them as horizontally consolidated because there are so few of them. I've always thought of IP as fragmenting video streaming, because rights holders want to monetize their IP by holding on to it. Hence, we have to pay separately to get Star Trek: Discovery on CBS's streaming service, Hulu has many TV shows that Netflix doesn't, and soon Disney will pull out of its exclusive deal with Netflix to create its own service. That's five or more services I have to sign up with if I want to get all the shows (contrast this with the story he tells about music streaming, in which the streaming services all distribute all the music, and the record labels consolidate to enhance market power against the streamers). Indeed, this issue is so important that the services have (as Prof. Lee points out) vertically integrated by consolidating production with distribution (Netflix and Amazon making their own shows, Comcast and NBC/Universal, and AT&T buying Warner). Professor Lee describes this as a penchant for consolidation, but it is not clear why IP drives it. I think it is consolidation caused by upstream entry (as he would predict) by the likes of Netflix and Amazon into the creation space, because they also happen to be distributors. But then why don't the record labels become streamers? Why does this fragmentation work for video and not music? I'd be interested in hearing how Professor Lee breaks this down.

As you can probably tell, this is a thoughtful and thought-provoking paper, and I recommend it, especially to those unfamiliar with the literature on the role of IP in industry organization and entry.