Tuesday, September 18, 2018

No Fair Use for Mu(sic)

It's an open secret that musicians will sometimes borrow portions of music or lyrics from prior works. But how much borrowing is too much? One would think that this is the province of fair use, but it turns out not to be the case - at least not in those cases that reach a decision.  Edward Lee (Chicago-Kent) has gathered up the music infringement cases and shown that fair use (other than parody) is almost never a defense - not just that defendants lose, but that they don't even raise it most of the time. His article Fair Use Avoidance in Music Cases is forthcoming in the Boston College Law Review, and a draft is available on SSRN. Here's the abstract:
This Article provides the first empirical study of fair use in cases involving musical works. The major finding of the study is surprising: despite the relatively high number of music cases decided under the 1976 Copyright Act, no decisions have recognized non-parody fair use of a musical work to create another musical work, except for a 2017 decision involving the copying of a narration that itself contained no music (and therefore might not even constitute a musical work). Thus far, no decision has held that copying musical notes or elements is fair use. Moreover, very few music cases have even considered fair use. This Article attempts to explain this fair use avoidance and to evaluate its costs and benefits. Whether the lack of a clear precedent recognizing music fair use has harmed the creation of music is inconclusive. A potential problem of “copyright clutter” may arise, however, from the buildup of copyrights to older, unutilized, and underutilized musical works. This copyright clutter may subject short combinations of notes contained in older songs to copyright assertions, particularly after the U.S. Supreme Court’s rejection of laches as a defense to copyright infringement. Such a prospect of copyright clutter makes the need for a clear fair use precedent for musical works more pressing.
The results here are pretty interesting, as I discuss below.

Wednesday, September 12, 2018

Erie and Intellectual Property Law

When it comes to choice of law, U.S. federal courts hearing intellectual property law claims generally do one of two things. They either construe and apply the federal IP statutes (Title 35 for patents, Title 17 for copyrights, Title 15 for trademarks, and Title 18 for trade secrets), remaining as faithful to Congress' meaning as possible; or they construe and apply state law claims brought under supplemental (or diversity) jurisdiction, remaining as faithful as possible to the meaning of the relevant state statutes and state judicial decisions. In the former case, they apply federal law; in the latter, they apply the law of the state in which they sit.

Simple, right? Or maybe not.

This Friday, University of Akron School of Law is hosting a conference called Erie At Eighty: Choice of Law Across the Disciplines, exploring the implications of the Erie doctrine across a variety of fields, from civil procedure to constitutional law to evidence to remedies. I will be moderating a special panel: Erie in Intellectual Property Law. Joe Miller (Georgia) will present his paper, "Our IP Federalism: Thoughts on Erie at Eighty"; Sharon Sandeen (Mitchell-Hamline) will present her paper, "The Erie/Sears-Compco Squeeze: Erie's Effects on Unfair Competition and Trade Secret Law"; and Shubha Ghosh (Syracuse) will present his paper, "Jurisdiction Stripping and the Federal Circuit: A Path for Unlocking State Law Claims from Patent."

Other IP scholars in attendance include Brian Frye (Kentucky), whose paper The Ballad of Harry James Tompkins provides a riveting, surprising, and (I think) convincing re-telling of the Erie story, and Megan LaBelle (Catholic University of America), whose paper discusses the crucial issue of whether the Erie line of cases directs federal courts sitting in diversity to apply state privilege law. All papers will be published in the Akron Law Review.

If you have written a paper that touches on the Erie doctrine's implications for intellectual property, I would really appreciate it if you would send it to me: chrdy@uakron.edu or cahrdy@gmail.com. I will link to the papers in a subsequent post in order to provide a resource for future research. Thank you!


Tuesday, September 11, 2018

Bargaining Power and the Hypothetical Negotiation

As I detail in my Boston University Law Review article, (Un)Reasonable Royalties, one of the big problems with using the hypothetical negotiation to calculate damages (aside from the fact that it strains economic rationality and has no basis in the legal history of reasonable royalties) is differences in bargaining power. The more explicit problem arises when litigants use their bargaining power to argue that the patent owner would have agreed to a lower hypothetical rate. More implicitly, bargaining power can affect royalty rates in pre-existing (that is, comparable) licenses. This gives rise to competing claims in top 14 law reviews about whether royalty damages are spiraling up or down based on the trend of comparable licensing terms.

For what it's worth, my article dodges the spiral question, but suggests that existing licenses be used only if they can be directly tied to the value of the patented technology (and thus that settlements should never be used). Patent damages experts who have read my article uniformly hate that part of it, because preexisting licenses (including settlements) are sometimes their best or even only granular source of data.

But much of this is theory. What about the data?  Gaurav Kankanhalli (Cornell Management - finance) and Alan Kwan (U. Hong Kong) have posted An Empirical Analysis of Bargaining Power in Licensing Contract Terms to SSRN. Here is the abstract:
This paper studies a new, large sample of intellectual property licensing agreements, sourced from filings by public corporations, under the lens of a surplus-bargaining framework. This framework motivates several new empirical findings on the determinants of royalty rates. We find that licensors command premium royalty rates for exclusivity (particularly in competitive industries), and for exchange of know-how. Licensors with differentiated technology and high market power charge higher royalty rates, while larger-than-rival licensees pay lower rates. Finally, using this framework, we study how the nature of disclosure by public firms affects transaction value. Firms transact at lower royalty rates when they redact contracts, preserving pricing power for future negotiations. This suggests that practitioners modeling fair value in transfer pricing and litigation contexts based on publicly-known comparables are over-estimating royalties, potentially impacting substantial cumulative transaction value.
The paper uses SEC-reported licenses (more on that below), but one clever twist is that the authors obtained redacted terms via FOIA requests, so they could both expand their dataset and see what types of terms are missing. They model transactions as follows: every licensee has a maximum it is willing to pay, and every licensor a minimum it is willing to accept. If those two ranges overlap, the parties will agree to some price in the middle that splits the surplus. Where that price is set depends on bargaining power. The authors then hypothesize what types of characteristics will affect that price, and most of their hypotheses are borne out.
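That surplus-splitting setup can be sketched in a few lines of code. This is my own simplified illustration, not the authors' model: the function name, the single bargaining-power parameter, and the numbers are all mine.

```python
def negotiated_rate(licensee_max, licensor_min, licensor_power=0.5):
    """Split the bargaining surplus between licensor and licensee.

    licensee_max:   the most the licensee is willing to pay (e.g., a royalty rate)
    licensor_min:   the least the licensor is willing to accept
    licensor_power: the licensor's share of the surplus, between 0 and 1

    Returns the agreed rate, or None if the ranges don't overlap (no deal).
    """
    if licensee_max < licensor_min:
        return None  # no zone of possible agreement
    surplus = licensee_max - licensor_min
    return licensor_min + licensor_power * surplus

# With equal bargaining power, a licensee willing to pay up to an 8% royalty
# and a licensor willing to accept as little as 2% settle at 5%; a stronger
# licensor (say, one with differentiated technology) captures more.
```

On this view, everything the authors test (exclusivity, know-how, financial condition, rivalry) operates either by shifting the endpoints of the bargaining range or by shifting the split.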

They focus on several kinds of bargaining power: contract characteristics, firm-specific characteristics, technology characteristics, and license characteristics. I'm not sure I would call all of these bargaining power, as they do; I think some relate more to the value of the thing being licensed. Technically this will affect the division of surplus, but it's not really the type of bargaining power I think about. So long as the effect on license value is clear, however, the results are helpful for use in patent cases regardless of the technical designation.

So, for example, universities, non-profits, and individuals receive lower rates because they have no credible BATNA for self-commercialization. The authors argue that this sheds light on the conventional wisdom that individuals produce less valuable inventions. Further, firms in weaker financial condition do worse, and firms with more pricing power among their rivals do better.

On the other hand, licenses including know-how or exclusivity receive higher royalties, while amendments typically lead to lower royalties (presumably due to underperformance). I don't consider this to be bargaining power, but rather added value. That said, the authors test exclusivity and find that highly competitive industries have higher royalties for exclusivity than non-competitive industries, which implies a mix of both bargaining power and value in competition.

The authors do look at technological value and find, unsurprisingly, that substitutability leads to lower rates.

The paper points to one interesting combination, though: territorial restrictions. Contracts with territorial restrictions have higher rates. You would think they would have lower rates because the license covers less. But the contrary implication here is that a territorial restriction is imposed where the owner has the leverage to impose it, and that leverage means a higher rate. That could be due to value or bargaining power, I suppose. I wonder, though, how many expert reports say that a royalty rate should be higher because the comparable license covered only one territory. Comments from readers who want to chime in would be appreciated.

There is a definite selection effect here, though, which further implies that preexisting licenses gathered via SEC filings should be treated carefully. First, the authors note a selection effect in the redactions. They find that not only are lower rates redacted, but that these redactions are driven by non-exclusive licenses, because firms want to hide their lowest willingness-to-sell (reservation) price. This finding is as valuable as the rest, in my opinion. It means, as the authors note, that any reliance on reported licenses may over-estimate royalties. It also means, in terms of my own views, that the hypothetical negotiation is not a useful way to calculate damages, because the value of the patent shouldn't change based on who is buying and selling. A second selection effect concerns not what is in the data, but what is missing from it: these are only material licenses. If a license is not material, it will not be reported, and such licenses are likely to be smaller, whether due to patent value or bargaining power.

This is a really interesting and useful paper, and worth a look.

Monday, September 3, 2018

Boundary Maintenance on Twitter

Last Saturday was cut-down day in the NFL, when rosters are shaved from 90 players down to 53. For the first time, I decided to follow the action for my team by spending time (too much time, really - the kids were monopolizing the TV with video games) watching a Twitter list solely dedicated to reporters and commentators discussing my team.

I've never used Twitter this way, but from an academic point of view I'm glad I did, because I witnessed first-hand the full microcosm of Twitter journalism. First, there were the reporters, who were all jockeying to be the first to report that someone was cut (and to confirm it with "sources"). Then there were the aggregators, sites with a lot of writers devoted to team analysis and discussion, but who on this day were simply tracking all of the cuts, trades, and so on. Ironically, the aggregators were better sources of information than the reporters' own sites, because the reporters were too busy gathering facts to publish a full list (and an accompanying article) until later in the day.

Then there were the professional commentators - journalists and semi-professional social media types who have been doing this a long time or have some experience in the sport, but who were not gathering facts. They mostly commented on transactions. Both the reporters and commentators answered fan questions. And then...there were the fans, commenting on the transactions, commenting on the reporters, commenting on the commentators, etc. This is where it got interesting.

Apparently experienced commentators don't like it when fans tell them they're wrong. They like to make clear that either a) they have been doing this a long time, or b) they have a lot of experience in the league, and therefore their opinion should not be questioned. Indeed, in one case a commentator's statement seemed so ridiculous that the "new reporter" in town made fun of it, and all the other reporters circled the wagons to say that the new guy shouldn't be questioning the other men and women on the beat, all of whom had once held his job but left for better jobs. Youch! It turns out the statement was, in fact, both wrong and ridiculous (and proven so the next morning).

This type of boundary maintenance is not new, but it is the first time I've seen it so clearly, explicitly, and unrelentingly (there is some in legal academia, which I'll discuss below). This is a blog about scholarly works, so I point you to an interesting article called The Tension between Professional Control and Open Participation: Journalism and Its Boundaries, by Seth Lewis, now a professor in the communications department at the University of Oregon. The article is published in Information, Communication & Society. It is behind a paywall, but a prepublication draft is here. Here is the abstract:
Amid growing difficulties for professionals generally, media workers in particular are negotiating the increasingly contested boundary space between producers and users in the digital environment. This article, based on a review of the academic literature, explores that larger tension transforming the creative industries by extrapolating from the case of journalism – namely, the ongoing tension between professional control and open participation in the news process. Firstly, the sociology of professions, with its emphasis on boundary maintenance, is used to examine journalism as boundary work, profession, and ideology – each contributing to the formation of journalism's professional logic of control over content. Secondly, by considering the affordances and cultures of digital technologies, the article articulates open participation and its ideology. Thirdly, and against this backdrop of ideological incompatibility, a review of empirical literature finds that journalists have struggled to reconcile this key tension, caught in the professional impulse toward one-way publishing control even as media become a multi-way network. Yet, emerging research also suggests the possibility of a hybrid logic of adaptability and openness – an ethic of participation – emerging to resolve this tension going forward. The article concludes by pointing to innovations in analytical frameworks and research methods that may shed new light on the producer–user tension in journalism.
The article includes a fascinating literature review on the sociology of journalism, and focuses on what it means to be a journalist in a world when your readers participate with you.

Bringing it back to IP for a moment (and legal academia more generally), I certainly see some of this among bloggers and tweeters. I see very little of it as a producer of content, presumably because I am always right. 😀 But I know that as a consumer I bleed into the boundaries of others, both in legal academia and elsewhere. I can't help myself - my law school classmates surely remember me as a gunner.

Many of my producer colleagues (mostly women, surprise surprise) see it much worse. Practicing lawyers tell them they don't know what they are talking about. Some may be making valid points, some not. Some are nice about it, while others are not. I'm speaking mostly of good faith boundary issues here, not trolling or harassment, which is a different animal in my mind.

I guess the real question is what to do about it. If you are in an "open" area, boundaries will get pushed. Some people welcome this, and some despise it. Some are challenged more fairly than others. I suspect that people have different ways of managing their boundaries, depending heavily on who is commenting and how. Some may ignore it, some may swat back about relative expertise, some engage with everyone, and some disengage selectively or entirely, going so far as to block and mute. I suspect it's a mix.

In any event, I don't have any policy prescriptions here. I know so little about it that I have no clue what the right answer is. I just thought I would make explicit what is usually implicit, point out an interesting article about it, and suggest that readers be mindful of boundaries and Diff'rent Strokes - what might be right for you, may not be right for some.

Friday, August 31, 2018

Maggie Chon on IP and Critical Theories

I tend to approach IP law primarily through a law-and-economics lens, but I enjoy learning about how scholars with different methodological toolkits tackle the same subject matter—especially when their work is clear and accessible. I was thus delighted to see a draft chapter by Margaret Chon, IP and Critical Methods, for the forthcoming Handbook on Intellectual Property Research (edited by Irene Calboli and Lillà Montagnani). Chon provides a concise review of critical legal theory and its application to IP law.

According to Chon, critical theory includes a critique of liberal legal theory as based on the fallacy that legal institutions fairly reflect constituents' interests (as reflected in the marketplace or ballot box). Instead, the interests of privileged or empowered social groups are over-represented, and institutions contribute to these inequalities to the extent that enduring change requires reimagining these institutions themselves. Of course, as she notes, "critical theory would not exist without some belief (however thin) that law and legal systems contain some of the tools necessary for structural transformation."

Chon argues that one need not be a self-identified Crit to engage in critical methodology, and that many IP scholars have stepped closer to critical method by moving from doctrinal to structural analysis, and by "perform[ing] this structural analysis with attention to power disparities." And she gives a number of examples of the influence of critical theory across different areas of IP.

Wednesday, August 29, 2018

Data Driven Creativity

My school started much earlier than my kids' school this year, so I spent a couple of weeks at home while the rest of the family visited relatives across the country. I am not too proud to admit that I binge-watched an obscene amount of TV during the two weeks they were gone while I was completing some writing projects. It's really the first time I have done so; while I have shows that I like, I rarely get to watch them all at once, or to pick the next one on the list in rapid succession.

So, it was with a new interest that I enjoyed The Second Digital Disruption: Data, Algorithms & Authorship in the 21st Century by Kal Raustiala (UCLA) and Chris Sprigman (NYU). A draft of the article is on SSRN, and they blogged about it in a series of posts at Volokh Conspiracy. Here is the abstract:
This article explores the intellectual property ramifications that flow from the explosive growth of mass streaming technologies. Two decades ago rising internet usage led to what we call the first digital disruption: Napster, file-sharing, and the transformation of numerous content industries, from music to news. The second digital disruption is about the age of streaming and, specifically, how streaming enables firms to harvest massive amounts of data about consumer preferences and consumption patterns. Coupled to powerful computing, this data—what Mark Cuban has called “the new gold”—allows firms such as Netflix, Amazon, and Apple to know in incredible detail what content consumers like and how they consume it. The leading edge of this phenomenon—and the primary vehicle for our examination—is the adult entertainment industry. We show how Mindgeek, the little-known parent company of Pornhub and the dominant player in pornography today, has leveraged data about viewing patterns to not only organize and suggest content but even to dictate creative decisions. We first show how the adult industry adapted to the internet and the attendant explosion of free content. That story aligns with many similar accounts of how creative industries adapt to a loss of control over IP by restructuring and recasting revenue streams. We then show how content streaming firms have used data to make decisions about content aggregation, dissemination, and investment. Finally, we consider what these trends suggest for IP theory and doctrine. A key feature is that by making creative production less risky, what we call “data-driven authorship” drives down the need for strong IP rights.
I found the discussion of how data drives what to create fascinating, and the article is well worth a read. I think the perfect example of what the authors are describing is the Netflix movie Bright, in which Will Smith plays a cop who teams up with an orc on the LA police force. The movie was critically panned: Rotten Tomatoes score, 26%. But viewers seem to like it a lot: Rotten Tomatoes audience score, 84%. Netflix is surely on to something here.

I could certainly see it playing out as I watched. I watched "The Five," a show by one of my favorite authors, Harlan Coben. So then Netflix gave me nothing but mysteries and suspense to watch, plus another show by Coben, Safe (both were great, by the way). But I'm not really a mystery show person - I like sci-fi. So, I watched one show, and then the suggestions got weird: do I like mystery? sci-fi? sci-fi mysteries? I wound up having to dig a bit for the next show.

But here's the interesting thing: the quality of the shows varied wildly, even among the genres that I liked. The writing, acting, editing, and direction mattered. I don't know about the Mindgeek and porn clips, but I will note a couple distinguishing factors. First, there is likely a...er...utilitarian factor associated with those works; people are not watching for the articles, as it were. Second, the works are much shorter; it is much easier to have a highly focused 15-25 minute clip than a 10 episode series. Even with these differences, I suspect viewers have their preferences about what they see in the different clips with the same data driven attributes.

My broader point, then, is that how we consider the effect of data driven works will depend a lot on how we view creativity. The data certainly reduces the creativity in certain major plot points, as well as the quantity of different types of works. But to some extent studios have always done this, only with rules of thumb and intuition rather than actual knowledge. In that sense, data will democratize creativity - if viewers want more women in better roles, there will be more women in better roles; no need to rely on a male studio executive's views on the matter.

Beyond selection, though, I suspect there is still room for surprise, storytelling, differentiation, and other forms of creativity. Consider Bright: write what you want, but it just has to star Will Smith, include the police, and feature orcs and elves. At the limit, too much data may constrain creativity, of course - the more you add, the less you can create.

To be clear, Raustiala and Sprigman don't say anything that contradicts my intuitions here. They make clear that creativity is on a continuum, and that data merely slides to one side. But they do question how viewers will perceive works, and it is there that I disagree with them. I suppose that we could hit that limit where everything is automated, but my gut says that despite having preferences for particular story aspects, viewers will always be able to separate the wheat from the chaff (though not the way I would - as just about every American Idol vote shows) and thus will always look for something new and different within their preferences. At least, I sure hope so.

Saturday, August 25, 2018

Yochai Benkler on Innovation & Networks

Yochai Benkler is a giant within the intellectual history of IP law; some of his work will surely end up on my Classic Patent Scholarship page if I expand it to post-2000 works. Even though I don't agree with all of his conclusions, I think IP scholars should at least be familiar with his arguments. For those who haven't read his earlier works—or who just want a refresher on his take—you might enjoy his recent review article, Law, Innovation, and Collaboration in Networked Economy and Society, 13 Ann. Rev. L. & Soc. Sci. 231 (2017). Here is the abstract:
Over the past 25 years, social science research in diverse fields has shifted its best explanations of innovation from (a) atomistic invention and development by individuals, corporate or natural, to networked learning; (b) market-based innovation focused on material self-interest to interaction between market and nonmarket practices under diverse motivations; and (c) property rights exclusively to interaction between property and commons. These shifts have profound implications for how we must think about law and innovation. Patents, copyrights, noncompete agreements, and trade secret laws are all optimized for an increasingly obsolete worldview. Strong intellectual property impedes, rather than facilitates, innovation when we understand that knowledge flows in learning networks, mixing of market and nonmarket models and motivations, and weaving of commons with property are central to the innovation process.
Note that the shift Benkler is describing is a shift both in scholars' understanding of innovation and in the nature of innovation itself—particularly due to changes in organizational structure made possible by technologies such as the internet. The optimal innovation policy 100 years ago was likely different from the optimal innovation policy in today's more networked economy. To be sure, historical innovation studies can still be quite illuminating—but it is always important to consider how applicable the conclusions are likely to be in the modern context.

Tuesday, August 21, 2018

Abstraction, Filtration, and Comparison in Patent Law

Last April, I had the good fortune to participate in a symposium at Penn Law School. The symposium gathered a variety of IP scholars to focus on the "historic" kinship between copyright and patent law. That kinship, first identified in Sony v. Universal City Studios, supposedly shows parallels between the two legal regimes. I use scare quotes because it is unclear that the kinship is either historic or real. Even so, there are some parallels, and a collection of papers about those parallels will be published in the inaugural issue of Penn's new Law & Innovation Journal.

My article is about the use of abstraction, filtration, and comparison (a distinctly copyright notion) in patent law. I have cleverly named it Abstraction, Filtration, and Comparison in Patent Law. A draft of the article is now on SSRN. Here is the abstract:
This essay explores how copyright's doctrine of abstraction, filtration, and comparison is being used in patent law, and how that use could be improved. This test, which finds its roots in the 1930s but wasn't fully developed until the 1990s, is one that defines scope for determining infringement. The copyrighted work is abstracted into parts, from ideas at the highest level to literal expression at the lowest. Then, unprotected elements are filtered out. Finally what remains of the original work is compared to the accused work to determine if the copying was illicit.
This sounds far removed from patent law, but there is a kinship, though perhaps one that is not so historic and a bit hidden. The essence of the test is determining protectable subject matter. These same needs permeate patent law as well. This essay explores how the test is implicitly used and should be explicitly used.
With design patents, the test might apply as it does in copyright, with functional elements being filtered out during infringement. Current precedent allows for this filtering, but not clearly or consistently. With utility patents, the abstraction, filtration, and comparison happen earlier, during the test for patentable subject matter. Here, the comparison is with what is conventional or well known. The essay concludes by discussing why the application is different for design and utility patents.
I think the article is interesting and brings some useful insights into how we should think about patentable subject matter, but you'll have to be the judge.

Tuesday, August 14, 2018

Use Based Copyright Terms

I didn't blog last week because, well, I was at Disneyland. But I love IP, and when you're a hammer, everything is a nail. So, I couldn't help but think as I looked at the gigantic Mickey Mouse on the Ferris wheel that things are going to start getting messy when the copyright in Mickey runs out.

It occurs to me that serial, long-term uses of copyrighted works are different from one-time publications. To the extent that copyright is intended to incentivize investment in creative works, losing protection over time can limit the incentive to develop quality long-term work. I'm not just talking about Mickey: Superman (with the additional complication of rights clawback) and other serial comics create issues. Star Trek is 50, Rocky and Star Wars are 40, and even Jurassic Park is 25 years old. The solution we got to this problem, a longer term for everything, was not the right one. A better solution is that terms should last as long as copyrights are in use, plus a few years. Works that are simply "sold" without any new derivative work would be capped, so works without improvement could not last forever.

Now, this is not to say there aren't costs to protecting copyrights while they are still in use. There is a path dependency that can reduce incentives to come up with new works (in other words, bad sequels instead of new creativity). There is also value associated with the public being able to use works in their own ways.

I'm personally not worried about either of these. On the first, there are plenty of incentives for new entrants to create new works (we got Star Trek, then Star Wars, then Battlestar Galactica (I and II), and now the Expanse), and even serial works become stale after a while (there was no Rocky 50, as some parodies predicted). On the second, I think it is inconsistent with the first concern to worry about path dependence while also worrying that others should be able to use the works. Of course, fresh eyes can bring new ideas to the expression, but hopefully the original owners do that. At this point, non-utilitarian concerns come into play. As between a party who has invested in making a work valuable over a long period of time and a party who would like to use that value, I side with the investor and say newcomers can create their own new value. I realize that many disagree with me on this point. That said, I think there are some noncompetitive uses - fan fiction, say - that can bring new ideas and allow some new works.

Note that a use-based term cuts both ways. As Paul Heald has demonstrated, there is a significant drop in availability for older books that are still within the copyright term. A use-based rule would either end the term, or perhaps create a commercialization bounty similar to that proposed by Ted Sichelman for patents.

A great idea, right? Except I figured there was no way that nobody else had looked at Mickey and thought the same thing. So, at the IP Scholars Conference last week (which was great, by the way), I asked my colleague Shyam Balganesh from Penn about it, and he didn't even blink before saying that Landes and Posner wrote an article 15 years ago called Indefinitely Renewable Copyright, located here.

As you would expect from these two authors, they detail the costs and benefits of copyright terms, and they provide empirical evidence showing that renewable copyright terms led to very few renewals. The primary divergence from my idea is that I would allow a challenge based on lack of use, whereas Landes and Posner seem to assume that use is synonymous with registration (as their data shows). But I can imagine times when parties renew but then do not use their works. Thus, the system should look a bit more like trademark law.

I've done no literature review, so it's entirely possible that others have written about this. If you wrote or know of such an article, feel free to pass it along, and I'll add it here for posterity.

Friday, August 3, 2018

#IPSC18 Preview: General IP

This week I've been previewing all 140+ abstracts for the 18th Annual IP Scholars Conference next week at Berkeley, with patents & innovation on Monday, copyright on Tuesday, trademarks on Wednesday, and design/trade secrets/publicity yesterday. Here are all the remaining panels, in which multiple areas of IP are combined (either in individual papers or across the panel). Looking forward to seeing everyone next week!

Thursday, August 2, 2018

#IPSC18 Preview: Design, Trade Secrets, and Right of Publicity

To get ready for IP scholar speed dating at Berkeley next week, I've previewed the panels focused on patents and innovation, copyright, and trademarks. Today: design, right of publicity, and trade secrets (including some notes on other panels where you can also find papers on these topics).

Wednesday, August 1, 2018

#IPSC18 Preview: Trademarks

The 18th Annual IP Scholars Conference is Aug. 9-10 at Berkeley Law. Monday I previewed the eighteen panels primarily related to patents and innovation, and yesterday I previewed the six panels related to copyright. There are only two trademark-focused panels, and I didn't see any trademark-focused papers on general IP panels.

Tuesday, July 31, 2018

#IPSC18 Preview: Copyright

Yesterday I previewed the panels on patents and innovation at next week's IP Scholars Conference at Berkeley Law. Here are the copyright-focused panels:

Monday, July 30, 2018

#IPSC18 Preview: Patents & Innovation

The 18th Annual IP Scholars Conference is next week (Aug. 9-10) at Berkeley Law, and it includes over 140 academic talks given in six parallel tracks. It's not a great format for deep substantive engagement, but it's my favorite conference for getting an overview of what the IP academic community is working on. Of course, you can only see one-sixth of the projects, so if you want a taste of everything: I just read all the abstracts for this year's conference and wrote one sentence on each of them.

Here are all the panels that seem primarily focused on patents and innovation; I'll post about other IP areas (including panels combining patents with other areas of IP) in the coming days. For coauthored papers, the presenting author is listed first.

The Real World Impact of the Copyright Registration Prerequisite

Just before the summer recess, the Supreme Court snuck in a certiorari grant that I don't think has received much attention in proportion to its importance--Fourth Estate Public Benefit Corp. v. Wall-Street.com LLC. The issue is seemingly simple: before filing a lawsuit, a copyright owner must register the copyright. But what does it mean to register the copyright? Must the owner simply file the application, or actually receive the registration certificate?

I'd say this question is one of the most practically important IP questions the Court has faced in the last decade. When I was in active practice, I would estimate that a quarter to a third of our clients did not have a registration at the time they wanted to sue, and we relied on the Ninth Circuit's permissive "application is enough" rule to get a case filed (and sometimes seek injunctive relief). The alternative was to file and wait, sometimes months or even more than a year, to get a registration. (I've read that pendency is now about six to eight months). Alternatively, one can pay $800 for an expedited registration within 10 days.

Why might someone not file a registration well in advance of suing? First, because they don't have to. The post-Berne Convention adoption amendments from 1989 allow copyright to vest from the time of fixation. Indeed, in order to maintain compliance with Berne, the pre-filing registration requirement applies only to U.S. works. Owners of foreign works may sue at will--more on this later.

More practically, there are plenty of reasons why one might not file. In an era of mass digital photography, it would be ridiculously expensive to register every work in case one was infringed; it is far more efficient to see if anyone infringes, and then register that work. In software, new versions are created all the time--almost literally so in software as a service platforms. It would be impossible to file a new derivative work registration for every single released version, especially for open source (though I bet Microsoft does it).

As a result, the registration requirement would become a hammer that would keep rightful owners from bringing suit. The Supreme Court even recognized this several years ago in Reed Elsevier, Inc. v. Muchnick. In that case, a class of journalists filed suit over the transfer of their print works into electronic databases. Some putative class members objected to a settlement, but they had not registered. The Court ruled that registration was not jurisdictional. Does this mean that one can apply and sue, so long as registration occurs before any final determination?

I won't run through the pro and con arguments in detail, as arguments can be made on each side from different interpretive points of view. The statute clearly states that registration is required. But another section states that registration is effective from the date of application. But another part states that one may sue if the registration has been denied, which implies that registration is not complete until accepted. But then one wonders how long an applicant must wait until there is an assumption that the work has been "pocket denied," especially when registration is a ministerial act. But then the copyright office might argue that registration is not a ministerial act. And so forth. But the outcome of the arguments will have a real effect on real people and businesses.

I'd like to end with the challenge not made in the case: equal protection. While a couple commentators here and there have mentioned this problem with the dual registration rules, I can find no case with "411(a) & 'equal protection'" as search terms. Requiring a separate hurdle for some works and not others is about as unequal as I can think of. It is unclear why SAP can file suit immediately, but Oracle may not. I don't know if the Supreme Court can reach this issue as part of its interpretive determination (it's not an issue and it wasn't briefed), but I hope it does.

Wednesday, July 25, 2018

NBER Summer Institute 2018: Innovation

Last week I was a discussant at the Innovation section of the 2018 NBER Summer Institute (full schedule here), which I highly recommend to scholars interested in the economics of innovation. The quality of the papers and the discussion was pretty uniformly high. There were a few examples of the insularity of economics, such as remarks about topics that "no one has studied" that have been studied by legal scholars, but I think this just illustrates the benefits of having scholars familiar with different literatures at disciplinary conferences.

Here are links and brief summaries of the innovation-related papers. (There was also a great panel discussion on gender and academic credit, which I might post about separately at some point.)

Monday, July 23, 2018

What Drives Product Companies to Sue?

There are many studies of patent litigation, including the reasons that firms litigate - I have worked on some myself. Much of it is really helpful information, but all of the studies lack one key component: the patents that get litigated are highly selected. They are selected for a) the firms that litigate (practicing v. non-practicing), b) the patents that are litigated (individual, portfolio, lead), and c) the cases that are litigated to judgment (default, settlement, summary judgment, trial).

In the realm of which firms and patents litigate, most of the studies have looked at the litigation level, comparing characteristics of patents and technology with samples of those patents and technologies that were not litigated. This is helpful information, but it certainly doesn't tell the whole story. So, Dirk Czarnitzki and Kristof Van Criekingen (KU Leuven Managerial Economics) have used survey data from Belgian firms to better understand which firms litigate. A draft of their paper New Evidence on Determinants of IP Litigation: A Market-Based Approach is posted on SSRN. Here is the abstract:
We contribute to the economic literature on patent litigation by taking a new perspective. In the past, scholars mostly focused on specific litigation cases at the patent level and related technological characteristics to the event of litigation. However, observing IP disputes suggests that not only technological characteristics may trigger litigation suits, but also the market positions of firms, and that firms dispute not only about single patents but often about portfolios. Consequently, this paper examines the occurrence of IP litigation cases in Belgian firms using the 2013 Community Innovation Survey with supplemental information on IP litigation and patent portfolios. The rich survey information regarding firms’ general innovation strategies enables us to introduce market-related variables such as sales with new products as well as sales based mainly on imitation and incremental innovation. Our results indicate that when controlling for firms’ IP portfolio, the composition of turnover in terms of innovations and imitations has additional explanatory power regarding litigation propensities. Firms with a high turnover from innovations are more likely to become plaintiffs in court. Contrastingly, firms with a high turnover from incremental innovation and imitation are more likely to become defendants in court, and, moreover, are more likely to negotiate settlements outside of court.
The paper itself is relatively straightforward and the results are unsurprising: firms that seem to rely heavily on big innovation sue more, and firms that "imitate" or make incremental innovations tend to get sued more.

I'm not sure what to make of the finding that defendants who imitate are more likely to settle out of court (patent portfolio quality being held equal). I suppose that defendants who are making their own big innovations are more likely to challenge validity or argue noninfringement. Then again, the study finds that imitator defendants are more likely to seek patent invalidity, so it may be that either a) they settle when they cannot win the challenge, or b) innovator defendants rely more on noninfringement.

I suppose that my primary critique is not so much with the empirical method but with the literature review. I think the discussion could have been informed a bit by reference to some of the legal literature in this area. I realize that most economists see law reviews as articles non grata due to lack of peer review, but there's been plenty of decent enough work in this area to merit comment. For example, this draft argues that it is the first to consider out of court settlements, but Lemley, Richardson and Oliver circulated a draft of comprehensive survey results in 2017. Similarly, the article discusses patent portfolios in enforcement, but doesn't mention any of the several legal articles focusing on these dynamics. This is a small point, but an important one. I think legal scholars should look to the economics literature much more often than they do, and I think economic research wouldn't hurt by doing the opposite every once in a while.

In any event, this is an interesting paper that adds new information about how we should think about what drives competitive company litigation.

Friday, July 20, 2018

The Trade Secret-Contract Interface

Deepa Varadarajan's new article, The Trade Secret-Contract Interface, published in the Iowa Law Review, explores the role of contracts in trade secret law. This article returns to an issue that remained unresolved following rich exchanges between Robert Bone and other scholars such as Michael Risch and Mark Lemley. Varadarajan's article is a welcome follow-up.

Monday, July 16, 2018

What do Generic Drug Patent Settlements Say about Patent Quality?

An interesting study about Orange Book patents challenged both under Hatch-Waxman and Inter Partes Review caught my eye this week, but perhaps not for the ordinary reasons. One of the hot topics in drug patent challenges today is reverse payments: when the patentee pays the generic to stop a challenge. The Supreme Court has ruled that these payments can constitute antitrust violations. Though the drug companies give reasons, I'll admit that I've always been skeptical of these types of payments.

One of the key questions is whether the patent was going to survive. Most seem to assume that if a company pays to settle, then the patent was likely going to be invalidated. That's where the draft, Maintaining the Balance: An Empirical Study on Inter Partes Review Outcomes of Orange Book-Listed Drug Patents and its Effect on Hatch-Waxman Litigation, by Tulip Mahaseth (a recent Northwestern Law grad) comes in. Here is the abstract from SSRN:
The Hatch-Waxman Act intended to strike a delicate balance between encouraging pioneer drug innovation and promoting market entry of affordable generic versions of pioneer drugs by providing a streamlined pathway to challenge validity of Orange Book patents in federal district courts. In 2012, the America Invents Act introduced Inter Partes Review (IPR) proceedings which provide a faster, cheaper pathway to challenge Orange Book patents than Hatch-Waxman district court litigation. IPRs also have a lower evidentiary burden of proof and broader claim construction standard, which should make it easier, in theory, to obtain patent invalidation in IPRs as compared to Hatch-Waxman litigation. This empirical study on IPR outcomes of Orange Book patents in the past six years shows that both generic manufacturers and patent owners obtain more favorable final decisions in IPRs as compared to their Hatch-Waxman litigation outcomes because the rate of settlement in IPRs is much lower than in Hatch-Waxman litigation. Moreover, generic manufacturers do not appear to be targeting Orange Book patents in IPRs during their drug exclusivity period. Only 2 out of more than 400 IPRs against Orange Book patents were filed by generic petitioners during the patents’ New Chemical Entity exclusivity period. About 90% of the 230 Orange Book patents challenged in IPR proceedings were also challenged in Hatch-Waxman litigation. It is likely that generic manufacturers are not deterred from Hatch-Waxman litigation because of the lucrative 180-day exclusivity period, which gives the first generic filer 180 days to exclusively market their generic version without competition from other generics when the Orange Book drug patent is successfully invalidated in a subsequent district court proceeding. Therefore, IPR proceedings do not appear to be disrupting the delicate balance sought by the Hatch-Waxman Act. 
Instead, the IPR process has provided generic manufacturers a dual track option for challenging Orange Book patents by initiating Hatch-Waxman litigation in district courts and also pursuing patent invalidity in IPRs before the Patent Trial and Appeal Board, which has reduced rate of settlements resulting in more patents being upheld and invalidated.
There's a lot of great data in this paper, comparing Orange Book IPRs with non-Orange Book IPRs, including comparison of win rates and settlement rates.

But I want to focus on one seemingly minor point: as the number of IPRs has increased, the rate of settlement has decreased. And, more important, the decreasing rate of settlement has led to more invalidation and more affirmance of patents.

This result gives a nice window into how we might view settlements. Traditional Priest-Klein analysis says that this is exactly what we should see - that the previously settled cases were 50/50. But proving this is harder, and this data set would allow for a nice differences-in-differences analysis in future work.
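One standard version of the Priest-Klein selection logic can be shown with a toy simulation (entirely invented, not the paper's data): if parties settle the predictable cases, the cases that actually reach decision split close to evenly between the two sides.

```python
import random

random.seed(0)

def decided_win_rate(n_cases=100_000, settle_margin=0.3):
    """Toy Priest-Klein selection (my sketch, not the paper's model): each
    case has a true probability the patent survives; parties settle the
    predictable cases, so only those near 50/50 reach a decision."""
    wins = total = 0
    for _ in range(n_cases):
        p = random.random()  # true probability the patentee wins
        if abs(p - 0.5) < settle_margin:  # outcome too uncertain to settle
            total += 1
            wins += random.random() < p  # adjudicated outcome
    return wins / total

print(round(decided_win_rate(), 2))  # decided cases split close to 50/50
```

A real differences-in-differences analysis of the IPR data would of course be far richer than this; the sketch just shows why a roughly even split among decided cases is consistent with selection rather than with uniformly weak patents.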

Additionally, a split among outcomes implies that the settlements were not necessarily because the patentee believed the patent was at risk.  If anti-competitive settlements were ruling the day, I would have predicted that most of the (recent) non-settlements would have resulted in patent invalidation. Then again, it is possible that a 50% chance was risky enough to merit a reverse payment settlement in the past. Regardless of how one comes out on this issue, this study provides some helpful details for the argument.

Tuesday, July 10, 2018

How did TC Heartland Affect Firm Value?

In Recalibrating Patent Venue, Colleen Chien and I did a nationwide study of forum shopping in patent cases (shocker - everybody did it, and not just in Texas), and predicted that many patent cases would shift from the Eastern District to the District of Delaware. And, lo, it has come to pass. Delaware is super busy. This has been good for us at Villanova (only 30 miles away from the court), as our students are getting some great patent experience in externships and internships.

But how much did firms value not being sued in Texas? The TC Heartland case is a clear shock event, so an event study can measure this. In Will Delaware Be Different? An Empirical Study of TC Heartland and the Shift to Defendant Choice of Venue, Ofer Eldar (Duke Law) and Neel Sukhatme (Georgetown Law) examine this question. The article is forthcoming in Cornell Law Review and a draft is on SSRN. Here is the abstract:
Why do some venues evolve into litigation havens while others do not? Venues might compete for litigation for various reasons, such as enhancing their judges’ prestige and increasing revenues for the local bar. This competition is framed by the party that chooses the venue. Whether plaintiffs or defendants primarily choose venue is crucial because, we argue, the two scenarios are not symmetrical.
The Supreme Court’s recent decision in TC Heartland v. Kraft Foods illustrates this dynamic. There, the Court effectively shifted venue choice in many patent infringement cases from plaintiffs to corporate defendants. We use TC Heartland to empirically measure the impact of this shift using an event study, which measures how the stock market reacted to the decision. We find that likely targets of “patent trolls”— entities that own and assert patented inventions but do not otherwise use them—saw their company valuations increase the most due to TC Heartland. This effect is particularly pronounced for Delaware-incorporated firms. Our results match litigation trends since TC Heartland, as new cases have dramatically shifted to the District of Delaware from the Eastern District of Texas, previously the most popular venue for infringement actions.
Why do investors believe Delaware will do better than Texas in curbing patent troll litigation? Unlike Texas, Delaware’s economy depends on attracting large businesses that pay high incorporation fees; it is thus less likely to encourage disruptive litigation and jeopardize its privileged position in corporate law. More broadly, we explain why giving defendants more control over venue can counterbalance judges’ incentives to increase their influence by encouraging excessive litigation. Drawing on Delaware’s approach to corporate litigation and bankruptcy proceedings, we argue that Delaware will compete for patent litigation through an expert judiciary and well- developed case law that balances both patentee and defendant interests.
As I discuss below, I have a like/dislike reaction to this paper.
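The core of an event-study design can be sketched in a few lines (my own illustration with invented data, not the authors' specification): estimate the firm's relationship to the market before the event, then ask how far the event-day return deviates from what that relationship predicts.

```python
import numpy as np

def abnormal_return(firm_ret, market_ret, event_idx, est_window=60):
    """Minimal market-model event study: fit the firm's returns to the
    market over a pre-event window, then measure how far the event-day
    return deviates from the model's prediction."""
    est = slice(event_idx - est_window, event_idx)
    beta, alpha = np.polyfit(market_ret[est], firm_ret[est], 1)
    return firm_ret[event_idx] - (alpha + beta * market_ret[event_idx])

# invented illustrative data: a firm whose price jumps 3% on decision day
rng = np.random.default_rng(0)
mkt = rng.normal(0.0, 0.01, 100)
firm = 1.2 * mkt + rng.normal(0.0, 0.005, 100)
firm[80] += 0.03  # hypothetical positive market reaction to the ruling
print(abnormal_return(firm, mkt, 80))  # positive, near the injected 3%
```

The paper's actual design aggregates abnormal returns across many firms and compares likely troll targets to other firms; this only shows the mechanics for a single hypothetical firm.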

Monday, June 25, 2018

The False Hope of WesternGeco

The Supreme Court issued its opinion in WesternGeco last week. The holding (7-2) was relatively straightforward: if an infringer exports a component in violation of 35 USC 271(f)(2) (that is, the component has no substantial noninfringing use), then the presumption of extraterritoriality will not bar damages that occur overseas. And that's about all it ruled. It left harder questions, like proximate cause, for another day.

I spent the end of the week and weekend reading commentary on the case (and tussling a bit on Facebook and Twitter). A couple blog posts worth checking out are Tim Holbrook's and Tom Cotter's. I had just a few thoughts to add.

Monday, June 18, 2018

Evidence on Patent Disclosure via Depository Libraries

When I first started practice, the place to go for patents was the Patent Depository Library at the Sunnyvale Public Library. Not only did they have copies of all the patents, they had other disclosures, like the IBM Technical Disclosure series. For those who wonder whether people actually read patents, I can attest that I never went to that library and found it empty. Many people, mostly individual inventors who did not want to pay for Delphion or some other electronic service, went there to look at the prior art. Sadly, the library ceased to be at the end of 2017. Widespread free availability on the Internet, plus a new USPTO center in San Jose, siphoned off all the traffic.

Rather than relying on my anecdotal evidence, consider a new NBER paper that examines the role of Patent Depository Libraries in patent disclosure. Jeffrey Furman (Boston U. Strategy & Policy Dept), Markus Nagler, and Martin Watzinger (both of Ludwig Maximilian U. in Munich) have posted Disclosure and Subsequent Innovation: Evidence from the Patent Depository Library Program to NBER's website (sorry, it's a paywall unless you've got .gov or .edu rights). The abstract is here:
How important is information disclosure through patents for subsequent innovation? Although disclosure is regarded as essential to the functioning of the patent system, legal scholars have expressed considerable skepticism about its value in practice. To adjudicate this issue, we examine the expansion of the USPTO Patent and Trademark Depository Library system between 1975 to 1997. Whereas the exclusion rights associated with patents are national in scope, the opening of these patent libraries during the pre-Internet era yielded regional variation in the costs to access the technical information (prior art) disclosed in patent documents. We find that after a patent library opens, local patenting increases by 17% relative to control regions that have Federal Depository Libraries. A number of additional analyses suggest that the disclosure of technical information in the patent documents is the mechanism underlying this boost in patenting: the response to patent libraries is significant and of important magnitude among young companies, library opening induces local inventors to cite more geographically distant and more technologically diverse prior art, and the library boost ceases to be present after the introduction of the Internet. We find that library opening is also associated with an increase in local business formation and job creation, which suggests that the impact of libraries is not limited to patenting outcomes. Taken together, our analyses provide evidence that the information disclosed in patent prior art plays an important role in supporting cumulative innovation.
The crux of the study is the match to other, similar areas with Federal Depository (but not patent) Libraries. The authors acknowledge that the opening of a patent library might well be a leading indicator of expected future patenting, but the authors discount this by arguing that the Patent Libraries would have had to somehow predict the exact year of increased patenting, and then apply in advance of that date and get approved just in time. The odds of this seem low, especially when the results are localized to within 15 miles of the library (and no further).
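The comparison behind a finding like the 17% boost is, in spirit, a difference-in-differences: patenting growth in library regions net of growth in the matched control regions. A toy calculation with invented counts:

```python
def did_growth(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences in growth terms: treated-region patenting
    growth minus control-region growth. Counts are invented, not the paper's."""
    return (treat_post - treat_pre) / treat_pre - (ctrl_post - ctrl_pre) / ctrl_pre

# hypothetical annual patent counts before/after a library opening
print(round(did_growth(200, 250, 180, 195) * 100, 1))  # 16.7 (percent)
```

The paper's actual estimation is a regression with controls and fixed effects; this just shows the arithmetic of the comparison.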

The first core finding, that patenting increased, is ambiguous normatively. The authors discuss enhanced innovation, but equally likely alternatives are that people just got excited about patenting or that innovation already occurring was more easily patented. That said, the authors find the additional patents were of similar quality, which implies the new patenting wasn't simply frivolous.

The second finding is more important: that the types of citations and disclosures changed (and that those changes disappeared when patents were more readily available on the Internet). This finding implies that somebody was reading these patents. The question is who. A followup study looking at how the makeup of inventorship changed would be interesting. Were the additional grants solo inventors or large companies? Who used these libraries?

Even without answering this question, this study was both useful and interesting, as well as a bit nostalgic.

Monday, June 11, 2018

Measuring Patent Thickets

Those interested in the patent system have long complained of patent thickets as a barrier to efficient production of new products and services. The more patents in an area, the argument goes, the harder it is to enter. There are several studies that attempt to measure the effect of patent thickets, with some arguing that private ordering can ease thicket problems. I'd like to briefly point out another (new) one. Charles deGrazia (U. London, Royal Holloway College) and Jesse Frumkin and Nicholas Pairolero (both of the USPTO) have posted a new draft on SSRN, called Embracing Technological Similarity for the Measurement of Complexity and Patent Thickets. Here is the abstract:
Clear and well-defi ned patent rights can incentivize innovation by granting monopoly rights to the inventor for a limited period of time in exchange for public disclosure of the invention. However, when a product draws from intellectual property held across multiple firms (including fragmented intellectual property or patent thickets), contracting failures may lead to suboptimal economic outcomes (Shapiro 2000). Researchers have developed several measures to gauge the extent and impact of patent thickets. This paper contributes to that literature by proposing a new measure of patent thickets that incorporates patent claim similarity to more precisely identify technological similarity, which is shown to increase the information contained in the measurement of patent thickets. Further, the measure is universally computable for all patent systems. These advantages will enable more accurate measurement and allow for novel economic research on technological complexity, fragmentation in intellectual property, and patent thickets within and across all patent jurisdictions.
The authors use natural language processing to determine overlap in patent claims (and just the claims, arguing that's where the thicket lies) for both backward and forward citations in "triads" - patents that all cite each other. Using this methodology, they compare their results to other attempts to quantify complexity and find greater overlap in more complex technologies - a sign that their method is more accurate. Finally, they validate their results by regressing thickets against examination characteristics, showing that the examination factors more likely to come from thickets (e.g., pendency) are correlated with greater thickets.
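To give a flavor of the claim-overlap idea, here is a crude bag-of-words cosine similarity over hypothetical claim snippets; the authors' NLP approach is far more sophisticated, but the intuition is the same: claims sharing technical vocabulary score higher.

```python
from collections import Counter
import math

def claim_similarity(a, b):
    """Bag-of-words cosine similarity between two claim texts -- a crude
    stand-in for the paper's claim-similarity measure."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b)

# invented claim snippets
battery = "a battery pack comprising a plurality of lithium cells"
module = "a battery module comprising a plurality of lithium ion cells"
coffee = "a method of brewing coffee using pressurized water"
print(claim_similarity(battery, module) > claim_similarity(battery, coffee))  # True
```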

This is an interesting study. The use of citations (versus technological class) will always be a limitation because not every patent in a thicket winds up being cited by others. However, the method used here (using forward and backward citations) is better than the alternative, which is using only blocking prior art.

The real question is what to do with all this information. Can it be applied beyond mere study of which areas have thickets? I suppose it could be helpful for portfolio purchases, and maybe to help decisions about whether to enter into a new technology.

Wednesday, June 6, 2018

A Couple Thoughts on Apple v. Samsung (part ?100?)

I've done a few interviews about the latest Apple v. Samsung design patent jury verdict, but journalistic space means I only get a couple sentences in. So, I thought I would lay out a couple points I see as important. We'll see if they hold up as predictions.

There's been a lot written about the case, so I won't rehash the epic story. Here's the short version. The design patent law affords the winning plaintiff all of the profits on the infringing article of manufacture. The Supreme Court ruled (reversing about 100 years of opposite practice) that the article of manufacture could be less than the entire accused device for sale. Because the original jury instructions did not consider this, the Court remanded for a determination of what the infringing article of manufacture was in this case (the design patents covered the shape of the phone and the default screen). The Federal Circuit remanded, and the District Court decided that, yes, in fact, the original jury instructions were defective and ordered a retrial of damages.

The District Court adopted the Solicitor General's suggested test to determine what the article of manufacture was, determined that under that test it was a disputed fact question, and sent it to the jury. Apple asked for $1 billion. Samsung asked for $28 million. The jury awarded $533 million, which is more than $100 million above the damages awarded before the Supreme Court ruled.

After the trial, one or more jurors stated that the entire phone was the article of manufacture because you can't get the screen without the rest of the phone. I suppose the half a billion reflects a deduction of expenses that Apple didn't want deducted.

So, here are my points:

Tuesday, May 29, 2018

New Ways to Determine Patent Novelty

Jonathan Ashtor (now at Paul, Weiss) has completed a few quality empirical studies in the past. His new foray is a creative way to measure novelty. It's on SSRN, and the abstract is here:
I construct a measure of patent novelty based on linguistic analysis of claim text. Specifically, I employ advanced computational linguistic techniques to analyze the claims of all U.S. patents issued from 1976-2014, nearly 5 million patents in total. I use the resulting model to measure the similarity of each patented invention to all others in its technology-temporal cohort. Then, I validate the resulting measure using multiple established proxies for novelty, as well as actual USPTO Office Action rejections on grounds of lack of novelty or obviousness. I also analyze a set of pioneering patents and find that they have substantially and significantly higher novelty measures than other patents.
Using this measure, I study the relationship of novelty to patent value and cumulative innovation. I find significant correlations between novelty and patent value, as measured by returns to firm innovation and stock market responses to patent issuance. I also find strong correlations between novelty and cumulative innovation, as measured by forward citations. Furthermore, I find that patents of greater novelty give rise to more important citations, as they are more frequently cited in Office Action rejections of future patents for lack of novelty or obviousness. I also investigate how novelty relates to the USPTO examination process. In particular, I find that novelty is an inherent feature of a patented invention, which can be measured based on the claim text of either an issued patent or an early-stage patent application.
Next, I use this measure to analyze the characteristics of novel patents. I find that novelty is an effective dimension along which to stratify patents, as key patent characteristics vary significantly across the distribution of novelty measures. Moreover, novel patents are closely linked to basic scientific research, as measured by public grant funding and citations to non-patent scientific literature.
Finally, I use this measure to observe trends in novelty over a forty-year timespan of American innovation. This reveals a noticeable, albeit slight, trend in novelty in certain technology fields in recent years, which corresponds to technological maturation in those sectors.
I'm skeptical of measures of patent quality based on claim language alone, but I like how he has used office actions to validate the measure. People will have to study this to see how it holds up, but I think it's an interesting and creative first step toward objectively judging quality.
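To make the mechanics concrete, here is a minimal sketch of how a claim-text similarity measure of this general kind could work. This is an illustration only, using plain bag-of-words cosine similarity rather than Ashtor's far more sophisticated linguistic model, and the claim texts and the "novelty = one minus maximum cohort similarity" formulation are my own hypotheticals:

```python
import math
from collections import Counter

def tf_vector(text):
    # Simple bag-of-words term frequencies (a crude stand-in for the
    # advanced computational linguistic techniques in the paper).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty(claim, cohort_claims):
    # Hypothetical formulation: novelty is one minus the maximum
    # similarity to any claim in the technology-temporal cohort.
    if not cohort_claims:
        return 1.0
    v = tf_vector(claim)
    return 1.0 - max(cosine(v, tf_vector(c)) for c in cohort_claims)

cohort = ["a method of charging a battery", "a battery charging circuit"]
print(round(novelty("a method of charging a battery", cohort), 2))  # identical claim: 0.0
print(novelty("quantum dot display apparatus", cohort))  # no overlap: 1.0
```

A claim identical to one in its cohort scores zero novelty; a claim sharing no terms with its cohort scores one. Validating such a measure against office action rejections, as Ashtor does, is what turns an arbitrary text statistic into something arguably meaningful.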

Thursday, May 24, 2018

Brian Soucek on Aesthetic Judgment in Law

As noted in my last post, one of the most quoted lines in copyright law is from Justice Holmes's 1903 opinion in Bleistein: "It would be a dangerous undertaking for persons trained only to the law to constitute themselves final judges of the worth of pictorial illustrations." This aesthetic neutrality principle has found purchase far beyond copyright law. But in a compelling new article, Aesthetic Judgment in Law, Professor Brian Soucek challenges this dogma: "Almost no one thinks the government should decide what counts as art or what has aesthetic value. But the government often does so, and often, it should." Soucek's article may have flown under the radar for most IP scholars because he does not typically focus on copyright law, but it is well worth a look.

Tuesday, May 22, 2018

Examining the Role of Patents in Firm Financing

Just this morning, an interesting new literature review came to my mailbox via SSRN. In Is There a Role for Patents in the Financing of New Innovative Firms?, Bronwyn Hall (Berkeley economics) provides an extremely thorough, extremely helpful literature review on the subject. It's on SSRN, and the abstract is here:
It is argued by many that one of the benefits of the patent system is that it creates a property right to invention that enables firms to obtain financing for the development of that invention. In this paper, I review the reasons why ownership of knowledge assets might be useful in attracting finance and then survey the empirical evidence on patent ownership and its impact on the ability of firms to obtain further financing at different stages of their development, both starting up and after becoming established. Studies that attempt to separately identify the role of patent rights and the underlying quality of the associated innovation(s) will be emphasized, although these are rather rare.
This paper caught my eye for a few reasons.

Sunday, May 20, 2018

Barton Beebe on Bleistein

Barton Beebe’s recent article, Bleistein, the Problem of Aesthetic Progress, and the Making of American Copyright Law, was already highlighted on this blog by Shyamkrishna Balganesh, but I wanted to add a few thoughts of my own because I really enjoyed reading it—it is a richly layered dive into the intellectual history of U.S. copyright law, and a wonderful piece to savor on a weekend.

In one sense, this is an article about one case’s role in U.S. copyright law, but it uses that case to tackle a fundamental question of copyright theory: what does it mean “to promote the Progress”? Beebe’s goal is not just to correct longstanding misunderstandings of Bleistein; as I understand it, his real point is that we can and should “assess[] aesthetic progress according to the simple propositions that aesthetic labor in itself is its own reward and that the facilitation of more such labor represents progress.” He thinks Justice Holmes’s invocation of “personality” in Bleistein represents a normatively attractive “third way” between judges assessing aesthetic merit and simply leaving this judgment to the market—that aesthetic progress is shown “by the mere fact that someone was willing to make the work, either for sale or otherwise, and that in making it, someone had invested one’s personality in the work.”

This personality-centered view of copyright seems similar to the Hegelian personality theory that was drawn into IP by Peggy Radin and elaborated by Justin Hughes, though at times it seems more like Lockean theories based on the author’s labor. I think he could have done more to explain how his theory relates to this prior literature, and also how it’s different from a utilitarian theory that recognizes the value creators get from creating (à la Jeanne Fromer’s Expressive Incentives). In any case, I think Beebe’s take is interesting, particularly with the connection he draws to John Dewey’s American pragmatist vision of aesthetic progress.

Wednesday, May 16, 2018

Mark McKenna: Trademark Counterfeiting And The Problem Of Inevitable Creep

One of my favorite events at Akron Law this past school year was hearing Professor Mark McKenna deliver the Oldham Lecture on his fascinating paper, Criminal Trademark Enforcement And The Problem Of Inevitable Creep.  The completed article, forthcoming in the Akron Law Review, is now available on SSRN.

The story, in McKenna's telling, is simple. There is a criminal remedy for trademark "counterfeiting" because, most people would agree, using an identical trademark for goods or services that are identical to the trademark owner's is an economically and morally worse act than ordinary trademark infringement. A modern-day example of this atrocious crime is the company that has been hawking dysfunctional "Philips Sonicare" toothbrush replacement heads on Amazon.com. Consumers buy them thinking they are the real thing, and are sorely disappointed when the brush heads do not work. But to deserve the classification as criminal, as a legal matter, the act of counterfeiting must be proven "beyond a reasonable doubt" to fit within the exact text of the relevant statute, the Trademark Counterfeiting Act. According to McKenna, courts have veered from the statutory text, and are instead expanding criminal counterfeiting beyond Congressional authorization. Thus, the article's reference in its title to "inevitable creep."

There are parts of this well-done article with which people are likely to agree, and other parts with which people are likely to strongly disagree.

Tuesday, May 15, 2018

A Focus on Innovators Instead of Innovations

I noticed this week that my sometimes co-author Colleen Chien (Santa Clara) has posted the abstract for a new paper called Innovators on SSRN:
This Article argues for a shift in how we view and use the patent system, to a way of understanding and cultivating innovators that patent, not just patented innovation, for three reasons. First, who is innovating and where has relevance to a myriad of current social and policy debates, including the participation of women and minorities in innovation, high-skilled immigration, and national competitiveness. Second, though largely overlooked by academics, America’s patent system has long been innovator-, not only innovation-driven, and scholarly engagement can improve the quality of relevant policymaking. Third, the application of new computational tools to open patent datasets makes it possible to more easily approximate and track salient details about innovators that patent - including the geography and settings in which they innovate and the personal demographic traits of innovators - enabling the tailoring and tracking of impacts of interventions on disparate groups of innovators. This Article details why and how to do so by applying novel empirical methods to profiling patentees, revealing broad shifts over the past four decades, and demonstrating—through three mini-case studies pertaining to diversity in the technology sector, the promotion of small and individual inventors, and innovation in medical diagnostic technologies—how improving our understanding of innovators can improve our promotion of innovation.
A draft isn't available yet, but hopefully one will be soon. My thoughts on this abstract, though, are "hear, hear!" I think that too little attention has been paid to the people who innovate. There is, to be sure, a rich history of historians and economic historians who have focused on these points. Zorina Khan, Naomi Lamoreaux, and Ken Sokoloff (z''l) come to mind. In law, Adam Mossoff has provided several case studies and Chris Beauchamp has done outstanding historical work highlighting innovators in their time. Mark Lemley leveraged some historical work in an article about simultaneous inventing, and others have looked at those same innovators to tell competing stories.

But much of this work is historical. Of late, as the abstract notes, it's all about the what: What inventions? What classes? What litigation? How many claims? I think people clamor for stories about innovators; I believe my most downloaded (by far) SSRN paper, Patent Troll Myths, resonated because it looked hard at the innovators - individuals to small entities to large companies. Dan Burk looks at innovators (but without data) in Do Patents Have Gender? 

I'm sure there are examples I'm not thinking of, but more data and analysis in this area would be welcome. Patents exist in service to their inventors, and so it makes sense to understand who those inventors are to better understand whether patents are achieving their goals...or even what those goals are.

Tuesday, May 8, 2018

When Should Actors Get Copyrights in their Performances?

In Garcia v. Google, the en banc Ninth Circuit ruled that actors can basically never obtain a copyright in their performances. I was one of, say, ten people troubled by this decision. My IP academic colleagues will surely recall (too) long debates on the listserv on this issue. It turns out that another of the ten is Justin Hughes (Loyola LA), who has now written an article exploring when and why actors might reasonably claim copyright in a performance. The article, called Actors as Authors in American Copyright Law, is on SSRN and is forthcoming in the Connecticut Law Review. The abstract is here:
Among the different kinds of works eligible for copyright, audiovisual works are arguably the most complex, often involving scores of contributors – screenwriters, directors, actors, cinematographers, producers, set designers, costume designers, lighting technicians, etc. Some countries expressly recognize which categories of these contributors are entitled to legal protection, whether copyright, ‘neighboring rights,’ or statutory remuneration. But American copyright law does not. Given that the complex relationship among these creative contributors is usually governed by contract, there is – for such a large economic sector – relatively little case law on issues of authorship in audiovisual works. This is especially true on the question of dramatic performers as authors of audiovisual works.
This Article provides the first in-depth exploration of whether, when, and how actors are authors under American copyright law. After describing how case law, government views, and scholarly commentary support the conclusion that actors are authors, the Article turns to the strange saga of the Ninth Circuit’s 2015 en banc Garcia v. Google decision – a decision more about fraud and fatwas than clear conclusions on how copyright law applies to acting. The Article then uses some simple thought experiments to establish how dramatic performers generally meet both the Constitutional and statutory standard for “authorship.” Finally, the Article reviews the various filters that prevent actors-as-authors legal struggles and how, when all else fails, we can consider actors as joint authors of the audiovisual works embodying their dramatic performances.
The article presents a detailed and nuanced view of what it means to be an author, as well as a good discussion of the development of the law in this area. As the abstract suggests, much of our view of actor protections is based on custom and expediency (e.g., work made for hire) rather than a detailed examination of authorship in film.

For example, it has always been unclear to me why we protect a musician's performance of pre-scripted music in a sound recording, but not an actor's performance of a pre-scripted movie in an audiovisual work. The statute allows for both protections, and the primary reason seems to be that we don't think it's right.

Similarly, joint authorship is very strange. In Aalmuhammed v. Lee, the Ninth Circuit ruled that a contributor, despite having contributed several elements and several scenes, must be either a full joint author or nothing. There is no in-between. Like Garcia v. Google, this appears to be for expedience (and Hughes examines several other reasons), as well as a view that the only "work" can be the final work, and not each scene before it is pieced together, a legal fiction in the modern era of copyrightability in unpublished works.

This article explores much of the thinking I had at the time of Garcia v. Google, so those who favored that ruling will likely think it is as crazy as they thought I was. However, I think the article is still worth a read, if only to pinpoint where you think it goes astray, if it does.

Friday, May 4, 2018

Academic IP Conferences

Three years ago, I posted some general advice about academic IP conferences, including links to sites that compile IP conference information. Most of that advice still stands, but the definitive IP conference compilation site has moved to https://emptydoors.com/conferences, where it is maintained by Professor Saurabh Vishnubhakat—who was a wonderful member of a conference panel I moderated last week.

The most recent entry on Saurabh's site is interesting: the AALS Remedies Section has a call for papers for a program on IP remedies at the Jan. 2019 AALS Annual Meeting in New Orleans. Abstracts are due June 1.

Tuesday, May 1, 2018

Jake Sherkow Guest Post: What the CRISPR Patent Appeal Teaches Us About Legal Scholarship

Guest post by Professor Jake Sherkow of New York Law School, who is currently a Visiting Scholar at Stanford Law School.

Yesterday, the Federal Circuit heard oral argument in the dispute between the University of California and the Broad Institute over a set of fundamental patents covering CRISPR-Cas9, the revolutionary gene-editing technology. Lisa has been kind enough to invite me to write a few words here about the dispute, and I thought I’d take that generous opportunity to discuss two aspects of yesterday’s argument: the basics of the appeal and, given that this blog is devoted to legal scholarship about patent law, what the argument can teach us, if anything, about IP scholarship in general. I think the short answer to the second question is, Quite a lot, although perhaps not for obvious reasons.

Measuring the Value of Patent Disclosure

How valuable is patent disclosure? It's a perennially asked question. There are studies, like Lisa's, that attack the problem using surveys, and the conventional wisdom seems to be that there are niche areas that read patents, but for the most part patent disclosure holds little value because nobody reads them.

Deepak Hegde (NYU Stern), Kyle Herkenhoff (Minn. Econ), and Chenqi Zhu (NYU Stern PhD candidate) have decided to attack the problem from a different angle: using the AIPA (which required patent disclosure at 18 months) as a natural experiment. The paper is on SSRN, and the abstract is here:
How does the disclosure of technical knowledge through patents affect knowledge diffusion, follow-on invention, and patenting? We study this by analyzing the American Inventor's Protection Act (AIPA), which required U.S. patent applications filed after November 28, 2000 to be published 18 months after filing, rather than at grant, and advanced the disclosure of most U.S. patents by about two years. We estimate AIPA’s causal effect by using a counterfactual sample of identical European “twins” (of U.S. patents) which were not affected by the U.S. policy change and find that AIPA (i) increased the rate and magnitude of knowledge diffusion associated with U.S. patents, and (ii) increased overlap between technologically distant patents and decreased overlap between similar patents. Patent abandonments and scope decreased, while patent clarity improved, after AIPA. The findings are consistent with the predictions of our theoretical framework which models AIPA as provisioning current information about related technologies to inventors. The information, in turn, reduces follow-on inventors’ R&D and patenting costs. Patent disclosure promotes knowledge diffusion and clearer property rights while reducing R&D duplication.
This was a clever project. There have been AIPA studies before, but none that try to measure the value of the diffusion, so far as I know. What makes it go is the matching with European patents (which had always been published), which allows for their measurements to be independent of quality of invention.
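The twin-matching design is, at its core, a difference-in-differences: compare the change in outcomes for U.S. patents (treated by AIPA's 18-month publication rule) against the change for their European twins (which were already published early and so untreated). Here is a toy sketch of that logic with entirely made-up numbers, not the authors' data:

```python
# Illustrative difference-in-differences with hypothetical numbers.
# "Treated" = U.S. patents subject to AIPA's 18-month publication rule;
# "control" = their European twins, which were always published early.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    # Treatment effect = change for treated minus change for control,
    # which nets out time trends and invention quality common to both.
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average forward-citation rates before/after AIPA:
effect = diff_in_diff(treated_pre=3.0, treated_post=4.4,
                      control_pre=3.1, control_post=3.6)
print(round(effect, 2))  # prints 0.9: extra diffusion attributable to earlier disclosure
```

Because each European twin covers the same invention as its U.S. counterpart, subtracting the control group's change is what lets the estimate be independent of the underlying quality of the invention.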

Friday, April 27, 2018

Berkeley Remarks on a Patent Small-Claims Tribunal

As noted in my earlier post today with my remarks from We Robot, I’ve been busy with lots of interesting conferences and workshops in the past few weeks. Because I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. Here are my comments from the fantastic BCLT/BTLJ symposium on the administrative law of IP, where I was on a panel discussing IP small-claims tribunals:

You have already heard some insightful comments from Ben Depoorter and Pam Samuelson on copyright small-claims courts, including their analysis of problems with the proposed Copyright Alternatives in Small-Claims Enforcement (CASE) Act of 2017, as well as their thoughts on how a more narrowly tailored small-claims system might be beneficial. The main justification for introducing such a tribunal is that high litigation costs prevent claimants from pursuing valid small claims.

I’m here to provide some perspective from the patent law side, and the short version of my comments is that the idea of a patent small-claims court seems mostly dead in the United States, and I don’t see a reason to revive it.

We Robot Comments on Ryan Abbot's Everything is Obvious

I’ve been busy with lots of interesting conferences and workshops in the past few weeks, and since I wrote out detailed notes for two of them, I thought I would post them for people who weren’t able to attend. First, my comments from the We Robot conference two weeks ago at Stanford:

Ryan Abbott’s Everything is Obvious is part of an interesting series of articles Ryan has been working on related to how developments in AI and computing affect legal areas such as patent law. In an earlier article, I Think, Therefore I Invent, he provocatively argued that creative computers should be considered inventors for patent and copyright purposes. Here, he focuses on how these creative computers should affect one of the most important legal standards in patent law: the requirement that an invention not be obvious to a person having ordinary skill in the art.

Ryan’s definition of “creative computers” is purposefully broad. The existing creative computers he discusses are all narrow or specific AI systems that are programmed to solve particular problems, like systems from the 1980s that were programmed to design new microchips based on certain rules, and IBM’s Watson, which is currently identifying novel drug targets for pharmaceutical research. And Ryan thinks patent law already needs to change in response to these developments. But I think his primary concern is the coming of artificial general intelligence that surpasses human inventors.

Tuesday, April 24, 2018

Naruto, the Article III monkey

The Ninth Circuit released its opinion in the "monkey selfie" case, reasonably ruling that Naruto the monkey doesn't have standing under the Copyright laws. The opinion dodges the hard questions about who can be an author (thus leaving for another day questions about artificial intelligence, for example) by instead focusing on mundane things like the ability to have heirs. As a result, it's not the strongest opinion, but one that's hard to take issue with.

But I'd like to focus on an issue that's received much less attention in the press and among my colleagues. The court ruled that Naruto has Article III standing because there is a case or controversy. I'll admit that I hadn't thought about this angle, having instead gone right to the copyright authorship question (when you're a hammer, everything looks like a nail). But I guess when you're an appellate court, that whole "jurisdiction and standing section" means something even though we often skim that in our non-civ pro/con law/fed courts classes in law school.

I'll first note that the court is doubtful that PETA has standing as "next friend." Footnote 3 is a scathing indictment of its actions in this case, essentially arguing that PETA leveraged the case for its own political ends rather than for any benefit of Naruto. Youch! More on this aspect here. The court also finds that the copyright statute does not allow for next friend standing, a completely non-shocking result given precedent.

Even so, the court looks to whether Naruto has individual standing even without some sort of guardian. Surprisingly enough, this was not an issue of first impression. The Ninth Circuit had already ruled that a group of whales had Article III standing. From this, the court very quickly decides that Naruto has standing: the allegation of ownership in the photograph easily creates a case or controversy.

Once again, the best part is in the footnotes. I'll reproduce part of note 5 here:
In our view, the question of standing was explicitly decided in Cetacean. Although, as we explain later, we believe Cetacean was wrongly decided, we are bound by it. Short of an intervening decision from the Supreme Court or from an en banc panel of this court, [] we cannot escape the proposition that animals have Article III standing to sue....
[The concurrence] insightfully identifies a series of issues raised by the prospect of allowing animals to sue. For example, if animals may sue, who may represent their interests? If animals have property rights, do they also have corresponding duties? How do we prevent people (or organizations, like PETA) from using animals to advance their human agendas? In reflecting on these questions, Judge Smith [in the concurrence] reaches the reasonable conclusion that animals should not be permitted to sue in human courts. As a pure policy matter, we agree. But we are not a legislature, and this court’s decision in Cetacean limits our options. What we can do is urge this court to reexamine Cetacean. See infra note 6. What we cannot do is pretend Cetacean does not exist, or that it states something other, or milder, or more ambiguous on whether cetaceans have Article III standing.
I was glad to see this, because when I read the initial account that Article III standing had been granted, I wondered why the court would come to that decision and thought of many of these questions (and more - like what if there's no statute to deny standing, like diversity tort liability).

I'll end with perhaps my favorite part of the opinion: the award of attorneys' fees. The award itself is not surprising, but the commentary is. It notes that the court does not know how or whether the settlement in the case dealt with the possibility of such an award, but also that Naruto was not part of such a settlement. It's unclear what this means. Can Slater collect from Naruto? How would that happen? Can Slater collect from PETA because Naruto was not part of the settlement? The court, I'm sure, would say to blame any complexity on the whale case.

Sunday, April 22, 2018

Chris Walker & Melissa Wasserman on the PTAB and Administrative Law

Christopher Walker is a leading administrative law scholar, and Melissa Wasserman's excellent work on the PTO has often been featured on this blog, so when the two of them teamed up to study how the PTAB fits within broader principles of administrative law, the result—The New World of Agency Adjudication (forthcoming Calif. L. Rev.)—is self-recommending. With a few notable exceptions (such as a 2007 article by Stuart Benjamin and Arti Rai), patent law scholars have paid relatively little attention to administrative law. But the creation of the PTAB has sparked a surge of interest, including multiple Supreme Court cases and a superb symposium at Berkeley earlier this month (including Wasserman, Rai, and many others). Walker and Wasserman's new article is essential reading for anyone following these recent debates, whether you are interested in specific policy issues like PTAB panel stacking or more general trends in administrative review.

Monday, April 16, 2018

Comprehensive Data about Federal Circuit Opinions

Jason Rantanen (Iowa) has already blogged about his new article, but I thought I would mention it briefly as well. He has created a database of information about Federal Circuit opinions. An article describing it, forthcoming in the American University Law Review, is available on SSRN, and the abstract is here:
Quantitative studies of the U.S. Court of Appeals for the Federal Circuit's patent law decisions are almost more numerous than the judicial decisions they examine. Each study painstakingly collects basic data about the decisions - case name, appeal number, judges, precedential status - before adding its own set of unique observations. This process is redundant, labor-intensive, and makes cross-study comparisons difficult, if not impossible. This Article and the accompanying database aim to eliminate these inefficiencies and provide a mechanism for meaningful cross-study comparisons.

This Article describes the Compendium of Federal Circuit Decisions ("Compendium"), a database created to both standardize and analyze decisions of the Federal Circuit. The Compendium contains an array of data on all documents released on the Federal Circuit's website relating to cases that originated in a federal district court or the United States Patent and Trademark Office (USPTO) - essentially all opinions since 2004 and all Rule 36 affirmances since 2007, along with numerous orders and other documents.

This Article draws upon the Compendium to examine key metrics of the Federal Circuit's decisions in appeals arising from the district courts and USPTO over the past decade, updating previous work that studied similar populations during earlier time periods and providing new insights into the Federal Circuit's performance. The data reveal, among other things, an increase in the number of precedential opinions in appeals arising from the USPTO, a general increase in the quantity-but not necessarily the frequency-with which the Federal Circuit invokes Rule 36, and a return to general agreement among the judges following a period of substantial disuniformity. These metrics point to, on the surface at least, a Federal Circuit that is functioning smoothly in the post-America Invents Act world, while also hinting at areas for further study.
The article has some interesting details about opinions and trends, but I wanted to point out that this is a database now available for use in scholarly work, which is really helpful. The inclusion of non-precedential opinions adds a new wrinkle as well. Hopefully some useful studies will come of this.