Tuesday, September 25, 2018

Questioning Design Patent Bar Restrictions

Every once in a while an article comes along that makes you realize all the things that you just don't realize. Of course, someone else realizes these things, which makes you realize all the things you should be realizing but didn't. But this is what scholarship is about, I think - spreading knowledge. The latest such article for me is The Design Patent Bar: An Occupational Licensing Failure, by Chris Buccafusco and Jeanne Curtis (both of Cardozo Law). A draft is posted on SSRN, and the abstract is here:
Although any attorney can represent clients with complex property, tax, or administrative issues, only a certain class of attorneys can assist with obtaining and challenging patents before the U.S. Patent & Trademark Office (PTO). Only those who are members of the PTO’s patent bar can prosecute patents, and eligibility for the patent bar is only available to people with substantial scientific or engineering credentials. However much sense the eligibility rules make for utility patents—those based on novel scientific or technical inventions—they are completely irrational when applied to design patents—those based on ornamental or aesthetic industrial designs. Yet the PTO applies its eligibility rules to both kinds of patents. While chemical engineers can prosecute both utility patents and design patents (and in any field), industrial designers cannot even prosecute design patents. This Article applies contemporary research in the law and economics of occupational licensing to demonstrate how the PTO’s application of eligibility rules to design patents harms the patent system by increasing the costs of obtaining and challenging design patents. Moreover, we argue that the PTO’s rules produce a substantial disparate impact on women’s access to a lucrative part of the legal profession. By limiting design patent prosecution jobs to those with science and engineering credentials, the majority of whom are men, the PTO’s rules disadvantage women attorneys. We conclude by offering two proposals for addressing the harms caused by the current system.
It never occurred to me to think about the qualifications required for prosecuting design patents. The observation that a different set of skills goes into such work is a good one; it makes no sense that a chemistry grad can prosecute design patents but an industrial design grad cannot. There are plenty of outstanding trademark lawyers who could probably do this work, despite not having a science or engineering degree.

I like that this paper takes the issue beyond this simple observation (which could really be a blog post or op-ed), and applies some occupational licensing concepts to the issue. Furthermore, I like that the paper makes some testable assertions that can drive future scholarship, such as whether these rules have a disparate impact on women. I am skeptical about the claimed negative impact on the design patent system, but I think that's testable as well.

The paper concludes with some relatively mild suggestions on how to open up the field a little bit. I think they should be considered, but I'm happy to hear from folks who disagree.

Monday, September 24, 2018

USPTO Director Iancu Proposes Revised 101 Guidance

In remarks at the annual IPO meeting today, USPTO Director Andrei Iancu said "the USPTO cannot wait" for "uncertain" legislation on patentable subject matter and is "contemplating revised guidance" to help examiners apply this doctrine. Few are likely to object to his general goal of "increased clarity," but the USPTO should be sure that any new guidance is consistent with precedent from the Supreme Court and Federal Circuit.

As most readers of this blog are well aware, the Supreme Court's recent patentable-subject-matter cases—Bilski (2010), Mayo (2012), Myriad (2013), and Alice (2014)—have made it far easier to invalidate patent claims that fall under the "implicit exception" to § 101 for "laws of nature, natural phenomena, and abstract ideas." Since Alice, the Federal Circuit has held patents challenged on patentable-subject-matter grounds to be invalid in over 90% of appeals, and the court has struggled to provide clear guidance on the contours of the doctrine. Proponents of this shift call it a necessary tool in the fight against "patent trolls"; critics claim it creates needless uncertainty in patent rights and makes it too difficult to patent important innovations in areas such as medical diagnostics. In June, Rep. Thomas Massie (R-KY) introduced the Restoring America’s Leadership in Innovation Act of 2018, which would amend § 101 to largely undo these changes—following a joint proposal of the American Intellectual Property Law Association (AIPLA) and Intellectual Property Owners Association (IPO)—but Govtrack gives it a 2% chance of being enacted and Patently-O says 0%.

In the absence of legislation, can the USPTO step in? In his IPO speech today, Director Iancu decries "recent § 101 case law" for "mush[ing]" patentable subject matter with the other patentability criteria under §§ 102, 103, and 112, and he proposes new guidance for patent examiners because this mushing "must end." The problem is that the USPTO cannot overrule recent § 101 case law. It does not have rulemaking authority over substantive patent law criteria, so it must follow Federal Circuit and Supreme Court guidance on this doctrine, mushy though it might be.

Tuesday, September 18, 2018

No Fair Use for Mu(sic)

It's an open secret that musicians will sometimes borrow portions of music or lyrics from prior works. But how much borrowing is too much? One would think that this is the province of fair use, but it turns out not to be the case - at least not in those cases that reach a decision.  Edward Lee (Chicago-Kent) has gathered up the music infringement cases and shown that fair use (other than parody) is almost never a defense - not just that defendants lose, but that they don't even raise it most of the time. His article Fair Use Avoidance in Music Cases is forthcoming in the Boston College Law Review, and a draft is available on SSRN. Here's the abstract:
This Article provides the first empirical study of fair use in cases involving musical works. The major finding of the study is surprising: despite the relatively high number of music cases decided under the 1976 Copyright Act, no decisions have recognized non-parody fair use of a musical work to create another musical work, except for a 2017 decision involving the copying of a narration that itself contained no music (and therefore might not even constitute a musical work). Thus far, no decision has held that copying musical notes or elements is fair use. Moreover, very few music cases have even considered fair use. This Article attempts to explain this fair use avoidance and to evaluate its costs and benefits. Whether the lack of a clear precedent recognizing music fair use has harmed the creation of music is inconclusive. A potential problem of “copyright clutter” may arise, however, from the buildup of copyrights to older, unutilized, and underutilized musical works. This copyright clutter may subject short combinations of notes contained in older songs to copyright assertions, particularly after the U.S. Supreme Court’s rejection of laches as a defense to copyright infringement. Such a prospect of copyright clutter makes the need for a clear fair use precedent for musical works more pressing.
The results here are pretty interesting, as I discuss below.

Wednesday, September 12, 2018

Erie and Intellectual Property Law

When it comes to choice of law, U.S. federal courts hearing intellectual property law claims generally do one of two things. They either construe and apply the federal IP statutes (Title 18 for trade secrets, Title 35 for patents, Title 17 for copyright, and Title 15 for trademarks), remaining as faithful to Congress' meaning as possible; or they construe and apply state law claims brought under supplemental (or diversity) jurisdiction, remaining as faithful as possible to the meaning of the relevant state statutes and state judicial decisions. In the former case, they apply federal law; in the latter case, they apply the law of the state in which they sit.

Simple, right? Or maybe not.

This Friday, University of Akron School of Law is hosting a conference called Erie At Eighty: Choice of Law Across the Disciplines, exploring the implications of the Erie doctrine across a variety of fields, from civil procedure to constitutional law to evidence to remedies. I will be moderating a special panel: Erie in Intellectual Property Law. Joe Miller (Georgia) will present his paper, "Our IP Federalism: Thoughts on Erie at Eighty"; Sharon Sandeen (Mitchell-Hamline) will present her paper, "The Erie/Sears-Compco Squeeze: Erie's Effects on Unfair Competition and Trade Secret Law"; and Shubha Ghosh (Syracuse) will present his paper, "Jurisdiction Stripping and the Federal Circuit: A Path for Unlocking State Law Claims from Patent."

Other IP scholars in attendance include Brian Frye (Kentucky), whose paper The Ballad of Harry James Tompkins provides a riveting, surprising, and (I think) convincing re-telling of the Erie story, and Megan LaBelle (Catholic University of America), whose paper discusses the crucial issue of whether the Erie line of cases directs federal courts sitting in diversity to apply state privilege law. All papers will be published in the Akron Law Review.

If you have written a paper that touches on the Erie doctrine's implications for intellectual property, I would really appreciate it if you would send it to me: chrdy@uakron.edu or cahrdy@gmail.com. I will link to them in a subsequent post in order to provide a resource for future research. Thank you!


Tuesday, September 11, 2018

Bargaining Power and the Hypothetical Negotiation

As I detail in my Boston University Law Review article, (Un)Reasonable Royalties, one of the big problems with using the hypothetical negotiation for calculating damages (aside from the fact that it strains economic rationality and also has no basis in the legal history of reasonable royalties) is differences in bargaining power. The more explicit problem is when litigants try to use their bargaining power to argue that the patent owner would have agreed to a lower hypothetical rate. More implicitly, bargaining power can affect royalty rates in pre-existing (that is, comparable) licenses. This gives rise to competing claims in top-14 law reviews about whether royalty damages are spiraling up or down based on the trend of comparable licensing terms.

For what it's worth, my article dodges the spiral question, but suggests that existing licenses only be used if they can be directly tied to the value of the patented technology (and thus settlements should never be used). Patent damages experts who have read my article uniformly hate that part of it, because preexisting licenses (including settlements) are sometimes their best or even only granular source of data.

But much of this is theory. What about the data?  Gaurav Kankanhalli (Cornell Management - finance) and Alan Kwan (U. Hong Kong) have posted An Empirical Analysis of Bargaining Power in Licensing Contract Terms to SSRN. Here is the abstract:
This paper studies a new, large sample of intellectual property licensing agreements, sourced from filings by public corporations, under the lens of a surplus-bargaining framework. This framework motivates several new empirical findings on the determinants of royalty rates. We find that licensors command premium royalty rates for exclusivity (particularly in competitive industries), and for exchange of know-how. Licensors with differentiated technology and high market power charge higher royalty rates, while larger-than-rival licensees pay lower rates. Finally, using this framework, we study how the nature of disclosure by public firms affects transaction value. Firms transact at lower royalty rates when they redact contracts, preserving pricing power for future negotiations. This suggests that practitioners modeling fair value in transfer pricing and litigation contexts based on publicly-known comparables are over-estimating royalties, potentially impacting substantial cumulative transaction value.
The paper uses SEC-reported licenses (more on that below), but one clever twist is that the authors obtained redacted terms via FOIA requests, so they could both expand their dataset and see what types of terms get redacted. They model transactions as follows: every firm has a maximum it is willing to pay and a minimum it is willing to accept. If those two overlap, the parties will agree to some price in the middle that splits the surplus. Where that price is set depends on bargaining power. The authors then hypothesize what types of characteristics will affect that price, and most of those hypotheses are borne out.
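To put the framework in symbols (the notation here is mine, not the authors'): let r_max be the most the licensee is willing to pay and r_min the least the licensor will accept. A deal is possible only when r_max ≥ r_min, and the agreed rate splits the surplus according to bargaining power:

r = r_min + β(r_max − r_min), with 0 ≤ β ≤ 1,

where β indexes the licensor's bargaining power - β near one means the licensor captures most of the surplus, β near zero means the licensee does. On this reading, some of the paper's characteristics plausibly move β, while others move r_max or r_min, a distinction I return to below.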

They focus on several kinds of bargaining power: contract characteristics, firm-specific characteristics, technology characteristics, and license characteristics. I'm not sure I would call all of these bargaining power, as they do. I think some relate more to the value of the thing being licensed. Technically this will affect the division of surplus, but it's not really the type of bargaining power I think about. So long as the effect on license value is clear, however, the results are helpful for use in patent cases regardless of the technical designation.

So, for example, universities, non-profits, and individuals receive lower rates because they have no credible BATNA for self-commercialization. The authors argue that this sheds light on the conventional wisdom that individuals produce less valuable inventions. Further, firms in weaker financial condition do worse, and firms with more pricing power among their rivals do better.

On the other hand, licenses including know-how or exclusivity receive higher royalties, while amendments typically lead to lower royalties (presumably due to underperformance). I don't consider this to be bargaining power, but rather added value. That said, the authors test exclusivity and find that highly competitive industries have higher royalties for exclusivity than non-competitive industries, which implies a mix of both bargaining power and value in competition.

The authors do look at technological value and find, unsurprisingly, that substitutability leads to lower rates.

The paper points to one interesting combination, though: territorial restrictions. Contracts with territorial restrictions have higher rates. You would think they would have lower rates, because the license covers less. But the implication here is the contrary: a territorial restriction is imposed where the owner has the leverage to impose it, and that leverage means a higher rate. That could be due to value or bargaining power, I suppose. I wonder, though, how many expert reports say that a royalty rate should be higher because the comparable license covered only a limited territory. Any readers who want to chime in are welcome.

There is a definite selection effect here, though, which further implies that preexisting licenses gathered via SEC filings should be treated carefully. First, the authors note that there is a selection effect in the redactions. They find not only that lower rates are redacted, but that these redactions are driven by non-exclusive licenses, because firms want to hide their lowest willingness-to-sell (reservation) price. This finding is as valuable as the rest, in my opinion. It means, as the authors note, that any reliance on reported licenses may overstate royalty rates. It also means, in terms of my own views, that the hypothetical negotiation is not a useful way to calculate damages, because the value of the patent shouldn't change based on who is buying and selling. A second selection effect lies not within the data, but in what is missing from it: these are only material licenses. If a license is not material, it will not be reported. Those licenses are likely to be smaller, whether due to patent value or bargaining power.
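A toy illustration of the redaction skew (the numbers are entirely mine): if ten disclosed licenses carry a 5% rate while ten redacted licenses carry 2%, an expert working only from the disclosed set sees an average of 5%, even though the true average across all twenty is 3.5%.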

This is a really interesting and useful paper, and worth a look.

Monday, September 3, 2018

Boundary Maintenance on Twitter

Last Saturday was cut-down day in the NFL, when rosters are shaved from 90 players down to 53. For the first time, I decided to follow the action for my team by spending time (too much time, really - the kids were monopolizing the TV with video games) watching a Twitter list solely dedicated to reporters and commentators discussing my team.

I've never used Twitter this way, but from an academic point of view I'm glad I did, because I witnessed first-hand the full microcosm of Twitter journalism. First, there were the reporters, who were all jockeying to be the first to report that someone was cut (and to confirm it with "sources"). Then there were the aggregators, sites with a lot of writers devoted to team analysis and discussion, but who on this day were simply tracking all of the cuts, trades, and other moves. Ironically, the aggregators were better sources of info than the reporters' own sites, because the reporters didn't publish a full list until later in the day, in an article they had been too busy gathering facts to write.

Then there were the professional commentators - journalists and semi-professional social media types who have been doing this a long time or have some experience in the sport, but who were not gathering facts. They mostly commented on transactions. Both the reporters and commentators answered fan questions. And then...there were the fans, commenting on the transactions, commenting on the reporters, commenting on the commentators, etc. This is where it got interesting.

Apparently, experienced commentators don't like it when fans tell them they're wrong. They like to make clear that either a) they have been doing this a long time, or b) they have a lot of experience in the league, and therefore their opinion should not be questioned. Indeed, in one case a commentator's statement seemed so ridiculous that the "new reporter" in town made fun of it, and all the other reporters circled the wagons to say that the new guy shouldn't be questioning the other men and women on the beat, all of whom had once held his job but left for better jobs. Youch! It turns out the statement was, in fact, both wrong and ridiculous (and proven so the next morning).

This type of boundary maintenance is not new, but it is the first time I've seen it so clearly, explicitly, and unrelentingly (there is some in legal academia, which I'll discuss below). This is a blog about scholarly works, so I point you to an interesting article called The Tension between Professional Control and Open Participation: Journalism and its Boundaries, by Seth Lewis, now a professor in the communications department at the University of Oregon. The article is published in Information, Communication & Society. It is behind a paywall, so a prepublication draft is here. Here is the abstract:
Amid growing difficulties for professionals generally, media workers in particular are negotiating the increasingly contested boundary space between producers and users in the digital environment. This article, based on a review of the academic literature, explores that larger tension transforming the creative industries by extrapolating from the case of journalism – namely, the ongoing tension between professional control and open participation in the news process. Firstly, the sociology of professions, with its emphasis on boundary maintenance, is used to examine journalism as boundary work, profession, and ideology – each contributing to the formation of journalism's professional logic of control over content. Secondly, by considering the affordances and cultures of digital technologies, the article articulates open participation and its ideology. Thirdly, and against this backdrop of ideological incompatibility, a review of empirical literature finds that journalists have struggled to reconcile this key tension, caught in the professional impulse toward one-way publishing control even as media become a multi-way network. Yet, emerging research also suggests the possibility of a hybrid logic of adaptability and openness – an ethic of participation – emerging to resolve this tension going forward. The article concludes by pointing to innovations in analytical frameworks and research methods that may shed new light on the producer–user tension in journalism.
The article includes a fascinating literature review on the sociology of journalism, and focuses on what it means to be a journalist in a world when your readers participate with you.

Bringing it back to IP for a moment (and legal academia more generally), I certainly see some of this among bloggers and tweeters. I see very little of it as a producer of content, presumably because I am always right. 😀 But I know that as a consumer I bleed into the boundaries of others, both in legal academia and elsewhere. I can't help myself - my law school classmates surely remember me as a gunner.

Many of my producer colleagues (mostly women, surprise surprise) have it much worse. Practicing lawyers tell them they don't know what they are talking about. Some may be making valid points, some not. Some are nice about it, while others are not. I'm speaking mostly of good faith boundary issues here, not trolling or harassment, which is a different animal in my mind.

I guess the real question is what to do about it. If you are in an "open" area, boundaries will get pushed. Some people welcome this, and some despise it. Some are challenged more fairly than others. I suspect that people have different ways of managing their boundaries, and it depends heavily on who is commenting and how. Some may ignore it, some may swat back about relative expertise, some engage with everyone, and some disengage selectively or entirely, going so far as to block and mute. I suspect it's a mix.

In any event, I don't have any policy prescriptions here. I know so little about it that I have no clue what the right answer is. I just thought I would make explicit what is usually implicit, point out an interesting article about it, and suggest that readers be mindful of boundaries and Diff'rent Strokes - what might be right for you, may not be right for some.

Friday, August 31, 2018

Maggie Chon on IP and Critical Theories

I tend to approach IP law primarily through a law-and-economics lens, but I enjoy learning about how scholars with different methodological toolkits tackle the same subject matter—especially when their work is clear and accessible. I was thus delighted to see a draft chapter by Margaret Chon, IP and Critical Methods, for the forthcoming Handbook on Intellectual Property Research (edited by Irene Calboli and Lillà Montagnani). Chon provides a concise review of critical legal theory and its application to IP law.

According to Chon, critical theory includes a critique of liberal legal theory as based on the fallacy that legal institutions fairly reflect constituents' interests (as reflected in the marketplace or ballot box). Instead, the interests of privileged or empowered social groups are over-represented, and institutions contribute to these inequalities to the extent that enduring change requires reimagining these institutions themselves. Of course, as she notes, "critical theory would not exist without some belief (however thin) that law and legal systems contain some of the tools necessary for structural transformation."

Chon argues that one need not be a self-identified Crit to engage in critical methodology, and that many IP scholars have stepped closer to critical method by moving from doctrinal to structural analysis, and by "perform[ing] this structural analysis with attention to power disparities." And she gives a number of examples of the influence of critical theory across different areas of IP.

Wednesday, August 29, 2018

Data Driven Creativity

My school started much earlier than my kids' school this year, so I spent a couple of weeks at home while the rest of the family visited relatives across the country. I am not too proud to admit that I binge-watched an obscene amount of TV during the two weeks they were gone while I was completing some writing projects. It's really the first time I have done so; while I have shows that I like, I rarely get to watch them all at once, or to pick the next one on the list in rapid succession.

So, it was with a new interest that I enjoyed The Second Digital Disruption: Data, Algorithms & Authorship in the 21st Century by Kal Raustiala (UCLA) and Chris Sprigman (NYU). A draft of the article is on SSRN, and they blogged about it in a series of posts at Volokh Conspiracy. Here is the abstract:
This article explores the intellectual property ramifications that flow from the explosive growth of mass streaming technologies. Two decades ago rising internet usage led to what we call the first digital disruption: Napster, file-sharing, and the transformation of numerous content industries, from music to news. The second digital disruption is about the age of streaming and, specifically, how streaming enables firms to harvest massive amounts of data about consumer preferences and consumption patterns. Coupled to powerful computing, this data—what Mark Cuban has called “the new gold”—allows firms such as Netflix, Amazon, and Apple to know in incredible detail what content consumers like and how they consume it. The leading edge of this phenomenon—and the primary vehicle for our examination—is the adult entertainment industry. We show how Mindgeek, the little-known parent company of Pornhub and the dominant player in pornography today, has leveraged data about viewing patterns to not only organize and suggest content but even to dictate creative decisions. We first show how the adult industry adapted to the internet and the attendant explosion of free content. That story aligns with many similar accounts of how creative industries adapt to a loss of control over IP by restructuring and recasting revenue streams. We then show how content streaming firms have used data to make decisions about content aggregation, dissemination, and investment. Finally, we consider what these trends suggest for IP theory and doctrine. A key feature is that by making creative production less risky, what we call “data-driven authorship” drives down the need for strong IP rights.
I found the discussion of how data drives decisions about what to create fascinating, and the article is well worth a read. I think the perfect example of what the authors are describing is the Netflix movie Bright, in which Will Smith plays a cop who teams up with an orc on the LA police force. The movie was critically panned. Rotten Tomatoes: 26%. But viewers seem to like it a lot: Rotten Tomatoes Audience Score: 84%. Netflix is surely on to something here.

I could certainly see it playing out as I watched. I watched "The Five," a show by one of my favorite authors, Harlan Coben. So then Netflix gave me nothing but mysteries and suspense to watch, plus another show by Coben, Safe (both were great, by the way). But I'm not really a mystery show person - I like sci-fi. So, I watched one show, and then the suggestions got weird: do I like mystery? sci-fi? sci-fi mysteries? I wound up having to dig a bit for the next show.

But here's the interesting thing: the quality of the shows varied wildly, even among the genres that I liked. The writing, acting, editing, and direction mattered. I don't know about Mindgeek and its porn clips, but I will note a couple of distinguishing factors. First, there is likely a...er...utilitarian factor associated with those works; people are not watching for the articles, as it were. Second, the works are much shorter; it is much easier to have a highly focused 15-25 minute clip than a 10-episode series. Even with these differences, I suspect viewers have their preferences about what they see in different clips with the same data-driven attributes.

My broader point, then, is that how we consider the effect of data-driven works will depend a lot on how we view creativity. The data certainly reduces the creativity in certain major plot points, as well as the quantity of different types of works. But to some extent studios have always done this, only with rules of thumb and intuition rather than actual knowledge. In that sense, data will democratize creativity - if viewers want more women in better roles, there will be more women in better roles; no need to rely on a male studio executive's views on the matter.

Beyond selection, though, I suspect there is still room for surprise, storytelling, differentiation, and other forms of creativity. Consider Bright: write what you want, but it just has to star Will Smith, include the police, and feature orcs and elves. At the limit, too much data may constrain creativity, of course - the more you add, the less you can create.

To be clear, Raustiala and Sprigman don't say anything that contradicts my intuitions here. They make clear that creativity is on a continuum, and that data merely slides works toward one end of it. But they do question how viewers will perceive such works, and it is there that I disagree with them. I suppose that we could hit that limit where everything is automated, but my gut says that despite having preferences for particular story aspects, viewers will always be able to separate the wheat from the chaff (though not the way I would - as just about every American Idol vote shows) and thus will always look for something new and different within their preferences. At least, I sure hope so.

Saturday, August 25, 2018

Yochai Benkler on Innovation & Networks

Yochai Benkler is a giant within the intellectual history of IP law; some of his work will surely end up on my Classic Patent Scholarship page if I expand it to post-2000 works. Even though I don't agree with all of his conclusions, I think IP scholars should at least be familiar with his arguments. For those who haven't read his earlier works—or who just want a refresher on his take—you might enjoy his recent review article, Law, Innovation, and Collaboration in Networked Economy and Society, 13 Ann. Rev. L. & Soc. Sci. 231 (2017). Here is the abstract:
Over the past 25 years, social science research in diverse fields has shifted its best explanations of innovation from (a) atomistic invention and development by individuals, corporate or natural, to networked learning; (b) market-based innovation focused on material self-interest to interaction between market and nonmarket practices under diverse motivations; and (c) property rights exclusively to interaction between property and commons. These shifts have profound implications for how we must think about law and innovation. Patents, copyrights, noncompete agreements, and trade secret laws are all optimized for an increasingly obsolete worldview. Strong intellectual property impedes, rather than facilitates, innovation when we understand that knowledge flows in learning networks, mixing of market and nonmarket models and motivations, and weaving of commons with property are central to the innovation process.
Note that the shift Benkler is describing is a shift both in scholars' understanding of innovation and in the nature of innovation itself—particularly due to changes in organizational structure made possible by technologies such as the internet. The optimal innovation policy 100 years ago was likely different from the optimal innovation policy in today's more networked economy. To be sure, historical innovation studies can still be quite illuminating—but it is always important to consider how applicable the conclusions are likely to be in the modern context.

Tuesday, August 21, 2018

Abstraction, Filtration, and Comparison in Patent Law

Last April, I had the good fortune to participate in a symposium at Penn Law School. The symposium gathered a variety of IP scholars to focus on the "historic" kinship between copyright and patent law. That kinship, first identified in Sony v. Universal City Studios, supposedly shows parallels between the two legal regimes. I use scare quotes because it is unclear that the kinship is either historic or real. Even so, there are some parallels, and a collection of papers about those parallels will be published in the inaugural issue of Penn's new Journal of Law & Innovation.

My article is about the use of abstraction, filtration, and comparison (a distinctly copyright notion) in patent law. I have cleverly named it Abstraction, Filtration, and Comparison in Patent Law. A draft of the article is now on SSRN. Here is the abstract:
This essay explores how copyright's doctrine of abstraction, filtration, and comparison is being used in patent law, and how that use could be improved. This test, which finds its roots in the 1930s but wasn't fully developed until the 1990s, is one that defines scope for determining infringement. The copyrighted work is abstracted into parts, from ideas at the highest level to literal expression at the lowest. Then, unprotected elements are filtered out. Finally, what remains of the original work is compared to the accused work to determine if the copying was illicit.
This sounds far removed from patent law, but there is a kinship, though perhaps one that is not so historic and a bit hidden. The essence of the test is determining protectable subject matter. These same needs permeate patent law as well. This essay explores how the test is implicitly used and should be explicitly used.
With design patents, the test might apply as it does in copyright, with functional elements being filtered out during infringement. Current precedent allows for this filtering, but not clearly or consistently. With utility patents, the abstraction, filtration, and comparison happen earlier, during the test for patentable subject matter. Here, the comparison is with what is conventional or well known. The essay concludes by discussing why the application is different for design and utility patents.
I think the article is interesting and brings some useful insights into how we should think about patentable subject matter, but you'll have to be the judge.

Tuesday, August 14, 2018

Use Based Copyright Terms

I didn't blog last week because, well, I was at Disneyland. But I love IP, and when you're a hammer, everything is a nail. So, I couldn't help but think as I looked at the gigantic Mickey Mouse on the Ferris wheel that things are going to start getting messy when the copyright in Mickey runs out.

It occurs to me that serial, long-term uses of copyrighted works are different from one-time publications. To the extent that copyright is intended to incentivize investment in creative works, then losing protection over time can limit the incentive to develop quality long-term work. I'm not just talking about Mickey - Superman (and the additional complication of rights clawback) and other serial comics create issues. Star Trek is 50, Rocky and Star Wars are 40, and even Jurassic Park is 25 years old. The solution we got to this problem, a longer term for everything, was not the right one. A better solution is that terms should last as long as the work remains in use, plus a few years. Works that are simply "sold" without any new derivative work would be capped, so works without improvement could not last forever.
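To make the mechanics concrete, here is a minimal sketch of how such a use-based term might operate. Every parameter (the grace period, the cap for unimproved works) is a placeholder I invented for illustration, not a number from this post or anyone's proposal:

def copyright_expired(current_year, first_published, last_new_derivative,
                      grace_years=10, unimproved_cap=56):
    """Return True if a use-based copyright term has lapsed."""
    if last_new_derivative is None:
        # The work was only ever sold as-is: a capped term, so
        # protection without improvement cannot last forever.
        return current_year > first_published + unimproved_cap
    # Otherwise the term runs as long as the owner keeps creating
    # new derivative works, plus a few years of grace.
    return current_year > last_new_derivative + grace_years

On this logic, Mickey stays protected as long as Disney keeps making new Mickey works, while a book reprinted unchanged since the 1950s would have lapsed - which connects to Paul Heald's availability data discussed below.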

Now, this is not to say there aren't costs to protecting copyrights while they are still in use. There is a path dependency that can reduce incentives to come up with new works (in other words, bad sequels instead of new creativity). There is also value associated with the public being able to use works in their own ways.

I'm personally not worried about either of these. On the first, there are plenty of incentives for new entrants to create new works (we got Star Trek, then Star Wars, then Battlestar Galactica (I and II), and now The Expanse), and even serial works become stale after a while (there was no Rocky 50, as some parodies predicted). On the second, I think it is inconsistent with the first concern to worry about path dependence while also worrying that others should be able to use the works. Of course, fresh eyes can bring new ideas to the expression, but hopefully the original owners do that. At this point, non-utilitarian concerns come into play. As between a party who has invested in making a work valuable over a long period of time and a party who would like to use that value, I side with the investor and say newcomers can create their own new value. I realize that many disagree with me on this point. That said, I think there are some noncompetitive uses - fan fiction, say - that can bring new ideas and allow some new works.

Note that a use-based term cuts both ways. As Paul Heald has demonstrated, there is a significant drop in availability for older books that are still within the copyright term. A use-based rule would either end the term, or perhaps create a commercialization bounty similar to that proposed by Ted Sichelman for patents.

A great idea, right? Except I figured there was no way that nobody else had looked at Mickey and thought the same thing. So, at the IP Scholars Conference last week (which was great, by the way), I asked my colleague Shyam Balganesh from Penn about it, and he didn't even blink before saying that Landes and Posner wrote an article 15 years ago called Indefinitely Renewable Copyright, located here.

As you would expect from these two authors, they detail the costs and benefits of copyright terms, and they provide empirical evidence showing that when copyright terms required renewal, very few copyrights were ever renewed. The primary divergence from my idea is that I would allow a challenge based on lack of use, whereas Landes and Posner seem to assume that use is synonymous with registration (as their data shows). But I can imagine times when parties renew but then do not use their works. Thus, the system should look a bit more like trademarks.

I've done no literature review, so it's entirely possible that others have written about this. If you wrote or know of such an article, feel free to pass it along, and I'll add it here for posterity.

Friday, August 3, 2018

#IPSC18 Preview: General IP

This week I've been previewing all 140+ abstracts for the 18th Annual IP Scholars Conference next week at Berkeley, with patents & innovation on Monday, copyright on Tuesday, trademarks on Wednesday, and design/trade secrets/publicity yesterday. Here are all the remaining panels, in which multiple areas of IP are combined (either in individual papers or across the panel). Looking forward to seeing everyone next week!

Thursday, August 2, 2018

#IPSC18 Preview: Design, Trade Secrets, and Right of Publicity

To get ready for IP scholar speed dating at Berkeley next week, I've previewed the panels focused on patents and innovation, copyright, and trademarks. Today: design, right of publicity, and trade secrets (including some notes on other panels where you can also find papers on these topics).

Wednesday, August 1, 2018

#IPSC18 Preview: Trademarks

The 18th Annual IP Scholars Conference is Aug. 9-10 at Berkeley Law. Monday I previewed the eighteen panels primarily related to patents and innovation, and yesterday I previewed the six panels related to copyright. There are only two trademark-focused panels, and I didn't see any trademark-focused papers on general IP panels.

Tuesday, July 31, 2018

#IPSC18 Preview: Copyright

Yesterday I previewed the panels on patents and innovation at next week's IP Scholars Conference at Berkeley Law. Here are the copyright-focused panels: