Thursday, December 28, 2017

Happy New Year!

It's time to start bringing in the lights! Wishing our readers a great new year. I'll be back with posts in the next couple weeks.

Tuesday, December 19, 2017

How Do We Know What's Government Speech? Ask the Listeners (with Daniel Hemel)

Note: This post is co-authored with Daniel Hemel, an assistant professor of law at the University of Chicago Law School, and cross-posted at Whatever Source Derived. Follow him on Twitter: @DanielJHemel. This project may be of particular interest to the many Written Description readers who followed Matal v. Tam and its recent follow-up, In re Brunetti.

The distinction between private expression and government speech is fundamental to First Amendment jurisprudence. As the Supreme Court has held repeatedly, the government must be viewpoint-neutral when it regulates private expression, but not when it engages in speech of its own. For example, a public school cannot prohibit students from expressing anti-war views, but the government is free to propagate its own messages in support of a war effort without any need to simultaneously promote pacifism. Yet despite the doctrinal significance of the distinction between private expression and government speech, the line that separates these two categories is often quite fuzzy. A private billboard is clearly private expression, and the Lincoln Memorial is paradigmatic government speech, but what about a temporary privately donated exhibit in a state capitol? Privately produced visitors’ guides at a state highway rest area? A state university name and logo on a student group’s T-shirt? These are a few of the scenarios federal courts have wrestled with in recent cases.

To identify government speech in close cases, the Supreme Court has placed increasing emphasis on whether members of the public reasonably perceive the relevant expression to be private or government speech. As explained below, we think this turn toward public perception is a welcome development. But the Court has so far failed to develop a reliable method for determining how ordinary citizens distinguish between private and government messages.

The Court’s three most recent government speech decisions are illustrative. In the 2009 case Pleasant Grove City v. Summum, the Court said that there was “little chance” that observers would think that monuments in a public park were anything except government speech, even when those monuments were designed and donated by private organizations. Six years later, in Walker v. Texas Division, Sons of Confederate Veterans, the justices split 5–4 as to whether specialty license plate designs submitted by private organizations constituted government speech, with the majority asserting that members of the public perceive these designs to come from the government and the dissent insisting that members of the public hold the opposite view. And this past term, in Matal v. Tam, the Court confidently concluded that members of the public do not perceive federal trademark registration to be government speech. In none of these cases did the justices or the parties bring to bear any evidence as to how members of the public actually perceive the expression in question.

In an article forthcoming in the Supreme Court Review, we begin to fill that empirical void. We presented a variety of speech scenarios to a nationally representative sample of more than 1200 respondents and asked the respondents to assess whether the speech in question was the government’s. Some of the speculative claims made by the justices in recent government speech cases are borne out by our survey: for example, we find that members of the public do routinely interpret monuments on government land as conveying a message on the government’s behalf. In other respects, however, the justices’ speculation proves less accurate: for instance, while the Court in Tam says that it is “far-fetched” to suggest that “the federal registration of a trademark makes the mark government speech,” we find that nearly half of respondents hold this “far-fetched” view. (This does not imply that Tam was wrong—just that the question of whether members of the public perceive federal trademark registration to be government speech is much closer than the Court suggests.)

Monday, December 18, 2017

IP and the Soccer, er, Football

Just a short note this winter break week about a short essay that I enjoyed. Mike Madison (Pitt) has put The Football as Intellectual Property Object on SSRN. At first, I was really excited - looking forward to hearing about the pigskin's development from rugby. But, apparently, there's another kind of football around. The essay was interesting just the same. Here's the abstract:
The histories of technology and culture are filled with innovations that emerged and took root by being shared widely, only to be succeeded by eras of growth framed by intellectual property. The Internet is a modern example. The football, also known as the pelota, ballon, bola, balón, and soccer ball, is another, older, and broader one. The football lies at the core of football. Intersections between the football and intellectual property law are relatively few in number, but the football supplies a focal object through which the great themes of intellectual property have shaped the game: origins; innovation and standardization; and relationships among law and rules, on the one hand, and the organization of society, culture, and the economy, on the other.
The essay details some of the history of soccer and the soccer ball from a variety of IP and innovation standpoints - sponsorships, standardization, unintended consequences of innovation, etc. The discussion provides a nice, brief survey of the untold life of an everyday object. The essay is part of a larger book that I look forward to reading: A History of Intellectual Property in 50 Objects.

Monday, December 11, 2017

Big Patent Data from Google

UPDATE: I got some new information from the folks at Google, discussed below.

Getting patent data should be easier, but it's not. It is public information, but gathering, organizing, and cleaning it takes time. Combining data sets also takes time. Companies charging a fee do a good job providing different data, but none of them have it all, and some of the best ones can be costly for regular access.

Google has long had patent data, and it has been helpful. Google Patents is far easier to use than the USPTO website (though I think their "improvements" have actually made it worse for my purposes - but that's for another day). They also had "bulk" data from the PTO, but those data dumps required work to import into a usable form. I spent two days writing a Python script that would parse the XML assignment files, but then I had to keep running it to stay current, as well as pay for storage of the huge database. The PTO Chief Economist has since released (and kept up to date) the same data in Stata format, which is a big help. But it's still not tied to, say, inventor data or litigation data.
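For the curious, the core of that script looked something like the sketch below. This is a minimal reconstruction, not the original, and the element names are illustrative stand-ins - the PTO's actual assignment schema has more (and differently named) fields:

```python
import csv
import xml.etree.ElementTree as ET

# Minimal sketch of flattening a PTO assignment XML dump into a CSV.
# The element names below are illustrative stand-ins, not the PTO's
# actual schema. iterparse streams the file so a multi-gigabyte dump
# doesn't have to fit in memory.
def parse_assignments(xml_path, csv_path):
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["assignor", "assignee", "patent"])
        for _, elem in ET.iterparse(xml_path, events=("end",)):
            if elem.tag == "patent-assignment":  # hypothetical element name
                assignor = elem.findtext("assignor/name", default="")
                assignee = elem.findtext("assignee/name", default="")
                for doc in elem.iter("doc-number"):  # hypothetical element name
                    writer.writerow([assignor, assignee, doc.text])
                elem.clear()  # free the element once written
```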

So, now, Google is trying to change that. It has announced the Google Patents Public Datasets service on its Cloud Platform and in Google BigQuery. A blog post describing the setup is here, and the actual service is here. With the service, you can use SQL queries to search across multiple databases, including patent data, assignment data, patent claim data, PTAB data, litigation notice data, examiner data, and so forth.

There's good news and bad news with the system. The good news is that it seems to work pretty well. I was able to construct a basic query, though I thought the user interface could be improved with some of the features you see in better drag-and-drop database systems (especially where there are so many long database names).
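To give a flavor, here is a minimal sketch of the kind of basic query I mean, run through BigQuery's Python client. The table and column names (patents-public-data.patents.publications, country_code, grant_date) are my reading of Google's public documentation, so verify them before relying on this:

```python
# Minimal sketch: count granted patents by country in the Google Patents
# Public Datasets. Table and column names are assumptions drawn from
# Google's documentation; verify before use. Requires a Google Cloud
# project with BigQuery enabled.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT country_code, COUNT(*) AS n
    FROM `patents-public-data.patents.publications`
    WHERE grant_date BETWEEN 20120101 AND 20161231  -- dates stored as YYYYMMDD ints
    GROUP BY country_code
    ORDER BY n DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.country_code, row.n)
```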

The other good news is that it is apparently expandable. Google will be working with data aggregators to include their data (for a fee, I presume), so that you can easily combine data from multiple sources at once. Further, there is other data in the system, including HathiTrust books - so you could, for example, see if inventors have written books, or tie book publishing to inventing over periods of years.

Now, the bad news. First, some of the databases haven't been updated in a while - they are what they were when first released. This leads to the second problem: you are at the mercy of the PTO and Google. If all is well, then that's great. But if the PTO doesn't update, or Google decides this isn't important anymore (any Google Reader fans out there?), then bye-bye.

I look forward to using this as long as I can - it's definitely worth a look.

UPDATE:
Here is what I learned from Google:
1. Beyond data vendors, anyone can upload their own tables to BigQuery and choose who has access. This makes it a great fit for research groups analyzing private data before publication, as well as operating companies and law firms generating reports that combine private portfolio information with paid and public datasets.

2. Each incremental data addition expands the potential queries to be done, and you're no longer limited by what a single vendor can collect and license.

3. The core tables are updated quarterly, so the next update is due out shortly. As Google adds more data vendors, alternatives for the core tables will appear and be interchangeable with different update frequencies.

Tuesday, December 5, 2017

Nature Versus Nurture in the Propensity to Innovate

A colleague pointed me to a paper today that I wanted to share. NBER researchers have managed to tie patent inventor data to tax returns and to childhood test scores in order to show who is likely to become an inventor on a patent. The study makes some intuitive and some counterintuitive findings. The paper, by Alexander M. Bell (Harvard Econ), Raj Chetty (Stanford Econ), Xavier Jaravel (LSE), Neviana Petkova (U.S. Treasury), and John Van Reenen (MIT Econ), is here:
We characterize the factors that determine who becomes an inventor in America by using de-identified data on 1.2 million inventors from patent records linked to tax records. We establish three sets of results. First, children from high-income (top 1%) families are ten times as likely to become inventors as those from below-median income families. There are similarly large gaps by race and gender. Differences in innate ability, as measured by test scores in early childhood, explain relatively little of these gaps. Second, exposure to innovation during childhood has significant causal effects on children's propensities to become inventors. Growing up in a neighborhood or family with a high innovation rate in a specific technology class leads to a higher probability of patenting in exactly the same technology class. These exposure effects are gender-specific: girls are more likely to become inventors in a particular technology class if they grow up in an area with more female inventors in that technology class. Third, the financial returns to inventions are extremely skewed and highly correlated with their scientific impact, as measured by citations. Consistent with the importance of exposure effects and contrary to standard models of career selection, women and disadvantaged youth are as under-represented among high-impact inventors as they are among inventors as a whole. We develop a simple model of inventors' careers that matches these empirical results. The model implies that increasing exposure to innovation in childhood may have larger impacts on innovation than increasing the financial incentives to innovate, for instance by cutting tax rates. In particular, there are many “lost Einsteins” — individuals who would have had highly impactful inventions had they been exposed to innovation.
The finding that inventors are disproportionately white men from higher-income families is hardly surprising, and one might imagine a variety of reasons: access to education, access to capital, risk aversion, discrimination, etc. But the interesting part of this paper, as I discuss below, is that the authors believe they can causally show it is not really any of the usual suspects, at least not directly.

Tuesday, November 28, 2017

Some Transparency Into Chinese Patent Litigation

Despite knowing its growing importance in global IP, I've always kept the Chinese patent system at bay in my research. I primarily focus on the U.S. patent system (which keeps me plenty busy), and, more importantly, there has been very little transparency in Chinese litigation: it's a different language, and courts did not routinely report their decisions until very recently.

There's been some movement, though. The language remains the same, but the courts are reporting more decisions.  They are supposed to be reporting all of them, in fact. So Renjun Bian (JSD Candidate, Berkeley Law) has leveraged this new reporting to provide some details in Many Things You Know about Patent Infringement Litigation in China Are Wrong, on SSRN. The good news for me is that I don't really know anything about patent infringement litigation in China, so I'm unlikely to be wrong. But that didn't stop me from reading:
As the Chinese government continues to stimulate domestic innovation and patent activities via a variety of policies, China has become a world leader in both patent applications and litigation. These major developments have made China an integral venue of international patent protection for inventors and entrepreneurs worldwide. However, due to the lack of judicial transparency before 2014, westerners had virtually no access to Chinese patent litigation data and knew little about how Chinese courts adjudicated patent cases. Instead, outside observers were left with a variety of impressions and guesses based on the text of Chinese law and the limited number of cases released by the press. Taking advantage of ongoing judicial reform in China, including mandated public access to all judgments made since January 1, 2014 via a database called China Judgements Online (CJO), this paper analyzes 1,663 patent infringement judgments – all publicly available final patent infringement cases decided by local people’s courts in 2014. Surprisingly, many findings in this paper contradict long-standing beliefs held by westerners about patent enforcement in China. One prominent example is that foreign patent holders were as likely to litigate as domestic patent holders, and received noticeably better results – higher win rate, injunction rate, and average damages. Another example is that all plaintiffs won in 80.16% of all patent infringement cases and got permanent injunctions automatically in 90.25% of cases whose courts found patent infringement, indicating stronger patent protection in China than one might expect.
Yes, you read that right: plaintiffs win 80% of the time, and 90% of the winners get a permanent injunction. The win rates are affirmed on appeal most of the time. I'll admit that, while I knew nothing about the system, that didn't stop me from having a vision of a place where you could get no relief - but that appears not to be the case. More on this below.

Tuesday, November 21, 2017

Data for the Evergreening Debate

Pharmaceutical companies would like their blockbuster drug exclusivity to last forever. But patents expire and generics enter the marketplace. This ecosystem has led to a battleground, with opposing claims about unfair competition, evergreening, patent misuse, etc. There's a fair amount of data out there, but with respect to evergreening there has been more heat than light. A recent paper by Robin Feldman (Hastings) and Connie Wang (Hastings - student) attempts to change this by gathering data on 16,000 Orange Book entries between 2005 and 2015.

For the unaware (and I'll admit that I'm only mildly aware), the Orange Book is an FDA listing of all the "exclusivities" that companies claim related to their New Drug Applications (i.e., their drugs). These exclusivities might relate to patents associated with the drug, research related to the drug, or approval to use the drug on new populations or for "orphan" (low-incidence) diseases.

Feldman and Wang argue that the Orange Book has been used by companies to "evergreen" their drugs - that is, to extend exclusivity beyond patent expiration. The paper is on SSRN and the abstract is here:
Why do drug prices remain so high? Even in sub-optimally competitive markets such as health care, one might expect to see some measure of competition, at least in certain circumstances. Although anecdotal evidence has identified instances of evergreening, which can be defined as artificially extending the protection cliff, just how pervasive is such behavior? Is it simply a matter of certain bad actors, to whom everyone points repeatedly, or is the problem endemic to the industry?
This study examines all drugs on the market between 2005 and 2015, identifying and analyzing every instance in which the company added new patents or exclusivities. The results show a startling departure from the classic conceptualization of intellectual property protection for pharmaceuticals. Key results include: 1) Rather than creating new medicines, pharmaceutical companies are recycling and repurposing old ones. Every year, at least 74% of the drugs associated with new patents in the FDA’s records were not new drugs coming on the market, but existing drugs; 2) Adding new patents and exclusivities to extend the protection cliff is particularly pronounced among blockbuster drugs. Of the roughly 100 best-selling drugs, almost 80% extended their protection at least once, with almost 50% extending the protection cliff more than once; 3) Once a company starts down this road, there is a tendency to keep returning to the well. Looking at the full group, 80% of those who added protections added more than one, with some becoming serial offenders; 4) The problem is growing across time.
I think the data the authors have gathered is extremely important, and their study sheds real light on what happens in the pharmaceutical industry. That said, as I explain below, my takeaways from this paper are very different from theirs.

Tuesday, November 14, 2017

What is Essential? Measuring the Overdeclaration of Standards Patents

Standard essential patents are a relatively hot area right now, and seem to be of growing importance in the academic literature. I find the whole issue fascinating, in large part because most of the decisions are handled through private ordering, and so most of the studies are based on breakdowns.

One such breakdown occurs when companies declare too many patents essential to a standard - that is, they claim that more of their patents must be practiced to implement the standard than is actually the case. The incentives for doing this are obvious: once declared essential, it is easier to argue for royalties or cross-licensing. But there are also important incentives against leaving patents out, for doing so may bring penalties in terms of participation in formation of the standard in the first place. Given that the incentives all align toward disclosure, it is no wonder that some companies push back against paying. That said, if portfolio theory holds true - and I think it does in most cases - it doesn't matter much whether there are 10 or 100 patents, as long as the first few are strong and essential. But that's an argument for another day.

Just how prevalent is this overdeclaration problem? One paper tries to figure that out. Robin Sitzing (Nokia), Pekka Sääskilahti (Compass Lexecon), Jimmy Royer (Analysis Group, Sherbrooke U. Economics), and Marc Van Audenrode (Analysis Group, Laval U. Economics) have posted Over-Declaration of Standard Essential Patents and Determinants of Essentiality to SSRN. Here is the abstract:
Not all Standard Essential Patents (SEPs) are actually essential – a phenomenon called over-declaration. IPR policies of standard-setting organizations require patent holders to declare any patents as SEPs that might be essential, without further SSO review or detailed compulsory declaration information. We analyze actual essentiality of 4G cellular standard SEPs. A declaration against a specific technical specification document of the standard is a strong predictor of essentiality. We also find that citations from and to SEPs declared to the same standard predict essentiality. Our results provide policy guidance and call for recognition of over-declaration in the economics literature.
This is an ambitious study. The authors used data on SEP declared patents (for the ETSI 4G LTE standard, among others) that were independently judged* by technical experts. They then performed regressions to determine whether there were specific factors that had an effect on being "actually" essential. One key finding was that when the patent was declared for a specific standards document, it was much more likely to be deemed essential than if it were declared for the standard generally. My takeaway is that when the specifics are outlined, companies know what their patents cover, but when faced with a broad standard, they will contribute anything they think might be close.

They also found that patents later assigned to NPEs were not more likely to be nonessential. Similarly, while firm size and R&D investment had a statistically significant effect on the likelihood of being actually essential, that effect was so small that it was practically insignificant. Finally, they find that longer claims (which are theoretically narrower) are, in fact, less likely to be essential.

As with other papers, there is a lot of data here that is worth looking at. But the final conclusion is an interesting one, worth carrying over to other papers: the traditional measures that economists use to judge patent value (such as citations) do not predict whether a declared patent will be technically essential. This adds to a growing body of findings questioning the use of these metrics.

*The authors explain the trustworthiness of their data. I'll leave it to the reader to decide whether it holds up.

Sunday, November 12, 2017

Do Machines, and Women, Need a Different Obviousness Standard?

This blog post addresses two different articles that might at first blush seem to be very different. The first is Ryan Abbott's new article Everything Is Obvious, which explores the implications of machine-generated IP for the nonobviousness standard of patentability. Abbott argues the inventiveness standard should be adjusted to take into account the new reality that inventors are frequently assisted by machines or, in some cases, are machines. The second article is Dan Burk's Diversity Levers, published in 2015 in the Duke Journal of Gender Law & Policy. In the article, Burk argues the standard for nonobviousness should be adjusted to take into account the unique mindset and institutional situation of female inventors. (To be clear, Burk is not coming at this issue out of the blue. He has previously written about feminism in collision with copyright, arguing that copyright can be used to suppress feminist discourse).

Abbott's thesis is that, in comparison to machines, humans are all a little less skilled, so a human-based obviousness standard will necessarily lead to too many patents if machines are commonly employed. Burk's point is that, in comparison to men, women are typically more risk-averse, so a male-based obviousness standard will necessarily lead to too few female-invented patents.

Tuesday, November 7, 2017

Tracking the Sale of Patent Portfolios

Finding out about patent sales and prices is notoriously difficult, yet critically important for patent valuation. Brian Love (Santa Clara Law), Kent Richardson, Erik Oliver, and Michael Costa (Richardson Oliver Law Group) have helped us all out by posting An Empirical Look at the "Brokered" Patent Market to SSRN. Here is the abstract:
We study five years of data on patents listed and sold in the quasi-public “brokered” market. Our data covers almost 39,000 assets, an estimated 80 percent of all patents and applications offered for sale by patent brokers between 2012 and 2016. We provide statistics on the size and composition of the brokered market, including the types of buyers and sellers who participate in the market, the types of patents listed and sold on the market, and how market conditions have changed over time. We conclude with an analysis of what our data can tell us about how to accurately value technology, the costs and benefits of patent monetization, and the brokered market’s ability to measure the impact of changes to patent law.
The article provides some really useful data about brokered patent portfolios - that is, groups of patents sold by brokers rather than "secretly." While brokered transactions are also confidential, their public offering makes them more visible than direct company-to-company transactions.

The information is quite interesting: the number of patents in each portfolio is quite small - most are fewer than a dozen. The offering prices have dropped over the last five years (shocker). Operating companies sell a lot of these, and PAEs buy them (something I pointed out five years ago in Patent Troll Myths, and which gave rise to the LOT Network framework - in fact, Open Invention Network is now a key buyer). There is a lot more data here, and I don't want to preempt the paper by just repeating it all - it's worth a look. I will note that, as the authors point out, this isn't the whole market, and they can't accurately capture sale prices, so they use a "spot check" to estimate what they expect them to be.

Having introduced the paper, I do want to ask, like every good academic, "But what about my article?" Here I'll note a couple of takeaways from the paper that bear on my own work on this subject, Patent Portfolios as Securities. First, the first portion of that paper was dedicated to the notion that buying and selling portfolios isn't just about patent trolls. I told anecdotes and used some data, so I'm glad to see a broader-based survey provide stronger support for that assertion. Second, my argument was that treating portfolios as securities would force more transparency in sales and valuations. This paper's results support this notion in two ways. It shows how difficult it is to get any kind of transparency, even when you have brokered transactions. It also shows how easy it would be to jump from a brokered transaction to a more transparent clearinghouse that might provide the type of valuation information that market participants crave. I view this paper as a useful follow-on to my own, and hope to write more about how it might bear on the treatment of patent portfolios as assets.

Anyone interested in real-world patent market transactions should give this paper a read. It provides a view into the system that we don't often see. I found it really useful.

Tuesday, October 31, 2017

Using Experts to Prove Software Copyright Infringement

[UPDATE: It turns out that my initial thoughts mirrored EA's here, and that Antonick filed a reply brief. It's interesting enough that I took a closer look at the initial briefing (and at the District Court), and I've updated/edited my post below.]

I ran across an interesting cert. petition today that I thought I would share and discuss. The case is Antonick v. Electronic Arts, 841 F.3d 1062 (2016), and the petition (filed by David Nimmer, Peter Menell, and Kevin Green) is here. The case is interesting because it is about software copyright infringement, a topic near and dear to my heart on which I've written and blogged several times.

It's also topically relevant, because it is about Madden Football, one of the more popular sports video game franchises (it's probably the most popular, but I didn't do a search to find out). Antonick was an author of the original game, dating all the way back to the Apple II (!), and had a contract providing that he would be paid for any derivative works. And so the question was whether his code was incorporated into newer versions of the software published for Sega and Super Nintendo.

The problem was that nobody could find all the source code for any of the versions to compare, and the graphic displays were not admitted into evidence. There were snippets, drafts, and binary data files. Using these, "Antonick's expert, Michael Barr, opined that Sega Madden was substantially similar to certain elements of Apple II Madden. In particular, Barr opined that the games had similar formations, plays, play numberings, and player ratings; a similar, disproportionately wide field; a similar eight-point directional system; and similar variable names, including variables that misspelled 'scrimmage.'" Based on this and other circumstantial evidence, the jury found infringement and decided for the plaintiff.

But the District Court granted judgment as a matter of law, overturning the verdict. The Court ruled that the jury could not decide infringement because it did not have the source code in evidence to compare, and that the expert's testimony was insufficient to show infringement.

And here is where the interesting legal issue comes into play: what is the role of expert testimony? I'll discuss more after the jump, but here's a teaser: I think the expert can play a role, and while that is the focus of the "legal" issues in this case, I am not sure that's what's driving the opinion. In other words, my sense is that the cert. petition's claim of a circuit split holds more in law than in practice. That may be enough for a certiorari grant; I tend to think that Antonick got a raw deal here, so if his lawyers can convince the Court to take this case, more power to him. That said, my gut says that, perhaps through no fault of his own other than waiting too long to sue, the plaintiff just didn't have enough evidence here--and if he did, he couldn't convince the District or Appellate courts of it.

Sunday, October 29, 2017

Rebecca Wexler on IP in the Criminal Justice System

The protection of criminal justice technologies with trade secrets is a hot topic. Last Term, the Supreme Court called for the views of the Solicitor General in Loomis v. Wisconsin on whether using proprietary software for sentencing is a due process violation, though it ultimately denied the cert petition. Last month, I described Natalie Ram's forthcoming article, which focuses on the innovation angle: Ram argues that trade secrecy protection is not necessary for efficient levels of innovation for these kinds of technologies. I just enjoyed another terrific article in this space by Yale Information Society Project Fellow Rebecca Wexler: Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, forthcoming in the Stanford Law Review.

Wexler describes the growing privatization of the criminal justice system, particularly through black-box algorithms. She explains that the importance of trade secrecy in this area is likely to grow: data-driven systems for forensics or risk assessment are more difficult to protect with patents post-Alice, whereas trends like the federal Defend Trade Secrets Act of 2016 seem to have strengthened the value of trade secrets. Wexler agrees that the innovation policy rationale for secrecy of criminal justice technologies is unconvincing and that this secrecy may raise due process concerns, but the focus of her article is on the problems with this trend as a matter of the law of evidence. She argues that the trade secrets privilege that two-thirds of states have codified in their evidence rules should not exist in criminal proceedings—rather, as for other sensitive information like medical records, courts should simply use protective orders to limit the distribution of trade secrets beyond the needs of the proceeding.

Since I am not an evidence law expert, I will not discuss these aspects of Wexler's argument in detail; in short, she explains that the trade secrets privilege is harmful and unnecessary in criminal cases, and that it does not serve the purpose of evidentiary privilege law. From an IP perspective, she also argues that none of the theoretical justifications for trade secrecy law support the privilege. She suggests that the privilege is most analogous to the controversial "inevitable disclosure" doctrine, under which some states will enjoin conduct based on a speculative concern rather than any direct evidence of threatened misappropriation. But even here, the trade secrets privilege doctrine overprotects because it is upheld without any reference to the circumstances of a particular case. Wexler also notes that "claims that secrecy will incentivize innovation are tenuous at best when the privilege shields information from criminal defendants who are unlikely to be business competitors." And despite the status quo of robust protection, a 2009 National Academy of Sciences report notes the "dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods"; as Wexler explains, greater transparency is likely to improve rather than worsen this problem.

I think there is plenty in Wexler's article to interest scholars of IP, criminal procedure, evidence, and more. But more importantly, I hope it is read by judges in criminal cases who are faced with assertions of trade secrets privilege. And judges will have opportunities since the issue is percolating through the courts in other cases, such as California v. Johnson; see the defense attorney's brief (which cites Wexler's article), as well as amicus briefs from the ACLU, EFF, Legal Aid, and Innocence Project. It seems like it is time for the uncritical acceptance of the privilege to end, and for judges and practitioners to grapple with the concerns Wexler raises.

Thursday, October 26, 2017

Virtual Copyright

I have posted to SSRN a draft of a new book chapter that I've written with my former law partner Jack Russo (Computer Law Group LLP in Palo Alto). It is coming out in The Law of Virtual and Augmented Reality (Woody Barfield and Marc Blitz, eds). The abstract of our chapter, called Virtual Copyright, is here:
This book chapter explores the development of virtual reality technology from its rudimentary roots toward its realistic depiction of the world. It then traces the history of copyright protection for computer software user interfaces (a law that only predates virtual reality by a few years), highlighting competing approaches toward protection and infringement. While the focus is on virtual reality, this chapter contains an exhaustive examination of the state of "look and feel" protection for software interfaces.
The chapter then considers how these competing approaches -- each of which still holds some sway in the courts -- will apply to virtual reality objects, applications, worlds, and interfaces. We posit that as VR becomes more realistic, courts will find their way to allow more reuse.
We do not expect to see traditional characters and animation treated any differently in virtual reality. Mickey Mouse is still Mickey Mouse, and Pikachu lives in trading cards, cartoons, augmented reality, and virtual reality. It is whether and how realistic depiction, gesture control, modularization and sharing fit within copyright's limiting doctrines that will create important and difficult questions for future developers, judges, juries, and appellate courts.
We wrote on this topic many, many years ago (before I even went to law school), so it was fun revisiting the topic now that the state of virtual reality and of copyright have advanced somewhat.

But that's one of the interesting things about this topic. Despite the advances, there really weren't that many...you know...advances. In the chapter, we detail some of the earliest virtual reality inventions, including gloves, goggles, and gestures. And we now have much more advanced...gloves, goggles, and gestures. To be sure, the technology is faster, cheaper, more compact, and higher quality, but we are nowhere near the Star Trek holodeck--yes, we discuss CAVEs briefly, but they had those then, too--an example we used to imagine where copyright might go.

And, despite the passage of time, there really haven't been that many advances in the copyright treatment of look and feel. As I noted in my article Hidden in Plain Sight, the last really important interface case was decided by an evenly split Supreme Court more than twenty years ago. To be sure, we discuss newer cases like Oracle v. Google, Authors Guild v. Google, all of the important transformative fair use cases, and so forth, but the handwriting for these cases was on the wall some twenty to twenty-five years ago.

And, yet, we think this is an important chapter. All these years later, the courts are still divided about how to handle some of the borderline cases (just look at how difficult the Oracle v. Google API case has been), and courts are still struggling with how to manage modularization and realistic depictions (as seen in disputes about fan fiction, museum photography, and social media). These are all problems that will seep into virtual reality, and we explain the different ways courts have handled disputes and how we think they will treat particularly salient virtual reality problems in the future.

Tuesday, October 24, 2017

Experiments on Bias in Patent Litigation OR Does Everyone Hate NPEs?

Lisa has written about the importance of experiments in patents, and I agree. I read about a really good one today. Bernard Chao (Denver Law) and one of his students, Roderick O'Dorisio, conducted an experiment to simultaneously test whether there is a bias against patentees sued for declaratory relief of non-infringement and against NPEs. To do so, they made identical patent vignettes used to resolve a close, but simple, infringement case. The only differences in the videos shown to the subjects were whether the defendant sued first and whether the plaintiff was an NPE (and in one, both were true). The abstract is here, for the paper forthcoming in the Federal Circuit Bar Journal:
Although everyone believes that telling a good story is an important part of jury persuasion, attorneys inevitably rely on their intuition to choose their stories. Experimental methodologies now allow us to test how effective these stories are. In this article, we rigorously test how two different narratives common to patent law affect mock jurors. First, we look at whether accused infringers can improve their chances of prevailing by being the aggressor. Prior studies have observed that accused infringers that file declaratory judgment actions to vindicate their rights win more often than those that are sued by patent holders. However, these results may simply be an artifact of the selection effects. For example accused infringers may simply be suing on stronger cases. To date, no studies have tried to control for these selection effects and determine whether it is truly the story that sways juries. Second, we looked at whether an accused infringer can influence mock jurors by making a few disparaging remarks about one kind of patentee’s business model, the non-practicing entity (NPE). NPEs, often pejoratively called patent trolls, may have a more difficult time prevailing at trial than practicing entities do.
To test how these narratives affect potential juries, we used a 2x2 between-subjects online experiment. We randomly assigned virtual mock jurors to watch one of four different scenarios of an abbreviated patent trial and render verdicts. The results showed that accused infringers that filed declaratory judgment actions prevailed more often than those where the patentee initiated the lawsuit. In addition, our study found that NPEs won less often than practicing entities. We discuss implications for strategy and policy.
The results are pretty clear - there were marked differences in favor of accused infringers who sued first and in favor of those sued by NPEs. And for the group subject to both treatments - NPEs sued for declaratory relief - patentee win rates were the lowest of all. I consider this a validating check on the findings for each of the individual treatments (though more on that later, as statistically it is not so clear).

As the title of this post implies, there are a couple of ways to read this data. The results here may show an implicit bias against NPEs. Or NPEs may be the baseline, and the results show a preference for practicing entities. The highest win rate was 39%, so it is not as if the plaintiffs were running away with victory here. Or it may show that taking the bull by the horns is rewarded - jurors prefer accused infringers who assert their "rights" to defend against infringement.

Nonetheless, the results are a bit shocking - a product-making plaintiff was more than twice as likely to win as an NPE sued for declaratory judgment of non-infringement on identical facts and presentations. This makes me think that we have to talk about more than patent quality when we talk about low NPE win rates.

About the statistics: the declaratory relief effect was significant at p<.1 (and at p<.05 if you included demographics). The NPE effect was significant at p<.01. Interestingly, despite the marked drop for the combined group, when the entire model was tested, including the interaction of declaratory relief and NPE status, none of the treatments was statistically significant. This result is difficult to interpret, but my sense from eyeballing the data is that the NPE effect is doing most of the work in the combined model, and so combining the DJ effect with it confounds the model.
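For concreteness, here is a toy sketch of the kind of model at issue - this is my illustration, not the authors' code, and the data file and variable names are hypothetical:

```python
# Toy sketch of the 2x2 analysis (my illustration, not the authors' code).
# 'win' is a 0/1 patentee verdict; 'dj' and 'npe' are 0/1 treatment flags.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mock_juror_verdicts.csv")  # hypothetical data file

# Main effects only: tests each treatment on its own.
main = smf.logit("win ~ dj + npe", data=df).fit()

# Full model: the dj:npe interaction term splits the explanatory power
# with the main effects, which can push all of them out of significance
# even when the combined-treatment cell shows the lowest win rate.
full = smf.logit("win ~ dj * npe", data=df).fit()

print(main.summary())
print(full.summary())
```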

A final note on methodology - the authors use Mechanical Turk, and cite to literature that such users are reliable for research like this. They also use some techniques to ensure attention. Finally, if there are attention issues, it is unclear why they would affect one category more than any other. Nonetheless, to the extent that one is skeptical of mTurk, one might be skeptical of the results here.

Tuesday, October 17, 2017

A Deep Dive on NPE Outcomes

I glibly commented on a friend's Facebook post last week that "patent troll" academic articles are so passe, despite the growing number of articles that use that term as compared to, say, 2012. Now, I shouldn't complain; given that my most cited article is called Patent Troll Myths (2012, naturally), I'd like to think that I'm driving that trend (of course, that's what the folks who wrote in 2007 would say).

But one of the reasons I joked about trolls being so 2012 is that this is where much of the detailed data comes from, and this is when the key articles that are cited by many were published. Indeed, I've published two follow-on articles to Patent Troll Myths, each of which contains more and better data (and thus took longer to complete and appeared later), but they get only a tiny fraction of the citation love of the original article.

And so it is no surprise that the latest in a series of articles by Chris Cotropia (Richmond), Jay Kesan (Illinois), and David Schwartz (Northwestern) was released with little fanfare. The article, called Heterogeneity among Patent Plaintiffs: An Empirical Analysis of Patent Case Progression, Settlement, and Adjudication is forthcoming in Journal of Empirical Legal Studies, but a draft is on SSRN. Here is the abstract:
This article empirically studies current claims that patent assertion entities (PAEs), sometimes referred to as ‘patent trolls’ or non-practicing entities (NPEs), behave badly in litigation by bringing frivolous patent infringement suits and seeking nuisance fee settlements. The study explores these claims by examining the relationship between the type of patentee-plaintiffs and litigation outcomes (e.g., settlement, grant of summary judgment, trial, and procedural dispositions), while taking into account, among other factors, the technology of the patents being asserted and the identity of the lawyers and judges. The study finds significant heterogeneity among different patent holder entity types. Individual inventors, failed operating companies, patent holding companies, and large patent aggregators each have distinct litigation strategies largely consistent with their economic posture and incentives. These PAEs appear to litigate differently from each other and from operating companies. Accordingly, to the extent any patent policy reform targets specific patent plaintiff types, such reforms should go beyond the practicing entity versus non-practicing entity distinction and understand how the proposed legislation would impact more granular and meaningful categories of patent owners.
In my article A Generation of Patent Litigation, I presented data about how often cases settle, and how that skews our view of how long they last, and who wins. This article extends the authors' earlier work on categorizing just who is filing NPE suits (in 2010 in this article), and asks when they settle for each and every defendant. This is hard work. In most of today's cases, each defendant is sued separately, so when the defendant settles, the case is over. Analytics companies track this all the time...now.

But in 2010, a patentee could sue 100 defendants at once, and you could not tell how long each remained in the case without tracking each defendant. If you only track the end of the case, you capture the one defendant who fought it out, but you miss all the defendants who exited early. The other added value of this series of papers is tracking all plaintiffs by type, rather than one big "NPE" status. I do this in The Layered Patent System, but I only had a subset of cases over a longer period of time; they have captured all of the cases in a single year. I'll discuss what this all means after the jump.

Wednesday, October 11, 2017

The Case for a Patent Box with Strings Attached

[This post is co-authored with Daniel Hemel, an assistant professor of law at the University of Chicago Law School, and cross-posted at Whatever Source Derived.] 

Trump administration officials are hoping that their plan for steep business tax cuts will spur economic growth. Economists are skeptical of the administration’s rosy growth projections. But there may yet be a way to reduce business taxes that accelerates growth, encourages innovation, and delivers tangible benefits to American consumers.

To achieve these objectives, administration officials and lawmakers should consider implementing a “patent box” — a reduced tax rate for revenues derived from the licensing and sale of patents. But unlike the patent box regimes that the United Kingdom and several other advanced economies have implemented, a U.S. patent box should come with strings attached. Specifically, the reduced rate on patent-related revenues should be conditional upon the patent holder agreeing to a shorter patent term.

Here’s how it could work: Right now, a patent confers exclusivity for 20 years from the date of application. If the patent is held by a U.S. corporation, the corporation pays a top tax rate of 35% on patent-related income. Under a “patent box with strings attached,” the corporation would have the option to pay no tax on patent-related income in exchange for a shorter patent life.

The system would be structured such that the net present value of the patent holder’s expected income stream — in after-tax terms — would be slightly more attractive under a patent box and a shorter patent life than under the status quo. For example, assuming a 5% interest rate and a 35% corporate tax rate, the net present value of a constant stream of tax-free payments over 11 years is slightly more than the net present value of a constant stream of taxable payments over a 20-year term. Thus, if utilizing the patent box meant accepting an 11-year term, patent holders would have an incentive to choose the patent box and relinquish the last 9 years of exclusivity. (If we assume instead that the prevailing tax rate is 20%, as the Trump administration and congressional Republican leaders have proposed, then the patent box with strings attached becomes preferable to a 20-year term plus full taxability if the patent box allows 15 years of exclusive rights.)
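For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation (a simplification - constant annual payments and annual discounting - not a full valuation model):

```python
# Back-of-the-envelope check of the patent box trade-off: the present
# value of a constant $1/year stream, discounted at rate r, received for
# 'years' years, with 'after_tax' the share retained after tax.
def pv(years, after_tax, r=0.05):
    return after_tax * sum(1 / (1 + r) ** t for t in range(1, years + 1))

print(pv(20, 0.65))  # status quo: 20-year term, 35% tax   -> ~8.10
print(pv(11, 1.00))  # patent box: 11-year term, tax-free  -> ~8.31 (slightly better)

print(pv(20, 0.80))  # 20-year term at a 20% rate          -> ~9.97
print(pv(15, 1.00))  # patent box: 15-year term, tax-free  -> ~10.38 (slightly better)
```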

Tuesday, October 10, 2017

Patents and Vertical Integration: A Revised Theory of the Firm

I'm a big fan of Peter Lee's work, and I'm a big fan of theory of the firm work. Imagine my joy upon seeing Prof. Lee's new article, forthcoming in Stanford Law Review, called: Innovation and the Firm: A New Synthesis. This article is a really thoughtful, really thorough re-examination of patents and the firm. The abstract is here:
Recent scholarship highlights the prevalence of vertical disintegration in high-technology industries, wherein specialized entities along a value chain transfer knowledge-intensive assets between them. Patents play a critical role in this process by lowering the cost of technology transactions between upstream and downstream parties, thus promoting vertical disintegration. This Article, however, challenges this prevailing narrative by arguing that vertical integration pervades patent-intensive fields. In biopharmaceuticals, agricultural biotechnology, information technology, and even university-industry technology transfer, firms are increasingly absorbing upstream and downstream technology providers rather than simply licensing their patents.
 This Article explains this counterintuitive development by retheorizing the relationship between innovation and the firm. Synthesizing previously disconnected lines of theory, it first argues that the challenge of aggregating tacit technical knowledge — which patents do not disclose — leads high-tech companies to vertically integrate rather than simply rely on licenses to transfer technology. Relatedly, the desire to obtain not just discrete technological assets but also innovative capacity, in the form of talented engineers and scientists, also motivates vertical integration. Due to the socially embedded nature of tacit knowledge and innovative capacity, firms frequently absorb entire pre-existing organizations and grant them significant autonomy, an underappreciated phenomenon this Article describes as “semi-integration.” Finally, strategic imperatives to achieve rapid scale and scope also lead firms to integrate with other entities rather than simply license their patents. The result, contrary to theory, is a resurgence of vertical integration in patent-intensive fields. The Article concludes by evaluating the costs and benefits of vertically integrated innovative industries, suggesting private and public mechanisms for improving integration and tempering its excesses.
The abstract does a pretty complete job of explaining the thesis and arguments here, so I'll make a few comments after the jump.

Sunday, October 8, 2017

Tejas Narechania: Is The Supreme Court Against "Patent Exceptionalism" Or In Favor of "Universality"?

Tejas Narechania's new paper, Certiorari, Universality, and a Patent Puzzle, forthcoming in Michigan Law Review argues that a major identifying factor for the Supreme Court's interest in patent cases is a field split: an area where a particular patent law doctrine plays out differently in patent law than in other fields of law where it is used. Narechania argues that the Court's apparent need to resolve, or at least address, these differences by taking review, has to do with the Court's overarching interest in preserving "universality." "[T]he Court," he writes, "is not interested in merely eliminating exceptionalism altogether. Rather, it appears concerned for calibrating a degree of consistency across doctrinal areas in light of its underlying interests in judicial efficiency, neutrality, and legitimacy." (47).

Narechania's article, especially when read alongside recent work by Peter Lee, teaches that there are two ways to explain the Supreme Court's increased interest in patent law. One is that the Court is against what is often called "patent exceptionalism" - i.e., against the Federal Circuit's use of patent-specific rules that differ from similar doctrines used in other fields. The other, which may or may not be the same thing, is that the Court is intent on preserving universal rules across all areas of law. Narechania has insightfully reoriented the "patent exceptionalism" discussion towards the latter.

Read more at the jump.

Tuesday, October 3, 2017

Repealing Patents, Oil States, and IPRs

If you haven't read any of Chris Beauchamp's (Brooklyn Law) work on patent law and litigation history, you really should. His book, Invented by Law, on the history of Bell telephone litigation, and his Yale L.J. article on the First Patent Litigation Explosion (in the 1800s) are both engaging, thorough, and thoughtful looks at history. Furthermore, he writes as a true historian, placing his work in context even if there is no clear answer for today's disputes. He points to where we can draw lessons and where we might be too quick to draw lessons. Chris doesn't publish that often because he does so much work toiling over source materials in the national archives and elsewhere.

Prof. Beauchamp posted a new essay to SSRN last week that caught my eye, and I thought I would share it here. Repealing Patents could not be more timely given the pending Oil States case - it discusses how patent revocation worked at our nation's founding, both in England and the U.S.  Here is the abstract:
The first known patent case in the United States courts did not enforce a patent. Instead, it sought to repeal one. The practice of cancelling granted patent rights has appeared in various forms over the past two and a quarter centuries, from the earliest U.S. patent law in 1790 to the new regime of inter partes review (“IPR”) and post grant review. With the Supreme Court’s grant of cert in Oil States Energy Services v. Greene’s Energy Group and its pending review of the constitutionality of IPR, this history has taken on a new significance.
This essay uses new archival sources to uncover the history of patent cancellation during the first half-century of American patent law. These sources suggest that the early statutory provisions for repealing patents were more widely used and more broadly construed than has hitherto been realized. They also show that some U.S. courts in the early Republic repealed patents in a summary process without a jury, until the Supreme Court halted the practice. Each of these findings has implications—though not straightforward answers—for the questions currently before the Supreme Court.
As with his other work, this essay is careful not to draw too many conclusions. It cannot answer all of our questions, and he explains why.

There were a few key points that really stood out for me - things we should be thinking about when we consider the "common law" right to a jury under the Seventh Amendment, and, more broadly, how we think of patent revocation as a public-rights matter.

First, the essay points out that the first Patent Act (with repeal included) predated the Bill of Rights. So when we think of common law, we usually look to England, because the U.S. adopted English law at the time of the Seventh Amendment. But, here, the U.S. broke with England and installed its own procedure. It is quite possible that English practice at the time is simply irrelevant. I don't know how this cuts for the case, frankly.

Second, the revocation action came at a time when patents were essentially registered rather than examined. Beauchamp points out that for the first three years, three cabinet members used their discretion to grant patents, but they were not conducting prior art searches and the like. In other words, revocation, which was abolished when the patent examination system was installed in 1836, was a creature of non-examination, not a way to do re-examination.

Third, there were some summary revocations, but there was a dispute about whether a jury should decide factual issues on revocation. That debate lasted until 1824, when Justice Story (for the Supreme Court) ruled that the English procedure of a jury trial should apply. This, too, is ambiguous, because the right to a jury trial was really up in the air for a while. But what struck me most about this history is something different. As I wrote in my own article America's First Patents, Justice Story had an affinity for English patent law, and apparently liked to discard American breaks from the law in favor of the English rule. In my article, it was his importation of a distrust of process patents (which gave rise to much of our patentable subject matter jurisprudence today). In this essay, it is his importation of the English revocation process, which required a jury. If it turns out that the jury rule in early American repeal proceedings is important in this case, you'll know whom to thank.

Tuesday, September 26, 2017

How does trade secrecy affect patenting?

As I mention in my forthcoming book chapter on empirical methods in trade secret research, there's really a dearth of good empirical scholarship about the role of trade secrets in the economy. One scholar who has written several articles in this area is Ivan Png of the National University of Singapore. Professor Png exploits variation in the strength of trade secret protection to identify causal effects on, say, innovation or worker mobility.

His latest article, Secrecy and Patents: Theory and Evidence from the Uniform Trade Secrets Act (SSRN draft here, final paywalled version here), examines how rates of patenting change when levels of protection for trade secrets change. Here is the abstract, which shares some of the findings:

How should firms use patents and secrecy as appropriability mechanisms? Consider technologies that differ in the likelihood of being invented around or reverse engineered. Here, I develop the profit-maximizing strategy: (i) on the internal margin, the marginal patent balances appropriability relative to cost of patents vis-a-vis secrecy, and (ii) on the external margin, commercialize products that yield non-negative profit. To test the theory, I exploit staggered enactment of the Uniform Trade Secrets Act (UTSA), using other uniform laws as instruments. The Act was associated with 38.6% fewer patents after one year, and smaller effects in later years. The Act was associated with larger effect on companies that earned higher margins, spent more on R&D, and faced weaker enforcement of covenants not to compete. The empirical findings are consistent with businesses actively choosing between patent and secrecy as appropriability mechanisms, and appropriability affecting the number of products commercialized.
Frankly, I think the abstract undersells the findings a bit, as it seems targeted to the journal, Strategy Science. The paper itself takes a much broader view of the model: "If trade secrets law is stronger in the sense of reducing the likelihood of reverse engineering, then businesses should adjust by (i) patenting fewer technologies and keeping more of them secret, and (ii) commercializing more products."

Like Png's other work in this area, the core of the analysis begins with an index of trade secret strength in each state, based on passage of the UTSA and variations in each state's implementation of it (e.g., with respect to inevitable disclosure). In this paper, Png then obtained data on the location of company R&D facilities and the patents coming out of those facilities. He also used other uniform laws passed around the same time as instruments, to ensure that UTSA passage is not endogenous to patenting.
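
For readers who want a concrete sense of the estimation, here is a minimal sketch of what such an instrumental-variables regression might look like, assuming a hypothetical state-year panel. The file name, column names, and exact specification are my own illustrative assumptions, not Png's actual code or data:

import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical state-year panel (all names are assumptions for illustration):
# state, year, patents, ts_strength (UTSA-based index), other_uniform_laws
# (indicator for adoption of other contemporaneous uniform laws).
df = pd.read_csv("state_year_panel.csv")
df["log_patents"] = np.log1p(df["patents"])

# [ts_strength ~ other_uniform_laws] marks the endogenous regressor and its
# instrument; C(state) and C(year) dummies absorb two-way fixed effects.
model = IV2SLS.from_formula(
    "log_patents ~ 1 + C(state) + C(year) + [ts_strength ~ other_uniform_laws]",
    data=df,
)
results = model.fit(cov_type="clustered", clusters=df["state"])
print(results.summary)

The identifying assumption is that adoption of other uniform laws predicts UTSA passage but affects patenting only through trade secret strength.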

This is a really interesting and important paper, even if it validates what most folks probably assumed (dating back to the days of Kewanee v. Bicron): if you strengthen secrecy, there will be fewer patents. That said, there is a lot going on in this paper, and a lot of assumptions in the modeling. First and foremost, the measured levels of trade secret protection don't have many degrees of freedom. I much prefer the categories created by Lippoldt and Schultz, though even a binary variable might be sufficient. Second, the model and estimation rest on the assumption that the marginal patent is the one most likely to be designed around, and use the number of technology classes to estimate patent scope (and to validate that assumption). I know many folks who would disagree with using patent classes as a measure of scope.

Even with these critiques, this paper is worth a read and some attention. I'd love to see more like it.

Monday, September 25, 2017

What can we learn from variation in patent examiner leniency?

Studying the effect of granting vs. rejecting a given patent application can reveal little about the ex ante patent incentive (since ex ante decisions were already made), but it can say a lot about the ex post effect of patents on things like follow-on innovation. But directly comparing granted vs. rejected applications is problematic because one might expect there to be important differences between the underlying inventions and their applicants. In an ideal (for a social scientist) world, some patent applications would be randomly granted or denied in a randomized controlled trial, allowing for a rigorous comparison. There are obviously problems with doing this in the real world—but it turns out that the real world comes close enough.

The USPTO does not randomly grant application A and reject application B, but it does often assign (as good as randomly) application A to a lenient examiner who is very likely to grant it, while assigning B to a strict examiner who is very likely to reject it. Thus, patent examiner leniency can be used as an instrumental variable for which patent applications are granted. This approach was pioneered by Bhaven Sampat and Heidi Williams in How Do Patents Affect Follow-on Innovation? Evidence from the Human Genome, in which they used it to conclude that, on average, gene patents appear to have had no effect on follow-on innovation.
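
To make the design concrete, here is a minimal sketch of the leniency instrument under assumed data and column names; it is my gloss on the general approach, not Sampat and Williams's actual code:

import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical application-level data (names are illustrative assumptions):
# examiner, granted (0/1), followon_citations, art_unit, year.
apps = pd.read_csv("applications.csv")

# Leave-one-out leniency: each examiner's grant rate over all *other*
# applications, so an application's own outcome doesn't contaminate it.
g = apps.groupby("examiner")["granted"]
apps["leniency"] = (g.transform("sum") - apps["granted"]) / (g.transform("count") - 1)
apps = apps[g.transform("count") >= 10]  # drop examiners with few applications

# 2SLS: instrument the grant decision with examiner leniency, conditioning on
# art-unit-by-year dummies, within which assignment is plausibly as good as random.
model = IV2SLS.from_formula(
    "followon_citations ~ 1 + C(art_unit):C(year) + [granted ~ leniency]",
    data=apps,
)
print(model.fit(cov_type="robust").summary)

The design stands or falls on quasi-random assignment within art unit and year, and on leniency affecting follow-on innovation only through the grant decision.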

Since their seminal work, I have seen a growing number of other scholars adopt this approach in recent papers.

Monday, September 18, 2017

Tattoos, Architecture, and Copyright

In my IP seminar, I ask students to pick an article to present in class for a critical style and substance review. This year, one of my students picked an article about copyright and tattoos, a very live issue. The article was decent enough, raising many concerns about tattoos: Is human skin fixed? Is it a copy? How do you deposit it at the Library of Congress? (answer: photographs) What rights are there to modify it? To photograph it? Why is it ok for photographers to take pictures, but not ok for video game companies to emulate them? Can they be removed or modified under VARA (which protects against such things for visual art)?

It occurred to me that we ask many of these same questions with architecture, and that the architectural rules have solved the problem. You can take pictures of buildings. You can modify and destroy buildings. You register buildings by depositing plans and photographs. Standard features are not protectible (sorry, no teardrop, RIP, and Mom tattoo protection). But you can't copy building designs. If we view tattoos on the body as a design incorporated into a physical structure (the human body), it all makes sense, and solves many of our definitional and protection problems.

Clever, right? I was going to write an article about it, maybe. Except then I discovered that somebody else already had. In That Old Familiar Sting: Tattoos, Publicity and Copyright, Matthew Parker writes:

Tattoos have experienced a significant rise in popularity over the last several decades, and in particular an explosion in popularity in the 2000s and 2010s. Despite this rising popularity and acceptance, the actual mechanics of tattoo ownership and copyright remain very much an issue of first impression before the courts. A series of high-priced lawsuits involving famous athletes and celebrities have come close to the Supreme Court at times, but were ultimately settled before any precedent could be set. This article describes a history of tattoos and how they might be seen to fit in to existing copyright law, and then proposes a scheme by which tattoo copyrights would be bifurcated similar to architecture under the Architectural Works Copyright Protection Act.
It's a whole article, so Parker spends more time developing the theory and dealing with topics such as joint ownership than I do in my glib recap. For those interested in this topic, it's certainly a thought-provoking analogy worth considering.

Barton Beebe: Bleistein and the Problem of Aesthetic Progress in American Copyright Law

Bleistein v. Donaldson Lithographing Co. is a well-known early twentieth-century copyright decision of the U.S. Supreme Court. In his opinion for the majority, Justice Holmes is taken to have articulated two central propositions about the workings of copyright law. The first is the idea that copyright's originality requirement may be satisfied by the notion of "personality," or the "personal reaction of an individual upon nature," which just about every work of authorship exhibits. The second is the principle of aesthetic neutrality, according to which "[it] would be a dangerous undertaking for persons trained only to the law to constitute themselves final judges of the worth of pictorial illustrations, outside of the narrowest and most obvious limits." Both of these propositions are today understood as relating to copyright's relatively toothless originality requirement, which few works ever fail to satisfy.

In a paper recently published in the Columbia Law Review, Barton Beebe (NYU) unravels the intellectual history of Bleistein and concludes that for over a century, American copyright jurisprudence has relied on a misreading (and misunderstanding) of what Holmes was trying to do in his opinion. On the first proposition, he shows that Holmes was deeply influenced by American (rather than British or European) literary romanticism, which constructed the author in a "distinctively democratic—and more particularly, Emersonian—image of everyday, common genius." (p. 370). On the second, Beebe argues that Holmes' comments on neutrality had little to do with the originality requirement, but were instead a response to the dissenting opinion, which had sought to deny protection to the work at issue (an advertisement for a circus) because it did not "promote the progress," as the Constitution mandates. The paper then examines how this misunderstanding (of both propositions) came to influence copyright jurisprudence, and Beebe proceeds to suggest ways in which an accurate understanding of Bleistein may be used to reform crucial aspects of modern copyright law. The paper is well worth a read for anyone interested in copyright.

Beebe's examination of Holmes' views on progress, personality, and literary romanticism did, however, raise a question for me about the unity (or coherence) of Holmes' views, especially given that he was a polymath. Holmes has long been regarded as a Legal Realist who thought about legal doctrine in largely functional and instrumental terms, and Bleistein's commonly (mis)understood insights about originality comport well with that pragmatic worldview. His treatment of originality as a narrow (and normatively empty) concept, for instance, sits well with his anti-conceptualism and critique of formalist thinking. But if Holmes really did not intend for originality to be a banal and content-less standard (as Beebe suggests), how might he have squared its innate indeterminacy with his Realist thinking? Does Beebe's reading of Bleistein suggest that Holmes was not a Legal Realist after all when it came to questions of copyright law and its relationship to aesthetic progress? This of course isn't Beebe's inquiry in the paper (nor should it be, given the other important questions that it addresses), but the possibility of revising our view of Holmes intrigued me.

Wednesday, September 13, 2017

Tribal Sovereign Immunity and Patent Law

Guest post by Professor Greg Ablavsky, Stanford Law School

In Property, I frequently hedge my answers to student questions by cautioning that I am not an expert in intellectual property. I’m writing on an IP blog today because, with Allergan’s deal with the Saint Regis Mohawk Tribe, IP scholars have suddenly become interested in an area of law I do know something about: federal Indian law.

Two principles lie at the core of federal Indian law. First, tribes possess inherent sovereignty, although their authority can be restricted through treaty, federal statute, or when inconsistent with their dependent status. Second, Congress possesses plenary power over tribes, which means it can alter or even abolish tribal sovereignty at will.

Tribal sovereign immunity flows from tribes’ sovereign status. Although the Supreme Court at one point described tribal sovereign immunity as an “accident,” the doctrine’s creation in the late nineteenth century in fact closely paralleled contemporaneous rationales for the development of state, federal, and foreign sovereign immunity. But the Court’s tone is characteristic of its treatment of tribal sovereign immunity: even as the Court has upheld the principle, it has done so reluctantly, even hinting to Congress that it should cabin its scope. This language isn’t surprising. The Court hasn’t been a friendly place for tribes for nearly forty years, with repeated decisions imposing ever-increasing restrictions on tribes’ jurisdiction and authority. What is surprising is that tribal sovereign immunity has avoided this fate. The black-letter law has remained largely unchanged, narrowly surviving a 2014 Court decision that saw four Justices suggest that the doctrine should be curtailed or even abolished.

Monday, September 11, 2017

Reexamining the Private and Social Costs of NPEs

It's good to be returning from a longish hiatus. I've just taken over as the Associate Dean for Faculty Research; needless to say, it's kept me busier than I would like. But I'm back, and hope to resume regular blogging.

My first entry has been sitting on my desk (errrr, my email) for about six months. In 2011, Bessen, Meurer, and Ford published The Private and Social Costs of Patent Trolls, which was received with much fanfare. Its findings of nearly $500 billion in decreased market value over a 20-year period, and $80 billion in losses per year for four years in the late 2000s, garnered significant attention; the paper has been downloaded more than 5,000 times on SSRN.

Enter Emiliano Giudici and Justin Robert Blount, both of Stephen F. Austin Business School. They have attempted to replicate the findings of Bessen, Meurer, and Ford with newer data. The results are pretty stark: they find no significant evidence of loss at all, and they attribute the findings of the prior paper to a few outliers, among other possible explanations. These are really important findings, yet their paper has fewer than 50 downloads. The abstract is here:
An ongoing debate in patent law involves the role that “non-practicing entities,” sometimes called “patent trolls” serve in the patent system. Some argue that they serve as valuable market intermediaries and other argue that they are a drain on innovation and an impediment to a well-functioning patent system. In this article, we add to the data available in this debate by conducting an event study that analyzes the market reaction to patent litigation filed by large, “mass-aggregator” NPE entities against large publicly traded companies. This study advances the literature by attempting to reproduce the results of previous event studies done in this area on newer market data and also by subjecting the event study results to more rigorous statistical analysis. In contrast to a previous event study, in our study we found that the market reacted little, if at all, to the patent litigation filed by large NPEs.
This paper is a useful read beyond the empirics. It does a good job explaining the background, the prior study, and critiques of the prior study. It is also circumspect in its critique - focusing more on the inferences to be drawn from the study than on the methods. This is a key point: I'm not a fan of event studies, for a variety of reasons, but that doesn't mean I think event studies are somehow unsound methodologically. It just means that our takeaways from them have to be tempered by their limitations. And I've always been troubled that the key takeaways from Bessen, Meurer & Ford were outsized (especially in the media) compared to what the method can support.
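
For readers unfamiliar with the machinery, here is a stylized sketch of the basic event-study calculation, with invented file and column names (the authors' actual implementation is more rigorous): fit a market model for each defendant over a pre-filing estimation window, then cumulate abnormal returns in the days after the suit is filed.

import numpy as np
import pandas as pd

# Hypothetical inputs (names are illustrative assumptions):
# daily_returns.csv: date, firm, ret, mkt_ret
# npe_filings.csv: firm, filing_date
returns = pd.read_csv("daily_returns.csv", parse_dates=["date"])
events = pd.read_csv("npe_filings.csv", parse_dates=["filing_date"])

EST_WINDOW = 120   # trading days used to estimate the market model
EVENT_WINDOW = 5   # trading days after filing over which to cumulate

cars = []
for _, ev in events.iterrows():
    r = returns[returns["firm"] == ev["firm"]].sort_values("date").reset_index(drop=True)
    post = r.index[r["date"] >= ev["filing_date"]]
    if len(post) == 0 or post[0] < EST_WINDOW:
        continue  # not enough pre-event data to estimate the market model
    t0 = post[0]
    est = r.iloc[t0 - EST_WINDOW:t0]
    beta, alpha = np.polyfit(est["mkt_ret"], est["ret"], 1)  # market model fit
    win = r.iloc[t0:t0 + EVENT_WINDOW]
    cars.append((win["ret"] - (alpha + beta * win["mkt_ret"])).sum())

cars = pd.Series(cars)
print(f"Mean CAR: {cars.mean():.4f}, t-stat: {cars.mean() / cars.sem():.2f}")

A mean cumulative abnormal return statistically indistinguishable from zero is exactly the "little, if at all" market reaction the authors report, and the sensitivity of such estimates to a few outlier events is one reason takeaways should be tempered.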

But Giudici and Blount embrace the event study, weaknesses and all, and do not find the same results. This, I think, is an important finding and worthy of publicity. That said, there are some critiques, which I'll note after the break.

Natalie Ram: Innovating Criminal Justice

Natalie Ram (Baltimore Law) applies the tools of innovation policy to the problem of criminal justice technology in her latest article, Innovating Criminal Justice (forthcoming in the Northwestern University Law Review), which is worth a read by innovation and criminal law scholars alike. Her dive into privately developed criminal justice technologies—"[f]rom secret stingray devices that can pinpoint a suspect’s location to source code secrecy surrounding alcohol breath test machines, advanced forensic DNA analysis tools, and recidivism risk statistic software"—provides both a useful reminder that optimal innovation policy is context specific and a worrying depiction of the problems that over-reliance on trade secrecy has wrought in this field.

She recounts how trade secrecy law has often been used to shield criminal justice technologies from outside scrutiny. For example, criminal defense lawyers have been unable to examine the source code for TrueAllele, a private software program for analyzing difficult DNA mixtures. Similarly, the manufacturer of Intoxilyzer, a breath test, has fought efforts for disclosure of its source code. But access to the algorithms and other technical details used for generating incriminating evidence is important for identifying errors and weaknesses, increasing confidence in their reliability (and in the criminal justice system more broadly), and promoting follow-on innovations. Ram also argues that in some cases, secrecy may raise constitutional concerns under the Fourth Amendment, the Due Process Clause, or the Confrontation Clause.

Drawing on the full innovation policy toolbox, Ram argues that contrary to the claims of developers of these technologies, trade secret protection is not essential for the production of useful innovation in this field: "The government has at its disposal a multitude of alternative policy mechanisms to spur innovation, none of which mandate secrecy and most of which will easily accommodate a robust disclosure requirement." Patent law, for example, has the advantage of increased disclosure compared with trade secrecy. Although some of the key technologies Ram discusses are algorithms that may not be patentable subject matter post-Alice, to the extent patent-like protection is desirable, regulatory exclusivities could be created for approved (and disclosed) technologies. R&D tax incentives for such technologies also could be conditioned on public disclosure.

But one of Ram's most interesting points is that the main advantage of patents and taxes over other innovation policy tools—eliciting information about the value of technologies based on their market demand—is significantly weakened for most criminal justice technologies, for which the government is the only significant purchaser. For example, there is little private demand for recidivism risk statistical packages. Thus, to the extent added incentives are needed, this may be a field in which the most effective tools are government-set innovation rewards—grants, other direct spending, and innovation inducement prizes—that are conditioned on public accessibility of the resulting algorithms and other technologies. In some cases, agencies looking for innovations may even be able to collaborate at no financial cost with academics such as law professors or other social scientists who are looking for opportunities to conduct rigorous field tests.

Criminal justice technologies are not the only field of innovation in which trade secrecy can pose significant social costs, though most prior discussions I have seen focus on medical technologies. For instance, Nicholson Price and Arti Rai have argued that secrecy in biologic manufacturing is a major public policy problem, and a number of scholars (including Bob Cook-Deegan et al., Dan Burk, and Brenda Simon & Ted Sichelman) have discussed the problems with secrecy over clinical data such as genetic testing information. It may be worth thinking more broadly about the competing costs and benefits of trade secrecy and disclosure in certain areas—while keeping in mind that the inability to keep secrets does not mean the end of innovation in a given field.

Tuesday, September 5, 2017

Adam Mossoff: Trademarks As Property

There are two dominant utilitarian frameworks for justifying trademark law. Some view trademark protection as necessary to shield consumers from confusion about the source of market offerings, and to reduce consumers' "search costs" in finding things they want. Others view trademark protection as necessary to secure producers' incentives to invest in "quality." I personally am comfortable with both justifications for this field of law. But I have always been unclear as to how trademarks work as property. With certain caveats, I do not find it difficult to conceive of the patented and copyrighted aspects of inventions and creative writings as "property," on the theory that we generally create property rights in subject matter that we want more of. But surely Congress did not pass the Lanham Act in 1946 and codify common law trademark protection simply because it wanted companies to invest in catchy names and fancy logos?

In his new paper, Trademark As A Property Right, Adam Mossoff seeks to clarify this confusion and convince people that trademarks are property rights based on Locke's labor theory. In short, Mossoff's view is that trademarks are not a property right on their own; rather, trademarks are a property right derived from the underlying property right of goodwill. Read more at the jump.

Saturday, September 2, 2017

Petra Moser and Copyright Empirics

I thought this short Twitter thread was such a helpful, concise summary of some of NYU economist Petra Moser's excellent work—and the incentive/access tradeoff of IP laws—that it was worth memorializing in a blog post. You can read more about Moser's work on her website.

Monday, August 28, 2017

Dinwoodie & Dreyfuss on Brexit & IP

In prior work such as A Neofederalist Vision of TRIPS, Graeme Dinwoodie and Rochelle Dreyfuss have critiqued one-size-fits-all IP regimes and stressed the value of member state autonomy. In theory, the UK's exit from the EU could promote these autonomy values by allowing the UK to revise its IP laws in ways that enhance its national interests. But in Brexit and IP: The Great Unraveling?, Dinwoodie and Dreyfuss argue that these gains are mostly illusory: "the UK will, to maintain a robust creative sector, be forced to recreate much of what it previously enjoyed" through the EU, raising the question "whether the transaction costs of the bureaucratic, diplomatic, and private machinations necessary to duplicate EU membership are worth the candle."

The highlight of the piece for me is that Dinwoodie and Dreyfuss give numerous specific examples of how post-Brexit UK might depart from EU IP policy in ways that serve its perceived national policy interests, which nicely illustrate some of the ways in which the EU has harmonized IP law. For example, in the copyright context, the UK could resist the expansion in copyrightable subject matter suggested by EU Court of Justice cases; re-enact its narrow, compensation-free private copying exception; or reinstate section 52 of its Copyright, Designs and Patents Act, which limited the term of copyright for designs to the maximum term available under registered design law. In the trademark context, Dinwoodie and Dreyfuss describe how UK courts have grudgingly accepted more protectionist EU trademark policies that would not be required post-Brexit, such as limits on comparative advertising. Patent law is the area "where the UK will formally re-acquire the least sovereignty as a result of Brexit," given that it will continue to be part of the European Patent Convention (EPC) and that it still intends to ratify the Unified Patent Court Agreement—though the extent of UK involvement remains unclear.

Of course, whether such changes to copyright or trademark law would in fact further UK interests in an economic sense is highly debatable—but if UK policymakers think they would, why would they nonetheless recreate existing harmonization? I think Dinwoodie and Dreyfuss would respond that these national policy interests are outweighed by the benefits of coordination on IP, which "have been substantial and well recognized for more than a century." Their argument is perhaps grounded more in political economy than in economic efficiency, as their examples of the benefits of coordination are all benefits for content producers rather than overall welfare benefits. In any case, they note that coordination became even easier within the institutional structures of the EU, and that after Brexit, "the UK will have to seek the benefits of harmonization through the same international process that has been the subject of sustained resistance as well as scholarly critique, rather than under these more efficient EU mechanisms." While it is plausible that the loss of these efficiency gains will tilt the cost-benefit balance in favor of IP law tailored to national interests, Dinwoodie and Dreyfuss suggest that a desire for continuity and commercial certainty will override autonomy concerns.

With all the uncertainties regarding Brexit (as recently reviewed by John Oliver), intellectual property might seem low on the list of things to worry about. But the companies with significant financial stakes in UK-based IP are anxiously awaiting greater clarity in this area.

Sunday, August 20, 2017

Gugliuzza & Lemley on Rule 36 Patentable-Subject-Matter Decisions

Paul Gugliuzza (BU) and Mark Lemley (Stanford) have posted Can a Court Change the Law by Saying Nothing? on the Federal Circuit's many affirmances without opinion in patentable subject matter cases. They note a remarkable discrepancy: "Although the court has issued over fifty Rule 36 affirmances finding the asserted patent to be invalid, it has not issued a single Rule 36 affirmance when finding in favor of a patentee. Rather, it has written an opinion in every one of those cases. As a result, the Federal Circuit’s precedential opinions provide an inaccurate picture of how disputes over patentable subject matter are actually resolved."

Of course, this finding alone does not prove that the Federal Circuit's Rule 36 practice is changing substantive law. The real question isn't how many cases fall on each side of the line, but where that line is. As the authors note, the skewed use of opinions might simply be responding to the demand from patent applicants, litigants, judges, and patent examiners for examples of inventions that remain eligible post-Alice. And the set of cases reaching a Federal Circuit disposition tells us little about cases that settle or aren't appealed or in which subject-matter issues aren't raised. But their data certainly show that patentees have done worse at the Federal Circuit than it appears from counting opinions.

Perhaps most troublingly, Gugliuzza and Lemley find some suggestive evidence that Federal Circuit judges' substantive preferences on patent eligibility are affecting their choice of whether to use Rule 36: judges who are more likely to find patents valid against § 101 challenges are also more likely to cast invalidity votes via Rule 36. When both active and senior judges are included, this correlation is significant at the five-percent level. The judges at either extreme are Judge Newman (most likely to favor validity, and most likely to cast invalidity votes via Rule 36) and Chief Judge Prost (among the least likely to favor validity, and least likely to cast invalidity votes via Rule 36), who also happen to be the two judges most likely to preside over the panels on which they sit. Daniel Hemel and Kyle Rozema recently posted an article on the importance of the assignment power across the 13 federal circuits; this may be one concrete example of that power in practice.

Gugliuzza and Lemley do not call for precedential opinions in all cases, but they do argue for more transparency, such as using short, nonprecedential opinions to at least list the arguments raised by the appellant. For lawyers without the time and money to find the dockets and briefs of Rule 36 cases, this practice would certainly provide a richer picture of how the Federal Circuit disposes of subject-matter issues.

Monday, August 14, 2017

Research Handbook on the Economics of IP (Depoorter, Menell & Schwartz)

Many IP professors have posted chapters of the Research Handbook on the Economics of Intellectual Property Law. As described in a 2015 conference for the project, it "draws together leading economics, legal, and empirical scholars to codify and synthesize research on the economics of intellectual property law." This should be a terrific starting point for those new to these fields. I'll link to new chapters as they become available, so if you are interested in this project, you might want to bookmark this post.

Volume I – Theory (Ben Depoorter & Peter Menell eds.)


Volume II – Analytical Methods (Peter Menell & David Schwartz eds.)

Patents
Knowledge Commons

Last updated: April 17, 2019