Wednesday, December 19, 2018

All about IP & Price Discrimination

It's a grading/break week, so just a short post. A recent article that I enjoyed a lot, but that hasn't found much love on SSRN, is Price Discrimination & Intellectual Property, by Ben Depoorter (Hastings) and Mike Meurer (Boston University). The paper has the following abstract:
This chapter reviews the law and economics literature on intellectual property law and price discrimination. We introduce legal scholars to the wide range of techniques used by intellectual property owners to practice price discrimination; in many cases the link between commercial practice and price discrimination may not be apparent to non-economists. We introduce economists to the many facets of intellectual property law that influence the profitability and practice of price discrimination. The law in this area has complex effects on customer sorting and arbitrage. Intellectual property law offers fertile ground for analysis of policies that facilitate or discourage price discrimination. We conjecture that new technologies are expanding the range of techniques used for price discrimination while inducing new wrinkles in intellectual property law regimes. We anticipate growing commentary on copyright and trademark liability of e-commerce platforms and how that connects to arbitrage and price discrimination. Further, we expect to see increasing discussion of the connection between intellectual property, privacy, and antitrust laws and the incentives to build and use databases and algorithms in support of price discrimination.
They call it a chapter, but they don't identify the book that the chapter will appear in. It's probably an interesting book.

In any event, the chapter is a really interesting, thorough look at price discrimination generally, in addition to price discrimination as it relates to IP. It discusses the pros and cons as well as the assumptions that underlie each. If you are interested in a better understanding of the economics of IP (and secondarily, the internet), this is a good read.

Tuesday, December 11, 2018

The Value of Patent Applications in Valuing Firms

It's an age-old question that we've blogged about here before - what effect do patents have on firm value? And is any effect due to signaling or exclusivity? Does the disclosure in the patent have any value? Does anybody read patents?

These are all good questions that are difficult to measure, and so scholars try to use natural experiments or other empirical methods to divine the answer. In a recent draft, Deepak Hegde, Baruch Lev, and Chenqi Zhu (all of NYU Stern) use the AIPA to provide some useful answers. For those unaware, the AIPA mandated that patent applications be published after 18 months by default, rather than held secretly until patent grant. The AIPA is the law that keeps on giving; there have been several studies that use the "shock" of the AIPA to measure what effect patent publications had on a variety of dependent variables.

So, too, in Patent Disclosure and Price Discovery. A draft is available on SSRN, and the abstract is here:
We focus in this study on the exogenous event of the enactment of American Inventor’s Protection Act of 1999 (AIPA), which disseminates timely, detailed, and credible public information on R&D activities through pre-grant patent disclosures. Exploiting the staggered timing of patent disclosures, we identify a significant improvement in the efficiency of stock price discovery. This improvement is stronger when patent disclosures reveal firms’ successful, new, or technologically valuable inventions. This improvement is more pronounced for firms in high-tech or fast-moving industries, or with a large institutional ownership or analyst coverage. We also find stock liquidity rises and investors’ risk perception of R&D drops after the enactment of AIPA. Our results highlight the importance of timely, detailed, and credible disclosures of R&D activities in alleviating the information problems faced by R&D-intensive firms.
This is a short abstract, so I'll fill in a few details. The authors measure the effect on intra-period timeliness, a standard measure used to proxy for "price discovery," or how quickly information enters the market and settles the price of a stock. There are a lot of articles on this, but here's one for those interested (paywall, sorry).
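For the curious, here is a minimal sketch of one common formulation of intra-period timeliness (the paper's exact construction may differ, and the daily returns below are made up): the faster cumulative returns converge to their period-end value, the higher the measure.

```python
# Toy intra-period timeliness (IPT): sum of the fractions of the total
# period return that have been impounded by each day. Faster price
# discovery -> larger IPT. The returns here are hypothetical.
daily_returns = [0.004, 0.012, 0.001, 0.0, 0.002, -0.001]

total = sum(daily_returns)
cumulative, running = [], 0.0
for r in daily_returns:
    running += r
    cumulative.append(running)

ipt = sum(c / total for c in cumulative)
print(f"IPT = {ipt:.2f} (max of {len(daily_returns)} if all information "
      "arrived on day 1)")
```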

In short, the authors look at how quickly price discovery occurred before and after the AIPA, correcting for firm fixed effects and other variables. One of the nice features of their model is that patent applications occurred over a period of years, and so the "shock" of patent publication was not distributed only in one year (which could have been affected by something other than the AIPA that happened in that same year).
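In regression terms, I imagine the specification looks roughly like the sketch below (my guess at the form, with hypothetical file and variable names, not the authors' actual model):

```python
# Difference-in-differences sketch: price-discovery measure regressed on
# a post-disclosure indicator with firm and year fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_quarters.csv")  # hypothetical firm-quarter panel

model = smf.ols("ipt ~ post_disclosure + C(firm_id) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(result.params["post_disclosure"])  # the treatment-effect estimate
```

The staggered timing of publication across firms is what lets the year fixed effects soak up anything else that happened in a given year.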

They find that price discovery is faster after the AIPA. Interestingly, they also find that the effect is more pronounced in high-tech and fast moving fields -- that is, industries where new R&D information is critically important.

Finally, their results say something about the nature of the patent disclosure itself - the effects come from disclosure of the information, and not necessarily the patent grant. Thus, the signaling effect may really relate to information, and (some) people may well read patents after all.

Monday, December 10, 2018

Adam Mossoff: Are Property Rights Created By Statute "Public Rights"?

I greatly enjoyed Professor Adam Mossoff's new article, Statutes, Common-Law Rights, and the Mistaken Classification of Patents as Public Rights, forthcoming in the Iowa Law Review.  Mossoff's article is written in the wake of Oil States Energy Services v. Greene's Energy Group, where the Supreme Court held it is not unconstitutional for the Patent Trial and Appeal Board (PTAB), an agency in the Department of Commerce, to hear post-issuance challenges to patents, without the process and protections of an Article III court. Justice Thomas' opinion concluded that patents are "public rights" for purposes of Article III; therefore, unlike, say, property rights in land, patents can be retracted without going through an Article III court.

Mossoff's article objecting to this conclusion is a logical follow on to his prior work, while also providing new insights about the nature of patents, property, and the public rights doctrine. He does so quite concisely too, with the article coming in at only 21 pages.

Wednesday, December 5, 2018

Helsinn Argument Recap: Did the AIA Change the Meaning of Patent Law's "On Sale" Bar?

As Michael previewed this morning, the Supreme Court heard argument today in Helsinn v. Teva, which is focused on the post-America Invents Act § 102(a)(1) bar on patents if "the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public" before the relevant critical date. The Federal Circuit held that Helsinn's patents were invalid because Helsinn had sold the claimed invention to a distributor more than one year before filing for a patent, but Helsinn (supported by the United States as amicus) argues that the "on sale" bar is triggered only by sales that make the invention "available to the public" under a broad reading of "public."

During argument, none of the Justices seemed inclined to favor Helsinn's attempt to argue that "on sale" clearly means on sale to everybody—Justice Kavanaugh said "it's pretty hard to say something that has been sold was not on sale," and Chief Justice Roberts noted that Helsinn's interpretation "might not be consistent with the actual meaning of the word 'sale'" because "if something's on sale, it doesn't have to be on sale to everybody." Nor did they jump at the government's argument that "on sale" means a product can be purchased by its ultimate consumers—Justice Sotomayor said: "This definition of 'on sale,' to be frank with you, I've looked at the history cited in the briefs, I looked at the cases, I don't find it anywhere."

Helsinn's better statutory argument is that the meaning of "on sale" is modified by "or otherwise available to the public" to require that the sale be publicly available. Indeed, for a reader with no background in patent law, this might seem like the most natural reading of the statute. Justice Alito said that "the most serious argument" against the Federal Circuit's position is "the fairly plain meaning of the new statutory language," and that he "find[s] it very difficult to get over the idea that this means that all of the things that went before are public." And Justice Gorsuch suggested, at least for hypothetical purposes, that "the introduction of the 'otherwise' clause introduced some ambiguity about what 'on sale' means now." But if there was more support to reverse the Federal Circuit, it was not apparent from the argument.

Much of the statutory language used in the Patent Act—including "on sale"—has developed a technical legal meaning over time, generally due to courts' attention to the law's utilitarian focus. For example, patentable subject matter caselaw is "implicit" in § 101, courts have put a highly specialized gloss on the word "obvious" in § 103, and—relevant here—the § 102 categories of prior art have long been interpreted to include relatively obscure and private uses. Although this expansive definition of prior art might seem unfair to patentees, there are also strong policy arguments in its favor, including (1) encouraging patentees to get to the patent office early (leading to earlier disclosure and patent expiration) and (2) avoiding patents when their costs (including higher prices for consumers and subsequent innovators) aren't likely to be outweighed by their innovation-incentivizing benefits, such as when there is independent invention—even when evidence of that invention is relatively obscure.

As Justice Kavanaugh noted at argument today, Mark Lemley's amicus brief on behalf of forty-five IP professors describes the long history of treating relatively non-public disclosures as prior art, including (1) "noninforming public use" cases, (2) "output of a patented machine or process" cases, and (3) cases involving secret, confidential, and nonpublic sales transactions. Justice Breyer also mentioned the Lemley brief, and he said it "seems right" to have the on-sale bar include private sales "to prevent people from benefitting from their invention prior to and beyond the 20 years that they're allowed." The legislative history of the AIA does not suggest that Congress intended to sweep away all of these cases—Justice Kavanaugh said that he thinks "the legislative history, read as a whole, goes exactly contrary" to Helsinn's contention because "there were a lot of efforts … to actually change the 'on sale' language, and those all failed," leaving the losers "trying to snatch victory from defeat" with "a couple statements said on the floor."

It is perhaps because of this history that Helsinn and the government seemed more focused on the argument that "on sale" has always excluded nonpublic sales than on the argument that the AIA changed the law. Justice Ginsburg's only comment during argument was to ask Helsinn to clarify this: "I thought that one argument was that the AIA changed the way it was. But … you seem to say there was no change; 'on sale' never included the secret sale." Arguing for the government, Malcolm Stewart even conceded—in response to questioning from Justice Kagan—that if the law was settled pre-AIA such that "on sale" included nonpublic sales, then the new AIA language ("or otherwise available to the public") "would be a fairly oblique way of attempting to overturn" the law. But based on my reading of the transcript, it doesn't seem likely that the argument that "on sale" has always meant "on sale publicly" will get five votes.

I waited until after writing the above to get Ronald Mann's take at SCOTUSblog, but I very much agree with his bottom-line conclusion: while this isn't "a case in which the argument clearly presages the result," the overall transcript "suggests that the most likely outcome will be an affirmance."

Tuesday, December 4, 2018

How Important is Helsinn?

In honor of the oral argument in Helsinn today, I thought I would blog about a study that questions its importance. For those unaware, the question the Supreme Court is considering is whether the AIA's new listing of prior art in 35 U.S.C. §102(a)(1): "the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public..." changed the law.

Since forever, "on sale" has meant any offer or actual sale, regardless of who knew about it. Some have argued that the addition of "or otherwise available to the public" means that only offers that are publicly accessible count as prior art. I think this is wrong, and I signed on to an amicus brief saying so. We'll see what the Court says. Note that non-public does not mean "secret." Truly secret activity is often considered non-prior art, but the courts have defined "public" to mean "not secret." The question is whether that should change to "publicly accessible."

But how big a deal is this case? How many offers for sale would be affected? Steve Yelderman (Notre Dame, and soon to be Gorsuch clerk) wanted to know as well, so he did the hard work of finding out. In a draft paper on SSRN that he blogged about at Patently-O, he looked at all invalidity decisions to see exactly where the prior art was coming from. Here is the abstract for Prior Art in the District Court:
This article is an empirical study of the evidence district courts rely upon when invalidating patents. To construct our dataset, we collected every district court ruling, verdict form, and opinion (whether reported or unreported) invalidating a patent claim over a six-and-a-half-year period. We then coded individual invalidation events based on the prior art supporting the court’s analysis. In the end, we observed 3,320 invalidation events based on 817 distinct prior art references.
The nature of the prior art relied upon to invalidate patents informs the value of district court litigation as an error correction tool. The public interest in revoking erroneous patent grants depends significantly on the reason those grants were undeserved. Distinguishing between revocations that incentivize future inventors and those that do not requires understanding the reason individual patents are invalidated. While prior studies have explored patent invalidity in general, no study has reported data at the level of detail necessary to address these questions.
The conclusions here are mixed. On one hand, invalidations for lack of novelty bear many indicia of publicly beneficial error correction. Anticipation based on obscure prior art appears to be quite rare. When it comes to obviousness, however, a significant number of invalidations rely on prior art that would have been difficult or impossible to find at the time of invention. This complicates — though does not necessarily refute — the traditional view that obviousness challenges ought to be proactively encouraged.
So, let's get right to the point. The data seem to show that "activity" type prior art (that is, sales or uses) is much more prevalent in anticipation than in obviousness. This is not surprising, given that this category is often the patentee's own activities.

With respect to non-public sales, they estimate that at most 14% of activity-based anticipation invalidations and 2% of activity-based obviousness invalidations rested on plausibly non-public sales. This translates to about 8% of all anticipation invalidations and 1% of all obviousness invalidations. Because there are about as many obviousness cases as anticipation cases, this averages to roughly 4.5% of all invalidations. They note that with a different rule, some of these might have been converted to "public" sales upon more attention paid to providing such evidence.
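To make the arithmetic explicit (a back-of-the-envelope check using the rounded figures above, and assuming equal numbers of anticipation and obviousness invalidations):

```python
# Share of all invalidations attributable to plausibly non-public sales,
# using the rounded rates quoted above.
anticipation_rate = 0.08  # of all anticipation invalidations
obviousness_rate = 0.01   # of all obviousness invalidations

# Roughly equal counts of each type, so weight the two rates equally.
overall = 0.5 * anticipation_rate + 0.5 * obviousness_rate
print(f"{overall:.1%} of all invalidations")  # 4.5%
```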

A related question is whether the inventor's actions can invalidate, or whether the AIA overruled Metallizing Engineering, which held that an inventor's secret use can invalidate, even if a third-party's secret use does not. The study found that the plaintiff's actions were relevant in 27% of anticipation invalidations and 13% of obviousness invalidations.  Furthermore, they found that most of the secret activity was associated with either the plaintiff or defendant--this makes sense, as they have access to such secret information.

So, what's the takeaway from this? I suppose where you stand depends on where you sit. I think that wiping out 4% of the invalidations, especially when they are based on the actions of one of the two parties, is not a good thing. It's bad to allow the patentee to sell non-publicly and still keep the patent, and it's bad to hold the defendant liable even if it has been selling the invention in a non-public (though non-secret) way. We're talking about 20 claims per year that go the other way - too high for my taste, especially when it means we have to start defining new ways to determine whether something is truly public.

Furthermore, the stakes of reversing Metallizing are much higher. I freely admit that the "plaintiff's secret actions only" rule has a tenuous basis in the text of the statute, but it has remained the law through two subsequent statutory revisions without being expressly overruled. Given that more than 25% of the invalidations were based on the plaintiff's actions, I think it would be difficult to reverse course.

Tuesday, November 27, 2018

Judging Patents by their Rejection Use

The quest for an objective measure of patent quality continues. Scholars have attempted many, many ways to calculate such value, including citations, maintenance fee payments, number of claims, length of claims, and so forth. As each new data source has become available, more creative ways of measuring value have been developed (and old ways of measuring value have been validated/questioned).

Today, I'd like to briefly introduce a new one: the use of patents rejecting other patents. Chris Cotropia (Richmond) and David Schwartz (Northwestern) have posted a short essay on SSRN introducing their methodology.* The abstract for the cleverly named Patents Used in Patent Office Rejections as Indicators of Value is here:
The economic literature emphasizes the importance of patent citations, particularly forward citations, as an indicator of a cited patent’s value. Studies have refined which forward citations are better indicators of value, focusing on examiner citations for example. We test a metric that arguably is closer tied to private value—the substantive use of a patent by an examiner in a patent office rejection of another pending patent application. This paper assesses how patents used in 102 and 103 rejections relate to common measures of private value—specifically patent renewal, the assertion of a patent in litigation, and the number of patent claims. We examine rejection data from U.S. patent applications pending from 2008 to 2017 and then link value data to rejection citations to patents issued from 1999 to 2007. Our findings show that rejection patents are independently, positively correlated with many of the value measurements above and beyond forward citations and examiner citations.

The essay is a short, easy read, and I recommend it. They examine nearly 700,000 patents used in anticipation and obviousness rejections and find that not all patent citations are equal: citations that were used in a rejection have additional ability to explain value, even when other predictors, such as forward citations and examiner citations, are included in the model. The only value measure that had no statistically significant relationship to rejection patents was use in litigation (even though forward citations did). This may say something about the types of patents that are litigated or about the role of rejection patents in litigation.
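In regression terms, the exercise looks roughly like the following sketch (my rendering with hypothetical file and column names, not the authors' code):

```python
# Does being used in an examiner's 102/103 rejection predict a value
# measure, controlling for ordinary citation counts?
import pandas as pd
import statsmodels.formula.api as smf

patents = pd.read_csv("patents.csv")  # hypothetical patent-level data

# 'renewed' = 1 if maintenance fees were paid at a given stage.
logit = smf.logit(
    "renewed ~ rejection_citations + forward_citations + examiner_citations",
    data=patents,
)
# A positive rejection_citations coefficient is the paper's headline result.
print(logit.fit().summary())
```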

That's about all I'll say about this essay. The paper is a brief introduction to the way this new data set might be used, and this blog post is a brief introduction to the paper.

*At least, I think it's theirs. If you know of an earlier article that measures this on any kind of scale, please let me know!

Tuesday, November 20, 2018

The Role of IP in Industry Structure

I've long been a fan of Peter Lee's (UC Davis) work at the intersection of IP and organizational theory. His latest article is another in a long line of interesting takes on how IP affects and is affected by the structure and culture of its creators. The latest draft, forthcoming in Vanderbilt Law Review, is titled Retheorizing the Impact of Intellectual Property Rights on Industry Structure. The draft is on SSRN, and the abstract is here:
Technological and creative industries are critical to economic and social welfare, and the forces that shape such industries are important subjects of legal and policy examination. These industries depend on patents and copyrights, and scholars have long debated whether exclusive rights promote industry consolidation (through shoring up barriers to entry) or fragmentation (by promoting entry of new firms). Much hangs in the balance, for the structure of these IP-intensive industries can determine the amount, variety, and quality of drugs, food, software, movies, music, and books available to society. This Article retheorizes the role of patents and copyrights in shaping industry structure by examining empirical profiles of six IP-intensive industries: biopharmaceuticals; agricultural biotechnology, seeds, and agrochemicals; software; film production and distribution; music recording; and book publishing. It makes two novel arguments that illuminate the impacts of patents and copyrights on industry structure. First, it distinguishes along time, arguing that patents and copyrights promote the initial entry of new firms and early-stage viability, but that over time industry incumbents wielding substantial IP portfolios often absorb such entrants, thus reconsolidating those industries. It also distinguishes along the value chain, arguing that exclusive rights most prominently promote entry in “upstream” creative functions—from creating biologic compounds to coordinating movie production—while tending to promote concentration in downstream functions related to commercialization, such as marketing and distribution of drugs and movies. This Article provides legal and policy decision makers with a more robust understanding of how patents and copyrights promote both fragmentation and concentration, depending on context. Drawing on these insights, it proposes calibrating the acquisition of exclusive rights based on the size and market position of a rights holder.
Professor Lee surveys six industries, looking for commonalities in how they are structured and how IP fits in with entry and consolidation. This is not an empirical paper in the sense of, say, Cockburn & MacGarvie, who found that patents reduced entry into the software industry unless the entrant had patent applications. Instead, it looks at the history of entry and consolidation in the different industries as a whole, using studies like Cockburn & MacGarvie (which is discussed in some detail) as the foundation for a theoretical view that puts all the empirical findings together.

The result is a sort of two-dimensional framework (though Prof. Lee provides no chart, and one wouldn't have added much). He finds that, in general, IP leads to entry early in time, but as the industry (or product area) matures, IP instead leads to consolidation, as companies find it easier to acquire IP than to create it on their own in crowded areas. He also finds, however (and I think this is a key insight in the paper), that IP leads to more entry upstream (the early creation stage) and more consolidation downstream (commercialization and marketing).

This second axis is the more interesting one (there are lots of articles about the development of thickets over time), but it is also the harder one to prove, and it depends a lot on your definitions. For example, Professor Lee discusses video streaming services such as Netflix and Hulu but doesn't discuss whether he views them as horizontally consolidated because there are so few of them. I've always thought of IP as fragmenting video streaming, because rights holders want to monetize their IP by holding on to it. Hence, we have to pay separately to get Star Trek: Discovery on CBS streaming, Hulu has many TV shows that Netflix doesn't, and soon Disney will pull out of its exclusive deal with Netflix to create its own service. That's five or more services I have to sign up with if I want to get all the shows (contrast this with the story he tells about music streaming, in which the streaming services all distribute all the music, and the record labels consolidate to enhance market power against the streamers). Indeed, this issue is so important that the services have (as Prof. Lee points out) vertically integrated by consolidating production with distribution (Netflix and Amazon making their own shows, Comcast and NBC/Universal, and AT&T buying Warner). Professor Lee discusses this as a penchant for consolidation, but it is not clear why IP drives it. I think it is consolidation caused by upstream entry (as he would predict) by the likes of Netflix and Amazon in the creation space, because they also happen to be distributors. But then why don't the record labels become streamers? Why does this fragmentation work for video and not music? I'd be interested in hearing how Professor Lee breaks this down.

As you can probably tell, this is a thoughtful and thought-provoking paper, and I recommend it, especially to those unfamiliar with the literature on the role of IP in industry organization and entry.

Tuesday, November 13, 2018

Measuring Alice's Effect on Patent Prosecution

It's a bit weird to write a blog post about something posted at another blog in order to bring attention to it, when that blog has many more readers than this blog. Nonetheless, I thought that the short essay Decoding Patentable Subject Matter by Colleen Chien (Santa Clara) and her student Jiun Ying Wu, in the Patently-O Law Journal was worth a mention. The article is also on SSRN, and the abstract is here:
The Supreme Court’s patentable subject matter jurisprudence from 2011 to 2014 has raised significant policy concerns within the patent community. Prominent groups within the IP community and academia, and commentators to the 2017 USPTO Patentable Subject Matter report have called for an overhaul of the Supreme Court’s “two-step test.” Based on an analysis of 4.4 million office actions mailed from 2008 through mid-July 2017 covering 2.2 million unique patent applications, this article uses a novel technology identification strategy and a differences-in-differences approach to document a spike in 101 rejections among select medical diagnostics and software/business method applications following the Alice and Mayo decisions. Within impacted classes of TC3600 (“36BM”), the 101 rejection rate grew from 25% to 81% in the month after the Alice decision, and has remained above 75% almost every month through the last month of available data (2/2017); among abandoned applications, the prevalence of 101 rejection subject matter rejections in the last office action was around 85%. Among medical diagnostic (“MedDx”) applications, the 101 rejection rate grew from 7% to 32% in the month after the Mayo decision and continued to climb to a high of 64% and to 78% among final office actions just prior to abandonment. In the month of the last available data (from early 2017), the prevalence of subject matter 101 rejections among all office actions in applications in this field was 52% and among office actions before abandonment, was 62%. However outside of impacted areas, the footprint of 101 remained small, appearing in under 15% of all office actions. A subsequent piece will consider additional data and implications for policy.
This article is the first in a series of pieces appearing in Patently-O based on insights gleaned from the treasure trove of open patent data released by the USPTO starting in 2012.
The essay is a short, easy read, and the graphs really tell you all you need to know from a differences-in-differences point of view - there was a huge spike in medical diagnostics rejections following Mayo and in software and business method rejections following Alice. We already knew this from the Bilski Blog, but this is comprehensive. Interesting to me from a legal history/political economy standpoint is the fact that software rejections were actually trending downward after Mayo but before Alice. I've always thought that was odd. The Mayo test, much as I dislike it, fits abstract ideas just as easily as it fits natural phenomena. Why courts and the PTO simply did not make that leap until Alice has always been a great mystery to me.
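The core computation behind those graphs is easy to describe. Here is a hypothetical sketch of the monthly rejection-rate calculation (not the authors' pipeline, and all file and column names are made up):

```python
# Monthly 101-rejection rate for the impacted business-method classes.
import pandas as pd

oa = pd.read_csv("office_actions.csv", parse_dates=["mail_date"])
bm = oa[oa["tech_class"] == "36BM"]

monthly = (
    bm.groupby(bm["mail_date"].dt.to_period("M"))["has_101_rejection"].mean()
)
print(monthly.loc["2014-05":"2014-08"])  # around the June 2014 Alice decision
```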

Another important finding is that 101 apparently hasn't devastated any other tech areas the way it has software and diagnostics. Even so, a 10% to 15% rejection rate in other areas is a whole lot more than there used to be. Using WIPO technical classifications shows that most areas have been touched somehow.

Another takeaway is that the data used came from Google BigQuery, which is really great to see. I blogged about this some time ago and I'm glad to see it in use.

So, this was a good essay, and the authors note it is the first in a series. In that spirit, I have some comments for future expansion:

1. The authors mention the "two-step" test many times, but provide no data about the two steps. If the data is in the office action database, I'd love to see which step is the important one. My gut says we don't see a lot of step two determinations.

2. The authors address gaming the claims to avoid certain tech classes, but discount this by showing growth in the business methods class. However, the data they use are office action rejections, which lag--sometimes by years. I think an interesting analysis would be office action rejections by date of patent filing, both by earliest priority and by the date the particular claim was added (see the sketch after this list). This would show growth or decline in those classes, as well as whether the "101 problem" is limited to older patents.

3. All of the graphs start in the post-Bilski (Fed. Cir.) world. The office actions date back to 2008. I'd like to see what happened between 2008 and 2010.

4. I have no sense of scale. The essay discusses 2000 rejections per month, and it discusses in terms of rates, but I'd like to know, for example, a) what percentage of applications are in the troubled classes? b) how many applications are in the troubled classes (and others)? c) etc.? In other words, is this devastation of a few or of many?

5. Are there any subclasses in the troubled centers that have a better survival rate? The appendix shows the high rejection classes, what about the low rejection classes (if any)?
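On point 2, the analysis I have in mind would look something like this (a sketch with hypothetical file and column names):

```python
# Tabulate 101 rejections by the application's earliest priority year
# rather than by office-action mail date, to undo the lag.
import pandas as pd

oa = pd.read_csv("office_actions.csv")
apps = pd.read_csv("applications.csv", parse_dates=["earliest_priority"])

merged = oa.merge(apps, on="application_id")
by_priority_year = (
    merged.groupby(merged["earliest_priority"].dt.year)["has_101_rejection"]
    .mean()
)
print(by_priority_year)  # does the "101 problem" fade for newer filings?
```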

I look forward to future work on this!


Sunday, November 11, 2018

Recent Critiques of Post-Sale Confusion: Is Materiality the Answer?

Kal Raustiala and Christopher Sprigman are well known as the authors of the book The Knockoff Economy: How Imitation Sparks Innovation (2012). In their new article, Rethinking Post-Sale Confusion, Raustiala and Sprigman level a critique at "post-sale confusion" theory that supports many of their book's conclusions about the virtues of so-called knockoffs. In post-sale confusion cases, courts find infringement even when it is abundantly clear that consumers of obvious knockoffs are not confused at the time of purchase.

Raustiala and Sprigman's critique of post-sale confusion theory adds to similarly critical scholarship by others such as Jeremy Sheff and Mark McKenna, whose articles Veblen Brands and A Consumer Decision-Making Theory of Trademark Law, respectively, provide the backbone for much of the discussion in this post. Professor Sheff also has a forthcoming book chapter in the Cambridge Handbook on Comparative and International Trademark Law, where he places American post-sale confusion doctrine in perspective by comparing it to the European approach.

This post attempts to synthesize this scholarship, though cannot hope to serve as a replacement for the much more comprehensive and eloquent original work by these experts. The post also draws attention to a growing refrain by trademark scholars such as Rebecca Tushnet, Mark McKenna, and Mark Lemley: that a possible response to trademark courts' embrace of alternative theories of confusion is to institute a materiality requirement, like courts use for false advertising claims.

Saturday, November 10, 2018

Samantha Zyontz on CRISPR Adoption

Pierre Azoulay's recent Twitter thread on students from the MIT Sloan TIES PhD program who are currently on the market alerted me to Sam Zyontz's interesting work on the CRISPR genome editing tool. CRISPR has captivated the patent world due to the fight between the University of California and MIT's Broad Institute over key patent rights—Jake Sherkow summarized the dispute in May and reflected on the Federal Circuit's decision in September. But CRISPR is of course also interesting to innovation scholars due to the revolutionary nature of the technology itself (this is why the patent rights were worth fighting for), which has the potential to be applied to a tremendous variety of problems. Using data on researchers who attempt to experiment with CRISPR and the smaller number who succeed in publishing new findings using the technology, Zyontz has produced some fascinating findings on hurdles to technological diffusion.

Zyontz's work was made possible because of the nonprofit global plasmid repository Addgene, which received the basic biological tools for CRISPR from researchers at the University of California and the Broad Institute in 2012 and 2013. Since then, researchers have had easy access to CRISPR tools for the low cost of $65 per plasmid.

Tuesday, November 6, 2018

The Uneasy Case for Ariosa Diagnostics v. Illumina

The Supreme Court's request for views from the Solicitor General in Ariosa Diagnostics v. Illumina has renewed interest in this nerdy issue of patent prior art. I appear to be in a very small minority that believes the Federal Circuit's rule on this may be right (or at least is not obviously wrong), so I thought I would discuss the issue.

Let's start with the (pre-AIA) statute. 35 U.S.C. 102(e) says that one type of prior art may be where:
the invention was described in ... a patent granted on an application for patent by another filed in the United States before the invention by the applicant for patent...
This is a pretty old rule, dating back to the Alexander Milburn case. The gist of the rule is that delays in the patent office should not deprive references of being prior art. Thus, even though the patent application is "secret" until published, we backdate the reference to the date of filing once the patent is granted (or the application published, which is covered in a subsection I do not reproduce above).

The issue in Ariosa v. Illumina is what to do with provisional patent applications. The prior art patent at issue claimed priority to a provisional application, which is never published but becomes publicly available if a patent that relies on it is granted. Later, a regular patent application was filed and eventually issued. There is a dispute about whether the invention was even described in the provisional, but we'll assume that it was. However, the PTAB ruled (and the Fed. Cir. affirmed) that because the issued patent's claims were not supported by the provisional application's disclosure, the reference could not be backdated to the filing of the provisional, even if the invention was described in the final patent.

This is where the objections come in. If the patent relies on the filing date of the provisional application (and incorporates it by reference), then surely the invention is described as of the provisional date and should be prior art. We are, after all, living in a first-to-invent world, and it is unfair that the first inventor's disclosure (in the provisional application) should not count as prior art.

Let's start with Alexander Milburn. I love that case. I have assigned it to my students. I think it explains this statute well. But it is not controlling. It was an interpretation of the statute at that time. We have a later-adopted statute that defines what is and is not prior art, and Alexander Milburn does not speak to the facts of the Ariosa dispute because there were no provisional applications at that time. This is not like, say, the on-sale bar (Section 102(b)) in Helsinn, where both the statutory text and the meaning of the words remained unchanged. There were no provisional applications when Alexander Milburn was decided, and thus the case has little to say about them; dealing with them is the statute's job (and even the statute has a difficult time of it).

As a corollary to this analysis, I want to put to rest the objection that the Federal Circuit's rule is problematic because it rewards the second inventor. I would bet dollars to donuts that many people arguing this scoffed at complaints that the AIA's first-to-file rule was unconstitutional because it rewarded second inventors. Both arguments fail for the same reason: the patent system has a long history of allowing the second invention to issue as a patent under certain circumstances. Indeed, even the current version of 102(e) disallows many early foreign patent filings, even though such filings are clearly the first invention. Once again, we have to look at the statute.

So, let's look at the statute: "the invention was described in" - critics focus on this, saying it makes no sense to look at a patent's claims. We only care about whether the invention was described. Fair enough - I agree.

But what about the next part: "a patent granted on an application for patent by another filed in the United States before the invention." Looking at this in pieces, we see a few requirements. First, the description must be in the patent, not the provisional application. Thus, looking at what the provisional patent says should be irrelevant...for this piece.

Second, that description must be in a patent "granted on an application for patent...filed...before." This is where the action is. What does it mean for a patent to be granted on an application for patent that was filed? For a provisional application, it means that the patent must satisfy Section 119(e): the nonprovisional application must be filed within one year, and the final patent's claims must be supported by the written description of the provisional application. It is as simple as that - the plain words of the statute dictate the Federal Circuit's rule.
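To make my reading concrete, here is a toy encoding of the rule (purely illustrative; it ignores incorporation by reference and the other Section 119(e) formalities):

```python
# Pre-AIA 102(e) reference date under the Federal Circuit's rule, as I
# read it: the patent is backdated to the provisional filing date only
# if the issued claims are supported by the provisional's disclosure.
from datetime import date, timedelta
from typing import Optional

def reference_date(nonprov_filed: date, prov_filed: Optional[date],
                   claims_supported_by_provisional: bool) -> date:
    if (prov_filed is not None
            and nonprov_filed - prov_filed <= timedelta(days=365)
            and claims_supported_by_provisional):
        return prov_filed
    return nonprov_filed

print(reference_date(date(2001, 3, 1), date(2000, 4, 1), True))   # 2000-04-01
print(reference_date(date(2001, 3, 1), date(2000, 4, 1), False))  # 2001-03-01
```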

There is a policy benefit to this reading. I think that patentees can take advantage of the jump from provisional to final patent disclosures, adding new matter while always claiming priority back to the provisional. The provisional application is not easily obtained, and it takes work to parse out which claims are actually entitled to the earlier filing date. Enforcing the rules on prior art better incentivizes complete provisional disclosures.

Then why do I say this is an uneasy case? Well, did I mention that I like Alexander Milburn? The policy it states, that delay in the patent office shouldn't affect prior art, can easily be applied here. So long as the description is in the provisional application, and so long as that provisional eventually becomes publicly accessible, the goal of the statute, even if not its strict language, is met.

Also, my reading leads to a potentially unhappy result. A party could file a provisional that supports invention A, and then a year later file a patent that claims invention A but describes invention B. The patent could then be asserted as prior art against B while relying on the earlier filing date of A, even though B was never described in the provisional as of that earlier date. Similarly, a provisional could describe B, and B could then be removed from the final patent application; the patent would not be prior art because B was not described in the patent, even though B had been described in the earlier, now publicly accessible provisional application.

I don't know where I land on this - as readers of this blog know, I tend to be a textualist. Sometimes the Court has agreed with that approach, and sometimes (see patentable subject matter and patent venue) it has not.

Friday, November 2, 2018

How will the USPTO study gaps in patenting by women, minorities, and veterans under the new SUCCESS Act?

On Wednesday, President Trump signed H.R. 6758, the Study of Underrepresented Classes Chasing Engineering and Science Success Act of 2018 (SUCCESS Act). It states that the "sense of Congress" is that the United States should "close the gap in the number of patents applied for and obtained by women and minorities to harness the maximum innovative potential and continue to promote United States leadership in the global economy."

The USPTO has been charged with conducting a study that "(1) identifies publicly available data on the number of patents annually applied for and obtained by, and the benefits of increasing the number of patents applied for and obtained by women, minorities, and veterans and small businesses owned by women, minorities, and veterans; and (2) provides legislative recommendations for how to— (A) promote the participation of women, minorities, and veterans in entrepreneurship activities; and (B) increase the number of women, minorities, and veterans who apply for and obtain patents." Congress wants to receive a report on the study results within a year.

There is already great empirical work on gender and racial gaps in patenting, including the "lost Einsteins" work by Alex Bell, Raj Chetty, Xavier Jaravel, Neviana Petkova, and John Van Reenen, as well as Colleen Chien's Inequality, Innovation, and Patents. The USPTO could expand on this work, including by adding to its excellent collection of research datasets. Accurately quantifying the net benefits of increasing patenting by certain groups will be more difficult—especially if the agency follows Jonathan Masur's suggestions for improving its economic analysis—though the second half of the study doesn't depend on getting this number right.

The second half of the study—recommending how to promote entrepreneurship and patenting by women, minorities, and veterans—will require the USPTO to master a different strand of the empirical literature. I've spent some time digging into this work for my upcoming discussion group on Innovation and Inequality, and suffice it to say that there is robust debate about why certain groups are underrepresented in science, engineering, entrepreneurship, and patenting. (Though I haven't seen anything focused on veterans.) There's also increasing academic interest in these issues. For example, at the new Cardozo-Google Project for Patent Diversity, the goal is "to increase the number of U.S. patents issued to women and minorities," mostly by matching resource-constrained inventors with pro bono patent attorneys.

The USPTO is well positioned to bring new evidence to this debate, and I hope it will take this study as an opportunity to test some proposals in rigorous ways through actual field experiments. The agency has shown a wonderful willingness to experiment with pilot programs, but it could learn far more by, for example, randomly selecting only a subset of those opting in to the pilot and comparing their outcomes to those who opted in but weren't selected. (For a review of the literature on learning through policy randomization and some potential applications in patent law, see Part II of my Patent Experimentalism.) Such experimentation could be useful even for small questions, such as whether acceleration certificates (like those used as Patents for Humanity prizes) are useful at increasing pro bono volunteer work among the patent bar.
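The design is simple in outline. Here is a minimal sketch of randomizing among opt-ins and comparing outcomes (the data are simulated stand-ins):

```python
# Randomize pilot admission among volunteers, then compare group means.
import random
random.seed(0)

opt_ins = list(range(200))  # hypothetical pool of opt-in applicants
random.shuffle(opt_ins)
treated, control = set(opt_ins[:100]), set(opt_ins[100:])

# Simulated outcomes: 1 = applicant later obtained a patent.
outcome = {i: random.random() < (0.55 if i in treated else 0.45)
           for i in opt_ins}

diff = (sum(outcome[i] for i in treated) / len(treated)
        - sum(outcome[i] for i in control) / len(control))
print(f"Estimated effect of the pilot: {diff:+.2f}")
```

Because admission is random, the difference in means estimates the pilot's causal effect, which a comparison of opt-ins to non-opt-ins cannot do.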

The SUCCESS Act seems like an exciting chance for the USPTO, and potentially for academics the agency collaborates with, so I look forward to seeing how they use this opportunity.

Tuesday, October 30, 2018

Measuring the Role of Attorney Quality in Patenting

It stands to reason that better attorneys are better at turning patent applications into patents. Theoretically, better arguments about overcoming prior art, for example, will be more likely to lead to granted claims. But what about the quality of inventions? Maybe better patent attorneys just get better patent applications, so of course they have better success rates.

Measuring this is hard, but Gaétan de Rassenfosse (Ecole Polytechnique Fédérale de Lausanne) and four co-authors from the University of Melbourne and the Swinburne University of Technology think they have found the answer by examining 1.2 million granted and refused patent applications in the US, Europe (EPO), China, Japan, and South Korea. They have posted a draft on SSRN, and the abstract is here:
Failure to obtain a patent weakens the market position and production chain of enterprises in patent-intensive technology domains. For such enterprises, finding ways to maximise the chance to obtain patent protection is a business imperative. Using information from patent applications filed in at least two of the five largest patent offices in the world between 2000 and 2006, we find that the ability to obtain patent protection depends not only on the quality of the invention but also on the quality of the patent attorney. In some cases, the latter is surprisingly more important than the former. We also find that having a high-quality patent attorney increases the chance of getting a patent in less codified technology areas such as software and ICT.
They use a clever approach with their multi-country methodology to separate attorney quality from invention quality. By estimating grant rates across countries for the same inventions as well as for the same attorneys, they are able to estimate the marginal value added by the invention versus the attorney. For example, if different attorneys have differing results in two countries with the same invention, then attorney quality is likely at play. However, if the same attorney has differing results in two countries with the same invention, then invention quality is more likely at issue. Of course, this doesn't work with a couple of patents, but with more than one million patent applications, the estimates likely trend toward a reasonable measure of each type of quality unaffected by the other.
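In estimation terms, the strategy resembles a two-way fixed effects model, roughly like this sketch (hypothetical file and column names; a real implementation would absorb the fixed effects rather than build millions of dummies):

```python
# Separate attorney quality from invention quality by exploiting the
# same invention (patent family) filed across offices, sometimes with
# different attorneys.
import pandas as pd
import statsmodels.formula.api as smf

filings = pd.read_csv("filings.csv")  # one row per application per office

# Linear probability model: family fixed effects absorb invention
# quality, so the attorney fixed effects serve as a quality index.
model = smf.ols("granted ~ C(family_id) + C(attorney_id) + C(office)",
                data=filings)
result = model.fit()
```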

Using this index of quality, they then estimate the effects of attorney and invention quality. They find that, as expected, invention quality matters. But they also find that attorney quality matters--sometimes a lot. More interesting, they find that attorney quality matters more where there is more wiggle room (my term, not theirs), such as in software (as opposed to chemistry). In other words, they find empirical evidence to support the intuition that attorneys who can mold claims to avoid rejections are more likely to wind up with issued patents.

Now, the benefit isn't unbounded. Because grant rates are really high (around 85%), the marginal benefit is unlikely to be huge. One marginal estimate they make is that moving from the 10th percentile to the 90th percentile in attorney quality increases the grant probability by 13 percentage points (e.g., from 79% to 92%). What this means is that invention quality is very important, because even the least successful attorneys are successful most of the time. Of course, this is a potential criticism: if nearly everything is granted, how can we measure quality based on grant rates? The paper addresses this, arguing that there is a lot of variability, even within the same patent family. They also test other measures of patent quality and find similar results.

The authors also make other interesting findings: external counsel increase grant rates, attorney quality doesn't seem to have an effect on foreign v. local probabilities, and attorney quality is of less importance in PCT applications. There is a lot more to the paper - including effects in different offices, on different technologies and other great information. I found this to be a really interesting and useful read.

Tuesday, October 23, 2018

Patents and the Administrative State

When Justice Gorsuch was confirmed to the Supreme Court, many commentators, well, commented that he was wary of administrative overreach. But it turns out he was really active in patent cases, writing opinions in all the patent cases he saw last term. Who knew he was so interested in patents? He does have some IP chops; his opinion in Meshwerks remains one of my favorite copyright cases, not least because it validates a legal argument I made about virtual reality copyright some 25 years ago. I was able to cite that case in a recent book chapter on the same subject.

But it turns out that his interest in patents may be one and the same as his concern about the administrative state. We suspected as much with Oil States, but what about the others? To answer this, Daniel Kim and Jonathan Stroud (both of Unified Patents) have an article forthcoming in the Chicago-Kent Journal of Intellectual Property called Administrative Oversight: Justice Gorsuch’s Patent Opinions, the PTAB, and Antagonism Toward the Administrative State. A draft is posted on SSRN, and the abstract is here:
In his first term, Justice Neil Gorsuch has made a surprisingly forceful impact on, of all things, patent law—and even more unlikely, the United States Patent and Trademark Office’s adjudicatory arm, the Patent Trial and Appeal Board. Was there any way to predict, from his 10th Circuit opinions below, that he would author opinions in all three patent cases in his first term? Was this attention the result of deeply submerged but long-felt opinions on patent law, or rather a result of his sharp distrust of administrative overreach? We analyze 10th Circuit and Supreme Court opinions authored by Justice Gorsuch, and conclude his unforeseen interest springs from his desire to limit agency power rather than from any particular concern with patents. Still, his opinions—intentionally or by happenstance—will reverberate through our patent law for years.
This article is straightforward and illuminating. It begins with the justice's background and some comments on his writing style. It then examines his Tenth Circuit opinions in IP and tribal immunity (there are really not that many, a dearth the authors attribute to the 10th Circuit's backwater location devoid of any innovation, which I'm sure the folks in Denver will be happy to hear).

What's interesting about the article is that its analysis of the cases is not about IP outcomes, but about methods and the administrative state. Returning to Meshwerks, for example, the authors focus primarily on how it traces the history and purpose of IP rather than the important holding that realistic renditions of physical objects lack creativity.

This analysis bears fruit, though, because they show how these same methods and concerns about the administrative state drive Gorsuch's patent opinions, including those where he is dissenting or breaking from other conservative justices.

I found this article an interesting and insightful read, and I thought it was especially well done given the authors' own admission that they do not agree with Justice Gorsuch's judicial perspective. In other words, this wasn't a fan piece, but instead really useful commentary about how judicial philosophy and concerns about the administrative state may be brought to bear on the IP system in unexpected ways.

Friday, October 19, 2018

What impact do government grants have on small tech firms?

Most academic writing on direct government spending as an innovation policy tool focuses on how this mechanism compares with other policies rather than on the policy choices within the "direct spending" box. For example, in Beyond the Patents–Prizes Debate, Daniel Hemel and I considered a single category of "government grants—a category that includes direct spending on government research laboratories and grants to nongovernment researchers"—with a focus on the similarities among these direct spending mechanisms, and what makes them all different from the other tools in our four-box framework (R&D tax incentives, patents, and inducement prizes).

But we noted that there is variation within each policy box, and that in practice the boxes form a spectrum rather than discrete choices. And it is certainly worth diving within each of the four boxes of our framework from Beyond to dissect these policy tools. There is of course an extensive literature already on optimizing within the "patent" mechanism, but legal innovation scholars pay far less attention to the other boxes, including grants.

Even if one focuses on the most typical grants to academic scientists, there is some interesting research on the effect of different ways of awarding this funding, such as this paper by Azoulay et al. on NIH vs. HHMI grants. But the federal government also provides many other types of direct funding: in 2013, almost one-quarter of federal R&D expenditures went to for-profit firms. How does the theory behind this substantial expenditure of taxpayer funding differ from that for academic research, and what impact does it have in practice?

A recent study from Aleksandar Giga, Andrea Belz, Richard Terrile, and Fernando Zapatero at USC and NASA's Jet Propulsion Lab at Caltech provides some data on the Small Business Innovation Research (SBIR) program as administered by NASA. They find that compared with firms that do not receive these grants, "microfirms" (1-5 employees) with SBIR grants are twice as likely to produce patents and generate twice as many patents. They argue that this is unlikely to be due to a selection effect. They also find that the program does not show the same effect for larger firms, and they suggest that the size limitations for the program should be reconsidered.

Giga et al.'s work focuses on just one corner of the extensive field of direct government science funding, but I hope legal scholars will incorporate empirical work like this to provide a richer understanding of this type of innovation policy.

Tuesday, October 16, 2018

Equitable Servitudes and Post-Sale Restrictions

I have continued to find the issue of post-sale restrictions vexing. On the one hand, I think that there are sound economic reasons for them. On the other hand, I really don't like them, especially when they limit what should otherwise be reasonable and free activities.

The Supreme Court's recent cases in this area have made it more difficult to enforce such restrictions, but they have done so in a way that leaves open the possibility that some restrictions might apply while also not giving much guidance about when.

A recent article takes this on. Tim Scott (King & Spalding) has published The Availability of Post-Sale Contractual Restrictions in the Wake of Impression Products, Inc. v. Lexmark, 137 S. Ct. 1523 (2017), in les Nouvelles. It is available on SSRN as well, and the abstract is below. I write about this article in part because the abstract is uninspiring when compared to the high-quality analysis in the article. Don't be fooled.
In Impression Products, Inc. v. Lexmark International Inc., 581 U.S. ___, 137 S. Ct. 1523 (2017), the United States Supreme Court reaffirmed the patent exhaustion rule; i.e., patent rights are exhausted upon the first sale of the patented item such that the patentee has no rights to impose any post-sale conditions or limitations on the use of the product, at least under the patent laws. Id. at 1529. In doing so, the Court left open the question of whether such conditions or limitations could be imposed as a matter of contract law. Thus, the “restrictions in Lexmark’s contracts with its customers may have been clear and enforceable under contract law, but they do not entitle Lexmark to retain patent rights in an item that it has elected to sell.” Id. at 1531 (emphasis added). In summarizing its decision in Quanta Computer, Inc. v. LG Electronics, Inc., 553 U.S. 617 (2008), ruling that the sale of computer components exhausted the plaintiff's patent rights in those components, the Court noted that it reached that conclusion “without so much as mentioning the lawfulness of the contract.” Lexmark, 137 S. Ct. at 1533. And it later summarized its holding by stating that “whatever rights Lexmark retained are a matter of the contracts with its purchasers, not the patent law.” Id. (emphasis added).
This is my kind of article: it's short, it gets right to business, and it is thorough despite its brevity. The article introduces the cases, discusses contract v. property rights, discusses equitable servitudes, surveys the literature on equitable servitudes on personal property (including pros and cons), proposes alternatives to equitable servitudes (and points to critiques), and then discusses how the alternatives might have applied to Lexmark's activities in Impression Products.

Not bad for eight journal pages. This is recommended reading for anyone who wants a quick background on the state of post-sale restrictions.

Friday, October 12, 2018

Mike Andrews on Historical Patent Data

Mike Andrews is a postdoc at NBER, and I recently came across his PhD dissertation, Fuel of Interest and Fire of Genius: Essays on the Economic History of Innovation. He presents some interesting new results from historical patent records:

I already described the work in chapter 1 in my post on the NBER Summer Institute; in short, he compares U.S. counties that received new colleges in the period 1839-1954 with finalist sites that were not chosen for plausibly exogenous reasons. He finds that counties that received a college had 33% more patents per year, mostly due to increases in population rather than the colleges' graduates and faculty.

Chapter 2 looks at the effect of statewide alcohol bans on counties that previously set their own alcohol policies. Statewide prohibition reduced patents by 15% per year in previously wet counties relative to previously dry counties, and there is a larger decline for men than for women. Andrews suggests this decline is due to a disruption of informal social interactions in saloons.

Chapter 3 matches 1870–1940 patents with census data and finds that patentees are consistently more likely to be older, white, male, and living in a state other than the one in which they were born. Establishment of a historically black college increased representation of black inventors, but the effect largely disappears after controlling for a county's black population, suggesting it is driven by concentration rather than the college itself. Extension of the franchise to women did not seem to increase the representation of women among inventors.

Finally, chapter 4 compares historical patent datasets. For those considering historical work with patent data, this is probably a good place to start. One should always be cautious about generalizing from the innovation institutions of a century ago to the ones that exist today - e.g., the effect of universities on the patent system has changed significantly in the past few decades - but it is still interesting to understand how patents worked in a particular historical context.

Tuesday, October 9, 2018

Do Patent Laws Affect the Location of R&D?

One of the common complaints about weakening patent protection is that it causes reduced R&D in the country with weakened protection. I've always been skeptical of this claim in the modern era, because one can develop anywhere and import into a location with better protection. As a result, one would expect that patent protection is unrelated to R&D offshoring.

In a draft article called Offshoring Patent Law, Gregory Day (Georgia Business) and Steven Udick (Skiermont Derby LLP) consider this question. Their article is forthcoming in the Washington Law Review and a draft is on SSRN. Here is the abstract:
Legislators and industry leaders claim that patent strength in the United States has declined, causing firms to innovate in foreign countries. However, scholarship has largely dismissed the theory that foreign patents have any effect on where firms invent, considering that patent law is bound by strict territorial limitations (as a result, one cannot strengthen their patent protection by innovating abroad). In essence, then, industry leaders are deeply divided from scholarship about whether innovative firms seek out jurisdictions offering stronger patent rights, affecting the rate of innovation.
To resolve this puzzle, we offer a novel theory of patent rights — which we empirically test — to dispel the positions taken by both scholarship and industry leaders. Since technology is generally developed in one country, the innovation process exposes the typical inventor to infringement claims only in that jurisdiction. In turn, we demonstrate that inventors have powerful, counterintuitive incentives to develop technology where patent rights are weaker and enforcement is cheaper. Given that it typically costs more to defend a patent infringement claim in the United States than to lose one in another country (the cost to litigate a patent in the United States averages around $3.5 million and royalty awards have surpassed $2.5 billion), our empirical research contributes to the theoretical understanding of patent rights by shedding new light on the important, yet largely dismissed, dimension of where innovation takes place.
We received invaluable support from international research organizations and patent attorneys working for top-tier law firms. Notably, the Global IP Project, which is a multinational research group spearheaded by the leading global intellectual property (“IP”) law firm, Finnegan, Henderson, Farabow, Garrett & Dunner, LLP, as well as Darts-ip, an international organization dedicated to the study of global IP litigation, provided proprietary data, enabling us to explore whether firms optimize value by placing research and innovation in countries with “better” patent laws. To verify our models, we interviewed notable patent attorneys practicing in the United States, Europe, and Asia.
The primary takeaway from their approach is that not only might the strength of the laws matter, but also the costs of defense. To tell this story, they use Marvell as an example, but that was actually a rare case in which the U.S.-based R&D and sales process itself constituted the infringement, which supported damages on worldwide sales. I would expect that companies can usually design in the U.S., send designs overseas (see Microsoft v. AT&T), and ship from there. Thus, the more important complaint is that patent enforcement causes manufacturing to move offshore, not R&D.

That said, the article performs a regression of R&D on several variables that might affect it, like tax rates and human capital density, and finds that costs of defense and damages awards are negatively correlated with R&D, while strength of enforcement is positively correlated. This is all reasonable enough, but I'm concerned that the empirical model is incomplete. Though the word "cost" appears dozens of times in the article, it is never mentioned with respect to the cost of performing R&D. Might the reason R&D gets offshored be that it's cheaper? And could cheaper R&D also correlate with lower enforcement of IP? My guess is yes, based on the studies I've read over the years. I would have liked to have seen some analysis and discussion of this point.

While I think this is an interesting paper, I think that the model is underdeveloped in two ways. The first is the focus on costs in only half of the equation. The second is the neglect of trade secret enforcement. Unlike patent law, trade secret laws can affect R&D in the country in which the R&D takes place because the developer can lose value without ever selling into that country. Studies by Lippoldt and Schultz and also by Png demonstrate this pretty well.

For those interested in this topic, I recommend this article, and I recommend a contrast with Bilir, Patent Laws, Product Life-Cycle Lengths, and Multinational Activity, in the American Economic Review. Bilir develops a similar model, but bases it on location of companies (which covers some of the manufacturing issues), and considers the life-cycle of R&D (long term v. short term protection) as well as trade secrets. Bilir does not directly consider costs of defense, so it would be interesting to see how that notion from this new article would overlay onto Bilir.

Saturday, October 6, 2018

Language Barriers and Machine Translation

One of the more expensive parts of acquiring global patent protection is having a patent application translated into the relevant language for local patent offices. This is typically viewed simply as an administrative cost of the patent system, though my survey of how scientists use patents suggested that these translation costs may improve access to information about foreign inventions. As I wrote then, "[t]he idea that patents might be improving access to existing knowledge through mandatory translations and free accessibility is a very different disclosure benefit from the one generally touted for the patent system and seems worthy of further study." E.g., if researchers at a U.S. firm publish their results only in English but seek patent protection in the United States and Japan, then Japanese researchers who don't speak English would be able to read about the work in the Japanese patent.

I've also been interested in the proliferation of machine translation tools for patents—which can make patents even more accessible, but which also might limit this comparative advantage of patents over scientific publications if machine translation of journal articles becomes commonplace.

I don't know of much data on the actual economic impact of any of these translation tools, so I was intrigued to spot a new empirical study about the benefits of machine translation for international trade: Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform. Three researchers from MIT and Washington University, Erik Brynjolfsson, Xiang Hui, and Meng Liu, found that the introduction of eBay Machine Translation increased international trade on the platform by 17.5%. They conclude: "Our results provide causal evidence that language barriers significantly hinder trade and that AI has already begun to improve economic efficiency in at least one domain."

Of course, this trade benefit of machine translation is different from the effect on patent disclosure, but the study made me wonder if a similar methodology could be applied by the hosts of patent translation tools (e.g., WIPO, EPO, SIPO) to study the increase in access to patent documents from different countries. Such a study could be a nice complement to survey-based work about how researchers in different countries access information about foreign work, and how machine translation fits into this picture. I'm not currently planning any of this work myself, but if the topic interests you, feel free to email—it seems like a fruitful area for a number of studies, and I'd love to share more thoughts and advice.

Tuesday, October 2, 2018

Valuing Wikimedia Commons Images

Several years ago, both Lisa and I wrote about Heald et al.'s study that attempted to value public domain photographs as used on Wikipedia. While I liked the study a lot, two of my chief critiques were its small sample size and the unclear value of hits on Wikipedia pages.

A new paper extends their study, and provides even more evidence of the extensive use of Wikimedia Commons photos. In What is the Commons Worth? Estimating the Value of Wikimedia Imagery by Observing Downstream Use, Kris Erickson (University of Leeds), Felix Rodriguez Perez (Independent), and Jesus Rodriguez Perez (University of Glasgow) have attempted to generalize the findings from the prior study. The paper is published in an ACM conference proceeding, but is available without a paywall on SSRN. The abstract is here:
The Wikimedia Commons (WC) is a peer-produced repository of freely licensed images, videos, sounds and interactive media, containing more than 45 million files. This paper attempts to quantify the societal value of the WC by tracking the downstream use of images found on the platform. We take a random sample of 10,000 images from WC and apply an automated reverse-image search to each, recording when and where they are used ‘in the wild’. We detect 54,758 downstream uses of the initial sample and we characterize these at the level of generic and country-code top-level domains (TLDs). We analyze the impact of specific variables on the odds that an image is used. The random sampling technique enables us to estimate overall value of all images contained on the platform. Drawing on the method employed by Heald et al (2015), we find a potential contribution of USD $28.9 billion from downstream use of Wikimedia Commons images over the lifetime of the project.
In one fell swoop, the authors have answered my two concerns. The random sample is much larger, and their search went far beyond Wikipedia, to commercial and non-commercial uses. It turns out that the images were used a whopping 5.4 times each on average, which is a lot of usage when extrapolated to the millions of images in the Commons.

As with the prior study, estimating the value is a bit back of the envelope. Assuming that every commercial (and non-commercial) user would have paid the Getty Images fee is a big assumption, as many might have substituted homegrown photos or no photo at all. The authors note that this is a big assumption. Another issue is that not every item in the Commons is within copyright, and some items may have been findable by other means.
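
To make the back-of-envelope logic concrete, here is a minimal sketch in Python. The sample counts come from the paper's abstract; the per-use fee is a hypothetical placeholder rather than the actual Getty rate card the authors used, so the output illustrates the extrapolation method, not their $28.9 billion figure.

# Sketch of the sample-to-population extrapolation. The fee below is a
# made-up illustration, not the paper's actual licensing-rate input.
SAMPLE_SIZE = 10_000          # randomly sampled Commons images
SAMPLE_USES = 54_758          # downstream uses detected in the sample
TOTAL_IMAGES = 45_000_000     # approximate size of Wikimedia Commons
ASSUMED_FEE_PER_USE = 100.0   # hypothetical USD fee per use

uses_per_image = SAMPLE_USES / SAMPLE_SIZE            # ~5.48
estimated_total_uses = uses_per_image * TOTAL_IMAGES  # ~246 million
estimated_value = estimated_total_uses * ASSUMED_FEE_PER_USE

print(f"Uses per image: {uses_per_image:.2f}")
print(f"Estimated lifetime uses: {estimated_total_uses:,.0f}")
print(f"Estimated value: ${estimated_value:,.0f}")

The point of the sketch is that the final number scales linearly with whatever fee one assumes, which is why the substitution critique matters so much.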

That said, I do not think the assumption detracts from the value of the Wikimedia Commons, for two reasons. First, they report Getty having revenues of nearly $1 billion per year, so finding $28 billion in value over the lifetime of the WC is perhaps not far-fetched. Second, even if people would not have paid the full amount, they might have been willing to pay something less than the Getty fee (which also covers some public domain items). In the absence of the WC, the difference between what they would have paid and what they actually get (nothing, a homegrown substitute, or added search costs) is deadweight loss.

I frankly had no idea that Wikimedia Commons was used so much, but I'm glad that there's competition in the stock photo market. I'll finally note that the discussion about which images get used is an interesting one. It turns out that, just as with Netflix, Facebook, and Twitter, the stuff that gets curated for you is the stuff you wind up seeing and using.

Sunday, September 30, 2018

Aman Gebru: Compelling Disclosure of Traditional Knowledge in Patents

Aman Gebru, visiting assistant professor at Cardozo Law, has a new article forthcoming in the Denver Law Review about patenting traditional knowledge. Aman is on the teaching market this year in the patents and intellectual property field, but his research and teaching also cover other areas, like contracts and international law. His proposal, if adopted, could be good for the public and some communities, but might make big pharma a bit angry.

Tuesday, September 25, 2018

Questioning Design Patent Bar Restrictions

Every once in a while an article comes along that makes you realize all the things that you just don't realize. Of course, someone else realizes these things, which makes you realize all the things you should be realizing but didn't. But this is what scholarship is about, I think - spreading knowledge. The latest such article for me is The Design Patent Bar: An Occupational Licensing Failure, by Chris Buccafusco and Jeanne Curtis (both of Cardozo Law). A draft is posted on SSRN, and the abstract is here:
Although any attorney can represent clients with complex property, tax, or administrative issues, only a certain class of attorneys can assist with obtaining and challenging patents before the U.S. Patent & Trademark Office (PTO). Only those who are members of the PTO’s patent bar can prosecute patents, and eligibility for the patent bar is only available to people with substantial scientific or engineering credentials. However much sense the eligibility rules make for utility patents—those based on novel scientific or technical inventions—they are completely irrational when applied to design patents—those based on ornamental or aesthetic industrial designs. Yet the PTO applies its eligibility rules to both kinds of patents. While chemical engineers can prosecute both utility patents and design patents (and in any field), industrial designers cannot even prosecute design patents. This Article applies contemporary research in the law and economics of occupational licensing to demonstrate how the PTO’s application of eligibility rules to design patents harms the patent system by increasing the costs of obtaining and challenging design patents. Moreover, we argue that the PTO’s rules produce a substantial disparate impact on women’s access to a lucrative part of the legal profession. By limiting design patent prosecution jobs to those with science and engineering credentials, the majority of whom are men, the PTO’s rules disadvantage women attorneys. We conclude by offering two proposals for addressing the harms caused by the current system.
It never occurred to me to think about the qualifications required for prosecuting design patents. The observation that a different set of skills goes into such work is a good one; it makes no sense that a chemistry grad can prosecute design patents but an industrial design grad cannot. There are plenty of outstanding trademark lawyers who could probably do this work, despite not having a science or engineering degree.

I like that this paper takes the issue beyond this simple observation (which could really be a blog post or op-ed), and applies some occupational licensing concepts to the issue. Furthermore, I like that the paper makes some testable assertions that can drive future scholarship, such as whether these rules have a disparate impact on women. I am skeptical about the negative impact on design patents, but I think that's testable as well.

The paper concludes with some relatively mild suggestions on how to open up the field a little bit. I think they should be considered, but I'm happy to hear from folks who disagree.

Monday, September 24, 2018

USPTO Director Iancu Proposes Revised 101 Guidance

In remarks at the annual IPO meeting today, USPTO Director Andrei Iancu said "the USPTO cannot wait" for "uncertain" legislation on patentable subject matter and is "contemplating revised guidance" to help examiners apply this doctrine. Few are likely to object to his general goal of "increased clarity," but the USPTO should be sure that any new guidance is consistent with precedent from the Supreme Court and Federal Circuit.

As most readers of this blog are well aware, the Supreme Court's recent patentable-subject-matter cases—Bilski (2010), Mayo (2012), Myriad (2013), and Alice (2014)—have made it far easier to invalidate patent claims that fall under the "implicit exception" to § 101 for "laws of nature, natural phenomena, and abstract ideas." Since Alice, the Federal Circuit has held patents challenged on patentable-subject-matter grounds to be invalid in over 90% of appeals, and the court has struggled to provide clear guidance on the contours of the doctrine. Proponents of this shift call it a necessary tool in the fight against "patent trolls"; critics claim it creates needless uncertainty in patent rights and makes it too difficult to patent important innovations in areas such as medical diagnostics. In June, Rep. Thomas Massie (R-KY) introduced the Restoring America’s Leadership in Innovation Act of 2018, which would amend § 101 to largely undo these changes—following a joint proposal of the American Intellectual Property Law Association (AIPLA) and Intellectual Property Owners Association (IPO)—but Govtrack gives it a 2% chance of being enacted and Patently-O says 0%.

In the absence of legislation, can the USPTO step in? In his IPO speech today, Director Iancu decries "recent § 101 case law" for "mush[ing]" patentable subject matter with the other patentability criteria under §§ 102, 103, and 112, and he proposes new guidance for patent examiners because this mushing "must end." The problem is that the USPTO cannot overrule recent § 101 case law. It does not have rulemaking authority over substantive patent law criteria, so it must follow Federal Circuit and Supreme Court guidance on this doctrine, mushy though it might be.

Tuesday, September 18, 2018

No Fair Use for Mu(sic)

It's an open secret that musicians will sometimes borrow portions of music or lyrics from prior works. But how much borrowing is too much? One would think that this is the province of fair use, but it turns out not to be the case - at least not in those cases that reach a decision.  Edward Lee (Chicago-Kent) has gathered up the music infringement cases and shown that fair use (other than parody) is almost never a defense - not just that defendants lose, but that they don't even raise it most of the time. His article Fair Use Avoidance in Music Cases is forthcoming in the Boston College Law Review, and a draft is available on SSRN. Here's the abstract:
This Article provides the first empirical study of fair use in cases involving musical works. The major finding of the study is surprising: despite the relatively high number of music cases decided under the 1976 Copyright Act, no decisions have recognized non-parody fair use of a musical work to create another musical work, except for a 2017 decision involving the copying of a narration that itself contained no music (and therefore might not even constitute a musical work). Thus far, no decision has held that copying musical notes or elements is fair use. Moreover, very few music cases have even considered fair use. This Article attempts to explain this fair use avoidance and to evaluate its costs and benefits. Whether the lack of a clear precedent recognizing music fair use has harmed the creation of music is inconclusive. A potential problem of “copyright clutter” may arise, however, from the buildup of copyrights to older, unutilized, and underutilized musical works. This copyright clutter may subject short combinations of notes contained in older songs to copyright assertions, particularly after the U.S. Supreme Court’s rejection of laches as a defense to copyright infringement. Such a prospect of copyright clutter makes the need for a clear fair use precedent for musical works more pressing.
The results here are pretty interesting, as I discuss below.

Wednesday, September 12, 2018

Erie and Intellectual Property Law

When it comes to choice of law, U.S. federal courts hearing intellectual property claims generally do one of two things. They either construe and apply the federal IP statutes (the Patent Act in Title 35, the Copyright Act in Title 17, the Lanham Act in Title 15, and the federal trade secret provisions in Title 18), remaining as faithful to Congress' meaning as possible; or they construe and apply state law claims brought under supplemental (or diversity) jurisdiction, remaining as faithful as possible to the meaning of the relevant state statutes and state judicial decisions. In the former case, they apply federal law; in the latter case, they apply the law of the state in which they sit.

Simple, right? Or maybe not.

This Friday, University of Akron School of Law is hosting a conference called Erie at Eighty: Choice of Law Across the Disciplines, exploring the implications of the Erie doctrine across a variety of fields, from civil procedure to constitutional law to evidence to remedies. I will be moderating a special panel: Erie in Intellectual Property Law. Joe Miller (Georgia) will present his paper, "Our IP Federalism: Thoughts on Erie at Eighty"; Sharon Sandeen (Mitchell-Hamline) will present her paper, "The Erie/Sears-Compco Squeeze: Erie's Effects on Unfair Competition and Trade Secret Law"; and Shubha Ghosh (Syracuse) will present his paper, "Jurisdiction Stripping and the Federal Circuit: A Path for Unlocking State Law Claims from Patent."

Other IP scholars in attendance include Brian Frye (Kentucky), whose paper The Ballad of Harry James Tompkins provides a riveting, surprising, and (I think) convincing re-telling of the Erie story, and Megan LaBelle (Catholic University of America), whose paper discusses the crucial issue of whether the Erie line of cases directs federal courts sitting in diversity to apply state privilege law. All papers will be published in the Akron Law Review.

If you have written a paper that touches on the Erie doctrine's implications for intellectual property, I would really appreciate it if you would send it to me at chrdy@uakron.edu or cahrdy@gmail.com. I will link to the papers in a subsequent post in order to provide a resource for future research. Thank you!


Tuesday, September 11, 2018

Bargaining Power and the Hypothetical Negotiation

As I detail in my Boston University Law Review article, (Un)Reasonable Royalties, one of the big problems with using the hypothetical negotiation to calculate damages (aside from the fact that it strains economic rationality and has no basis in the legal history of reasonable royalties) is differences in bargaining power. The more explicit problem is when litigants try to use their bargaining power to argue that the patent owner would have agreed to a lower hypothetical rate. More implicitly, bargaining power can affect royalty rates in pre-existing (that is, comparable) licenses. This gives rise to competing claims in top-14 law reviews about whether royalty damages are spiraling up or down based on the trend of comparable licensing terms.

For what it's worth, my article dodges the spiral question, but suggests that existing licenses be used only if they can be directly tied to the value of the patented technology (and thus that settlements should never be used). Patent damages experts who have read my article uniformly hate that part, because preexisting licenses (including settlements) are sometimes their best or even only granular source of data.

But much of this is theory. What about the data?  Gaurav Kankanhalli (Cornell Management - finance) and Alan Kwan (U. Hong Kong) have posted An Empirical Analysis of Bargaining Power in Licensing Contract Terms to SSRN. Here is the abstract:
This paper studies a new, large sample of intellectual property licensing agreements, sourced from filings by public corporations, under the lens of a surplus-bargaining framework. This framework motivates several new empirical findings on the determinants of royalty rates. We find that licensors command premium royalty rates for exclusivity (particularly in competitive industries), and for exchange of know-how. Licensors with differentiated technology and high market power charge higher royalty rates, while larger-than-rival licensees pay lower rates. Finally, using this framework, we study how the nature of disclosure by public firms affects transaction value. Firms transact at lower royalty rates when they redact contracts, preserving pricing power for future negotiations. This suggests that practitioners modeling fair value in transfer pricing and litigation contexts based on publicly-known comparables are over-estimating royalties, potentially impacting substantial cumulative transaction value.
The paper uses SEC-reported licenses (more on that below), but one clever twist is that the authors obtained redacted terms via FOIA requests, so they could both expand their dataset and see what types of terms are missing. They model transactions as follows: every licensee has a maximum it is willing to pay, and every licensor has a minimum it is willing to accept. If those two overlap, the parties will agree to some price in the middle that splits the surplus, and where that price is set depends on bargaining power. The authors then hypothesize what types of characteristics will affect that price, and most of their hypotheses are borne out.
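
For readers who want the model in concrete terms, here is a minimal sketch of that surplus-splitting logic in Python. The function and its parameters are my own illustrative notation under the framework the paper describes, not the authors' actual specification.

# Minimal sketch: a license issues only when the licensee's maximum
# willingness to pay (WTP) exceeds the licensor's minimum willingness
# to accept (WTA); bargaining power sets where in the overlap the
# royalty rate lands. The numbers below are invented for illustration.
from typing import Optional

def negotiated_rate(wtp: float, wta: float, licensor_power: float) -> Optional[float]:
    """Return the agreed rate, or None if there is no zone of agreement.

    licensor_power runs from 0.0 (licensee captures all surplus)
    to 1.0 (licensor captures all surplus).
    """
    if wtp < wta:
        return None  # no overlap, no deal
    return wta + licensor_power * (wtp - wta)

# A licensor with differentiated technology and pricing power:
print(negotiated_rate(wtp=5.0, wta=2.0, licensor_power=0.8))  # 4.4
# A university with no credible self-commercialization BATNA:
print(negotiated_rate(wtp=5.0, wta=2.0, licensor_power=0.2))  # 2.6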

They focus on several kinds of bargaining power: contract characteristics, firm-specific characteristics, technology characteristics, and license characteristics. I'm not sure I would call all of these bargaining power, as they do. I think some relate more to the value of the thing being licensed. Technically this will affect the division of surplus, but it's not really the type of bargaining power I think about. So long as the effect on license value is clear, however, the results are helpful for use in patent cases regardless of the label.

So, for example, universities, non-profits, and individuals receive lower rates because they have no credible BATNA for self-commercialization. The authors argue that this sheds light on the conventional wisdom that individuals produce less valuable inventions. Further, firms in weaker financial condition do worse, and firms with more pricing power relative to their rivals do better.

On the other hand, licenses including know-how or exclusivity command higher royalties, while amendments typically lead to lower royalties (presumably due to underperformance). I don't consider this to be bargaining power, but rather added value. That said, the authors test exclusivity and find that highly competitive industries have higher royalties for exclusivity than non-competitive industries, which implies a mix of both bargaining power and value in competition.

The authors do look at technological value and find, unsurprisingly, that substitutability leads to lower rates.

The paper points to one interesting wrinkle, though: territorial restrictions. Contracts with territorial restrictions have higher rates. You would think they would have lower rates because the license covers less. But the contrary implication here is that a territorial restriction is imposed where the owner has the leverage to impose it, and that leverage means a higher rate. That could be due to value or bargaining power, I suppose. I wonder, though, how many expert reports say that a royalty rate should be greater because the comparable license covered only a single territory. Any readers who want to chime in would be appreciated.

There is a definite selection effect here, though, which further implies that preexisting licenses gathered from SEC filings should be treated carefully. First, the authors note that there is a selection effect in the redactions. They find not only that lower rates are redacted, but that these redactions are driven by non-exclusive licenses, because firms want to hide their lowest willingness-to-sell (reservation) price. This finding is as valuable as the rest, in my opinion. It means, as the authors note, that any reliance on reported licenses may over-weight royalty rates. It also means, in terms of my own views, that the hypothetical negotiation is not a useful way to calculate damages, because the value of the patent shouldn't change based on who is buying and selling. A second selection effect concerns what is missing from the data entirely: only material licenses must be reported. Non-material licenses are likely to be smaller, whether due to patent value or bargaining power.
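
To see why that first selection effect skews comparables, here is a small illustrative simulation, entirely my own and not from the paper: if the lowest rates are disproportionately redacted, the mean of the visible rates overstates the true mean.

# Illustrative only: all distributions and probabilities are invented.
import random

random.seed(42)
true_rates = [random.lognormvariate(0, 0.5) for _ in range(10_000)]

# Suppose licenses in the bottom third of rates are redacted 80% of
# the time, while higher rates are always visible.
cutoff = sorted(true_rates)[len(true_rates) // 3]
visible = [r for r in true_rates
           if not (r < cutoff and random.random() < 0.8)]

print(f"True mean rate:    {sum(true_rates) / len(true_rates):.3f}")
print(f"Visible mean rate: {sum(visible) / len(visible):.3f}")  # biased high

An expert who builds a reasonable royalty from only the visible licenses would, in this toy world, systematically overestimate the market rate, which is exactly the over-weighting concern the authors raise.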

This is a really interesting and useful paper, and worth a look.

Monday, September 3, 2018

Boundary Maintenance on Twitter

Last Saturday was cut-down day in the NFL, when rosters are shaved from 90 players down to 53. For the first time, I decided to follow the action for my team by spending time (too much time, really - the kids were monopolizing the TV with video games) watching a Twitter list solely dedicated to reporters and commentators discussing my team.

I've never used Twitter this way, but from an academic point of view I'm glad I did, because I witnessed first-hand the full microcosm of Twitter journalism. First, there were the reporters, who were all jockeying to be the first to report that someone was cut (and to confirm it with "sources"). Then there were the aggregators: sites with a lot of writers devoted to team analysis and discussion, but who on this day were simply tracking all of the cuts, trades, and other moves. Ironically, the aggregators were better sources of info than the reporters' own sites, because the reporters didn't publish a full list until later in the day, along with an article that they were too busy to write because they were gathering facts.

Then there were the professional commentators - journalists and semi-professional social media types who have been doing this a long time or have some experience in the sport, but who were not gathering facts. They mostly commented on transactions. Both the reporters and commentators answered fan questions. And then...there were the fans, commenting on the transactions, commenting on the reporters, commenting on the commentators, etc. This is where it got interesting.

Apparently experienced commentators don't like it when fans tell them they're wrong. They like to make clear that either a) they have been doing this a long time, or b) they have a lot of experience in the league, and therefore their opinion should not be questioned. Indeed, in one case a commentator's statement seemed so ridiculous that the "new reporter" in town made fun of it, and all the other reporters circled the wagons to say that the new guy shouldn't be questioning the other men and women on the beat, all of whom had once held his job but left for better jobs. Youch! It turns out the statement was, in fact, both wrong and ridiculous (and proven so the next morning).

This type of boundary maintenance is not new, but it is the first time I've seen it so clearly, explicitly, and unrelentingly (there is some in legal academia, which I'll discuss below). This is a blog about scholarly works, so I point you to an interesting article called The Tension between Professional Control and Open Participation: Journalism and its Boundaries, by Seth Lewis, now a professor in the communications department at the University of Oregon. The article is published in Information, Communication & Society. It is behind a paywall, so a prepublication draft is here. Here is the abstract:
Amid growing difficulties for professionals generally, media workers in particular are negotiating the increasingly contested boundary space between producers and users in the digital environment. This article, based on a review of the academic literature, explores that larger tension transforming the creative industries by extrapolating from the case of journalism – namely, the ongoing tension between professional control and open participation in the news process. Firstly, the sociology of professions, with its emphasis on boundary maintenance, is used to examine journalism as boundary work, profession, and ideology – each contributing to the formation of journalism's professional logic of control over content. Secondly, by considering the affordances and cultures of digital technologies, the article articulates open participation and its ideology. Thirdly, and against this backdrop of ideological incompatibility, a review of empirical literature finds that journalists have struggled to reconcile this key tension, caught in the professional impulse toward one-way publishing control even as media become a multi-way network. Yet, emerging research also suggests the possibility of a hybrid logic of adaptability and openness – an ethic of participation – emerging to resolve this tension going forward. The article concludes by pointing to innovations in analytical frameworks and research methods that may shed new light on the producer–user tension in journalism.
The article includes a fascinating literature review on the sociology of journalism, and focuses on what it means to be a journalist in a world when your readers participate with you.

Bringing it back to IP for a moment (and legal academia more generally), I certainly see some of this among bloggers and tweeters. I see very little of it as a producer of content, presumably because I am always right. 😀 But I know that as a consumer I bleed into the boundaries of others, both in legal academia and elsewhere. I can't help myself - my law school classmates surely remember me as a gunner.

Many of my producer colleagues (mostly women, surprise surprise) see it much worse. Practicing lawyers tell them they don't know what they are talking about. Some may be making valid points, some not. Some are nice about it, while others are not. I'm speaking mostly of good faith boundary issues here, not trolling or harassment, which is a different animal in my mind.

I guess the real question is what to do about it. If you are in an "open" area, boundaries will get pushed. Some people welcome this, and some despise it. Some are challenged more fairly than others. I suspect that people have different ways of managing their boundaries, and it depends heavily on who and how folks are commenting. Some may ignore it, some may swat back about relative expertise, some engage with everyone, some disengage selectively or entirely, going so far as block and mute. I suspect it's a mix.

In any event, I don't have any policy prescriptions here. I know so little about it that I have no clue what the right answer is. I just thought I would make explicit what is usually implicit, point out an interesting article about it, and suggest that readers be mindful of boundaries and Diff'rent Strokes - what might be right for you, may not be right for some.