Patent & IP blog, discussing recent news & scholarship on patents, IP theory & innovation.
Sunday, November 10, 2019
Elizabeth Rowe has a highly informative new empirical paper, “eBay, Permanent Injunctions, & Trade Secrets,” forthcoming in the Washington and Lee Law Review. Professor Rowe examines—through both high-level case coding and individual case analysis—when, and under what circumstances, courts are willing to grant permanent injunctions in trade secret cases. (So-called “permanent” injunctions are granted or denied after the trade secret plaintiff is victorious, as opposed to “preliminary” injunctions, which are granted or denied prior to on-the-merits review. They need not actually last forever.)
Thursday, October 24, 2019
Response to Similar Secrets by Fishman & Varadarajan
Posted by
Camilla Hrdy
Professors Joseph Fishman and Deepa Varadarajan have argued trade secret law should be more like copyright law. Specifically, they argue trade secret law should not prevent people (especially departing employees who obtained the trade secret lawfully within the scope of their employment) from making new end uses of trade secret information, so long as it's not a foreseeable use of the underlying information and is generally outside of the plaintiff's market. The authors made this controversial argument at last year's IP Scholars conference at Berkeley Law, and in their new article in University of Pennsylvania Law Review, called "Similar Secrets."
My full response to Similar Secrets is now published in the University of Pennsylvania Law Review Online. It is called: "Should Dissimilar Uses Of Trade Secrets Be Actionable?" The response explains in detail why I think the answer is, as a general matter, YES. It can be downloaded at: https://www.pennlawreview.com/online/168-U-Pa-L-Rev-Online-78.pdf
Tuesday, September 24, 2019
Lucy Xiaolu Wang on the Medicines Patent Pool
Posted by
Lisa Larrimore Ouellette
Patent pools are agreements by multiple patent owners to license related patents for a fixed price. The net welfare effect of patent pools is theoretically ambiguous: they can reduce numerous transaction costs, but they also can impose anti-competitive costs (due to collusive price-fixing) and costs to future innovation (due to terms requiring pool members to license future technologies back to the pool). In prior posts, I've described work by Ryan Lampe and Petra Moser suggesting that the first U.S. patent pool—on sewing machine technologies—deterred innovation, and work by Rob Merges and Mike Mattioli suggesting that the savings from two high tech pools are enormous, and that those concerned with pools thus have a high burden to show that the costs outweigh these benefits. More recently, Mattioli has reviewed the complex empirical literature on patent pools.
Economics Ph.D. student Lucy Xiaolu Wang has a very interesting new paper to add to this literature, which I believe is the first empirical study of a biomedical patent pool: Global Drug Diffusion and Innovation with a Patent Pool: The Case of HIV Drug Cocktails. Wang examines the Medicines Patent Pool (MPP), a UN-backed nonprofit that bundles patents for HIV drugs and other medicines and licenses these patents for generic sales in developing countries, with rates that are typically no more than 5% of revenues. For many diseases, including HIV/AIDS, the standard treatment requires daily consumption of multiple compounds owned by different firms with numerous patents. Such situations can benefit from a patent pool for the diffusion of drugs and the creation of single-pill once-daily drug cocktails. She uses a difference-in-differences method to study the effect of the MPP on both static and dynamic welfare and finds enormous social benefits.
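For readers unfamiliar with the method, here is a minimal sketch of the kind of two-way fixed-effects difference-in-differences specification this design implies. It is illustrative only: the panel structure, file, and variable names (generic_share, mpp_treated, and so on) are hypothetical stand-ins, not Wang's actual data or code.

```python
# Minimal difference-in-differences sketch: generic drug share regressed
# on a treatment indicator that switches on when a drug enters the MPP
# and is licensed for a given country, with unit and year fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (drug, country, year).
df = pd.read_csv("drug_country_year.csv")

model = smf.ols(
    "generic_share ~ mpp_treated + C(drug_country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["drug_country"]})

# Under the parallel-trends assumption, this coefficient is the estimated
# effect of pool entry (Wang reports roughly seven percentage points).
print(model.params["mpp_treated"])
```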
On static welfare, she concludes that the MPP increases generic drug purchases in developing countries. She uses "the arguably exogenous variation in the timing of when a drug is included in the pool"—which "is not determined by demand side factors such as HIV prevalence and death rates"—to conclude that adding a drug to the MPP for a given country "increases generic drug share by about seven percentage points in that country." She reports that the results are stronger in countries where drugs are patented (with patent thickets) and are robust to alternative specifications or definitions of counterfactual groups.
On dynamic welfare, Wang concludes that the MPP increases follow-on innovation. "Once a compound enters the pool, new clinical trials increase for drugs that include the compound and more firms participate in these trials," resulting in more new drug product approvals, particularly generic versions of single-pill drug cocktails. And this increase in R&D comes from both pool insiders and outsiders. She finds that outsiders primarily increase innovation for new and better uses of existing compounds, and insiders reallocate resources for pre-market trials and new compound development.
Under these estimations, the net social benefit is substantial. Wang uses a simple structural model and estimates that the MPP for licensing HIV drug patents increased consumer surplus by $700–1400 million and producer surplus by up to $181 million over the first seven years of its establishment, greatly exceeding the pool's $33 million total operating cost over the same period. Of course, estimating counterfactuals from natural experiments is always fraught with challenges. But as an initial effort to understand the net benefits and costs of the MPP, this seems like an important contribution that is worth the attention of legal scholars working in the patent pool area.
Sunday, September 8, 2019
Anthony Levandowski: Is Being a Jerk a Crime?
Posted by
Camilla Hrdy
Former Google employee Anthony Levandowski was recently indicted on federal criminal charges of trade secret theft. As reported in the Los Angeles Times, the indictment was filed by the U.S. attorney’s office in San Jose and is based on the same facts as the civil trade secrets lawsuit that Waymo (formerly Google’s self-driving car project) settled with Uber last year. It is even assigned to the same judge. The gist of the indictment is that, at the time of his resignation from Waymo, and just before taking a new job at Uber, Levandowski downloaded approximately 14,000 files from a server hosted on Google's network. These files allegedly contained "critical engineering information about the hardware used on [Google's] self-driving vehicles …" Each of the 33 counts with which Levandowski is charged carries a penalty of up to 10 years in prison and a $250,000 fine.
This is a crucial time to remember that being disloyal to your employer is not, on its own, illegal. Employees like Levandowski have a clear duty of secrecy with respect to certain information they receive through their employment. But if none of that information constitutes a trade secret, there is no civil trade secret claim: no trade secret, no cause of action.
For criminal cases like Levandowski's, the situation is more complicated. The federal criminal trade secret statute shares the same definition of "trade secret" as the federal civil trade secret statute. See 18 U.S.C. § 1839(3). However, unlike in civil trade secret cases, attempt and conspiracy can be actionable. 18 U.S.C. § 1832(a)(4)-(5). This means that even if the crime was not successful—because the information the employee took wasn't actually a trade secret—the employee can still go to jail. See U.S. v. Hsu, 155 F.3d 189 (3d Cir. 1998); U.S. v. Martin, 228 F.3d 1 (1st Cir. 2000).
The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an "attempt" crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind. The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3), and (4), provide that "[w]hoever, with intent to convert a trade secret ... to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will, injure any owner of that trade secret, knowingly—steals...obtains... possesses...[etcetera]" a trade secret, or "attempts to" do any of those things, "shall... be fined under this title or imprisoned not more than 10 years, or both…"
This means Levandowski can be found guilty of attempting to steal trade secrets that never actually existed. This seems odd. It contradicts fundamental ideas behind why we protect trade secrets. As law professor Mark Lemley observed in his oft-cited Stanford Law Review article, modern trade secret law is not a free-ranging license for judges to punish any acts they perceive as disloyal or immoral. It is a special form of property regime. Charles Tait Graves, a partner at Wilson Sonsini Goodrich & Rosati who teaches trade secrets at U.C. Hastings College of the Law, echoes this conclusion. Treating trade secrets as an employer’s property, Graves writes, counterintuitively "offers better protection for employees who change jobs” than the alternatives, because it means courts must carefully "define the boundaries" of the right, and may require the court to rule in the end "that not all valuable information learned on the job is protectable.” See Charles Tait Graves, Trade Secrets As Property: Theory and Consequences, 15 J. Intell. Prop. L. 39 (2007).
So where does that leave Levandowski? In Google/Waymo’s civil case against Uber, Uber got off with a settlement deal, presumably in part because Google recognized the difficulty of proving key pieces of its civil case. Despite initial appearances, Google’s civil action was not actually a slam dunk. It was not clear Uber actually received the specific files Levandowski took, or that the information contained in those files constituted trade secrets, versus generally known information or Levandowski's own "general knowledge, skill, and experience.” (I discuss this latter issue in my recent article, The General Knowledge, Skill, and Experience Paradox, forthcoming in the Boston College Law Review.)
But thanks to criminal remedies under 18 U.S.C. § 1832, and that pesky "attempt" charge, Levandowski is left holding the blame, facing millions in fines and many decades in jail.
Maybe being a jerk is illegal after all.
Wednesday, July 17, 2019
Pushback on Decreasing Patent Quality Narrative
Posted by
Michael Risch
It's been a while since I've posted, as I've taken on Vice Dean duties at my law school that have kept me busy. I hope to blog more regularly as I get my legs under me. But I did see a paper worth posting mid-summer.
Wasserman & Frakes have published several papers showing that as examiners gain more seniority, their time spent examining patents decreases and their allowances come more quickly. They (and many others) have taken this to mean a decrease in patent quality.
Charles A. W. deGrazia (University of London, USPTO), Nicholas A. Pairolero (USPTO), and Mike H. M. Teodorescu (Boston College Management, Harvard Business) have released a draft that pushes back on this narrative. The draft is available on SSRN, and the abstract is below:
Prior research argues that USPTO first-action allowance rates increase with examiner seniority and experience, suggesting lower patent quality. However, we show that the increased use of examiner's amendments account for this prior empirical finding. Further, the mechanism reduces patent pendency by up to fifty percent while having no impact on patent quality, and therefore likely benefits innovators and firms. Our analysis suggests that the policy prescriptions in the literature regarding modifying examiner time allocations should be reconsidered. In particular, rather than re-configuring time allocations for every examination promotion level, researchers and stakeholders should focus on the variation in outcomes between junior and senior examiners and on increasing training for examiner's amendment use as a solution for patent grant delay.

In short, they hypothesize (and then empirically show with 4.6 million applications) that as seniority increases, the likelihood of examiner amendments goes up, and it goes up on the first office action. They measure how different the amended claims are, and they use measures of patent scope to show that the amended applications are no broader than those that junior examiners take longer to prosecute.
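To illustrate the two measurements just described (how much the amended claims differ, and whether scope shrinks), here is a toy sketch. The similarity ratio and word-count proxy are my own stand-ins for illustration, not the authors' actual metrics.

```python
# Toy illustration of comparing an original and examiner-amended claim:
# a text-similarity score plus a crude word-count proxy for claim scope
# (longer claims carry more limitations, so are conventionally narrower).
from difflib import SequenceMatcher

def claim_similarity(original: str, amended: str) -> float:
    """Character-level similarity between two claim texts, in [0, 1]."""
    return SequenceMatcher(None, original, amended).ratio()

def claim_word_count(claim: str) -> int:
    """Word count, a rough inverse proxy for claim breadth."""
    return len(claim.split())

original = "A device comprising a sensor and a processor."
amended = "A device comprising a temperature sensor and a processor."
print(claim_similarity(original, amended))  # ~0.9: a modest amendment
print(claim_word_count(original), claim_word_count(amended))  # 8 -> 9 words
```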
Their conclusion is that to the extent seniority leads to a time crunch through heavier loads, it is handled by more efficient claim amendment through the examiner amendment procedures, and quality is not reduced.
As with all new studies like this one, it will take time to parse out the methodology and hear critiques. I, for one, am glad to hear of rising use of examiner amendments, as I long ago suggested that as a way to improve patent clarity.
Monday, July 8, 2019
Jacob Victor: Should Royalty Rates in Compulsory Licensing of Music Be Set Below the Market Price?
Posted by
Camilla Hrdy
Jacob Victor has a remarkable new article on copyright compulsory licenses, forthcoming in the Stanford Law Review. The article boldly wades into the notoriously convoluted history of the compulsory license option for obtaining rights to copyrighted music, and makes what I think is a very interesting and important normative argument about how compulsory license rates should be set. Other scholars who have written on compulsory licensing, whose work Victor addresses, include, to name only a few: Kristelia Garcia, Jane C. Ginsburg, Wendy Gordon, Lydia Pallas Loren, Robert P. Merges, Pam Samuelson, and Tim Wu, among others.
Tuesday, June 18, 2019
Freilich & Ouellette: USPTO should require prophetic examples to be clearly labeled to avoid confusion
Posted by
Lisa Larrimore Ouellette
Professor Janet Freilich (Fordham Law) has a fantastic forthcoming law review article, Prophetic Patents, which puts a spotlight on the common practice of submitting patent applications containing entirely hypothetical experimental results. These "prophetic examples" are permitted as long as the predicted results are not in the past tense. Using this tense rule, Freilich analyzed over two million U.S. patents in chemistry and biology, and she estimates that 17% of examples in these patents are prophetic. Prophetic examples may be familiar to patent drafters, but scientists and engineers who learn about them generally describe them as bizarre, and even some patent scholars are unfamiliar with the practice. Prophetic Patents was the one article by a lawyer selected for the 2018 NBER Summer Institute on Innovation, and the economist-heavy audience was fascinated by the concept—many were not even aware that researchers can obtain a patent without actually implementing an invention, much less that patents can contain hypothetical data.
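To make the tense rule concrete, here is a deliberately crude heuristic in the spirit of that analysis: flag an example as potentially prophetic when it contains no past-tense reporting verbs. The verb list is invented for illustration; Freilich's actual coding of two million patents is far more careful than this toy.

```python
# Crude tense heuristic: examples reporting results in the past tense are
# treated as worked examples; everything else is potentially prophetic.
import re

PAST_TENSE_VERBS = re.compile(
    r"\b(was|were|yielded|showed|produced|gave|obtained|observed)\b",
    re.IGNORECASE,
)

def looks_prophetic(example_text: str) -> bool:
    """True if the passage contains no past-tense reporting verbs."""
    return PAST_TENSE_VERBS.search(example_text) is None

print(looks_prophetic("The reaction is carried out at 80 C."))   # True
print(looks_prophetic("The reaction was carried out at 80 C."))  # False
```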
Freilich notes the potential benefits of allowing untested ideas to be patented in terms of encouraging earlier disclosure and helping firms acquire financing, though she finds that patents with prophetic examples are not broader (based on claim word count), filed earlier (based on AIA implementation), or more likely to be filed by small entities. I'm very sympathetic to the argument that the current legal standard may allow speculative ideas to be patented too early—I've argued in prior work that all the competing policy considerations raised by Pierson v. Post about the optimal timing of property rights suggest that many patents are currently awarded prematurely. This is a challenging empirical question, however, because we cannot observe the counterfactual innovation ecosystem operating under a different legal standard.
But while pondering the hard question of the timing of patentability, patent scholars should not lose sight of the easy question: even if patenting untested inventions is socially desirable, there is no reason these patents need to be confusing. To me, Freilich's most interesting empirical result is her study of how often prophetic patents are mis-cited in scientific publications. She looked at 100 randomly selected patents with only prophetic examples that were cited in a scientific article or book for a specific proposition, and she found that 99 were not cited in a way that made clear they were prophetic. Instead, they were cited with phrases such as "[d]ehydration reaction in gas phase has been carried out over solid acid catalysts" (emphasis added). And it is not surprising that scientist readers are misled: many prophetic examples do confusingly mimic actual experiments, with specific numerical results. In prior work, I have shown that contrary to the assertions of some patent scholars, a substantial number of scientists do look to the patent literature to learn new technical information. So it is concerning that a large number of patents are written in a way that can be confusing to readers unfamiliar with the tense rule.
Freilich and I teamed up on a new project for which we interviewed patent drafters to explore whether prophetic examples have any important benefits for patentees that could not be obtained through less misleading methods of constructive reduction to practice. In Science Fiction: Fictitious Experiments in Patents—just published in last week's Science—we explain that the answer is no. Patent prosecutors who rarely use prophetic examples argued that there is no legal reason to use fictitious experiments with specific results rather than more general predictions. Those who usually use prophetic examples agreed that more explicit labeling would not affect the patents' legal strength. The only benefit to patentees that would be reduced by requiring greater clarity seems to be any benefit that comes from confusion, which does not seem worth preserving.
The USPTO already requires prophetic examples to be labeled by tense. But the tense rule is unfamiliar to many readers (including scientists and investors), and the distinction in tenses may be literally lost in translation in foreign patent offices. (For example, the form of Chinese verbs does not change with tense.) There is no good justification for not having a more explicit label, such as "hypothetical experiment." As Freilich and I conclude: "Just because some patents are not based on actual results does not mean they need to be confusing. Scientists regularly write grant applications in a way that makes clear what preliminary data they have already acquired and what the expected goal of the proposed project is. Perhaps this is an area in which the patent system could learn from the scientific community."
Monday, May 20, 2019
Inevitable Disclosure Injunctions Under the DTSA: Much Ado About § 1836(b)(3)(A)(i)(I)
Posted by
Camilla Hrdy
When trade secret law was federalized in 2016, some commentators and legislators expressed concern that federalization of trade secret law would make so-called "inevitable disclosure" injunctions against departing employees a federal remedy, and negatively impact employee mobility on a national scale.
In response to such concerns, the Defend Trade Secrets Act (DTSA) included a provision ostensibly designed to limit the availability of inevitable disclosure injunctions under the DTSA. The limiting provision is codified at 18 U.S.C. § 1836(b)(3)(A)(i)(I), discussed further below.
The DTSA has been in effect for just over three years. My preliminary observation is that courts do not appear to view Section 1836(b)(3)(A)(i)(I) as placing novel limitations on employment injunctions in trade secret cases. They also do not seem to be wary of "inevitable disclosure" language.
Friday, May 17, 2019
Likelihood of Confusion: Is 15% The Magic Number?
Posted by
Camilla Hrdy
David Bernstein, Partner at Debevoise & Plimpton, gave an interesting presentation yesterday at NYU Law Engelberg Center's "Proving IP" Conference on the origins of the "fifteen percent benchmark" in trademark likelihood of confusion analysis. (The subject of the panel was "Proving Consumer Perception: What are the best ways to test what consumers and users perceive about a work and how it is being positioned in the market?")
In trademark law, infringement occurs if the defendant’s use of the plaintiff’s trademark is likely to cause confusion as to the source of the defendant’s product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an "appreciable number" of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required. There is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to assess whether confusion is likely, at some point, to occur.
In theory.
But in practice, Bernstein asserted, there is a magic number: it's around fifteen percent. Courts will often state that a survey finding of 15% or more is sufficient to support likelihood of confusion, while under 15% suggests no likelihood of confusion. See, e.g., 1-800 Contacts, Inc. v. Lens.com, Inc., 722 F.3d 1229, 1248-49 (10th Cir. 2013) (discussing survey findings on the low end).
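Mechanically, the benchmark is typically applied to a control-adjusted ("net") confusion rate rather than the raw survey number. A minimal sketch with invented figures:

```python
# Net confusion = test-cell rate minus control-cell rate (noise floor),
# then compared against the ~15% benchmark Bernstein describes.
def net_confusion(test_rate: float, control_rate: float) -> float:
    """Control-adjusted confusion rate, floored at zero."""
    return max(0.0, test_rate - control_rate)

BENCHMARK = 0.15  # the informal threshold, not a rule of law

net = net_confusion(test_rate=0.27, control_rate=0.09)
print(round(net, 2), net >= BENCHMARK)  # 0.18 True: supports likely confusion
```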
Tuesday, May 14, 2019
The Stanford NPE Litigation Database
Posted by
Michael Risch
I've been busy with grading and end of year activities, which has limited blogging time. I did want to drop a brief note that the Stanford NPE Litigation Database appears to be live now and fully populated with 11 years of data from 2007-2017. They've been working on this database for a long while. It provides limited but important data: Case name and number, district, filing date, patent numbers, plaintiff, defendants, and plaintiff type. The database also includes a link to Lex Machina's data if you have access.
The plaintiff type, especially, is something not available anywhere else, and is the key value of the database (hence the name). There are surely some quibbles about how some are coded (I know of one where I disagree), but on the whole, the coding is much more useful than the "highly active" plaintiff designations in other databases.
I think this database is also useful as a check on other services, as it is hand coded and may correct errors in patent numbers, etc., that I've periodically found. I see the value as threefold:
- As a supplement to other data, adding plaintiff type
- As a quick, free guide to which patents were litigated in each case, or which cases involved a particular patent, etc.
- As a bulk data source showing trends in location, patent counts, etc., useful in its own right.
The database is here: http://npe.law.stanford.edu/. Kudos to Shawn Miller for all his hard work on this, and to Mark Lemley for having the vision to create it and get it funded and completed.
Friday, May 3, 2019
Fromer: Machines as Keepers of Trade Secrets
Posted by
Camilla Hrdy
I really enjoyed Jeanne Fromer's new article, Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation, forthcoming in the N.Y.U. Law Review and available on SSRN. I think Professor Fromer has an important insight that more use of machines in businesses, including but not limited to increasing automation (i.e. using machines as the source of labor rather than humans), has made it easier for companies to preserve the trade secrecy of their information. Secrecy is not only more technologically possible, Fromer argues, but the chances that information will spill out of the firm are reduced, since human employees are less likely to leave and transfer the information to competitors, either illegally in the form of trade secret misappropriation or legally in the form of unprotectable "general knowledge, skill, and experience."
Professor Fromer's main take-home is that we should be a little worried about this situation, especially when seen in light of Fromer's prior work on the crucial disclosure function of patents. Whereas patents (in theory at least) put useful information into the public domain through the disclosures collected in patent specifications, trade secret law does the opposite, providing potentially indefinite protection for information kept in secret. Fromer's insight about growing use of machines as alternatives to humans provides a new reason to worry about the impact of trade secrecy, which does not require disclosure and potentially lasts forever, for follow-on innovation and competition.
Here is what I see as a key passage:
In addition to the myriad of potential societal consequences that a shift toward automation would have on human happiness, subsistence, and inequality, automation that replaces a substantial amount of employment also turns more business knowledge into an impenetrable secret. How so? While a human can leave the employ of one business to take up employment at a competitor, a machine performing this employee’s task would never do so. Such machines would remain indefinitely at a business’s disposal, keeping all their knowledge self-contained within the business’s walls. Increasing automation thereby makes secrecy more robust than ever before. Whereas departing employees can legally take their elevated general knowledge and skill to new jobs, a key path by which knowledge spills across an industry, machines automating employees’ tasks will never take their general knowledge and skill elsewhere to competitors. Thus, by decreasing the number of employees that might carry their general knowledge and skill to new jobs and in any event the amount of knowledge and skill that each employee might have to take, increasing automation undermines a critical limitation on trade secrecy protection.
For more on trade secret law's "general knowledge, skill, and experience" status quo, see my new article, The General Knowledge, Skill, and Experience Paradox. I recently discussed this work on Brian Frye's legal scholarship podcast, Ipse Dixit in an episode entitled "Camilla Hrdy on Trade Secrets and Their Discontents".
Wednesday, May 1, 2019
Measuring Patent Thickets
Posted by
Michael Risch
Measuring the effect of patenting on industry R&D is an age-old pursuit in innovation economics. It's hard. The latest interesting attempt comes from Greg Day (Georgia Business) and Michael Schuster (OK State, but soon to be Georgia Business). They look at more than one million patents to determine that large portfolios tend to crowd out startups. I'm totally with them on that. As I wrote extensively during the troll hysteria, patent portfolios and assertion by active companies can be harmful to innovation.
The question is how much, and what to do about it. Day and Schuster argue in their paper that the issue is patent thickets, as their abstract shows. The draft article, Patent Inequality, is on SSRN:
Using an original dataset of over 1,000,000 patents and empirical methods, we find that the patent system perpetuates inequalities between powerful and upstart firms. When faced with growing numbers of patents in a field, upstart inventors reduce research and development expenditures, while those already holding many patents increase their innovation efforts. This phenomenon affords entrenched firms disproportionate opportunities to innovate as well as utilize the resulting patents to create barriers to entry (e.g., licensing costs or potential litigation).
A hallmark of this type of behavior is securing large patent holdings to create competitive advantages associated with the size of the portfolio, regardless of the value of the underlying patents. Indeed, this strategy relies on quantity, not quality. Using a variety of models, we first find evidence that this strategy is commonplace in innovative markets. Our analysis then determines that innovation suffers when firms amass many low-value patents to exclude upstart inventors. From these results, we not only provide answers to a contentious debate about the effects of strategic patenting, but also suggest remedial policies to foster competition and innovation.

The article uses portfolio sizes and maintenance renewals to find correlations with investment. They find, unsurprisingly, that the more patents there are in portfolios in an industry, the lower the R&D investment. However, the causal takeaways from this seem to me to be ambiguous. It could be the patent thickets that cause that limitation, or it could simply be that industries dominated by large players are less competitive and drive out startups. There are plenty of (non-patent) theorists that would predict such outcomes.
They also find that firms with large portfolios are more likely to renew their patents, holding other indicia of patent quality (and firm assets) equal. Even if we assume that their indicia of patent quality are complete (they use forward cites, number of inventors, and number of claims), the effect they find is really, really small. For the one reported industry, biology, the effect is something like a 0.00000982 decrease in the probability of lapse for each additional patent. This is statistically significant, I assume, because of the very large sample size and a relatively small variation. But it seems barely economically significant. If you multiply it out, it means that each patent is about one percentage point less likely to lapse for every 1,000 patents in the portfolio (that is, from a 50% chance of lapse to a 49% chance). For IBM, the largest patentee of the period with about 25,000 patents during the relevant time, it's still only about a 25 percentage point change. Most patentees, even with portfolios, would be nowhere near that. I'm just not sure what we can read into those numbers; certainly not the broad policy prescriptions suggested in the paper, in my view.
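A quick back-of-the-envelope check of those magnitudes, reading the reported coefficient as a per-patent change in lapse probability (my assumption about units, not a statement of the paper's model):

```python
# Scale the per-patent coefficient up to the portfolio sizes discussed above.
coef = -0.00000982  # assumed: change in lapse probability per extra patent

for portfolio in (1_000, 25_000):  # 25,000 is roughly IBM-scale
    delta = coef * portfolio
    print(f"{portfolio:>6} patents: {delta * 100:+.1f} percentage points")
# 1,000 patents: -1.0 percentage points; 25,000: -24.6 percentage points
```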
That said, this paper provides a lot of useful information about what drives portfolio patenting, as well as a comprehensive look at what drives maintenance rates. I would have liked to see litigation data mixed in, as that will certainly affect renewals one way or the other, but even as is, this paper is an interesting read.
Tuesday, April 23, 2019
How Does Patent Eligibility Affect Investment?
Posted by
Michael Risch
David Taylor (SMU) was interested in how patent eligibility decisions at the Supreme Court affected venture investment decisions, so he thought he would ask. He put together an ambitious survey of 14,000 investors at 3,000 firms, and obtained some grant money to provide incentives. As a result, he got responses from 475 people at 422 firms. The response rate by individual is really low, but by firm it's 12% - not too bad. He performs some analysis of non-responders, and while there's a bit of an oversample on IT and on early funding, it appears to be somewhat representative.
The result is a draft on SSRN and forthcoming in Cardozo L. Rev. called Patent Eligibility and Investment. Here is the abstract:
Have the Supreme Court’s recent patent eligibility cases changed the behavior of venture capital and private equity investment firms, and if so how? This Article provides empirical data about investors’ answers to those important questions. Analyzing responses to a survey of 475 investors at firms investing in various industries and at various stages of funding, this Article explores how the Court’s recent cases have influenced these firms’ decisions to invest in companies developing technology. The survey results reveal investors’ overwhelming belief that patent eligibility is an important consideration in investment decisionmaking, and that reduced patent eligibility makes it less likely their firms will invest in companies developing technology. According to investors, however, the impact differs between industries. For example, investors predominantly indicated no impact or only slightly decreased investments in the software and Internet industry, but somewhat or strongly decreased investments in the biotechnology, medical device, and pharmaceutical industries. The data and these findings (as well as others described in the Article) provide critical insight, enabling evidence-based evaluation of competing arguments in the ongoing debate about the need for congressional intervention in the law of patent eligibility. And, in particular, they indicate reform is most crucial to ensure continued robust investment in the development of life science technologies.

The survey has some interesting results. Most interesting to me was that fewer than 40% of respondents were aware of any of the key eligibility decisions, though they may have been vaguely aware of reduced ability to patent. More on this in a minute.
There are several findings on the importance of patents, and these are consistent with the rest of the literature - that patents are important for investment decisions, but not first on the list (or second or third). Further, the survey finds that firms would invest less in areas where there are fewer patents - but this is much more pronounced for biotech and pharma than it is for IT. This, too, seems to comport with anecdotal evidence.
But I've always been skeptical of surveys that ask what people would do - stated preferences are different from revealed preferences. The best way to measure revealed preferences would be through some sort of empirical look at the numbers, for example a difference-in-differences approach before and after these cases (though having 60% of the people say they haven't heard of them would certainly affect whether the cases constitute a "shock" - a requirement of such a study).
Another way, which this survey attempts, is to ask not what investors would do but rather ask what they have done. This amounts to the most interesting part of the survey - investors who know about the key court opinions say they have moved out of biotech and pharma, and into IT. So much for Alice destroying IT investment, as some claim (though we might still see a shift in the type of projects and/or the type of protection - such as trade secrets). But more interesting to me was that there was also a similar shift among those folks who claimed not to know much about patent eligibility or think it had anything to do with their investment. In other words, even for that group who didn't actively blame the Supreme Court, they were shifting investments out of biotech and pharma and into IT.
You can, of course, come up with other explanations - perhaps biotech is just less valuable now for other reasons. But this survey is an important first step in teasing out those issues.
There are a lot more questions on the survey and some interesting answers. It's a relatively quick and useful read.
Thursday, April 18, 2019
Beebe and Fromer: Study on the Arbitrariness of 2(a) Immoral or Scandalous Refusals
Posted by
Camilla Hrdy
For those who have not had the pleasure of seeing it, I recommend the fascinating and, honestly, fun new study by Barton Beebe and Jeanne Fromer on the arbitrariness and unpredictability of the U.S. Patent & Trademark Office's refusals of trademarks that are deemed to be "immoral" or "scandalous."
The study, entitled Immoral or Scandalous Marks: An Empirical Analysis, has been posted on SSRN. This paper served as the basis for Professors Beebe and Fromer's amicus brief in Iancu v. Brunetti.
This study follows up on Megan Carpenter and Mary Garner's 2015 paper, published in the Cardozo Arts & Entertainment Law Journal, and Anne Gilson LaLonde and Jerome Gilson's 2011 article, Trademarks Laid Bare: Marks That May Be Scandalous or Immoral.
All of these studies come to similar conclusions: there are serious inconsistencies in trademark examiners' application of the Section 2(a) "immoral-or-scandalous" rejection. The Beebe/Fromer study is technically 161 pages long, but it's mostly exhibits, and it's very accessible – worth at least a read to see some of the examples they give, and to ogle the bizarre interplay between Section 2(a) "immoral-or-scandalous" refusals and Section 2(d) "likely to confuse with prior registered mark" refusals.
Thursday, April 11, 2019
What was the "promise of the patent doctrine"?
Posted by
Camilla Hrdy
What was the "promise of the patent doctrine"? The short answer is: a controversial doctrine that originated in English law and that, until recently, was applied in Canadian patent law to invalidate patents that made a material false promise about the utility of the invention. A common example would be a claim to therapeutic efficacy in a specification that is not borne out.
Warning: the content of this doctrine may seem bizarre to those familiar with U.S. patent law.
Tuesday, April 9, 2019
Making Sense of Unequal Returns to Copyright
Posted by
Michael Risch
Typically, describing an article as polarizing refers to two different groups having very different views of an article. But I read an article this week that had a polarizing effect within myself. Indeed, it took me so long to get my thoughts together, I couldn't even get a post up last week. That article is Glynn Lunney's draft Copyright's L Curve Problem, which is now on SSRN. The article is a study of user distribution on the video game platform Steam, and the results are really interesting.
The part that has me torn is the takeaway. I agree with Prof. Lunney's view that copyright need not be extended, and that current protection (especially duration) is overkill for what is needed in the industry. I disagree with his view that you could probably dial back copyright protection all the way with little welfare loss. And I'm scratching my head over whether the data in his paper actually supports one argument or the other. Here's the abstract:
No one ever argues for copyright on the grounds that superstar artists and authors need more money, but what if that is all, or mostly all, that copyright does? This article presents newly available data on the distribution of players across the PC videogame market. This data reveals an L-shaped distribution of demand. A relative handful of games are extremely popular. The vast majority are not. In the face of an L curve, copyright overpays superstars, but does very little for the average author and for works at the margins of profitability. This makes copyright difficult to justify on either efficiency or fairness grounds. To remedy this, I propose two approaches. First, we should incorporate cost recoupment into the fourth fair use factor. Once a work has recouped its costs, any further use, whether for follow-on creativity or mere duplication, would be fair and non-infringing. Through such an interpretation of fair use, copyright would ensure every socially valuable work a reasonable opportunity to recoup its costs without lavishing socially costly excess incentives on the most popular. Second and alternatively, Congress can make copyright short, narrow, and relatively ineffective at preventing unauthorized copying. If we refuse to use fair use or other doctrines to tailor copyright’s protection on a work-by-work basis and insist that copyright provide generally uniform protection, then efficiency and fairness both require that that uniform protection be far shorter, much narrower, and generally less effective than it presently is.

The paper is really an extension of Prof. Lunney's book, Copyright's Excess, which is a good read even if you disagree with it. As Chris Sprigman's JOTWELL review noted, you either buy in to his methodology or you don't. I discuss below why I'm a bit troubled.
Saturday, April 6, 2019
PatCon9 at University of Kansas
Posted by
Lisa Larrimore Ouellette
Yesterday and today, the University of Kansas School of Law hosted the ninth annual Patent Conference—PatCon9—largely organized by Andrew Torrance. Schedule and participants are here. For those who missed it, here's a recap of my live Tweets from the conference. (For those who receive Written Description posts by email: This will look much better—with pictures and parent tweets—if you visit the website version.)
Tuesday, April 2, 2019
Why do I blog?
Posted by
Lisa Larrimore Ouellette
Friday and Saturday I'll be at PatCon9 at the University of Kansas School of Law. I'll discuss a scholarly work-in-progress with Janet Freilich, and I've also been invited to serve on a panel on "Roles and Influence of Patent Blogs" (with Kevin Noonan from Patent Docs and Jason Rantanen from Patently-O, and Written Description's own Camilla Hrdy serving as moderator). So I thought this would be a good opportunity to reflect on why I've written over 300 blog posts throughout the past eight years. (For those who want to read some highlights, note that many individual words below link to separate posts.)
I started Written Description in February 2011 when I was a 3L at Yale and was winding down my work as a Yale Law Journal Articles Editor, which had been a great opportunity to read a lot of IP scholarship. I noted that there were already many blogs reporting on the latest patent news (like Patently-O and Patent Docs), but that it was "much harder to find information about recent academic scholarship about patent law or broader IP theory." The only similar blog I knew of was Jotwell, but it had only two patent-related posts in 2010. (In 2015, I was invited to join Jotwell as a contributing editor, for which I write one post every spring.) Written Description has grown to include guest posts and other blog authors—currently Camilla Hrdy (since 2013) and Michael Risch (since 2015).
Most of my posts have featured scholarship related to IP and innovation. Some posts simply summarize an article's core argument, but my favorite posts have attempted to situate an article (or articles) in the literature and discuss its implications and limitations. I also love using my blog to highlight the work of young scholars, particularly those not yet in faculty positions. And I enjoyed putting together my Classic Patent Scholarship project; inspired by Mike Madison's work on "lost classics" of IP scholarship, I invited scholars to share pre-2000 works that they thought young IP scholars should be aware of.
Some posts have focused on how scholarship can inform recent news related to IP. For example, I recently posted about the role of public funding in pharmaceutical research. And I have drawn connections between scholarship and Supreme Court patent cases such as Impression v. Lexmark, Cuozzo v. Lee, Halo v. Pulse, Teva v. Sandoz, FTC v. Actavis, Bowman v. Monsanto, and Microsoft v. i4i. My compilation of Supreme Court patent cases has been cited in some academic articles, and my post on Justice Scalia's IP legacy led to some interesting discussions. I have also reflected on patent law scholarship more generally, such as on how patent law criteria apply to evaluating legal scholarship, what experience is needed to write good scholarship, choosing among academic IP conferences, transitioning from science to patent law, and why IP isn't on the bar exam.
I sometimes debate whether blogging is still worth my time. I could instead just post links to recent scholarship on Twitter, or I could stop posting about new scholarship altogether. But a number of people who aren't on Twitter—including from all three branches of the federal government—have told me that they love receiving Written Description in their email or RSS feed. Condensing patent scholarship seems like a valuable service even for these non-academic readers alone. And the pressure to keep writing new posts keeps me engaged with the recent literature in a way that I think makes me a better scholar. I don't think blogging is a substitute for scholarship, or that it will be anytime soon. Rather, I view my blogging time as similar to the time I spend attending conferences or commenting on other scholars' papers over email—one of many ways of serving and participating in an intellectual community.
I still have a lot of questions about the role of law-related blogs today, and I hope we'll discuss some of them on Thursday. For example: Has the role of blogs shifted with the rise of Twitter? Should blog authors have any obligation to study or follow journalism ethics and standards? How do blog authors think about concerns of bias? For many patent policy issues, the empirical evidence base isn't strong enough to support strong policy recommendations—do blog authors have any obligation to raise counterarguments and conflicting evidence for any decisions or academic papers they are highlighting? What are the different financial models for blogs, and how might they conflict with other blogging goals? (This may be similar to the conflicts traditional media sources face: e.g. clickbait to drive readership can come at the cost of more responsible reporting.) Do the ethical norms of blog authorship differ from those of scholars? How should blogs consider issues of diversity and inclusion when making choices about people to spotlight or to invite for guest authorship?
I'll conclude by noting that for PatCon8 at the University of San Diego School of Law, I tried a new blogging approach: I live Tweeted the conference and then published a Tweet recap. (Aside: I started those Tweets by noting that 19% of PatCon8 participants (13 out of 68) were women. The PatCon9 speaker list currently has 24% (7 out of 29) women speakers, but I don't know which direction non-speaker participants will push this.) For PatCon9, should I (1) live Tweet again (#PatCon9), (2) just do a blog post with some more general reactions (as I've done for some conferences before), or (3) not blog about the conference at all?
Tuesday, March 26, 2019
Trademarking the Seven Dirty Words
Posted by
Michael Risch
With the Supreme Court agreeing to hear the Brunetti case on the registration of scandalous trademarks, one might wonder whether allowing such scandalous marks will open the floodgates of registrations. My former colleague Vicenç Feliú (Nova Southeastern) wondered as well. So he looked at the trademark database to find out. One nice thing about trademarks is that all applications show up, whether granted or not, abandoned or not. He's posted a draft of his findings, called FUCT® – An Early Empirical Study of Trademark Registration of Scandalous and Immoral Marks in the Aftermath of the In re Brunetti Decision, on SSRN:
This article seeks to create an early empirical benchmark on registrations of marks that would have failed registration as “scandalous” or “immoral” under Lanham Act Section 2(a) before the Court of Appeals for the Federal Circuit’s In re Brunetti decision of December 2017. The Brunetti decision followed closely behind the Supreme Court’s Matal v. Tam and put an end to examiners denying registration on the basis of Section 2(a). In Tam, the Supreme Court reasoned that Section 2(a) embodied restrictions on free speech, in the case of “disparaging” marks, which were clearly unconstitutional. The Federal Circuit followed that same logic and labeled those same Section 2(a) restrictions as unconstitutional in the case of “scandalous” and “immoral” marks. Before the ink was dry in Brunetti, commentators wondered how lifting the Section 2(a) restrictions would affect the volume of registrations of marks previously made unregistrable by that same section. Predictions ran the gamut from “business as usual” to scenarios where those marks would proliferate to astronomical levels. Eleven months out from Brunetti, it is hard to say with certainty what could happen, but this study has gathered the number of registrations as of October 2018 and the early signs seem to indicate a future not much altered, despite early concerns to the contrary.

The study focuses not on the Supreme Court, but on the Federal Circuit, which had already allowed Brunetti to register FUCT. Did this lead to a stampede of scandalous marks? It's hard to define such marks, so he started with a close proxy: George Carlin's Seven Dirty Words. This classic comedy bit (really, truly classic) nailed the dirty words so well that a radio station that played the bit was fined and the case wound up in the Supreme Court, which ruled that the FCC could, in fact, ban these seven words as indecent. So this study's assumption is that filings of these words as trademarks are the tip of the spear. That said, his findings about prior registrations of such words (with claimed dual meaning) are interesting, and show some of the problems that the court was trying to avoid in Matal v. Tam.
It turns out, not so much. No huge jump in filings or registrations after Brunetti. More interesting, I thought, was the choice of words. Turns out (thankfully, I think) that some dirty words are way more acceptable than others in terms of popularity in trademark filings. You'll have to read the paper to find out which.
Saturday, March 23, 2019
Jotwell Review of Frakes & Wasserman's Irrational Ignorance at the Patent Office
Posted by
Lisa Larrimore Ouellette
I've previously recommended subscribing to Jotwell to keep up with interesting recent IP scholarship, but for anyone who doesn't, my latest Jotwell post highlighted a terrific forthcoming article by Michael Frakes and Melissa Wasserman. Here are the first two paragraphs:
How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?
In a highly-cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.

Read more at Jotwell.
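Putting those figures side by side (my own back-of-the-envelope arithmetic, not numbers from the article or the review):

```python
# Back-of-the-envelope arithmetic from the figures quoted above
# (my own rough calculation, not from Frakes & Wasserman).
examiners = 8_000        # utility patent examiners
applications = 600_000   # applications received per year
hours_per_app = 19       # average examiner hours per application

apps_per_examiner = applications / examiners             # 75 per year
hours_per_examiner = apps_per_examiner * hours_per_app   # 1,425 per year

print(f"{apps_per_examiner:.0f} applications per examiner per year")
print(f"{hours_per_examiner:,.0f} examination hours per examiner per year")
# Doubling time per application would push this toward ~2,850 hours --
# more than a standard work year -- so "doubling examination time"
# effectively means more examiners, paid for (on their estimate) by user fees.
```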
Tuesday, March 19, 2019
The Rise and Rise of Transformative Use
Posted by
Michael Risch
I'm a big fan of transformative use analysis in fair use law, except when I'm not. I think that it is a helpful guide for determining if the type of use is one that we'd like to allow. But I also think that it can be overused - especially when it is applied to a different message but little else.
The big question is whether transformative use is used too much...or not enough. Clark Asay (BYU) has done the research on this so you don't have to. In his forthcoming article in Boston College Law Review called, Is Transformative Use Eating the World?, Asay collects and analyzes 400+ fair use decisions since 1991. The draft is on SSRN, and the abstract is here:
Fair use is copyright law’s most important defense to claims of copyright infringement. This defense allows courts to relax copyright law’s application when courts believe doing so will promote creativity more than harm it. As the Supreme Court has said, without the fair use defense, copyright law would often “stifle the very creativity [it] is designed to foster.”
In today’s world, whether use of a copyrighted work is “transformative” has become a central question within the fair use test. The U.S. Supreme Court first endorsed the transformative use term in its 1994 Campbell decision. Since then, lower courts have increasingly made use of the transformative use doctrine in fair use case law. In fact, in response to the transformative use doctrine’s seeming hegemony, commentators and some courts have recently called for a scaling back of the transformative use concept. So far, the Supreme Court has yet to respond. But growing divergences in transformative use approaches may eventually attract its attention.
But what is the actual state of the transformative use doctrine? Some previous scholars have empirically examined the fair use defense, including the transformative use doctrine’s role in fair use case law. But none has focused specifically on empirically assessing the transformative use doctrine in as much depth as is warranted. This Article does so by collecting a number of data from all district and appellate court fair use opinions between 1991, when the transformative use term first made its appearance in the case law, and 2017. These data include how frequently courts apply the doctrine, how often they deem a use transformative, and win rates for transformative users. The data also cover which types of uses courts are most likely to find transformative, what sources courts rely on in defining and applying the doctrine, and how frequently the transformative use doctrine bleeds into and influences other parts of the fair use test. Overall, the data suggest that the transformative use doctrine is, in fact, eating the world of fair use.
The Article concludes by analyzing some possible implications of the findings, including the controversial argument that, going forward, courts should rely even more on the transformative use doctrine in their fair use opinions, not less.

In the last six years of the study, some 90% of the fair use opinions consider transformative use. This doesn't mean that the reuser won every time - quite often, courts found the use not to be transformative. Indeed, while the transformativeness finding is not 100% dispositive, it is highly predictive. This supports Asay's finding that transformativeness does indeed seem to be taking over fair use.
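To see why that conditional pattern matters, here is a minimal sketch of the kind of tabulation involved. The counts are invented for illustration; Asay's actual numbers are in the article.

```python
# Sketch of a conditional win-rate tabulation for coded fair use opinions.
# The counts below are made up for illustration only; the real numbers
# come from Asay's dataset of 400+ opinions (1991-2017).
import pandas as pd

cases = pd.DataFrame({
    "transformative_found": [True] * 60 + [False] * 40,
    "fair_use_won":         [True] * 54 + [False] * 6 + [True] * 8 + [False] * 32,
})

# Win rate conditional on the transformativeness finding: if the doctrine
# is highly predictive, these two rates should diverge sharply.
print(cases.groupby("transformative_found")["fair_use_won"].mean())
```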
Tuesday, March 12, 2019
Cicero Cares what Thomas Jefferson Thought about Patents
Posted by
Michael Risch
One of my favorite article titles (and also an article I like a lot) is Who Cares What Thomas Jefferson Thought About Patents? Reevaluating the Patent 'Privilege' in Historical Context, by Adam Mossoff. The article takes on the view that Jefferson's utilitarian view of patents should somehow reign, when there were plenty of others who had different, natural law views of patenting.
And so I read with great interest Jeremy Sheff's latest article, Jefferson's Taper. This article challenges everyone's understanding of Jefferson. The draft is on SSRN, and the abstract is here:
This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s often-cited letter to Isaac McPherson regarding the absence of a natural right of property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson likely copied this Parable of the Taper from a nearly identical passage in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law theory that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law theory rests on a pre-Enlightenment Classical Tradition of distributive justice in which distribution of resources is a matter of private judgment guided by a principle of proportionality to the merit of the recipient — a view that is at odds with the post-Enlightenment Modern Tradition of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with the historical pivot in the intellectual history of the West from the Classical Tradition to the Modern Tradition, but modern readings of the Parable of the Taper, being grounded in the Modern Tradition, ignore this historical context. Such readings cast Jefferson as a proto-utilitarian at odds with his Lockean contemporaries, who supposedly recognized property as a pre-political right. I argue that, to the contrary, Jefferson’s Taper should be read from the viewpoint of the Classical Tradition, in which case it not only fits comfortably within a natural law framework, but points the way toward a novel natural-law-based argument that inventors and other knowledge-creators actually have moral duties to share their knowledge with their fellow human beings.

I don't have much more to say about the article, other than that it is a great and interesting read. I'm a big fan of papers like this, and I think this one is done well.
Tuesday, March 5, 2019
Defining Patent Holdup
Posted by
Michael Risch
There are few patent law topics as heatedly debated as patent holdup. Those who believe in it, really believe in it. Those who don't, well, don't. I was at a conference once where a professor on one side of this divide just... couldn't... even, and walked out of a presentation taking the opposite viewpoint.
The debate is simply the following. The patent holdup story is that patent holders can extract more than they otherwise would by asserting patents after the targeted infringer has invested in development and manufacturing. The "classic" holdup story in the economics literature relates to incomplete contracts or other partial relationships that allow one party to take advantage of an investment by the other to extract rents.
You can see the overlap, but the "classic" folks think that patent holdup story doesn't count, because there's no prior negotiation - the party investing has the opportunity to research patents, negotiate beforehand, plan their affairs, etc.
In their new article forthcoming in Washington & Lee Law Review, Tom Cotter (Minnesota), Erik Hovenkamp (Harvard Law Post-doc), and Norman Siebrasse (New Brunswick Law) try to resolve this debate. They have put Demystifying Patent Holdup on SSRN. The abstract is here:
Patent holdup can arise when circumstances enable a patent owner to extract a larger royalty ex post than it could have obtained in an arm's length transaction ex ante. While the concept of patent holdup is familiar to scholars and practitioners—particularly in the context of standard-essential patent (SEP) disputes—the economic details are frequently misunderstood. For example, the popular assumption that switching costs (those required to switch from the infringing technology to an alternative) necessarily contribute to holdup is false in general, and will tend to overstate the potential for extracting excessive royalties. On the other hand, some commentaries mistakenly presume that large fixed costs are an essential ingredient of patent holdup, which understates the scope of the problem.
In this article, we clarify and distinguish the most basic economic factors that contribute to patent holdup. This casts light on various points of confusion arising in many commentaries on the subject. Path dependence—which can act to inflate the value of a technology simply because it was adopted first—is a useful concept for understanding the problem. In particular, patent holdup can be viewed as opportunistic exploitation of path dependence effects serving to inflate the value of a patented technology (relative to the alternatives) after it is adopted. This clarifies that factors contributing to holdup are not static, but rather consist in changes in economic circumstances over time. By breaking down the problem into its most basic parts, our analysis provides a useful blueprint for applying patent holdup theory in complex cases.

The core of their descriptive argument is that both "classic" holdup and patent holdup are based on path dependence: one party invests sunk costs and thus is at the mercy of the other party. In this sense, they are surely correct (if we don't ask why the party invested). And the payoff is nice, because it allows them to build a model that critically examines sunk costs (holdup) versus switching costs (not holdup). The irony, of course, is that it's theoretically irrational to worry about sunk costs when making future decisions.
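For readers new to the terminology, here is a deliberately crude toy version of the ex post/ex ante royalty gap. The numbers are mine and purely illustrative; the paper's contribution is precisely to sort out which of these costs legitimately belong in the wedge.

```python
# Toy illustration of the ex ante / ex post royalty gap (holdup).
# All numbers are invented; the paper's point is that deciding which
# costs belong in this calculation is subtler than this sketch.
value_patented = 10.0    # implementer's value from the patented technology
value_alternative = 9.0  # value of the best alternative at the design stage

# Ex ante, the patentee can charge at most the incremental value of its
# technology over the alternative.
royalty_ceiling_ex_ante = value_patented - value_alternative  # 1.0

# Ex post, the implementer has already adopted the technology; abandoning
# it now costs an additional redesign/switching amount, which an
# opportunistic patentee can capture on top of the incremental value.
cost_of_abandoning = 4.0
royalty_ceiling_ex_post = royalty_ceiling_ex_ante + cost_of_abandoning  # 5.0

print(f"ex ante ceiling: {royalty_ceiling_ex_ante}")
print(f"ex post ceiling: {royalty_ceiling_ex_post}")
print(f"holdup premium:  {royalty_ceiling_ex_post - royalty_ceiling_ex_ante}")
```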
But I guess I'm not entirely convinced by the normative parallel. The key in all of these cases is transaction costs. So the question is whether the transaction costs of finding patents are high enough to warrant investing without expending them. The authors recognize the problem, and note that when injunctions are not possible, parties will refuse to pay for a license because it is more profitable to do so (holdout). But their answer is that just because there is holdout doesn't mean that holdup isn't real and a problem sometimes. Well, sure, but holdout merely shifts the transaction costs, and if it is cheaper never to make an ex ante agreement (which is typical these days), then it's hard for me to say that being hit with a patent lawsuit after investment is the sort of path dependence we should be worried about.
I think this is an interesting and thoughtful paper, and there's a lot more to it than my brief concerns. It attempts to respond to other critiques of patent holdup, and it provides a framework for debating these questions, even if I'm not convinced by all of it.
Monday, March 4, 2019
Recent Advances in Biologics Manufacturing Diminish the Importance of Trade Secrets: A Response to Price and Rai
Posted by
Lisa Larrimore Ouellette
Guest post by Rebecca Weires, a 2L in the J.D./M.S. Bioengineering program at Stanford
In their 2016 paper, Manufacturing Barriers to Biologics Competition and Innovation, Price and Rai argue the use of trade secrets to protect biologics manufacturing processes is a social detriment. They go on to argue policymakers should demand more enabling disclosure of biologics manufacturing processes, either in patents or biologics license applications (BLAs). The authors premise their arguments on an assessment that (1) variations in the synthesis process can unpredictably affect the structure of a biological product; (2) variations in the structure of a biological product can unpredictably affect the physiological effects of the product, including immunogenicity; and (3) analytical techniques are inadequate to characterize the structure of a biological product. I am more optimistic than Price and Rai that researchers will soon overcome all three challenges. Where private-sector funding may fall short, grant-funded research has already led to tremendous advances in biologics development technology. Rather than requiring more specific disclosure of synthesis processes, as Price and Rai recommend, FDA could and should require more specific disclosure of structure, harmonizing biologics regulation with small molecule regulation. FDA should also incentivize development of industrial scale cell-free protein synthesis processes.
Thursday, February 28, 2019
Sue First, Negotiate Later
Posted by
Michael Risch
Just a brief post this week, as I have a perfect storm of non-work-related happenings. So I'll just say that I'm pleased to announce that my draft article Sue First, Negotiate Later will be published by the Arizona Law Review. The draft is on SSRN, and the longish abstract is below. I may blog about this in more detail in the future, but this is an introduction:
One of the more curious features of patent law is that patents can be challenged by anyone worried about being sued. This challenge right allows potential defendants to file a declaratory relief lawsuit in their local federal district court, seeking a judgment that a patent is invalid or noninfringed. To avoid this home-court advantage, patent owners may file a patent infringement lawsuit first and, by doing so, retain the case in the patent owner’s venue of choice. But there is an unfortunate side effect to such preemptive lawsuits: they escalate the dispute when the parties may want to instead settle for a license. Thus, policies that allow challenges are favored, but they are tempered by escalation caused by preemptive lawsuits. To the extent a particular challenge rule leads to more preemptive lawsuits, it might be disfavored.
This article tests one such important challenge rule. In MedImmune v. Genentech, the U.S. Supreme Court made it easier for a potential defendant to sue first. Whereas the prior rule required threat of immediate injury, the Supreme Court made clear that any case or controversy would allow a challenger to file a declaratory relief action. This ruling had a real practical effect, allowing recipients of letters that boiled down to, “Let’s discuss my patent,” to file a lawsuit when they could not before.
This was supposed to help alleged infringers, but not everyone was convinced. Many observers at the time predicted that the new rule would lead to more preemptive infringement lawsuits filed by patent holders. They would sue first and negotiate later rather than open themselves up to a challenge by sending a demand letter. Further, most who predicted this behavior—including parties to lawsuits themselves—thought that non-practicing entities would lead the charge. Indeed, as time passed, most reports were that this is what happened: that patent trolls uniquely were suing first and negotiating later. But to date, no study has empirically considered the effect of the MedImmune ruling to determine who filed preemptive lawsuits. This Article tests MedImmune’s unintended consequences. The answer matters: lawsuits are costly, and while “quickie” settlements may be relatively inexpensive, increased incentive to file challenges and preemptive infringement suits can lead to entrenchment instead of settlement.
Using a novel longitudinal dataset, this article considers whether MedImmune led to more preemptive infringement lawsuits by NPEs. It does so in three ways. First, it performs a differences-in-differences analysis to test whether case duration for the most active NPEs grew shorter after MedImmune. One would expect that preemptive suits would settle more quickly because they are proxies for quick settlement cases rather than signals of drawn out litigation. Second, it considers whether, other factors equal, the rate of short-lived case filings increased after MedImmune. That is, even if cases grew longer on average, the share of shorter cases should grow if there are more placeholders. Third, it considers whether plaintiffs themselves disclosed sending a demand letter prior to suing.
It turns out that the conventional wisdom is wrong. Not only did cases not grow shorter – cases with similar characteristics grew longer after MedImmune. Furthermore, NPEs were not the only ones who sued first and negotiated later. Instead, every type of plaintiff sent fewer demand letters, NPEs and product companies alike. If anything, the MedImmune experience shows that everyone likes to sue in their preferred venue. As a matter of policy, it means that efforts to dissuade filing lawsuits should be broadly targeted, because all may be susceptible.
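For readers unfamiliar with the method, the first test described in the abstract would look roughly like the following sketch. The variable names and data file are hypothetical, and the article's actual specification surely includes more controls.

```python
# Minimal difference-in-differences sketch for the first test above.
# Column names and the data file are hypothetical; the article's actual
# specification and control variables will differ.
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("patent_cases.csv")  # hypothetical longitudinal dataset

# treated: 1 for cases filed by the most active NPEs, 0 otherwise
# post_medimmune: 1 for cases filed after the 2007 MedImmune decision
cases["did"] = cases["treated"] * cases["post_medimmune"]

model = smf.ols(
    "case_duration_days ~ treated + post_medimmune + did", data=cases
).fit(cov_type="HC1")

# The coefficient on `did` estimates how NPE case duration changed after
# MedImmune relative to the change for other plaintiffs. A negative
# coefficient would support the conventional wisdom; the article reports
# the opposite.
print(model.summary())
```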
Monday, February 25, 2019
Jiarui Liu on the Dominance and Ambiguity of Transformative Use
Posted by
Lisa Larrimore Ouellette
The Stanford Technology Law Review just published an interesting new copyright article, An Empirical Study of Transformative Use in Copyright Law by Prof. Jiarui Liu. Here is the abstract:
This article presents an empirical study based on all reported transformative use decisions in U.S. copyright history through January 1, 2017. Since Judge Leval coined the doctrine of transformative use in 1990, it has been gradually approaching total dominance in fair use jurisprudence, involved in 90% of all fair use decisions in recent years. More importantly, of all the dispositive decisions that upheld transformative use, 94% eventually led to a finding of fair use. The controlling effect is nowhere more evident than in the context of the four-factor test: A finding of transformative use overrides findings of commercial purpose and bad faith under factor one, renders irrelevant the issue of whether the original work is unpublished or creative under factor two, stretches the extent of copying permitted under factor three towards 100% verbatim reproduction, and precludes the evidence on damage to the primary or derivative market under factor four even though there exists a well-functioning market for the use.
Although transformative use has harmonized fair use rhetoric, it falls short of streamlining fair use practice or increasing its predictability. Courts diverge widely on the meaning of transformative use. They have upheld the doctrine in favor of defendants upon a finding of physical transformation, purposive transformation, or neither. Transformative use is also prone to the problem of the slippery slope: courts start conservatively on uncontroversial cases and then extend the doctrine bit by bit to fact patterns increasingly remote from the original context.
This article, albeit being descriptive in nature, does have a normative connotation. Courts welcome transformative use not despite, but due to, its ambiguity, which is a flexible way to implement their intuitive judgments yet maintain the impression of stare decisis. However, the rhetorical harmony conceals the differences between a wide variety of policy concerns in dissimilar cases, invites casual references to precedents from factually unrelated contexts, and substitutes a mechanical exercise of physical or purposive transformation for an in-depth policy analysis that may provide clearer guidance for future cases.

This article builds on and extends prior empirical work in this area, such as Barton Beebe's study of fair use decisions from 1978 to 2005. And it provides a nice mix of interesting new empirical results and normative analysis that illustrates why fair use doctrine is (at least for me) quite challenging to teach. For example, Figure 1 of the article illustrates how transformative use has cannibalized fair use doctrine since the 1994 Campbell v. Acuff-Rose decision endorsed its use.
Liu also examines data such as the win rate for transformative use over time, by circuit, and by subject matter. But I particularly like that Liu is not just counting cases, but also arguing that courts are using this doctrine as a substitute for in-depth policy analysis.
Friday, February 22, 2019
Does Administrative Patent Law Promote Innovation About Innovation?
Posted by
Lisa Larrimore Ouellette
I am at Texas Law today for a symposium on The Intersection of Administrative & IP Law, and my panel was asked to address the question: Does Administrative Patent Law Promote Innovation? I focused my remarks on a specific aspect of this: Does Administrative Patent Law Promote Innovation About Innovation? I think the short answer, at least right now, is "no."
There is a lot we don't know about the patent system. USPTO Regional Director Hope Shimabuku started her remarks today by saying that we know IP creates nearly 30 million jobs and adds $6.6 trillion to the U.S. economy each year, citing this USPTO report. But that's not what the report says. It looks at jobs and value from "IP-intensive industries," defined as ones whose "IP-count to employment ratio is higher than the average for all industries considered." As the report acknowledges, it is unable to determine how much of these firms' performance is attributable to IP.
And the real answer is: we don't know. In an article I reviewed for Jotwell, economist Heidi Williams recently summarized: "we still have essentially no credible empirical evidence on the seemingly simple question of whether stronger patent rights—either longer patent terms or broader patent rights—encourage research investments." And even on smaller questions, the existing evidence base is weak.
As I explained in Patent Experimentalism, to make empirical progress we need some source of empirical variation. Economists often look for "natural experiments" with variation across time, across jurisdictions, or across similar technologies, and the closer that variation is to random, the easier it is to draw causal inferences. Of course, it's even better to have variation that is actually random, which is why I have joined other scholars in arguing for more use of randomized policy experiments.
The USPTO has a huge opportunity here to both improve the patent system and help address the key administrative law challenge of encouraging accurate and consistent decisions by a decentralized bureaucracy. There are many questions the agency could help answer using more randomization, as I discuss in Patent Experimentalism. During the panel today, I noted two potential areas: experimenting with the time spent examining a given patent (see this great forthcoming article by Michael Frakes and Melissa Wasserman) and with the possibility that examiner bias affects the gender gap in patenting (which fits within the agency's recent mandate from Congress). I noted ways that each could be designed as an opt-in program to encourage buy-in from applicants and from examiners.
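To make that concrete, here is a minimal sketch of how the inference from such an opt-in, randomized examination-time program might run. Everything in it is hypothetical: the outcome measure, the assumed effect size, and the setup are my own illustration, not an actual USPTO program.

```python
# Hypothetical sketch of an opt-in randomized experiment on examination
# time; nothing here describes an actual USPTO program.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_optin = 10_000  # applications whose applicants/examiners opted in
extra_time = rng.integers(0, 2, size=n_optin)  # randomize: 1 = double time

# Hypothetical outcome: indicator that a granted claim later survives
# challenge (a stand-in for "examination quality").
base_rate = 0.70
effect = 0.05  # assumed true effect of extra time, for the simulation only
survives = rng.random(n_optin) < (base_rate + effect * extra_time)

# Because assignment is random, a simple comparison of means identifies
# the causal effect -- no natural experiment needed.
treated, control = survives[extra_time == 1], survives[extra_time == 0]
t, p = stats.ttest_ind(treated, control)
print(f"effect estimate: {treated.mean() - control.mean():.3f} (p={p:.3g})")
```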
But my main point was not that the USPTO should adopt one of these particular experiments—it was that the agency should study something in a way that allows us to draw rigorous inferences. Failing to do so seems like a tremendous missed opportunity.
Tuesday, February 19, 2019
Using Insurance to Deter Lawsuits
Posted by
Michael Risch
The conventional wisdom (my anecdotal experience, anyway) is that the availability of insurance fuels lawsuits. People that otherwise might not sue would use litigation to access insurance funds. I'm sure there's a literature on this. But most insurance covers both defense and indemnity - that is, litigation costs and settlements. But what if the insurance covered the defense and not any settlement costs? Would that serve as a disincentive to bring suit? It surely would change the litigation dynamic.
In The Effect of Patent Litigation Insurance: Theory and Evidence from NPEs, Bernhard Ganglmair (University of Mannheim - Economics), Christian Helmers (Santa Clara - Economics), Brian J. Love (Santa Clara - Law) explore this question with respect to NPE patent litigation insurance. The draft is on SSRN, and the abstract is here:
We analyze the extent to which private defensive litigation insurance deters patent assertion by non-practicing entities (NPEs). We do so by studying the effect that a patent-specific defensive insurance product, offered by a leading litigation insurer, had on the litigation behavior of insured patents’ owners, all of which are NPEs. We first model the impact of defensive litigation insurance on the behavior of patent enforcers and accused infringers. Assuming that a firm’s purchase of insurance is not observed by patent enforcers, we show that the mere availability of defense litigation insurance can have an effect on how often patent enforcers will assert their patents. Next, we empirically evaluate the insurance policy’s effect on the behavior of owners of insured patents by comparing their subsequent assertion of insured patents with their subsequent assertion of their other patents not included in the policy. We additionally compare the assertion of insured patents with patents held by other NPEs with portfolios that were entirely excluded from the insurance product. Our findings suggest that the introduction of this insurance policy had a large, negative effect on the likelihood that a patent included in the policy was subsequently asserted, and our results are robust across different control groups. Our findings also have importance for ongoing debates on the need to reform the U.S. and European patent systems, and suggest that market-based mechanisms can deter so-called “patent trolling.”

On reading the abstract, I was skeptical. After all, there are a bunch of reasons why more firms would defend against NPEs, why NPEs would be less likely to assert, and so forth. But the interesting dynamics of the patent litigation insurance market have me more convinced. Apparently, the insurance didn't cover any old lawsuit; instead, only specific patents were covered. So the authors were able to look at the differences between firms asserting covered patents, firms that held both covered and non-covered patents, and firms that had no covered patents. Because each of these firms should be equally affected by background law changes, the differences should be limited to the role of insurance.
And that's what they find, unsurprisingly. Assertions of insured patents went down as compared to uninsured patents, and those cases were less likely to settle -- even with the same plaintiff. My one concern about this finding is that patents targeted for insurance may have been weaker in the first place (hence the willingness to insure), and thus there is self-selection. The paper presents some data on the different patents in order to quell this concern, but if there is a methodological challenge, it is here.
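One way to probe that self-selection worry would be a simple balance check on pre-insurance observables, along the following lines. The variable names and data are hypothetical; the paper's own comparison is richer.

```python
# Sketch of a balance check for the self-selection concern above.
# Variable names and the data file are hypothetical; the paper's actual
# comparison of insured and uninsured patents is more involved.
import pandas as pd
from scipy import stats

patents = pd.read_csv("npe_patents.csv")  # hypothetical patent-level data

insured = patents[patents["insured"] == 1]
uninsured = patents[patents["insured"] == 0]

# If insured patents were systematically weaker ex ante, we would expect
# differences in pre-period proxies for patent strength.
for proxy in ["forward_citations", "claim_count", "prior_assertions"]:
    t, p = stats.ttest_ind(insured[proxy], uninsured[proxy], equal_var=False)
    print(f"{proxy}: insured mean={insured[proxy].mean():.2f}, "
          f"uninsured mean={uninsured[proxy].mean():.2f}, p={p:.3g}")
```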
This is a longish paper for an empirical piece, in part because the authors develop a complex game-theoretic model of insurance purchasing, patent assertion, and patent defense. It is interesting and worth a read.
Sunday, February 17, 2019
Foreign Meaning Matters: Brauneis and Moerland on Trademark's Doctrine of Foreign Equivalents
Posted by
Camilla Hrdy
I was enjoying some siggi's® yogurt, and noticed, just below the trademark name siggi's®, an interesting piece of trivia: "skyr, that's Icelandic for thick yogurt!" You learn something new every day.
Robert Brauneis and Anke Moerland's recent article argues that it would not be good policy to allow the company that distributes siggi's® yogurt to trademark the name SKYR for yogurt in the United States, even though most people in the United States do not currently know what the word "skyr" means. In short, they argue that when reviewing trademarks for purposes of distinctiveness, the U.S. Patent & Trademark Office (USPTO) and the courts should translate foreign terms that are generic or merely descriptive in their home country, because allowing such marks would cause unexpected harms for competition.
This is a fascinating paper that warrants serious thinking, and perhaps re-thinking, of how trademark law currently treats foreign terms.
Tuesday, February 12, 2019
IP and the Right to Repair
Posted by
Michael Risch
I ran across an interesting article last week that I thought I would share. It's called Intellectual Property Law and the Right to Repair, by Leah Chan Grinvald (Suffolk Law) and Ofer Tur-Sinai (Ono Academic College). A draft is on SSRN and the abstract is here:
Many of the topics are those you see in the news, like how laws that forbid breaking DRM stop others from repairing their stuff (which now all has a computer) or how patent law can make it difficult to make patented repair parts.
The treatment of trade secrets, in particular, was a useful addition to the literature. As I wrote on the economics of trade secret many years ago, my view is that trade secrecy doesn't serve as an independent driver of innovation because people will keep their information secret anyway. Thus, any innovation effects are secondary, in the sense that savings made from not having to protect secrets so carefully can be channeled to R&D. But there was always a big caveat: this assumes that firms can "keep their information secret anyway," and that there's no forced disclosure rule.
So, when this article's hypothesized right to repair extended to disclosure of manuals, schematics, and other information necessary to repair, it caught my eye. On the one hand, as someone who has been frustrated by lack of manuals and reverse engineered repair of certain things, I love it. On the other hand, I wonder how requiring disclosure of such information would change the incentive to dynamics. With respect to schematics, companies would probably continue to create them, but perhaps they might make a second, less detailed schematic. Or, maybe nothing would happen because that information is required anyway. But with respect to manuals, I wonder whether companies would lose the incentive to keep detailed records of customer service incidents if they could not profit from it. Keeping such records is costly, and if repairs are charged to customers, it might be better to reinvent the wheel every time than to pay to maintain an information system that others will use. I doubt it, though, as there is still value in having others repair your goods, and if people can repair their own, then the market becomes even more competitive.
While the paper discusses the effect on the incentive to innovate with respect to other forms of IP, it does not do so for trade secrets.
With respect to other IP, the paper seems to take two primary positions on the effect of immunizing IP infringement for repair. The first is that the right to repair can also promote the progress, and thus it should be considered as part of the entire system. While I agree with the premise from a utilitarian point of view, I was not terribly convinced that the right to repair would somehow create incentives for more development that would outweigh initial design IP rights. It might, of course, but there's not a lot of nuanced argument (or evidence) in either direction.
The second position is that loosening IP rights will not weaken "core" incentives to develop the product in the first place, because manufacturers will still want to make the best/most innovative products possible. I think this argument is incomplete in two ways. Primarily, it assumes that manufacturers are monolithic. But the reality is that multiple companies design parts, and their incentive to do so (and frankly their ability to stay in business) may well depend on the ability to protect designs/copyright/etc. At the very least, it will affect pricing. For example, if a company charged for manuals, it may be because it had to pay a third party for each copy distributed. Knowing that such fees are not going to be paid, the original manual author will charge more up front, increasing the price of the product (indeed, the paper seems to assume very little effect on original prices to make up for lost repair revenue). Secondarily, downstream repairs may drive innovation in component parts. For example, how repairs are done might cause manufacturers to not improve parts for easy repair. The paper doesn't seem to grapple with this nuance.
This was an interesting paper, and worth a read. It's a long article - the authors worked hard to cover a large number of bases, and it certainly made me think harder about the right to repair.
In recent years, there has been a growing push in different U.S. states towards legislation that would provide consumers with a “right to repair” their products. Currently 18 states have pending legislation that would require product manufacturers to make available replacement parts and repair manuals. This grassroots movement has been triggered by a combination of related factors. One such factor is the ubiquity of microchips and software in an increasing number of consumer products, from smartphones to cars, which makes the repair of such products more complicated and dependent upon the availability of information supplied by the manufacturers. Another factor is the unscrupulous practices of large, multinational corporations designed to force consumers to repair their products only through their own offered services, and ultimately, to manipulate consumers into buying newer products instead of repairing them. These factors have rallied repair shops, e-recyclers, and other do-it-yourselfers to push forward, demanding a right to repair.
Unfortunately, though, this legislation has stalled in many of the states. Manufacturers have been lobbying the legislatures to stop the enactment of the right to repair laws based on different concerns, including how these laws may impinge on their intellectual property rights. Indeed, a right to repair may not be easily reconcilable with the United States’ far-reaching intellectual property rights regime. For example, requiring manufacturers to release repair manuals could implicate a whole host of intellectual property laws, including trade secret. Similarly, employing measures undercutting a manufacturer's control of the market for replacement parts might conflict with patent exclusivity. Nonetheless, this Article’s thesis holds that intellectual property laws should not be used to inhibit the right to repair from being fully implemented.
In support of this claim, this Article develops a theoretical framework that enables justifying the right to repair in a manner that is consistent with intellectual property protection. In short, the analysis demonstrates that a right to repair can be justified by the very same rationales that have been used traditionally to justify intellectual property rights. Based on this theoretical foundation, this Article then explores, for the first time, the various intellectual property rules and doctrines that may be implicated in the context of the current repair movement. As part of this overview, this Article identifies those areas where intellectual property rights could prevent repair laws from being fully realized, even if some of the states pass the legislation, and recommends certain reforms that are necessary to accommodate the need for a right to repair and enable it to take hold.
I thought this was an interesting and provocative paper, even if I am skeptical of the central thesis. I should note that roughly the first half of the paper makes the normative case, and the authors do a good job of laying it out.
Many of the topics are ones you see in the news, like how laws that forbid breaking DRM stop people from repairing their own stuff (which now all has a computer in it), or how patent law can make it difficult to manufacture patented repair parts.
The treatment of trade secrets, in particular, was a useful addition to the literature. As I wrote about the economics of trade secrecy many years ago, my view is that trade secrecy doesn't serve as an independent driver of innovation, because people will keep their information secret anyway. Thus, any innovation effects are secondary, in the sense that savings from not having to protect secrets so carefully can be channeled into R&D. But there was always a big caveat: this assumes that firms can "keep their information secret anyway," and that there's no forced-disclosure rule.
So, when this article's hypothesized right to repair extended to disclosure of manuals, schematics, and other information necessary for repair, it caught my eye. On the one hand, as someone who has been frustrated by missing manuals and has had to reverse engineer repairs of certain things, I love it. On the other hand, I wonder how requiring disclosure of such information would change the incentive dynamics. With respect to schematics, companies would probably continue to create them, though perhaps they might make a second, less detailed schematic for disclosure. Or maybe nothing would change, because that information is required anyway. But with respect to manuals, I wonder whether companies would lose the incentive to keep detailed records of customer service incidents if they could not profit from them. Keeping such records is costly, and if repairs are charged to customers, it might be better to reinvent the wheel every time than to pay to maintain an information system that others will use. I doubt it, though: there is still value in having others repair your goods, and if people can repair their own, the market becomes even more competitive.
While the paper discusses the effect on the incentive to innovate with respect to other forms of IP, it does not do so for trade secrets.
With respect to other IP, the paper seems to take two primary positions on the effect of immunizing IP infringement for repair. The first is that the right to repair can also promote the progress, and thus it should be considered as part of the entire IP system. While I agree with the premise from a utilitarian point of view, I was not terribly convinced that the right to repair would create incentives for development that outweigh the incentives from initial design IP rights. It might, of course, but there's not a lot of nuanced argument (or evidence) in either direction.
The second position is that loosening IP rights will not weaken "core" incentives to develop the product in the first place, because manufacturers will still want to make the best and most innovative products possible. I think this argument is incomplete in two ways. First, it assumes that manufacturers are monolithic. In reality, multiple companies design parts, and their incentive to do so (and, frankly, their ability to stay in business) may well depend on the ability to protect designs, copyrights, and the like. At the very least, it will affect pricing. For example, if a company charged for manuals, it may be because it had to pay a third party for each copy distributed. Knowing that such fees will no longer be paid, the original manual author will charge more up front, increasing the price of the product (indeed, the paper seems to assume very little effect on original prices to make up for lost repair revenue). Second, downstream repairs may drive innovation in component parts. For example, seeing how repairs play out in the field might influence whether manufacturers design parts for easy repair. The paper doesn't seem to grapple with this nuance.
This was an interesting paper, and worth a read. It's a long article; the authors worked hard to cover a lot of bases, and it certainly made me think harder about the right to repair.