Wednesday, March 25, 2020

Does Gilead's (withdrawn) orphan designation request for a potential coronavirus treatment deserve your outrage?

Many commentators were outraged by the FDA's announcement on Monday that Gilead received orphan drug designation for using the drug remdesivir to treat COVID-19. The backlash led to a quick about-face by Gilead, which announced today that it is asking the FDA to rescind the orphan designation. For those trying to understand what happened here and the underlying policy questions, here's a quick explainer:

How could the Orphan Drug Act possibly apply to COVID-19?

Under 21 U.S.C. § 360bb(a)(2), a pharmaceutical company can request orphan designation for a drug that either (A) treats a disease that "affects less than 200,000 persons in the United States" at the time of the request or (B) "for which there is no reasonable expectation that the cost of developing and making available in the United States a drug for such disease or condition will be recovered from sales in the United States of such drug." An Ars Technica explainer suggests that remdesivir received orphan designation under option (B), but this email from the FDA indicates that it was option (A).

The designation seems correct based on the plain language of the relevant statute and regulations: As of Monday, there were 44,183 cases diagnosed in the United States (and even fewer at the time of Gilead's request), and the Orphan Drug Act regulations indicate that orphan designation "will not be revoked on the ground that the prevalence of the disease . . . becomes more than 200,000 persons." But given the CDC's low-end estimates of 2 million Americans eventually requiring hospitalization, commentators have noted that this feels like a loophole that gets around the purpose of the Orphan Drug Act.

What benefits would Gilead have received from an orphan designation?

The main effect would have been a tax credit for 25% of Gilead's expenses for the clinical trials it is running to figure out whether remdesivir is actually effective for treating COVID-19. (The tax credit was 50% when the Orphan Drug Act became effective in 1983, but was reduced to 25% by the December 2017 tax reform.)

Gilead would also have received a 7-year market exclusivity period if remdesivir is approved, but this would have had little practical effect because (1) it would already receive a 5-year data exclusivity period for approval of a new chemical entity (because remdesivir has not yet been approved for any use) and (2) Gilead has later-expiring patents, including a patent on the remdesivir compound that expires no earlier than 2035. In theory those patents could be invalidated, but patent litigation is a time-consuming process. So it does not seem likely that the 7-year orphan exclusivity would meaningfully enhance Gilead's pricing power during the peak of the COVID-19 pandemic.

Jamie Love of Knowledge Ecology International has argued that the government could use 28 U.S.C. § 1498 to overcome the patents and buy generic versions of remdesivir (an approach generally advocated by Amy Kapczynski and some of her students at Yale), and that there is no equivalent for regulatory exclusivity. But (1) 21 U.S.C. § 360cc allows generic approval of an orphan drug during the 7-year exclusivity if the sponsor "cannot ensure the availability of sufficient quantities of the drug to meet the needs of persons with the disease" (though invoking either this or § 1498(a) seems unlikely politically) and (2) as noted above, if remdesivir is approved, Gilead will still receive a 5-year data exclusivity period.

Should we celebrate that Gilead will no longer get this tax credit?

Not necessarily. It's true—as noted by commentators such as Laura Pedraza-Fariña—that the Orphan Drug Act seems like an odd fit for a COVID-19 drug, and that little about the orphan designation process requires an assessment of whether additional incentives are needed for developing a drug. And perhaps, as Mark Lemley suggests, Gilead will "go full steam ahead with trying to cure COVID-19" without an orphan designation. But "full steam" may not be a corner solution—there's usually a fuller steam.

There are of course a lot of uncertainties here, but the error costs seem very asymmetric: it is much worse to set incentives for addressing this pandemic too low than too high. The social cost of delaying access to a treatment—even by weeks or days—is huge; the cost of awarding an unnecessary tax credit to Gilead is just a transfer. There should probably be much more public funding for COVID-19-related R&D than there is, so allowing companies to be reimbursed for some of their clinical trial expenses may be the right move from a policy perspective. Giving Gilead an unnecessary transfer isn't ideal, but it seems pretty low on the list of things to be outraged about these days.

Right now, one of the first-order public policy concerns should be getting COVID-19 treatments and preventatives that are demonstrated to be safe and effective. Once those products are approved and marketed, widespread low- or zero-cost access will of course be important, particularly given the positive externalities associated with use. But as I have explained with Daniel Hemel, both in general and in the context of COVID-19 vaccines, innovation and affordability are not either/or. And innovation is the problem we should focus on first.

3/27/20 Update: Jamie Love thinks I am underestimating the importance of the extra exclusivity, and he could be right. The 5-year data exclusivity period for NCEs would prevent a generic manufacturer from obtaining approval based on Gilead's clinical trial data through the Hatch–Waxman process, but another firm could obtain approval if it has enough clinical trial data to support its own application. But even if it is true that the more robust market exclusivity for orphan drugs would have added to Gilead's profits, that's not obviously a bad thing. There are so many anti-profiting-from-coronavirus stories these days, but I think we want COVID-19 to be the most profitable thing biomedical firms could be working on now. We don't yet have effective treatments, and patients desperately need them.

Tuesday, March 17, 2020

Challenging what we think we know about "market failures" and "innovation"

I really enjoyed the final version of Brett Frischmann and Mark McKenna's article, "Comparative Analysis of Innovation Failures and Institutions in Context," published in the Houston Law Review in 2019. But I initially encountered it when the authors presented an early draft at the 2012 Yale Law School Information Society Project's "Innovation Beyond IP" conference, organized by Amy Kapczynski and Written Description's Lisa Ouellette. The conference explored mechanisms and institutions besides federal intellectual property rights (IP) that government uses, or could use, to achieve some of IP's stated goals. Examples include research grants, prizes, and tax credits, among many others.

Saturday, February 22, 2020

Deepa Varadarajan on Trade Secret Injunctions and Trade Secret "Trolls"

I wrote some previous posts about eBay in trade secret law, and in particular Elizabeth Rowe's empirical work. Rowe found, among many other fascinating findings, that not all trade secret plaintiffs actually ask for injunctions, even after prevailing at trial. I would be remiss if I did not flag a characteristically excellent discussion of this issue by Deepa Varadarajan, in a piece that may have flown under readers' radars.

Tuesday, January 14, 2020

Google v. Oracle: Amicus Briefing

Hello again, it's been a while. My administrative duties have sadly kept me busy, limiting my blogging since the summer. But since I've blogged consistently about Google v. Oracle (fka Oracle v. Google) about every two years, the time has come to blog again.

I won't recap the case here; my prior posts do so nicely. I'm just reporting that 20+ amicus briefs were filed in the last week, which SCOTUSblog has nicely curated from the electronic filing system.

There are many industry briefs. They all say much the same thing: an Oracle win would be bad for industry, and also inconsistent with the law. (The R Street brief, and a prior op-ed, describe how Oracle has copied Amazon's cloud-based API declarations.) But since I'm an academic, I'll focus on the academic briefs:

1. Brief of IP Scholars (Samuelson & Crump): Merger means that the API declarations cannot be protected
2. Brief of IP Scholars (Tushnet): Fair Use is warranted
3. Brief of Menell, Nimmer, Balganesh: Channeling dictates that API declarations are not protected
4. Brief of Lunney: Infringement only occurs if whole work is copied; protection does not promote the progress
5. Brief of Snow, Eichhorn, Sheppard: Fair Use jury verdict should not be overturned
6. Brief of Risch: Protection should be viewed through the lens of abstraction, filtration, and comparison

My brief is listed last, but it's certainly not least. Indeed, I think it's a really good brief. But of course I would say that. If you've read my prior blog posts on this (including the one linked above), you'll note that my brief puts into legal terms what I've been complaining about for eight or so years: by framing this case as a pure copyrightability question, the courts have lost sight of the context in which we consider the protection of the API declarations. There might be a world where the declarations, if published as part of a novel and reprinted in a pamphlet made by Google, are eligible for copyright registration. But in the context of filtration, which neither the district court nor the Federal Circuit performed, the declarations are not the type of expression that can be infringed by a competing compiler. Give it a read. It's better than Cats (and not just the new movie): you'll read it again and again.

An even better summary of all the briefs is here.

I conclude with a paragraph from the brief, which I think sums things up:
This case boils down to [the] question: can a company own a programming language through copyright? Oracle would say yes, but the entire history of compatible language compilers, compatible APIs, compatible video games and game systems, and other compatible software says no. Michael Risch, How Can Whelan v. Jaslow and Lotus v. Borland Both Be Right? Reexamining the Economics of Computer Software Reuse, 17 The J. Marshall J. Info. Tech. & Priv. L. 511, 539–44 (1999) (analyzing economics of switching costs, lock-in, de facto standards, and competitive need for compatibility).


Sunday, November 10, 2019

Elizabeth Rowe: does eBay apply to trade secret injunctions?

Elizabeth Rowe has a highly informative new empirical paper, "eBay, Permanent Injunctions, & Trade Secrets," forthcoming in the Washington and Lee Law Review. Professor Rowe examines—through both high-level case coding and individual case analysis—when, and under what circumstances, courts are willing to grant permanent injunctions in trade secret cases. (So-called "permanent" injunctions are granted or denied after the trade secret plaintiff prevails, as opposed to "preliminary" injunctions, which are granted or denied before review on the merits; they need not actually last forever.)

Thursday, October 24, 2019

Response to Similar Secrets by Fishman & Varadarajan

Professors Joseph Fishman and Deepa Varadarajan have argued trade secret law should be more like copyright law. Specifically, they argue trade secret law should not prevent people (especially departing employees who obtained the trade secret lawfully within the scope of their employment) from making new end uses of trade secret information, so long as it's not a foreseeable use of the underlying information and is generally outside of the plaintiff's market. The authors made this controversial argument at last year's IP Scholars conference at Berkeley Law, and in their new article in University of Pennsylvania Law Review, called "Similar Secrets."

My full response to Similar Secrets is now published in the University of Pennsylvania Law Review Online. It is called: "Should Dissimilar Uses Of Trade Secrets Be Actionable?" The response explains in detail why I think the answer is, as a general matter, YES. It can be downloaded at: https://www.pennlawreview.com/online/168-U-Pa-L-Rev-Online-78.pdf

Tuesday, September 24, 2019

Lucy Xiaolu Wang on the Medicines Patent Pool

Patent pools are agreements by multiple patent owners to license related patents for a fixed price. The net welfare effect of patent pools is theoretically ambiguous: they can reduce numerous transaction costs, but they also can impose anti-competitive costs (due to collusive price-fixing) and costs to future innovation (due to terms requiring pool members to license future technologies back to the pool). In prior posts, I've described work by Ryan Lampe and Petra Moser suggesting that the first U.S. patent pool—on sewing machine technologies—deterred innovation, and work by Rob Merges and Mike Mattioli suggesting that the savings from two high tech pools are enormous, and that those concerned with pools thus have a high burden to show that the costs outweigh these benefits. More recently, Mattioli has reviewed the complex empirical literature on patent pools.

Economics Ph.D. student Lucy Xiaolu Wang has a very interesting new paper to add to this literature, which I believe is the first empirical study of a biomedical patent pool: Global Drug Diffusion and Innovation with a Patent Pool: The Case of HIV Drug Cocktails. Wang examines the Medicines Patent Pool (MPP), a UN-backed nonprofit that bundles patents for HIV drugs and other medicines and licenses these patents for generic sales in developing countries, with rates that are typically no more than 5% of revenues. For many diseases, including HIV/AIDS, the standard treatment requires daily consumption of multiple compounds owned by different firms with numerous patents. Such situations can benefit from a patent pool for the diffusion of drugs and the creation of single-pill once-daily drug cocktails. She uses a difference-in-differences method to study the effect of the MPP on both static and dynamic welfare and finds enormous social benefits.

On static welfare, she concludes that the MPP increases generic drug purchases in developing countries. She uses "the arguably exogenous variation in the timing of when a drug is included in the pool"—which "is not determined by demand side factors such as HIV prevalence and death rates"—to conclude that adding a drug to the MPP for a given country "increases generic drug share by about seven percentage points in that country." She reports that the results are stronger in countries where drugs are patented (with patent thickets) and are robust to alternative specifications or definitions of counterfactual groups.
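Wang's identification strategy is more involved than this, but the core difference-in-differences logic is easy to see. Here is a toy sketch with invented numbers (none of these figures are from the paper):

```python
# Toy difference-in-differences sketch with invented numbers -- not Wang's data.
# Generic drug share (fraction of purchases) before/after a drug joins the pool.
treated_pre, treated_post = 0.40, 0.49   # drug-country pairs that join the MPP
control_pre, control_post = 0.41, 0.43   # comparison pairs never in the pool

# The DiD estimate nets out the common time trend (here, +0.02) from the
# raw change in the treated group (here, +0.09):
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"DiD estimate: {did:+.2f}")  # +0.07, i.e. about seven percentage points
```

The subtraction of the control group's change is what lets the method attribute the remaining difference to pool entry rather than to trends (like falling drug prices generally) that affect all countries.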

On dynamic welfare, Wang concludes that the MPP increases follow-on innovation. "Once a compound enters the pool, new clinical trials increase for drugs that include the compound and more firms participate in these trials," resulting in more new drug product approvals, particularly generic versions of single-pill drug cocktails. And this increase in R&D comes from both pool insiders and outsiders. She finds that outsiders primarily increase innovation for new and better uses of existing compounds, and insiders reallocate resources for pre-market trials and new compound development.

Under these estimations, the net social benefit is substantial. Wang uses a simple structural model and estimates that the MPP for licensing HIV drug patents increased consumer surplus by $700–1400 million and producer surplus by up to $181 million over the first seven years of its establishment, greatly exceeding the pool's $33 million total operating cost over the same period. Of course, estimating counterfactuals from natural experiments is always fraught with challenges. But as an initial effort to understand the net benefits and costs of the MPP, this seems like an important contribution that is worth the attention of legal scholars working in the patent pool area.
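The back-of-the-envelope arithmetic behind that bottom line is simple; a minimal sketch using the figures reported above (the surplus and cost numbers are Wang's estimates, reproduced here for illustration):

```python
# Illustrative tally of Wang's reported estimates (millions of USD,
# over the MPP's first seven years).
consumer_surplus_range = (700, 1400)  # estimated increase in consumer surplus
producer_surplus_max = 181            # upper-bound increase in producer surplus
operating_cost = 33                   # MPP's total operating cost

# Conservative bound: count only the low-end consumer surplus.
net_low = consumer_surplus_range[0] - operating_cost
# Generous bound: high-end consumer surplus plus the producer upper bound.
net_high = consumer_surplus_range[1] + producer_surplus_max - operating_cost

print(f"Net social benefit: ${net_low}M to ${net_high}M")
```

Even on the conservative bound, the estimated benefits exceed the pool's operating cost by roughly a factor of twenty.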

Sunday, September 8, 2019

Anthony Levandowski: Is Being a Jerk a Crime?

Former Google employee Anthony Levandowski was recently indicted on federal criminal charges of trade secret theft. As reported in the Los Angeles Times, the indictment was filed by the U.S. attorney’s office in San Jose and is based on the same facts as the civil trade secrets lawsuit that Waymo (formerly Google’s self-driving car project) settled with Uber last year. It is even assigned to the same judge. The gist of the indictment is that, at the time of his resignation from Waymo, and just before taking a new job at Uber, Levandowski downloaded approximately 14,000 files from a server hosted on Google's network. These files allegedly contained "critical engineering information about the hardware used on [Google's] self-driving vehicles …" Each of the 33 counts with which Levandowski is charged carries a penalty of up to 10 years in prison and a $250,000 fine.

This is a crucial time to remember that being disloyal to your employer, on its own, is not illegal. Employees like Levandowski have a clear duty of secrecy with respect to certain information they receive through their employment. But for a civil trade secrets misappropriation claim, if none of that information constitutes a trade secret, there is no cause of action.

For criminal cases like Levandowski's, the situation is more complicated. The federal criminal trade secret statute shares the same definition of "trade secret" as the federal civil trade secret statute. See 18 U.S.C. § 1839(3). However, unlike in civil trade secret cases, attempt and conspiracy can be actionable. 18 U.S.C. § 1832(a)(4)-(5). This means that even if the crime was not successful—because the information the employee took wasn't actually a trade secret—the employee can still go to jail. See U.S. v. Hsu, 155 F.3d 189 (3d Cir. 1998); U.S. v. Martin, 228 F.3d 1 (1st Cir. 2000).

The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an "attempt" crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind. The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1)-(4), provide that "[w]hoever, with intent to convert a trade secret ... to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will injure any owner of that trade secret, knowingly—steals...obtains... possesses...[etcetera]" a trade secret, or "attempts to" do any of those things, "shall... be fined under this title or imprisoned not more than 10 years, or both…"

This means Levandowski can be found guilty of attempting to steal trade secrets that never actually existed. This seems odd. It contradicts fundamental ideas behind why we protect trade secrets. As law professor Mark Lemley observed in his oft-cited Stanford Law Review article, modern trade secret law is not a free-ranging license for judges to punish any acts they perceive as disloyal or immoral. It is a special form of property regime. Charles Tait Graves, a partner at Wilson Sonsini Goodrich & Rosati who teaches trade secrets at UC Hastings College of the Law, echoes this conclusion. Treating trade secrets as an employer's property, Graves writes, counterintuitively "offers better protection for employees who change jobs" than the alternatives, because it means courts must carefully "define the boundaries" of the right, and may require the court to rule in the end "that not all valuable information learned on the job is protectable." See Charles Tait Graves, Trade Secrets As Property: Theory and Consequences, 15 J. Intell. Prop. L. 39 (2007).

So where does that leave Levandowski? In Google/Waymo's civil case against Uber, Uber got off with a settlement deal, presumably in part because Google recognized the difficulty in proving key pieces of its civil case. Despite initial appearances, Google's civil action was not actually a slam dunk. It was not clear Uber actually received the specific files Levandowski took or that the information contained in those files constituted trade secrets, versus generally known information or Levandowski's own "general knowledge, skill, and experience." (I discuss this latter issue in my recent article, The General Knowledge, Skill, and Experience Paradox, forthcoming in the Boston College Law Review.)

But thanks to criminal remedies under 18 U.S.C. § 1832, and that pesky "attempt" charge, Levandowski is left holding the blame, facing millions in fines and decades in jail.

Maybe being a jerk is illegal after all.

Wednesday, July 17, 2019

Pushback on Decreasing Patent Quality Narrative

It's been a while since I've posted, as I've taken on Vice Dean duties at my law school that have kept me busy. I hope to blog more regularly as I get my legs under me. But I did see a paper worth posting mid-summer.

Wasserman & Frakes have published several papers showing that as examiners gain more seniority, their time spent examining patents decreases and their allowances come more quickly. They (and many others) have taken this to mean a decrease in patent quality.

Charles A. W. deGrazia (University of London, USPTO), Nicholas A. Pairolero (USPTO), and Mike H. M. Teodorescu (Boston College Management, Harvard Business) have released a draft that pushes back on this narrative. The draft is available on SSRN, and the abstract is below:

Prior research argues that USPTO first-action allowance rates increase with examiner seniority and experience, suggesting lower patent quality. However, we show that the increased use of examiner's amendments account for this prior empirical finding. Further, the mechanism reduces patent pendency by up to fifty percent while having no impact on patent quality, and therefore likely benefits innovators and firms. Our analysis suggests that the policy prescriptions in the literature regarding modifying examiner time allocations should be reconsidered. In particular, rather than re-configuring time allocations for every examination promotion level, researchers and stakeholders should focus on the variation in outcomes between junior and senior examiners and on increasing training for examiner's amendment use as a solution for patent grant delay.
In short, they hypothesize (and then empirically show with 4.6 million applications) that as seniority increases, the likelihood of examiner amendments goes up, and it goes up on the first office action. They measure how different the amended claims are, and they use measures of patent scope to show that the amended applications are no broader than those that junior examiners take longer to prosecute.

Their conclusion is that to the extent seniority leads to a time crunch through heavier loads, it is handled by more efficient claim amendment through the examiner amendment procedures, and quality is not reduced.

As with all new studies like this one, it will take time to parse out the methodology and hear critiques. I, for one, am glad to hear of rising use of examiner amendments, as I long ago suggested that as a way to improve patent clarity.

Monday, July 8, 2019

Jacob Victor: Should Royalty Rates in Compulsory Licensing of Music Be Set Below the Market Price?

Jacob Victor has a remarkable new article on copyright compulsory licenses, forthcoming in the Stanford Law Review. The article boldly wades into the notoriously convoluted history of the compulsory license option for obtaining rights to copyrighted music, and makes what I think is a very interesting and important normative argument about how compulsory license rates should be set. Other scholars who have written on compulsory licensing, whose work Victor addresses, include, to name only a few: Kristelia García, Jane C. Ginsburg, Wendy Gordon, Lydia Pallas Loren, Robert P. Merges, Pam Samuelson, Tim Wu, and others.

Tuesday, June 18, 2019

Freilich & Ouellette: USPTO should require prophetic examples to be clearly labeled to avoid confusion

Professor Janet Freilich (Fordham Law) has a fantastic forthcoming law review article, Prophetic Patents, which puts a spotlight on the common practice of submitting patent applications containing entirely hypothetical experimental results. These "prophetic examples" are permitted as long as the predicted results are not in the past tense. Using this tense rule, Freilich analyzed over two million U.S. patents in chemistry and biology, and she estimates that 17% of examples in these patents are prophetic. Prophetic examples may be familiar to patent drafters, but scientists and engineers who learn about them generally describe them as bizarre, and even some patent scholars are unfamiliar with the practice. Prophetic Patents was the one article by a lawyer selected for the 2018 NBER Summer Institute on Innovation, and the economist-heavy audience was fascinated by the concept—many were not even aware that researchers can obtain a patent without actually implementing an invention, much less that patents can contain hypothetical data.

Freilich notes the potential benefits of allowing untested ideas to be patented in terms of encouraging earlier disclosure and helping firms acquire financing, though she finds that patents with prophetic examples are not broader (based on claim word count), filed earlier (based on AIA implementation), or more likely to be filed by small entities. I'm very sympathetic to the argument that the current legal standard may allow speculative ideas to be patented too early—I've argued in prior work that all the competing policy considerations raised by Pierson v. Post about the optimal timing of property rights suggest that many patents are currently awarded prematurely. This is a challenging empirical question, however, because we cannot observe the counterfactual innovation ecosystem operating under a different legal standard.

But while pondering the hard question of the timing of patentability, patent scholars should not lose sight of the easy question: even if patenting untested inventions is socially desirable, there is no reason these patents need to be confusing. To me, Freilich's most interesting empirical result is her study of how often prophetic patents are mis-cited in scientific publications. She looked at 100 randomly selected patents with only prophetic examples that were cited in a scientific article or book for a specific proposition, and she found that 99 were not cited in a way that made clear they were prophetic. Instead, they were cited with phrases such as "[d]ehydration reaction in gas phase has been carried out over solid acid catalysts" (emphasis added). And it is not surprising that scientist readers are misled: many prophetic examples do confusingly mimic actual experiments, with specific numerical results. In prior work, I have shown that contrary to the assertions of some patent scholars, a substantial number of scientists do look to the patent literature to learn new technical information. So it is concerning that a large number of patents are written in a way that can be confusing to readers unfamiliar with the tense rule.

Freilich and I teamed up on a new project for which we interviewed patent drafters to explore whether prophetic examples have any important benefits for patentees that could not be obtained through less misleading methods of constructive reduction to practice. In Science Fiction: Fictitious Experiments in Patents—just published in last week's Science—we explain that the answer is no. Patent prosecutors who rarely use prophetic examples argued that there is no legal reason to use fictitious experiments with specific results rather than more general predictions. Those who usually use prophetic examples agreed that more explicit labeling would not affect the patents' legal strength. The only benefit to patentees that would be reduced by requiring greater clarity seems to be any benefit that comes from confusion, which does not seem worth preserving.

The USPTO already requires prophetic examples to be labeled by tense. But the tense rule is unfamiliar to many readers (including scientists and investors), and the distinction in tenses may be literally lost in translation in foreign patent offices. (For example, the form of Chinese verbs does not change with tense.) There is no good justification for not having a more explicit label, such as "hypothetical experiment." As Freilich and I conclude: "Just because some patents are not based on actual results does not mean they need to be confusing. Scientists regularly write grant applications in a way that makes clear what preliminary data they have already acquired and what the expected goal of the proposed project is. Perhaps this is an area in which the patent system could learn from the scientific community."

Monday, May 20, 2019

Inevitable Disclosure Injunctions Under the DTSA: Much Ado About § 1836(b)(3)(A)(i)(1)(I)

When trade secret law was federalized in 2016, some commentators and legislators expressed concern that federalization of trade secret law would make so-called "inevitable disclosure" injunctions against departing employees a federal remedy, and negatively impact employee mobility on a national scale.

In response to such concerns, the Defend Trade Secrets Act (DTSA) included a provision that is ostensibly designed to limit availability of inevitable disclosure injunctions under the DTSA. The limiting provision is codified in 18 U.S.C. § 1836(b)(3)(A)(i)(1)(I), discussed further below.

The DTSA has been in effect for just over three years. My preliminary observation is that courts do not appear to view Section 1836(b)(3)(A)(i)(1)(I) as placing novel limitations on employment injunctions in trade secret cases. They also do not seem to be wary of "inevitable disclosure" language.

Friday, May 17, 2019

Likelihood of Confusion: Is 15% The Magic Number?

David Bernstein, Partner at Debevoise & Plimpton, gave an interesting presentation yesterday at NYU Law Engelberg Center's "Proving IP" Conference on the origins of the "fifteen percent benchmark" in trademark likelihood of confusion analysis. (The subject of the panel was "Proving Consumer Perception: What are the best ways to test what consumers and users perceive about a work and how it is being positioned in the market?")

In trademark law, infringement occurs if defendant's use of plaintiff's trademark is likely to cause confusion as to the source of defendant's product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an "appreciable number" of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required. There is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to assess whether confusion is likely, at some point, to occur.

In theory.

But in practice, Bernstein asserted, there is a magic number: it's around fifteen percent. Courts will often state that a survey finding 15% or more is sufficient to support likelihood of confusion, while under 15% suggests no likelihood of confusion. See, e.g., 1-800 Contacts, Inc. v. Lens.com, Inc., 722 F.3d 1229, 1248-49 (10th Cir. 2013) (discussing survey findings on the low end).
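To make the benchmark concrete, here is a toy sketch, with invented numbers, of how a control-adjusted ("net") confusion rate from a two-cell survey might be compared against that informal 15% threshold:

```python
# Hypothetical survey tally -- illustrative numbers, not from any real case.
# A common survey format shows the accused mark to a "test" cell and a
# non-confusing variant to a "control" cell, then nets out guessing/noise.
test_confused, test_n = 62, 250
control_confused, control_n = 18, 250

net_confusion = test_confused / test_n - control_confused / control_n
BENCHMARK = 0.15  # the informal threshold Bernstein described

print(f"Net confusion: {net_confusion:.1%}")  # 17.6%
print("Supports likelihood of confusion" if net_confusion >= BENCHMARK
      else "Suggests no likelihood of confusion")
```

The point of Bernstein's critique, of course, is that nothing in the doctrine makes 0.15 a cutoff; the threshold is an emergent practice, not a rule.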

Tuesday, May 14, 2019

The Stanford NPE Litigation Database

I've been busy with grading and end of year activities, which has limited blogging time. I did want to drop a brief note that the Stanford NPE Litigation Database appears to be live now and fully populated with 11 years of data from 2007-2017. They've been working on this database for a long while. It provides limited but important data: Case name and number, district, filing date, patent numbers, plaintiff, defendants, and plaintiff type. The database also includes a link to Lex Machina's data if you have access.

The plaintiff type, especially, is something not available anywhere else, and it is the key value of the database (hence the name). There are surely some quibbles about how some plaintiffs are coded (I know of one where I disagree), but on the whole, the coding is much more useful than the "highly active" plaintiff designations in other databases.

I think this database is also useful as a check on other services, as it is hand coded and may correct errors in patent numbers, etc., that I've periodically found. I see the value as threefold:

  1. As a supplement to other data, adding plaintiff type
  2. As a quick, free guide to which patents were litigated in each case, or which cases involved a particular patent, etc.
  3. As a bulk data source showing trends in location, patent counts, etc., useful in its own right.

The database is here: http://npe.law.stanford.edu/. Kudos to Shawn Miller for all his hard work on this, and to Mark Lemley for having the vision to create it and get it funded and completed.

Friday, May 3, 2019

Fromer: Machines as Keepers of Trade Secrets

I really enjoyed Jeanne Fromer's new article, Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation, forthcoming in the N.Y.U. Law Review and available on SSRN. I think Professor Fromer has an important insight that more use of machines in businesses, including but not limited to increasing automation (i.e. using machines as the source of labor rather than humans), has made it easier for companies to preserve the trade secrecy of their information. Secrecy is not only more technologically possible, Fromer argues, but the chances that information will spill out of the firm are reduced, since human employees are less likely to leave and transfer the information to competitors, either illegally in the form of trade secret misappropriation or legally in the form of unprotectable "general knowledge, skill, and experience."

Professor Fromer's main take-home is that we should be a little worried about this situation, especially when seen in light of Fromer's prior work on the crucial disclosure function of patents. Whereas patents (in theory at least) put useful information into the public domain through the disclosures collected in patent specifications, trade secret law does the opposite, providing potentially indefinite protection for information kept in secret. Fromer's insight about growing use of machines as alternatives to humans provides a new reason to worry about the impact of trade secrecy, which does not require disclosure and potentially lasts forever, for follow-on innovation and competition.

Here was what I see as a key passage:
In addition to the myriad of potential societal consequences that a shift toward automation would have on human happiness, subsistence, and inequality, automation that replaces a substantial amount of employment also turns more business knowledge into an impenetrable secret. How so? While a human can leave the employ of one business to take up employment at a competitor, a machine performing this employee’s task would never do so. Such machines would remain indefinitely at a business’s disposal, keeping all their knowledge self-contained within the business’s walls. Increasing automation thereby makes secrecy more robust than ever before. Whereas departing employees can legally take their elevated general knowledge and skill to new jobs, a key path by which knowledge spills across an industry, machines automating employees’ tasks will never take their general knowledge and skill elsewhere to competitors. Thus, by decreasing the number of employees that might carry their general knowledge and skill to new jobs and in any event the amount of knowledge and skill that each employee might have to take, increasing automation undermines a critical limitation on trade secrecy protection.
(p. 17)

For more on trade secret law's "general knowledge, skill, and experience" status quo, see my new article, The General Knowledge, Skill, and Experience Paradox. I recently discussed this work on Brian Frye's legal scholarship podcast, Ipse Dixit, in an episode entitled "Camilla Hrdy on Trade Secrets and Their Discontents".