Monday, May 20, 2019

Inevitable Disclosure Injunctions Under the DTSA: Much Ado About § 1836(b)(3)(A)(i)(1)(I)

When trade secret law was federalized in 2016, some commentators and legislators expressed concern that federalization would make so-called "inevitable disclosure" injunctions against departing employees a federal remedy and negatively impact employee mobility on a national scale.

In response to such concerns, the Defend Trade Secrets Act (DTSA) included a provision ostensibly designed to limit the availability of inevitable disclosure injunctions under the DTSA. The limiting provision is codified at 18 U.S.C. § 1836(b)(3)(A)(i)(1)(I), discussed further below.

The DTSA has been in effect for just over three years. My preliminary observation is that courts do not appear to view Section 1836(b)(3)(A)(i)(1)(I) as placing novel limitations on employment injunctions in trade secret cases. They also do not seem to be wary of "inevitable disclosure" language.

Friday, May 17, 2019

Likelihood of Confusion: Is 15% The Magic Number?

David Bernstein, Partner at Debevoise & Plimpton, gave an interesting presentation yesterday at NYU Law Engelberg Center's "Proving IP" Conference on the origins of the "fifteen percent benchmark" in trademark likelihood of confusion analysis. (The subject of the panel was "Proving Consumer Perception: What are the best ways to test what consumers and users perceive about a work and how it is being positioned in the market?")

In trademark law, infringement occurs if the defendant's use of the plaintiff's trademark is likely to cause confusion as to the source of the defendant's product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an "appreciable number" of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required, and there is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to determine whether confusion is likely, at some point, to occur.

In theory.

But in practice, Bernstein asserted, there is a magic number: it's around fifteen percent. Courts will often state that a survey finding of 15% or more is sufficient to support likelihood of confusion, while a finding under 15% suggests no likelihood of confusion. See, e.g., 1-800 Contacts, Inc. v. Lens.com, Inc., 722 F.3d 1229, 1248-49 (10th Cir. 2013) (discussing survey findings on the low end).

Tuesday, May 14, 2019

The Stanford NPE Litigation Database

I've been busy with grading and end-of-year activities, which has limited blogging time. I did want to drop a brief note that the Stanford NPE Litigation Database appears to be live now and fully populated with 11 years of data, from 2007 through 2017. They've been working on this database for a long while. It provides limited but important data: case name and number, district, filing date, patent numbers, plaintiff, defendants, and plaintiff type. The database also includes a link to Lex Machina's data if you have access.

The plaintiff type, especially, is something not available anywhere else, and it is the key value of the database (hence the name). There are surely some quibbles about how particular plaintiffs are coded (I know of one where I disagree), but on the whole, the coding is much more useful than the "highly active" plaintiff designations in other databases.

I think this database is also useful as a check on other services, as it is hand-coded and may correct errors in patent numbers and the like that I've periodically found elsewhere. I see the value as threefold:

  1. As a supplement to other data, adding plaintiff type;
  2. As a quick, free guide to which patents were litigated in each case, or which cases involved a particular patent; and
  3. As a bulk data source showing trends in location, patent counts, etc., useful in its own right.

The database is here: http://npe.law.stanford.edu/. Kudos to Shawn Miller for all his hard work on this, and to Mark Lemley for having the vision to create it and get it funded and completed.

Friday, May 3, 2019

Fromer: Machines as Keepers of Trade Secrets

I really enjoyed Jeanne Fromer's new article, Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation, forthcoming in the N.Y.U. Law Review and available on SSRN. I think Professor Fromer has an important insight: greater use of machines in business, including but not limited to increasing automation (i.e., using machines rather than humans as the source of labor), has made it easier for companies to preserve the trade secrecy of their information. Secrecy is not only more technologically feasible, Fromer argues; the chances that information will spill out of the firm are also reduced, because human employees are less likely to leave and transfer the information to competitors, whether illegally in the form of trade secret misappropriation or legally in the form of unprotectable "general knowledge, skill, and experience."

Professor Fromer's main take-home is that we should be a little worried about this situation, especially when seen in light of Fromer's prior work on the crucial disclosure function of patents. Whereas patents (in theory at least) put useful information into the public domain through the disclosures collected in patent specifications, trade secret law does the opposite, providing potentially indefinite protection for information kept secret. Fromer's insight about the growing use of machines as alternatives to humans provides a new reason to worry about the impact of trade secrecy, which does not require disclosure and potentially lasts forever, on follow-on innovation and competition.

Here is what I see as a key passage:
In addition to the myriad of potential societal consequences that a shift toward automation would have on human happiness, subsistence, and inequality, automation that replaces a substantial amount of employment also turns more business knowledge into an impenetrable secret. How so? While a human can leave the employ of one business to take up employment at a competitor, a machine performing this employee’s task would never do so. Such machines would remain indefinitely at a business’s disposal, keeping all their knowledge self-contained within the business’s walls. Increasing automation thereby makes secrecy more robust than ever before. Whereas departing employees can legally take their elevated general knowledge and skill to new jobs, a key path by which knowledge spills across an industry, machines automating employees’ tasks will never take their general knowledge and skill elsewhere to competitors. Thus, by decreasing the number of employees that might carry their general knowledge and skill to new jobs and in any event the amount of knowledge and skill that each employee might have to take, increasing automation undermines a critical limitation on trade secrecy protection.
(17)

For more on trade secret law's "general knowledge, skill, and experience" status quo, see my new article, The General Knowledge, Skill, and Experience Paradox. I recently discussed this work on Brian Frye's legal scholarship podcast, Ipse Dixit, in an episode entitled "Camilla Hrdy on Trade Secrets and Their Discontents."

Wednesday, May 1, 2019

Measuring Patent Thickets

Measuring the effect of patenting on industry R&D is an age-old pursuit in innovation economics. It's hard. The latest interesting attempt comes from Greg Day (Georgia Business) and Michael Schuster (OK State, but soon to be Georgia Business). They look at more than one million patents and determine that large portfolios tend to crowd out startups. I'm totally with them on that. As I wrote extensively during the patent troll hysteria, patent portfolios and assertion by active companies can be harmful to innovation.

The question is how much, and what to do about it. Day and Schuster argue that the issue is patent thickets, as their abstract shows. The draft article, Patent Inequality, is on SSRN:
Using an original dataset of over 1,000,000 patents and empirical methods, we find that the patent system perpetuates inequalities between powerful and upstart firms. When faced with growing numbers of patents in a field, upstart inventors reduce research and development expenditures, while those already holding many patents increase their innovation efforts. This phenomenon affords entrenched firms disproportionate opportunities to innovate as well as utilize the resulting patents to create barriers to entry (e.g., licensing costs or potential litigation).
A hallmark of this type of behavior is securing large patent holdings to create competitive advantages associated with the size of the portfolio, regardless of the value of the underlying patents. Indeed, this strategy relies on quantity, not quality. Using a variety of models, we first find evidence that this strategy is commonplace in innovative markets. Our analysis then determines that innovation suffers when firms amass many low-value patents to exclude upstart inventors. From these results, we not only provide answers to a contentious debate about the effects of strategic patenting, but also suggest remedial policies to foster competition and innovation.
The article uses portfolio sizes and maintenance renewals to find correlations with investment. They find, unsurprisingly, that the more patents there are in an industry's portfolios, the lower the R&D investment. However, the causal takeaways seem ambiguous to me. It could be that patent thickets cause the reduced investment, or it could simply be that industries dominated by large players are less competitive and drive out startups. There are plenty of (non-patent) theorists who would predict such outcomes.

They also find that firms with large portfolios are more likely to renew their patents, holding other indicia of patent quality (and firm assets) equal. Even if we assume that their indicia of patent quality are complete (they use forward cites, number of inventors, and number of claims), the effect they find is really, really small. For the one reported industry (biology), the effect is something like a 0.00000982 decrease in the probability of lapse for each additional patent. This is statistically significant, I assume, because of the very large sample size and relatively small variation. But it seems barely economically significant. If you multiply it out, it means that each patent is about one percentage point less likely to lapse for every 1,000 patents in the portfolio (that is, from a 50% chance of lapse to a 49% chance). For IBM, the largest patentee of the time with about 25,000 patents during the relevant period, it's still only about a 25 percentage point change. Most patentees, even with portfolios, would be nowhere near that. I'm just not sure what we can read into those numbers - certainly not the broad policy prescriptions suggested in the paper, in my view.
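To make the back-of-the-envelope multiplication above concrete, here is a minimal sketch (in Python, purely for illustration) of how the reported per-patent coefficient scales with portfolio size. It assumes the coefficient is a change in the raw probability of lapse per additional portfolio patent, uses the hypothetical 50% baseline lapse rate from my example above, and picks the portfolio sizes (1,000; 10,000; 25,000) just for illustration; none of those assumptions comes from the paper beyond the coefficient itself.

    # Back-of-the-envelope scaling of the reported per-patent effect on lapse rates.
    # Assumptions (mine, for illustration): the coefficient is a change in the raw
    # probability of lapse per additional portfolio patent, and the baseline chance
    # of lapse is 50%, as in the example above.

    COEF_PER_PATENT = -0.00000982   # reported per-patent effect (assumed units)
    BASELINE_LAPSE = 0.50           # hypothetical baseline probability of lapse

    for portfolio_size in (1_000, 10_000, 25_000):  # 25,000 is roughly IBM-sized
        shift = COEF_PER_PATENT * portfolio_size
        print(f"{portfolio_size:>6} patents: lapse probability "
              f"{BASELINE_LAPSE:.1%} -> {BASELINE_LAPSE + shift:.1%} "
              f"({shift:+.2%})")

On those assumptions, the numbers in the paragraph above fall out directly: roughly a one percentage point drop in lapse probability per 1,000 portfolio patents, and roughly 25 points for a 25,000-patent portfolio.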

That said, this paper provides a lot of useful information about what drives portfolio patenting, as well as a comprehensive look at what drives maintenance rates. I would have liked to see litigation data mixed in, as that will certainly affect renewals one way or the other, but even as is, this paper is an interesting read.