Official announcement and application information here, and also pasted below. We're looking for someone to start this summer, and the application deadline is 2/29.
Research Fellow, Intellectual Property, Stanford Law School
Description: Professor Mark Lemley and Professor Lisa Ouellette are looking for a research fellow with expertise in qualitative or quantitative empirical studies to help with empirical projects related to intellectual property. There will be opportunities to work closely with professors on academic projects and possibly to co-author papers. The research fellow will have the opportunity to enhance their knowledge of IP law.
Patent & IP blog, discussing recent news & scholarship on patents, IP theory & innovation.
Sunday, January 31, 2016
Friday, January 29, 2016
Planning a patent citation study? Read this first.
Posted by Lisa Larrimore Ouellette
Michael's post this morning about how patent citation data has changed over time reminded me of a nice review of the patent citation literature I saw recently by economists Adam Jaffe and Gaëtan de Rassenfosse: Patent Citation Data in Social Science Research: Overview and Best Practices. (Unfortunately, you need to be in an academic or government network or otherwise have access to NBER papers to read for free.) For those who are new to the field, this is a great place to start. In particular, it warns you about some common pitfalls, such as different citation practices across patent offices, changes across time and across technologies, examiner heterogeneity, and strategic effects. I think it understates the importance of recent work by Abrams et al. on why some high-value patents seem to receive few citations, but overall, it seems like a nice overview of the area.
Rethinking Patent Citations
Posted by Michael Risch
Patent citations are one of the coins of the economic analysis realm. Many studies have used which patents cite which others to determine value, technological relatedness, or other opaque information about a batch of patents. There are some drawbacks, of course, including recent work that questions the role of citations in calculating value or in predicting patent validity.
But what if citing itself has changed over the years? What if easier access to search engines, strategic behavior, or other factors have changed citing patterns? This would mean that citation analysis from the past might yield different answers than citation analysis today.
This is the question tackled by Jeffrey Kuhn and Kenneth Younge in Patent Citations: An Examination of the Data Generating Process, now on SSRN. Their abstract:
Existing measures of innovation often rely on patent citations to indicate intellectual lineage and impact. We show that the data generating process for patent citations has changed substantially since citation-based measures were validated a decade ago. Today, far more citations are created per patent, and the mean technological similarity between citing and cited patents has fallen significantly. These changes suggest that the use of patent citations for scholarship needs to be re-validated. We develop a novel vector space model to examine the information content of patent citations, and show that methods for sub-setting and/or weighting informative citations can substantially improve the predictive power of patent citation measures.

I haven't read the methods for improving predictive power carefully enough yet to comment on them, so I'll limit my comments to the factual predicate: that citation patterns are changing.
As I read the paper, they find that there is a subset of patents that cite significantly more patents than others, and that those citations are attenuated from the technology listed in those patents -- they are filler.
On the one hand, this makes perfect intuitive sense to me, for a variety of reasons. Indeed, in my own study of patents in litigation, I found that more citations were associated with invalidity findings. The conventional wisdom is the contrary: that more backward citations mean the patent is strong, because the patent surmounted all that prior art. But if the prior art is filler, then there is no reason to expect a validity finding.
On the other hand, I wonder about the word-matching methodology used here. While it's clever, might it simply capture patentee wordsmithing? People often think that patent lawyers use complex words to say simple ideas (mechanical interface device = plug). Theoretically this shouldn't matter if patentees wordsmith at the same rate over time, but if newer patents add filler words in addition to more citations, then perhaps the lack of matching words also reflects changes in the data over time.
These are just a few thoughts - the data in the paper is both fascinating and illuminating, and there are plenty of nice charts that illustrate it well, along with ideas for better analyzing citations that I think deserve some close attention.
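For a concrete sense of what measuring the technological similarity between citing and cited patents can look like, here is a minimal sketch in Python. It is not Kuhn and Younge's actual model; the sample patent texts and the choice of TF-IDF vectors with cosine similarity are illustrative assumptions only.

```python
# Minimal sketch: text similarity between a citing patent and its cited patents.
# The texts below are hypothetical stand-ins; a real study would use full
# patent abstracts or claims.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

citing_text = "A deposit sweep system that allocates customer funds across insured accounts."
cited_texts = [
    "A method for allocating deposits among a network of insured bank accounts.",
    "An apparatus for injection molding of plastic housings.",  # plausibly a 'filler' citation
]

# Represent each document as a TF-IDF vector and compare the citing patent
# (row 0) against each cited patent (rows 1+).
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([citing_text] + cited_texts)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for text, score in zip(cited_texts, scores):
    print(f"{score:.2f}  {text}")
```

A low score for a cited patent, as in the second example, is the kind of signal that sub-setting or weighting informative citations (as the abstract describes) would aim to exploit.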
Thursday, January 28, 2016
Christopher Funk: Protecting Trade Secrets in Patent Litigation
Posted by Camilla Hrdy
What should a court do when attorneys involved in patent litigation get access to the other party's unpatented trade secrets while they are also involved in amending or drafting new patents for their client? Take the facts of In re Deutsche Bank. After being sued for allegedly infringing patents on financial deposit-sweep services, the defendant Deutsche Bank had to reveal under seal significant amounts of confidential information about its allegedly infringing products, including source code and descriptions of its deposit-sweep services. Yet the plaintiff, Island Intellectual Property, was simultaneously in the process of obtaining nineteen more patents covering the same general technology, and those patents were being drafted by the same patent attorneys who were viewing Deutsche Bank's secrets in the course of litigation.
Monday, January 25, 2016
Consent and authorization under the CFAA
Posted by Michael Risch
James Grimmelmann (Maryland) has posted Consenting to Computer Use on SSRN. It's a short, terrific essay on how we should think about defining authorized access and exceeding authorized access under the CFAA, what I've previously called a very scary statute.
At the heart of the matter is this: how do we know when use of a publicly accessible computer is authorized or when that authorization has been exceeded? Grimmelmann suggests that the question is not as new as it seems; rather than focusing on the behavior of the accused, we should be looking at the consent given by the computer owner. And there's plenty of law, analysis, and philosophy relating to consent. The abstract is here:
The federal Computer Fraud and Abuse Act (CFAA) makes it a crime to “access[] a computer without authorization or exceed[] authorized access.” Courts and commentators have struggled to explain what types of conduct by a computer user are “without authorization.” But this approach is backwards; authorization is not so much a question of what a computer user does, as it is a question of what a computer owner allows. In other words, authorization under the CFAA is an issue of consent, not conduct; to understand authorization, we need to understand consent. Building on Peter Westen’s taxonomy of consent, I argue that we should distinguish between the factual question of what uses a computer owner manifests her consent to and the legal question of what uses courts will deem her to have consented to. Doing so allows us to distinguish the different kinds of questions presented by different kinds of CFAA cases, and to give clearer and more precise answers to all of them. Some cases require careful fact-finding about what reasonable computer users in the defendant’s position would have known about the owner’s expressed intentions; other cases require frank policy judgments about which kinds of unwanted uses should be considered serious enough to trigger the CFAA.

On the one hand, I thought the analysis was really helpful. It separates legal from factual consent, for example. On the other hand, it does not offer an answer to the conundrum (nor does it pretend to - it is admittedly a first step): in the borderline case, how is a user to know in advance whether a particular action will be consented to?
Grimmelmann moves the ball forward by recognizing that legal consent (which can be imposed by law) may exist even if factual consent is implicitly or explicitly lacking. But diverging views of what the law should allow (along with zealous prosecutors and no ex ante notice) still leave the CFAA pretty scary in my view.
Thursday, January 21, 2016
A Literature Review of Patenting and Economic History
Posted by Michael Risch
Petra Moser (NYU Stern) has posted Patents and Innovation in Economic History on SSRN. It is a literature review of economic history papers relating to patents and innovation. In general, I think the prestige market undervalues literature reviews, because who cares that you can summarize what everyone in the field should already know about (or can look up on their own)? In practice, though, I think there is great value in such reviews. First, knowing about articles and having them listed, organized, and discussed are two different things. Second, not everyone is in the field, nor does everyone take the time to look up every article. Even in areas where I consider myself a subject matter expert (to avoid critique, I'll leave out which), a well-done literature review will often turn up at least one writing I was unaware of or frame prior work in a way I hadn't thought of.
And so it is with this draft; the abstract is below. Many different studies are discussed, dating back to the 1950s. They are organized by topic and many are helpfully described. As you would expect, more space is devoted to Moser's work, and the analysis and critique tends to favor her point of view on the evidence (though she does point out some limitations of her own work). To that, my response is: if you don't like the angle or focus, write your own literature review that highlights all the other studies and their viewpoints. Better yet, do some Bayesian analysis!
A strong tradition in economic history, which primarily relies on qualitative evidence and statistical correlations, has emphasized the importance of intellectual property rights in encouraging innovation. Recent improvements in empirical methodology - through the creation of new data sets and advances in identification - challenge this traditional view. These empirical results provide a more nuanced view of the effects of intellectual property, which suggests that, whenever intellectual property rights have been too broad or too strong, they have discouraged innovation. This paper summarizes existing results from this research agenda and presents some open questions.
Tuesday, January 19, 2016
Do Patents Help Startups?
Posted by Michael Risch
Do patents help startups? I've debated this question many times over the years, and no one seems to have a definitive answer. My own research, along with that of others, shows that patents are associated with higher levels of venture funding. In my own data (which comes from the Kauffman Firm Survey), startups with patents were 10 times as likely to have venture funding as startups without patents.
But even this is not definitive. First, a small fraction of firms--even of those with patents--get venture funding, so it is unclear what role patents play. Second, causality is notoriously hard to show, especially where unobserved factors may lead to both patenting and success. Third, timing is also difficult; many have answered my simple data with the argument that it is the funding that causes patenting, and not vice-versa. Fourth (and contrary to the third in a way), signaling theory suggests that the patent (and even the patent application) signals value to investors, regardless of the value of the underlying invention.
Following my last post, I'll discuss here a paper that uses granular application data to get at some causality questions. The paper is The Bright Side of Patents by Joan Farre-Mensa (Harvard Business School), Deepak Hegde (NYU Stern School of Business), and Alexander Ljungqvist (NYU Finance Department). Here is the abstract:
Motivated by concerns that the patent system is hindering innovation, particularly for small inventors, this study investigates the bright side of patents. We examine whether patents help startups grow and succeed using detailed micro data on all patent applications filed by startups at the U.S. Patent and Trademark Office (USPTO) since 2001 and approved or rejected before 2014. We leverage the fact that patent applications are assigned quasi-randomly to USPTO examiners and instrument for the probability that an application is approved with individual examiners’ historical approval rates. We find that patent approvals help startups create jobs, grow their sales, innovate, and reward their investors. Exogenous delays in the patent examination process significantly reduce firm growth, job creation, and innovation, even when a firm’s patent application is eventually approved. Our results suggest that patents act as a catalyst that sets startups on a growth path by facilitating their access to capital. Proposals for patent reform should consider these benefits of patents alongside their alleged costs.

The sample size is large: more than 45,000 companies, which the authors believe constitute all the startups filing for patents during their sample years. For those not steeped in econometric lingo, the PTO examiner "instrument" is a tool that allows the authors to make causal inferences from the data. More on this after the jump.
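For a concrete picture of what the examiner instrument does, here is a minimal sketch of a two-stage least squares estimate on simulated data. The variable names and the data generating process are hypothetical assumptions, not the authors' code or dataset, and a real analysis would use a dedicated IV routine that computes correct standard errors. The intuition it illustrates: because examiners differ in leniency and applications are assigned quasi-randomly, the examiner's approval rate shifts the probability of getting a patent for reasons unrelated to unobserved startup quality.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data with hypothetical variables: examiner_rate is the instrument
# (each examiner's historical approval rate), quality is the unobserved confounder.
rng = np.random.default_rng(0)
n = 5000
examiner_rate = rng.uniform(0.4, 0.9, n)
quality = rng.normal(size=n)
approved = (rng.uniform(size=n) < examiner_rate + 0.3 * quality).astype(float)
growth = 0.5 * approved + 0.8 * quality + rng.normal(size=n)   # true causal effect = 0.5

# Naive OLS is biased because unobserved quality drives both approval and growth.
naive = sm.OLS(growth, sm.add_constant(approved)).fit()

# Two-stage least squares: (1) predict approval from examiner leniency,
# (2) regress growth on the predicted (exogenous) part of approval.
first_stage = sm.OLS(approved, sm.add_constant(examiner_rate)).fit()
second_stage = sm.OLS(growth, sm.add_constant(first_stage.fittedvalues)).fit()

print("naive OLS estimate:", round(naive.params[1], 2))
print("IV (2SLS) estimate:", round(second_stage.params[1], 2))
```

On simulated data like this, the naive estimate is pulled away from the true effect by the confounder, while the instrumented estimate recovers something close to it; that is the logic behind using examiner leniency to identify the causal effect of patent grants.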
Wednesday, January 13, 2016
A New Source for Using Patent Application Data for Empirical Research
Posted by Michael Risch
Getting detailed patent application data is notoriously difficult. Traditionally, such information was only available via Public PAIR, the PTO's interface for getting application data, which is useful but clunky for bulk research. Thus, there haven't been too many such papers. Sampat & Lemley was an early and well-known paper from 2009, which looked at a cross-section of 10,000 applications. That was surely daunting work at the time.
Since then, FOIA requests and bulk downloads have allowed for more comprehensive papers. Frakes & Wasserman have papers using a more comprehensive dataset, as does Tu.
But now the PTO has released an even more comprehensive dataset, available to the masses. This is a truly exciting day for people who have yearned for better patent application data but lacked the resources to obtain it. Here's an abstract introducing the dataset, by Graham, Marco & Miller -- The USPTO Patent Examination Research Dataset: A Window on the Process of Patent Examination:
A surprisingly small amount of empirical research has been focused on the process of obtaining a patent grant from the United States Patent and Trademark Office (PTO). The purpose of this document is to describe the Patent Examination Dataset (PatEX), and to make a large amount of information from the Public Patent Application Information Retrieval system (Public PAIR) more readily available to researchers. PatEX includes records on over 9 million US patent applications, with information complete as of January 24, 2015 for all applications included in Public PAIR with filing dates prior to January 1, 2015. Variables in PatEX cover most of the relevant information related to US patent examination, including characteristics of inventions, applications, applicants, attorneys, and examiners, and status codes for all actions taken, by both the applicant and examiner, throughout the examination process. A significant section of this documentation describes the selectivity issues that arise from the omission of “nonpublic” applications. We find that the selection issues were much more pronounced for applications received prior to the implementation of the American Inventors Protection Act (AIPA) in late 2000. We also find that the extent of any selection bias will be at least partially determined by the sub-population of interest in any given research project.

That's right, data on 9 million patent applications - the patents granted, and the patent applications not granted (after they became published in 2000). The paper does a comparison with the internal PTO records (which show non-public applications) to determine whether there is any bias in the data. There are a few areas where there isn't perfect alignment, but the data is generally representative. That said, be sure to read the paper to make sure your application sample is representative (much older applications, for example, have more trouble aligning with USPTO internal data).
The data isn't completely straightforward - each "tab" in Public PAIR is a different data file, so users will have to merge them as needed (easily done in any statistics package, SQL, or even with Excel lookup functions).
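As a concrete illustration, here is a minimal sketch of merging two such files with pandas in Python. The file and column names are hypothetical placeholders; check the documentation that accompanies the dataset for the real ones.

```python
import pandas as pd

# Hypothetical file names standing in for two PatEX "tabs" (e.g., an
# application-level file and a transaction/status-code file).
apps = pd.read_csv("application_data.csv", dtype=str)
status = pd.read_csv("status_codes.csv", dtype=str)

# Join on the shared application identifier, keeping every application row.
merged = apps.merge(status, on="application_number", how="left")
merged.to_csv("applications_with_status.csv", index=False)
```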
Thanks to Alan Marco, Chief Economist at the PTO, as well as anyone else involved in getting this project done. I believe it will be of great long term research value.
In my next post, I'll highlight a recent paper that uses granular examination data to useful ends.
Monday, January 11, 2016
Samuel Ernst on Reviving the Reverse Doctrine of Equivalents
Posted by Lisa Larrimore Ouellette
Samuel Ernst (Chapman University) has recently posted The Lost Precedent of the Reverse Doctrine of Equivalents, which argues that this doctrine is the solution to the patent crisis. The reverse doctrine of equivalents was established by the Supreme Court in the 1898 case Boyden Power-Brake v. Westinghouse, in which the Court wrote that "[t]he patentee may bring the defendant within the letter of his claims, but if the latter has so far changed the principle of the device that the claims of the patent, literally construed, have ceased to represent his actual invention," the defendant does not infringe.
Here is Professor Ernst's abstract:
Proponents of legislative patent reform argue that the current patent system perversely impedes true innovation in the name of protecting a vast web of patented inventions, the majority of which are never even commercialized for the benefit of the public. Opponents of such legislation argue that comprehensive, prospective patent reform legislation would harm the incentive to innovate more than it would curb the vexatious practices of non-practicing entities. But while the “Innovation Act” wallows in Congress, there is a common law tool to protect innovation from the patent thicket lying right under our noses: the reverse doctrine of equivalents. Properly applied, this judge-made doctrine can be used to excuse infringement on a case-by-case basis if the court determines that the accused product is substantially superior to the patented invention, despite proof of literal infringement. Unfortunately, the reverse doctrine is disfavored by the Court of Appeals for the Federal Circuit and therefore rarely applied. It was not always so. This article is the first comprehensive study of published opinions applying the reverse doctrine of equivalents to excuse infringement between 1898, when the Supreme Court established the doctrine, and the 1982 creation of the Federal Circuit. This “lost precedent” reveals a flexible doctrine that takes into account the technological and commercial superiority of the accused product to any embodiment of the patented invention made by the patent-holder. An invigorated reverse doctrine of equivalents could therefore serve to protect true innovations from uncommercialized patents on a case-by-case basis, without the potential harm to the innovation incentive that prospective patent legislation might cause.

Interestingly, according to Ernst, "the Second, Sixth, and Ninth Circuits had precedent requiring that the district court must always consider reverse equivalents prior to determining infringement," and the standard was only whether the accused product was "substantially changed," not whether it was a "radical improvement" (a standard that emerged from scholarly articles, not case law).
I don't have high hopes for the revival of this doctrine, but the Federal Circuit has made clear that it is not dead yet; for example, Plant Genetic Systems v. DeKalb (2003) quoted an earlier case as saying that "the judicially-developed 'reverse doctrine of equivalents' . . . may be safely relied upon to preclude improper enforcement against later developers." So litigators should keep this in their toolkits, just in case.
Tuesday, December 22, 2015
Burk: Is Dolly patentable subject matter in light of Alice?
Posted by Lisa Larrimore Ouellette
Dan Burk's work should already be familiar to those who follow patentable subject matter debates (see, e.g., here, here, and here). In a new essay, Dolly and Alice, he questions whether the Federal Circuit's May 2014 In re Roslin decision—holding clones such as Dolly to not be patentable subject matter—should have come out differently under the Supreme Court's June 2014 decision in Alice v. CLS Bank. Short answer: yes.
Burk does not have kind words for either the Federal Circuit or the Supreme Court, and he reiterates his prior criticism of developments like the gDNA/cDNA distinction in Myriad. His analysis of how Roslin should be analyzed under Alice begins on p. 11 of the current draft:
[E]ven assuming that the cloned sheep failed the first prong of the Alice test, the analysis would then move to the second prong to look for an "inventive concept" that takes the claimed invention beyond an attempt to merely capture the prohibited category of subject matter identified in the first step. . . . The Roslin patent claims surely entail such an inventive concept in the method of creating the sheep. The claims recite "clones," which the specification discloses were produced by a novel method that is universally acknowledged to have been a highly significant and difficult advance in reproductive technology—an "inventive concept" if there ever was one . . . [which] was not achieved via conventional, routine, or readily available techniques . . . .

But while Burk thinks Roslin might have benefited from the Alice framework, he also contends that this exercise demonstrates the confusion Alice creates across a range of doctrines, and particularly for product by process claims. He concludes by drawing an interesting parallel to the old Durden problem of how the novelty of a starting material affects the patentability of a process, and he expresses skepticism that there is any coherent way out; rather, he thinks Alice "leaves unsettled questions that will haunt us for years to come."
Tuesday, December 15, 2015
3 New Copyright Articles: Buccafusco, Bell & Parchomovsky, Grimmelmann
Posted by Lisa Larrimore Ouellette
My own scholarship and scholarly reading focus most heavily on patent law, but I've recently come across a few interesting copyright papers that seem worth highlighting:
- Christopher Buccafusco, A Theory of Copyright Authorship – Argues that "authorship involves the intentional creation of mental effects in an audience," which expands copyrightability to gardens, cuisine, and tactile works, but withdraws it from aspects of photographs, taxonomies, and computer programs.
- Abraham Bell & Gideon Parchomovsky, The Dual-Grant Theory of Fair Use – Argues that rather than addressing market failure, fair use calibrates the allocation of uses among authors and the public. A prima facie finding of fair use in certain categories (such as political speech) could only be defeated by showing the use would eliminate sufficient incentives for creation.
- James Grimmelmann, There's No Such Thing as a Computer-Authored Work – And It's a Good Thing, Too – "Treating computers as authors for copyright purposes is a non-solution to a non-problem. It is a non-solution because unless and until computer programs can qualify as persons in life and law, it does no practical good to call them 'authors' when someone else will end up owning the copyright anyway. And it responds to a non-problem because there is nothing actually distinctive about computer-generated works."
Are there other copyright pieces posted this fall that I should take a look at?
Update: For readers not on Twitter, Chris Buccafusco added some additional suggestions:
Thanks @PatentScholar How about:
@PamelaSamuelson https://t.co/L9ypUO3vTo
Shyam: https://t.co/jR2ZrAbCHb
Bair: https://t.co/9ujC4tWPns
— Chris Buccafusco (@cjbuccafusco) December 15, 2015
Tuesday, December 8, 2015
Bernard Chao on Horizontal Innovation and Interface Patents
Posted by Lisa Larrimore Ouellette
Bernard Chao has posted an interesting new paper, Horizontal Innovation and Interface Patents (forthcoming in the Wisconsin Law Review), on inventions whose value comes merely from compatibility rather than improvements on existing technology. And I'm grateful to him for writing an abstract that concisely summarizes the point of the article:
Scholars understandably devote a great deal of effort to studying how well patent law works to incentivize the most important inventions. After all, these inventions form the foundation of our new technological age. But very little time is spent focusing on the other end of the spectrum, inventions that are no better than what the public already has. At first blush, studying such “horizontal” innovation seems pointless. But this inquiry actually reveals much about how patents can be used in unintended, and arguably, anticompetitive ways.
This issue has roots in one unintuitive aspect of patent law. Despite the law’s goal of promoting innovation, patents can be obtained on inventions that are no better than existing technology. Such patents might appear worthless, but companies regularly obtain these patents to cover interfaces. That is because interface patents actually derive value from two distinct characteristics. First, they can have “innovation value” that is based on how much better the patented interface is than prior technology. Second, interface patents can also have “compatibility value.” In other words, the patented technology is often needed to make products operate (i.e. compatible) with a particular interface. In practical terms, this means that an interface patent that is not innovative can still give a company the ability to foreclose competition.
This undesirable result is a consequence of how patent law has structured its remedies. Under current law, recoveries implicitly include both innovation and compatibility values. This Article argues that the law should change its remedies to exclude the latter kind of recovery. This proposal has two benefits. It would eliminate wasteful patents on horizontal technology. Second, and more importantly, the value of all interface patents would be better aligned with the goals of the patent system. To achieve these outcomes, this Article proposes changes to the standards for awarding injunctions, lost profits and reasonable royalties.

The article covers examples ranging from razor/handle interfaces to Apple's patented Lightning interface, so it is a fun read. And it also illustrates what seems like an increasing trend in patent scholarship, in which authors turn to remedies as the optimal policy tool for effecting their desired changes.
Wednesday, December 2, 2015
Sampat & Williams on the Effect of Gene Patents on Follow-on Innovation
Posted by Lisa Larrimore Ouellette
Bhaven Sampat (Columbia Public Health) and Heidi Williams (MIT Econ) are two economists whose work on innovation is always worth reading. I've discussed a number of their papers before (here, here, here, here, and here), and Williams is now a certified genius. They've posted a new paper, How Do Patents Affect Follow-On Innovation? Evidence from the Human Genome, which is an important follow-up to Williams's prior work on gene patents. Here is the abstract:
We investigate whether patents on human genes have affected follow-on scientific research and product development. Using administrative data on successful and unsuccessful patent applications submitted to the US Patent and Trademark Office, we link the exact gene sequences claimed in each application with data measuring follow-on scientific research and commercial investments. Using this data, we document novel evidence of selection into patenting: patented genes appear more valuable — prior to being patented — than non-patented genes. This evidence of selection motivates two quasi-experimental approaches, both of which suggest that on average gene patents have had no effect on follow-on innovation.

Their second empirical design is particularly clever: they use the leniency of the assigned patent examiner as an instrumental variable for which patent applications are granted patents. Highly recommended.
Saturday, November 28, 2015
Tim Holbrook on Induced Patent Infringement at the Supreme Court
Posted by Lisa Larrimore Ouellette
Tim Holbrook (Emory Law) has a new article, The Supreme Court's Quiet Revolution in Induced Patent Infringement (forthcoming in the Notre Dame Law Review), arguing that with all the hand-wringing over Supreme Court patentable subject matter cases, scholars have missed the substantial changes the Court has wrought in induced patent infringement. Here is the abstract:
The Supreme Court over the last decade or so has reengaged with patent law. While much attention has been paid to the Court’s reworking of what constitutes patent eligible subject matter and enhancing tools to combat “patent trolls,” what many have missed is the Court’s reworking of the contours of active inducement of patent infringement under 35 U.S.C. § 271(b). The Court has taken the same number of § 271(b) cases as subject matter eligibility cases – four. Yet this reworking has not garnered much attention in the literature. This article offers the first comprehensive assessment of the Court’s efforts to define active inducement. In so doing, it identifies the surprising significance of the Court’s most recent case, Commil USA, LLC v. Cisco Systems, Inc., where the Court held that a good faith belief on the part of the accused inducer cannot negate the mental state required for inducement – the intent to induce acts of infringement. In so doing, the Court moved away from its policy of encouraging challenges to patent validity as articulated in Lear, Inc. v. Adkins and its progeny. This step away from Lear is significant and surprising, particularly where critiques of the patent system suggest there are too many invalid patents creating issues for competition. This article critiques these aspects of Commil and then addresses lingering, unanswered questions. In particular, this article suggests that a good faith belief that the induced acts are not infringing, which remains as a defense, should only act as a shield against past damages and not against prospective relief such as injunctions or ongoing royalties. The courts so far have failed to appreciate this important temporal dynamic.

The four cases he's talking about are Grokster, Global-Tech, Limelight, and Commil. (You might say, "Wait, Grokster is a copyright case!" But Holbrook explains the substantial impact it had on patent law.) I think the article is worth a read, and that the concluding point on damages is quite interesting.
Tuesday, November 24, 2015
Decoding the Patent Venue Statute
Posted by Michael Risch
Last Friday, Colleen Chien and I published an op-ed in the Washington Post arguing that the courts and/or Congress should take a hard look at venue provisions. It was a fun and challenging project, because we worked hard to delineate where we agreed and where we disagreed. One area where we weren't sure if we disagreed or not was whether the 2011 amendment to the general venue provisions should affect patent venue.
This is a thorny statutory interpretation issue, and because we didn't have space to discuss it in the op-ed (nor did we agree on all the details), I thought I would lay out my view of the issues here. My views don't speak for Colleen. Further, while my views fall on one side, they do so based solely on statutory interpretation. I don't have a horse in the policy race other than to say that it's important, it's complicated, and it should be considered.
Here is my tracing of the history: