Official announcement and application information here, and also pasted below. We're looking for someone to start this summer, and the application deadline is 2/29.
Research Fellow, Intellectual Property, Stanford Law School
Description: Professor Mark Lemley and Professor Lisa Ouellette are looking for a research fellow with expertise in qualitative or quantitative empirical studies to help with empirical projects related to intellectual property. There will be opportunities to work closely with professors on academic projects and possibly to co-author papers. The research fellow will have the opportunity to enhance their knowledge of IP law.
Patent & IP blog, discussing recent news & scholarship on patents, IP theory & innovation.
Sunday, January 31, 2016
Friday, January 29, 2016
Planning a patent citation study? Read this first.
Posted by
Lisa Larrimore Ouellette
Michael's post this morning about how patent citation data has changed over time reminded me of a nice review of the patent citation literature I saw recently by economists Adam Jaffe and Gaëtan de Rassenfosse: Patent Citation Data in Social Science Research: Overview and Best Practices. (Unfortunately, you need to be in an academic or government network or otherwise have access to NBER papers to read for free.) For those who are new to the field, this is a great place to start. In particular, it warns you about some common pitfalls, such as different citation practices across patent offices, changes across time and across technologies, examiner heterogeneity, and strategic effects. I think it understates the importance of recent work by Abrams et al. on why some high-value patents seem to receive few citations, but overall, it seems like a nice overview of the area.
Rethinking Patent Citations
Posted by
Michael Risch
Patent citations are one of the coins of the economic analysis realm. Many studies have used which patents cite which others to determine value, technological relatedness, or other opaque information about a batch of patents. There are some drawbacks, of course, including recent work that questions the role of citations in calculating value or in predicting patent validity.
But what if citing itself has changed over the years? What if easier access to search engines, strategic behavior, or other factors have changed citing patterns? This would mean that citation analysis from the past might yield different answers than citation analysis today.
This is the question tackled by Jeffrey Kuhn and Kenneth Younge in Patent Citations: An Examination of the Data Generating Process, now on SSRN. Their abstract:
Existing measures of innovation often rely on patent citations to indicate intellectual lineage and impact. We show that the data generating process for patent citations has changed substantially since citation-based measures were validated a decade ago. Today, far more citations are created per patent, and the mean technological similarity between citing and cited patents has fallen significantly. These changes suggest that the use of patent citations for scholarship needs to be re-validated. We develop a novel vector space model to examine the information content of patent citations, and show that methods for sub-setting and/or weighting informative citations can substantially improve the predictive power of patent citation measures.

I haven't read the methods for improving predictive power carefully enough yet to comment on them, so I'll limit my comments to the factual predicate: that citation patterns are changing.
As I read the paper, they find that there is a subset of patents that cite significantly more patents than others, and that those citations are attenuated from the technology listed in those patents -- they are filler.
On the one hand, this makes perfect intuitive sense to me, for a variety of reasons. Indeed, in my own study of patents in litigation, I found that more citations were associated with invalidity findings. The conventional wisdom is the contrary, that more backward citations mean the patent is strong, because the patent surmounted all that prior art. But if the prior art is filler, then there is no reason to expect a validity finding.
On the other hand, I wonder about the word-matching methodology used here. While it's clever, might it represent patentee wordsmithing? People often think that patent lawyers use complex words to say simple ideas (mechanical interface device = plug). Theoretically this shouldn't matter if patentees wordsmith at the same rate over time, but if newer patents add filler words in addition to more citations, then perhaps a lack of matching words also reflects changes in the data over time.
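The word-matching worry is easier to see with the mechanics on the table. A vector space model of the kind the paper describes scores each citing/cited pair by the similarity of their term vectors; here is a minimal bag-of-words sketch in Python (the patent snippets are invented, and this deliberately simple model is not the authors' actual implementation):

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Bag-of-words term counts for a lowercased text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two Counter term vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical citing/cited patent snippets
citing = "a mechanical interface device for coupling an electrical plug"
cited = "an electrical plug with a mechanical coupling interface"
unrelated = "a pharmaceutical composition for treating hypertension"

print(cosine_similarity(term_vector(citing), term_vector(cited)))      # relatively high
print(cosine_similarity(term_vector(citing), term_vector(unrelated)))  # much lower
```

The wordsmithing concern plays out directly in this framework: if one drafter writes "plug" and another writes "mechanical interface device," the shared-term count, and with it the similarity score, drops even when the underlying technologies match.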
These are just a few thoughts - the data in the paper is both fascinating and illuminating, and there are plenty of nice charts that illustrate it well, along with ideas for better analyzing citations that I think deserve close attention.
Thursday, January 28, 2016
Christopher Funk: Protecting Trade Secrets in Patent Litigation
Posted by
Camilla Hrdy
What should a court do when attorneys involved in patent litigation get access to the other party's unpatented trade secrets while they are also involved in amending or drafting new patents for their client? Take the facts of In re Deutsche Bank. After being sued for allegedly infringing patents on financial deposit-sweep services, the defendant Deutsche Bank had to reveal under seal significant amounts of confidential information about its allegedly infringing products, including source code and descriptions of its deposit-sweep services. Yet the plaintiff, Island Intellectual Property, was simultaneously in the process of obtaining nineteen more patents covering the same general technology; and those patents were being drafted by the same patent attorneys who were viewing Deutsche Bank's secrets in the course of litigation.
Monday, January 25, 2016
Consent and authorization under the CFAA
Posted by
Michael Risch
James Grimmelmann (Maryland) has posted Consenting to Computer Use on SSRN. It's a short, terrific essay on how we should think about defining "authorized access" and "exceeding authorized access" under the CFAA, a statute I've previously called very scary.
At the heart of the matter is this: how do we know when use of a publicly accessible computer is authorized or when that authorization has been exceeded? Grimmelmann suggests that the question is not as new as it seems; rather than focusing on the behavior of the accused, we should be looking at the consent given by the computer owner. And there's plenty of law, analysis, and philosophy relating to consent. The abstract is here:
The federal Computer Fraud and Abuse Act (CFAA) makes it a crime to “access[] a computer without authorization or exceed[] authorized access.” Courts and commentators have struggled to explain what types of conduct by a computer user are “without authorization.” But this approach is backwards; authorization is not so much a question of what a computer user does, as it is a question of what a computer owner allows. In other words, authorization under the CFAA is an issue of consent, not conduct; to understand authorization, we need to understand consent. Building on Peter Westen’s taxonomy of consent, I argue that we should distinguish between the factual question of what uses a computer owner manifests her consent to and the legal question of what uses courts will deem her to have consented to. Doing so allows us to distinguish the different kinds of questions presented by different kinds of CFAA cases, and to give clearer and more precise answers to all of them. Some cases require careful fact-finding about what reasonable computer users in the defendant’s position would have known about the owner’s expressed intentions; other cases require frank policy judgments about which kinds of unwanted uses should be considered serious enough to trigger the CFAA.

On the one hand, I thought the analysis was really helpful. It separates legal from factual consent, for example. On the other hand, it does not offer an answer to the conundrum (nor does it pretend to - it is admittedly a first step): in the borderline case, how is a user to know in advance whether a particular action will be consented to?
Grimmelmann moves the ball forward by distinguishing legal consent (which can be imposed by law) even if factual consent is implicitly or explicitly lacking. But diverging views of what the law should allow (along with zealous prosecutors and no ex ante notice) still leaves the CFAA pretty scary in my view.
Thursday, January 21, 2016
A Literature Review of Patenting and Economic History
Posted by
Michael Risch
Petra Moser (NYU Stern) has posted Patents and Innovation in Economic History on SSRN. It is a literature review of economic history papers relating to patents and innovation. In general, I think the prestige market undervalues literature reviews, because who cares that you can summarize what everyone in the field should already know (or can look up on their own)? In practice, though, I think there is great value in such reviews. First, knowing about articles and having them listed, organized, and discussed are two different things. Second, not everyone is in the field, nor does everyone take the time to look up every article. Even in areas where I consider myself a subject matter expert (to avoid critique, I'll leave out which), a well-done literature review will often turn up at least one writing I was unaware of or frame prior work in a way I hadn't thought of.
And so it is with this draft; the abstract is below. Many different studies are discussed, dating back to the 1950s. They are organized by topic and many are helpfully described. As you would expect, more space is devoted to Moser's work, and the analysis and critique tends to favor her point of view on the evidence (though she does point out some limitations of her own work). To that, my response is: if you don't like the angle or focus, write your own literature review that highlights all the other studies and their viewpoints. Better yet, do some Bayesian analysis!
A strong tradition in economic history, which primarily relies on qualitative evidence and statistical correlations, has emphasized the importance of intellectual property rights in encouraging innovation. Recent improvements in empirical methodology - through the creation of new data sets and advances in identification - challenge this traditional view. These empirical results provide a more nuanced view of the effects of intellectual property, which suggests that, whenever intellectual property rights have been too broad or too strong, they have discouraged innovation. This paper summarizes existing results from this research agenda and presents some open questions.
Tuesday, January 19, 2016
Do Patents Help Startups?
Posted by
Michael Risch
Do patents help startups? I've debated this question many times over the years, and no one seems to have a definitive answer. My own research, along with that of others, shows that patents are associated with higher levels of venture funding. In my own data (which comes from the Kauffman Firm Survey), startups with patents were 10 times as likely to have venture funding as startups without patents.
But even this is not definitive. First, a small fraction of firms--even of those with patents--get venture funding, so it is unclear what role patents play. Second, causality is notoriously hard to show, especially where unobserved factors may lead to both patenting and success. Third, timing is also difficult; many have answered my simple data with the argument that it is the funding that causes patenting, and not vice-versa. Fourth (and contrary to the third in a way), signaling theory suggests that the patent (and even the patent application) signals value to investors, regardless of the value of the underlying invention.
Following my last post, I'll discuss here a paper that uses granular application data to get at some causality questions. The paper is The Bright Side of Patents by Joan Farre-Mensa (Harvard Business School), Deepak Hegde (NYU Stern School of Business), and Alexander Ljungqvist (NYU Finance Dept.). Here is the abstract:
Motivated by concerns that the patent system is hindering innovation, particularly for small inventors, this study investigates the bright side of patents. We examine whether patents help startups grow and succeed using detailed micro data on all patent applications filed by startups at the U.S. Patent and Trademark Office (USPTO) since 2001 and approved or rejected before 2014. We leverage the fact that patent applications are assigned quasi-randomly to USPTO examiners and instrument for the probability that an application is approved with individual examiners’ historical approval rates. We find that patent approvals help startups create jobs, grow their sales, innovate, and reward their investors. Exogenous delays in the patent examination process significantly reduce firm growth, job creation, and innovation, even when a firm’s patent application is eventually approved. Our results suggest that patents act as a catalyst that sets startups on a growth path by facilitating their access to capital. Proposals for patent reform should consider these benefits of patents alongside their alleged costs.

The sample size is large: more than 45,000 companies, which the authors believe constitute all the startups filing for patents during their sample years. For those not steeped in econometric lingo, the PTO examiner "instrument" is a tool that allows the authors to make causal inferences from the data. More on this after the jump.
Wednesday, January 13, 2016
A New Source for Using Patent Application Data for Empirical Research
Posted by
Michael Risch
Getting detailed patent application data is notoriously difficult. Traditionally, such information was available only via Public PAIR, the PTO's interface for application data - useful, but clunky for bulk research. Thus, there haven't been too many such papers. Sampat & Lemley was an early and well-known paper from 2009, which looked at a cross-section of 10,000 applications. That was surely daunting work at the time.
Since then, FOIA requests and bulk downloads have allowed for more comprehensive papers. Frakes & Wasserman have papers using a more comprehensive dataset, as does Tu.
But now the PTO has released an even more comprehensive dataset, available to the masses. This is a truly exciting day for people who have yearned for better patent application data but lacked the resources to obtain it. Here's an abstract introducing the dataset, by Graham, Marco & Miller -- The USPTO Patent Examination Research Dataset: A Window on the Process of Patent Examination:
A surprisingly small amount of empirical research has been focused on the process of obtaining a patent grant from the United States Patent and Trademark Office (PTO). The purpose of this document is to describe the Patent Examination Dataset (PatEX), make a large amount of information from the Public Patent Application Information Retrieval system (Public PAIR) more readily available to researchers. PatEX includes records on over 9 million US patent applications, with information complete as of January 24, 2015 for all applications included in Public PAIR with filing dates prior to January 1, 2015. Variables in PatEX cover most of the relevant information related to US patent examination, including characteristics of inventions, applications, applicants, attorneys, and examiners, and status codes for all actions taken, by both the applicant and examiner, throughout the examination process. A significant section of this documentation describes the selectivity issues that arise from the omission of “nonpublic” applications. We find that the selection issues were much more pronounced for applications received prior to the implementation of the American Inventors Protection Act (AIPA) in late 2000. We also find that the extent of any selection bias will be at least partially determined by the sub-population of interest in any given research project.

That's right, data on 9 million patent applications - the patents granted, and the patent applications not granted (after they became published in 2000). The paper does a comparison with the internal PTO records (which show nonpublic applications) to determine whether there is any bias in the data. There are a few areas where there isn't perfect alignment, but the data is generally representative. That said, be sure to read the paper to make sure your sample of applications is representative (much older applications, for example, have more trouble aligning with USPTO internal data).
The data isn't completely straightforward - each "tab" in Public PAIR is a separate data file, so users will have to merge them as needed (easily done in any statistics package, in SQL, or even with Excel lookup functions).
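As a concrete illustration of that merge step, here is a stdlib-only Python sketch that joins two miniature "tab" files on application number (the file contents and column names here are hypothetical, not the actual PatEX schema - check the dataset documentation for the real field names):

```python
import csv
import io

# Hypothetical miniature versions of two PatEX "tab" files,
# keyed on application number
application_data = """application_number,filing_date
13000001,2012-01-05
13000002,2012-02-10
"""
status_data = """application_number,status_code
13000001,PATENTED
13000002,ABANDONED
"""

def rows_by_key(csv_text, key):
    """Index the rows of a CSV file by one of its columns."""
    return {row[key]: row for row in csv.DictReader(io.StringIO(csv_text))}

apps = rows_by_key(application_data, "application_number")
status = rows_by_key(status_data, "application_number")

# Merge the two "tabs" on application number
merged = {
    app_no: {**row, **status.get(app_no, {})}
    for app_no, row in apps.items()
}
print(merged["13000001"]["status_code"])  # PATENTED
```

The same keyed join works in any statistics package; the point is simply that each tab must be linked back to the application identifier before analysis.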
Thanks to Alan Marco, Chief Economist at the PTO, as well as anyone else involved in getting this project done. I believe it will be of great long-term research value.
In my next post, I'll highlight a recent paper that uses granular examination data to useful ends.
Monday, January 11, 2016
Samuel Ernst on Reviving the Reverse Doctrine of Equivalents
Posted by
Lisa Larrimore Ouellette
Samuel Ernst (Chapman University) has recently posted The Lost Precedent of the Reverse Doctrine of Equivalents, which argues that this doctrine is the solution to the patent crisis. The reverse doctrine of equivalents was established by the Supreme Court in the 1898 case Boyden Power-Brake v. Westinghouse, in which the Court wrote that "[t]he patentee may bring the defendant within the letter of his claims, but if the latter has so far changed the principle of the device that the claims of the patent, literally construed, have ceased to represent his actual invention," the defendant does not infringe.
Here is Professor Ernst's abstract:
Proponents of legislative patent reform argue that the current patent system perversely impedes true innovation in the name of protecting a vast web of patented inventions, the majority of which are never even commercialized for the benefit of the public. Opponents of such legislation argue that comprehensive, prospective patent reform legislation would harm the incentive to innovate more than it would curb the vexatious practices of non-practicing entities. But while the “Innovation Act” wallows in Congress, there is a common law tool to protect innovation from the patent thicket lying right under our noses: the reverse doctrine of equivalents. Properly applied, this judge-made doctrine can be used to excuse infringement on a case-by-case basis if the court determines that the accused product is substantially superior to the patented invention, despite proof of literal infringement. Unfortunately, the reverse doctrine is disfavored by the Court of Appeals for the Federal Circuit and therefore rarely applied. It was not always so. This article is the first comprehensive study of published opinions applying the reverse doctrine of equivalents to excuse infringement between 1898, when the Supreme Court established the doctrine, and the 1982 creation of the Federal Circuit. This “lost precedent” reveals a flexible doctrine that takes into account the technological and commercial superiority of the accused product to any embodiment of the patented invention made by the patent-holder. 
An invigorated reverse doctrine of equivalents could therefore serve to protect true innovations from uncommercialized patents on a case-by-case basis, without the potential harm to the innovation incentive that prospective patent legislation might cause.

Interestingly, according to Ernst, "the Second, Sixth, and Ninth Circuits had precedent requiring that the district court must always consider reverse equivalents prior to determining infringement," and the standard was only whether the accused product was "substantially changed," not whether it was a "radical improvement" (a standard that emerged from scholarly articles, not case law).
I don't have high hopes for the revival of this doctrine, but the Federal Circuit has made clear that it is not dead yet; for example, Plant Genetic Systems v. DeKalb (2003) quoted an earlier case as saying that "the judicially-developed 'reverse doctrine of equivalents' . . . may be safely relied upon to preclude improper enforcement against later developers." So litigators should keep this in their toolkits, just in case.