Tuesday, November 13, 2018

Measuring Alice's Effect on Patent Prosecution

It's a bit weird to write a blog post about something posted at another blog in order to bring attention to it, when that blog has many more readers than this one. Nonetheless, I thought the short essay Decoding Patentable Subject Matter by Colleen Chien (Santa Clara) and her student Jiun Ying Wu, published in the Patently-O Law Journal, was worth a mention. The article is also on SSRN, and the abstract is here:
The Supreme Court’s patentable subject matter jurisprudence from 2011 to 2014 has raised significant policy concerns within the patent community. Prominent groups within the IP community and academia, and commentators to the 2017 USPTO Patentable Subject Matter report, have called for an overhaul of the Supreme Court’s “two-step test.” Based on an analysis of 4.4 million office actions mailed from 2008 through mid-July 2017 covering 2.2 million unique patent applications, this article uses a novel technology identification strategy and a differences-in-differences approach to document a spike in 101 rejections among select medical diagnostics and software/business method applications following the Alice and Mayo decisions. Within impacted classes of TC3600 (“36BM”), the 101 rejection rate grew from 25% to 81% in the month after the Alice decision, and has remained above 75% almost every month through the last month of available data (2/2017); among abandoned applications, the prevalence of 101 subject matter rejections in the last office action was around 85%. Among medical diagnostic (“MedDx”) applications, the 101 rejection rate grew from 7% to 32% in the month after the Mayo decision and continued to climb to a high of 64%, and to 78% among final office actions just prior to abandonment. In the month of the last available data (from early 2017), the prevalence of subject matter 101 rejections among all office actions in applications in this field was 52%, and among office actions before abandonment, was 62%. However, outside of impacted areas, the footprint of 101 remained small, appearing in under 15% of all office actions. A subsequent piece will consider additional data and implications for policy.
This article is the first in a series of pieces appearing in Patently-O based on insights gleaned from the treasure trove of open patent data the USPTO has been releasing since 2012.
The essay is a short, easy read, and the graphs tell you all you need to know from a differences-in-differences point of view: there was a huge spike in medical diagnostics rejections following Mayo and in software and business method rejections following Alice. We already knew this from the Bilski Blog, but this account is comprehensive. Interesting to me from a legal history/political economy standpoint is the fact that software rejections were actually trending downward after Mayo but before Alice. I've always thought that was odd. The Mayo test, much as I dislike it, fits abstract ideas just as easily as it fits natural phenomena. Why courts and the PTO did not make that leap until Alice has always been a great mystery to me.
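For readers unfamiliar with the method, here is a minimal sketch of the differences-in-differences logic on toy data. The numbers and column names below are invented for illustration, and the essay's actual specification may well differ:

```python
# Differences-in-differences on a toy monthly panel: compare 101 rejection
# rates in an impacted class (e.g., 36BM) against an unaffected control
# class, before and after Alice. All data below is made up for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # treated group: pre, pre, post, post; control group: same pattern
    "rej_rate": [0.25, 0.27, 0.81, 0.78, 0.10, 0.11, 0.12, 0.13],
    "treated":  [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = impacted class
    "post":     [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = after Alice
})

# The coefficient on the interaction term is the DiD estimate: the jump in
# rejection rates for impacted classes over and above the control trend.
fit = smf.ols("rej_rate ~ treated * post", data=df).fit()
print(fit.params["treated:post"])
```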

Another important finding is that 101 apparently hasn't devastated other tech areas the way it has software and diagnostics. Even so, rejection rates of 10% to 15% in other areas are a whole lot higher than they used to be. A breakdown by WIPO technology classification shows that most areas have been touched in some way.

Another takeaway is that the underlying data came from the USPTO's open datasets hosted on Google BigQuery, which is really great to see. I blogged about this some time ago, and I'm glad to see it in use.
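For those who want to poke at the data themselves, here is a minimal sketch of querying the office action data through the BigQuery Python client. The dataset, table, and column names (patents-public-data.uspto_oce_office_actions.rejections, rejection_101) are my best recollection of the published schema, so verify them against the current dataset documentation before relying on this:

```python
# A minimal sketch: count office actions flagged with a 101 rejection in the
# USPTO Office Action research dataset on BigQuery. Requires Google Cloud
# credentials; the dataset/table/column names are assumptions to verify.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT COUNT(*) AS n_101
    FROM `patents-public-data.uspto_oce_office_actions.rejections`
    WHERE rejection_101 = '1'  -- flags in this dataset appear to be strings
"""
for row in client.query(sql).result():
    print(row.n_101)
```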

So, this was a good essay, and the authors note it is the first in a series. In that spirit, I have some comments for future expansion:

1. The authors mention the "two-step" test many times, but provide no data broken down by step. If that information is in the office action database, I'd love to see which step is doing the work. My gut says we don't see a lot of step two determinations.

2. The authors address gaming the claims to avoid certain tech classes, but discount this by showing growth in the business methods class. However, the data they use is office action rejections, which lag filing, sometimes by years. An interesting analysis would be office action rejections by filing date, both by earliest priority date and by the date the particular claim was added (a rough sketch follows this list). This would show growth or decline in those classes, as well as whether the "101 problem" is limited to older applications.

3. All of the graphs start in the post-Bilski (Fed. Cir.) world, yet the office actions date back to 2008. I'd like to see what happened between 2008 and 2010.

4. I have no sense of scale. The essay discusses some 2,000 rejections per month and speaks in terms of rates, but I'd like to know, for example: a) what percentage of all applications fall in the troubled classes? b) how many applications are in the troubled classes (and in others)? and so on. In other words, is this devastation of a few or of many?

5. Are there any subclasses in the troubled technology centers that have a better survival rate? The appendix shows the high-rejection classes; what about the low-rejection classes (if any)?
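On point 2, here is a hypothetical sketch of what that lag-adjusted analysis might look like, assuming the office action records can be joined to application filing data. The file and column names (app_id, priority_dt, rejection_101) are placeholders, not the dataset's actual schema:

```python
# Hypothetical sketch of the analysis suggested in point 2: tabulate 101
# rejections by the application's earliest priority year rather than by the
# office action mail date, to see whether the "101 problem" is concentrated
# in older filings. All file and column names below are placeholders.
import pandas as pd

oa = pd.read_csv("office_actions.csv", parse_dates=["mail_dt"])
apps = pd.read_csv("applications.csv", parse_dates=["priority_dt"])

merged = oa.merge(apps, on="app_id")
merged["priority_year"] = merged["priority_dt"].dt.year

# Share of office actions containing a 101 rejection, by priority year.
by_year = merged.groupby("priority_year")["rejection_101"].mean()
print(by_year)
```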

I look forward to future work on this!

