Guest Post By: Colleen Chien, Professor, Santa Clara University Law School
Over the last several years, the USPTO has continued to release high-quality data about the patent system. This is the first of a series of posts by Professor Chien based on insights developed from that data. (The accompanying Patently-O Law Journal Paper includes additional views and methodological notes, and links to queries for replication of the analysis by Chien and Jiun-Ying Wu, a 3L at Santa Clara Law.)
“Everything should be made as simple as possible,
but not simpler” — Albert Einstein
A few weeks ago, USPTO Director Andrei Iancu announced progress on new guidance to clarify 101 subject matter eligibility by categorizing exceptions to the law. The new guidance will likely be welcomed by prominent groups in the IP community, academia, and the majority of commenters on the 2017 USPTO Patentable Subject Matter report that have called for an overhaul of the Supreme Court’s “two-step test.” Behind these calls are at least two concerns: that the two-step test (1) has stripped protection from meritorious inventions, particularly in medical diagnostics, and (2) is too indeterminate to be implemented predictably. Section 101 thus appears to pose a threat to Director Iancu’s laudable mission of producing reliable, clear, and certain property rights through the patent system.
Last November, the USPTO released the Office Action Dataset, a treasure trove of data about 4.4 million office actions from 2008 through July 2017, related to 2.2 million unique patent applications. This release was made possible by the USPTO Digital Services & Big Data (DSBD) team in collaboration with the USPTO Office of the Chief Economist (OCE), and is one of a series of open patent data and tool releases since 2012 that have seeded well over a hundred companies and laid the foundation for an in-depth, comprehensive understanding of the US patent system. The data on 101 are particularly rich in detail, breaking out 101 subject matter rejections from other types of 101 rejections and coding references to Alice, Bilski, Mayo, and Myriad.
With the help of Google’s BigQuery tool and the public patents ecosystem, which made it possible to implement queries with ease, research assistant Jiun-Ying Wu and I looked over several months for evidence that the two-step test had transformed patent prosecution. We did not find it, because, as the PTO report notes, a relatively small share of office actions – 11% – actually contain 101 rejections. However, once we disaggregated the data into classes and subclasses, created a grouping of the TC3600 art units responsible for examining software and business methods (art units 362X, 3661, 3664, 368X, 369X), which we dub “36BM,” borrowed a CPC-based identification strategy for Medical Diagnostic (“MedDx”) technologies, and developed new metrics to track the footprint of 101 subject matter rejections, we could better see the overall impact of the two-step test on patent prosecution. (As a robustness check against the phenomenon of “TC3600 avoidance,” described and explored in the accompanying Patently-O Law Journal article, we regenerated this graph by CPC-delineated technology sector, which is harder to game than art unit, and found the decline in 101 more evenly spread.)
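The “36BM” grouping amounts to a simple membership test on the examining art unit. The sketch below is illustrative only – it is not the authors’ replication code (the actual queries are linked in the accompanying paper) – and assumes art units arrive as four-character strings:

```python
# Illustrative sketch: classify an examining art unit into the "36BM"
# software/business-method grouping described above
# (art units 362X, 3661, 3664, 368X, 369X).

def is_36bm(art_unit: str) -> bool:
    """Return True if the art unit falls in the 36BM grouping."""
    if art_unit in ("3661", "3664"):
        return True
    # 362X, 368X, 369X match on their three-digit prefix
    return art_unit[:3] in ("362", "368", "369")
```

The remaining TC3600 art units (e.g., 365X) would fall into the “TC36 Other” bucket described in the footnotes.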
Mayo v. Prometheus, decided in March 2012, and Alice v. CLS Bank, decided in June 2014, elicited the strongest reactions. The data suggest that an uptick in 101 subject matter rejections following these cases was acute and discernible among impacted art units, as measured by two metrics: the overall rejection rate and the “pre-abandonment rate” – among abandoned applications, the prevalence of 101 subject matter rejections within the last office action prior to abandonment.
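The two metrics can be computed directly from per-office-action records. The sketch below runs on hypothetical rows; the field names (`has_101_sm`, `abandoned`, `is_last_action`) are assumptions for illustration, not the dataset’s actual schema:

```python
# Illustrative computation of the two metrics on hypothetical records.

def rejection_rate(actions):
    """Share of all office actions containing a 101 subject matter rejection."""
    return sum(a["has_101_sm"] for a in actions) / len(actions)

def pre_abandonment_rate(actions):
    """Among abandoned applications, share whose last office action
    contained a 101 subject matter rejection."""
    last = [a for a in actions if a["abandoned"] and a["is_last_action"]]
    return sum(a["has_101_sm"] for a in last) / len(last)

sample = [
    {"app_id": 1, "has_101_sm": True,  "abandoned": True,  "is_last_action": True},
    {"app_id": 2, "has_101_sm": False, "abandoned": True,  "is_last_action": True},
    {"app_id": 3, "has_101_sm": True,  "abandoned": False, "is_last_action": False},
    {"app_id": 4, "has_101_sm": False, "abandoned": False, "is_last_action": True},
]
print(rejection_rate(sample))        # 0.5
print(pre_abandonment_rate(sample))  # 0.5
```

In the actual analysis, these ratios would be computed per month and per art-unit grouping to produce the trend lines in Figures 1 and 2.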
Within impacted classes of TC3600 (“36BM”), represented by the top blue line, the 101 rejection rate grew from 25% to 81% in the month after the Alice decision, and has remained above 75% in almost every month since then. (Fig 1) In the month of the last available data, among abandoned applications, the prevalence of 101 subject matter rejections in the last office action was around 85%. (Fig 2)
Among medical diagnostic (“MedDx”) applications, represented by the top red line, the 101 rejection rate grew from 7% to 32% in the month after the Mayo decision and continued to climb to a high of 64% (Fig 1), and to 78% among final office actions just prior to abandonment (Fig 2). In the month of the last available data (from early 2017), the prevalence of 101 subject matter rejections among all office actions in this field was 52%; among office actions just prior to abandonment, it was 62%. (Fig 2)
However, outside of these groupings and other impacted art units (see the paper for a longer list), the impact of 101 caselaw has been more muted. 101 rejections overall (depicted by the thick black line) have grown – rising from 8% in Feb 2012 to 15% in early 2017 (Fig 1) – but remain exceptional.
On balance, the data confirm that 101 is playing an increasingly important role in the examination of software and medical diagnostics patents. More than four years after the Alice decision, the role of subject matter does not appear to be receding; it remains an issue in a large share of cases, not only at their outset but, among applications that go abandoned, through the last office action. That patentees cannot tell before they file whether their invention will be considered patent-eligible, and perceive that much depends not on the merits of the case but on what art unit the application is placed in, also presents a challenge to the goal of predictability in the patent system.
It is also the case that the vast majority of inventions examined by the office are not significantly impacted by 101. Even when an office action does address subject matter, 101 subject matter rejections and responsive amendments on the record are often cursory, in contrast with, for example, novelty and nonobviousness discussions.
What do the data teach us, and what directions for policy might they suggest? I save this topic, as well as the impact of USPTO guidance on prosecution and some data issues left unexplored here, for the next post, as data gathering continues.
In the meantime, the USPTO continues to move forward on revised examiner guidance. As it does, it may want to decide which metrics matter most – the overall prevalence of 101, 101 in pre-abandonment phases, or others – and how it hopes the metrics might change as a result of its revised guidance. The USPTO should also consider keeping the office action data up-to-date — right now, high quality data stops around February 2017 without any plans to update it of which I’m aware (my subsequent FOIA request for updates was denied). That leaves a gap in our ability to monitor and understand the impact of various interventions as they change over time – certainly not a unique phenomenon in the policy world – but one that is fixable by the USPTO with adequate resources. In the meantime, it is thanks to the USPTO’s data release that this and other analyses of the impact of the two-step test are even possible.
Thanks to Jonah Probell and Jennifer Johnson for comments on an earlier draft and Ian Wetherbee for checking the SQL queries used to generate the graphs. Comments welcome at email@example.com.
= = = = =
 The AIPLA has proposed a “clean break from the existing judicial exceptions to eligibility by creating a new framework with clearly defined statutory exceptions”; the IPO has suggested replacing the Supreme Court’s prohibition on the patenting of abstract ideas, physical phenomena, and laws of nature with a new statutory clause, 101(b), to be entitled “Sole Exception to Subject Matter Patentability.”
 Using Google Patents Public Data by IFI CLAIMS Patent Services and Google, used under CC BY 4.0, Patent Examination Data System by the USPTO, for public use, Patent Examination Research Dataset by the USPTO (Graham, S. Marco, A., and Miller, A. (2015) described in “The USPTO Patent Examination Research Dataset: A Window on the Process of Patent Examination”), for public use.
 We put the remainder of the TC 3600 art units into the category “TC36 Other”; however, because many months contained insufficient data (fewer than 50 office actions), we did not include it in the Figures.
 Coded using a patent classification code (CPC)-based methodology developed from Colleen Chien and Arti Rai, An Empirical Analysis of Diagnostic Patenting Post-Mayo, forthcoming (defining medical diagnostic inventions by use of any of the following CPC codes: C12Q1/6883; C12Q1/6886; G01N33/569; G01N33/571; G01N33/574; C12Q2600/106).
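The identification strategy in this footnote reduces to a set-membership test over an application’s CPC codes. The sketch below is a minimal illustration; exact-match semantics are an assumption (prefix matching on CPC subgroups may be intended in the underlying methodology):

```python
# Illustrative sketch of the CPC-based MedDx identification described above.
MEDDX_CPC = {
    "C12Q1/6883", "C12Q1/6886",
    "G01N33/569", "G01N33/571", "G01N33/574",
    "C12Q2600/106",
}

def is_meddx(cpc_codes):
    """True if any of an application's CPC codes is in the MedDx set."""
    return any(code in MEDDX_CPC for code in cpc_codes)
```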
 The later months of 2017 have insufficient counts for research purposes.