Guest Post by Professors Arti Rai (Duke) and Colleen Chien (Santa Clara). Professors Rai and Chien both served in the Obama Administration.
On the eve of a new Administration, it is useful to take stock of progress that has been made on patent quality over the last eight years, and particularly since February 2015, when the U.S. Patent and Trademark Office launched its Enhanced Patent Quality Initiative (EPQI). In this contribution, we review progress to date and outline directions for the future.
Patent quality means agency decision making that is appropriate as a matter of both product and process – that is, legally correct, clear, consistent, and efficient. Measuring correctness, clarity, consistency, and efficiency is difficult, however. Achieving these goals can also entail significant resources – a challenge for an agency funded entirely by applicants.
When former USPTO Director David Kappos took the helm in 2009, budgetary strains and application backlog demanded immediate attention. Even so, then-Director Kappos pushed through a redesign of the agency’s IT system, gave examiners an across-the-board increase in examination time, adjusted count allocation so as to reduce incentives for rework, and emphasized quality improvements through international worksharing, industry training, and the creation of the Cooperative Patent Classification system. Then, with the passage of the America Invents Act of 2011, the agency’s budgetary position stabilized and the stage was set for further focus on quality. The backlog subsided, with the queue of pending applications reduced by 30% over the last eight years, according to statistics released by the USPTO.
Building on these steps, as well as executive actions to crowdsource prior art and technical training, the USPTO launched the EPQI in 2015. Introduced and championed by Director Michelle Lee, the EPQI not only embraces the substantive goal of quality but also takes seriously issues of measurement. In total, it comprises 12 different programs and initiatives. We focus here on four that have thus far yielded data: the Clarity of the Record Pilot; the Master Review Form for measuring quality; the USPTO’s case study on examiners’ use of Section 101; and the Post-Grant Outcomes Pilot.
The Clarity of the Record Pilot ran from March 5 to August 20, 2016 as an effort to train examiners on best practices with respect to claim interpretation, reasons for allowance, and interview summaries, and to determine the impact of this training on their work relative to a control group. Relative to the control group, the 125 trained examiners, who examined 2,600 cases, averaged a 15% improvement in the clarity of their interview summaries and a 25% improvement in the clarity of their reasons for allowance. Notably, although examiners in the pilot were allowed as much time as they wanted, they reported using only about four more hours per biweek (the agency’s standard two-week reporting period) than the control group.
On the quality measurement front, the PTO is using its new Master Review Form (MRF) to provide both reviewers at the Office of Patent Quality Assurance (OPQA) and Technology Center (TC) supervisors with a single, comprehensive record of the accuracy and clarity of patent work products. Historically, OPQA and TC supervisor reviews used different criteria, and only OPQA reviews were systematically recorded for identification of trends across different TCs. Additionally, in contrast with prior quality scores used by the PTO, the quality metrics used in the MRF disaggregate product quality (legal correctness and clarity) from process quality (efficiency and consistency) as well as from perceptions of quality.
Current data on product quality, focusing on compliance with the statutory requirements of Section 101, the prior art provisions (Sections 102 and 103), and Section 112, are available for reviews conducted in the 4th quarter of FY2016. These data, admittedly self-reported, indicate compliance in about 97% of cases for Section 101, 88% of cases for prior art, and about 94% of cases for Section 112.
The USPTO’s case study on examiners’ use of Section 101, one of six stakeholder-requested case studies that the agency is currently conducting, examined a sample of 816 Office actions with an Alice/Mayo-type rejection issued between January 2016 and August 2016. Overall, the study found that 90% of rejected claims were in fact ineligible under the USPTO’s 101 guidance. However, only 75% of the substantively correct rejections were properly explained. Rates of properly explained rejections rose significantly after the USPTO conducted training on Section 101 in May and June of 2016.
The USPTO Post-Grant Outcomes Pilot provided examiners of pending applications related to patents challenged in AIA trials with the contents of those trials. This common-sense initiative, which ran from April to August 2016, aimed to alert examiners to highly relevant prior art, identify training opportunities, and build a bridge between the PTAB and the examining corps. Of the 323 examiners surveyed by the USPTO, 44% reported that they had referred to references cited in the AIA trial petition when examining the child case.
Going forward, the USPTO is pursuing bold initiatives on automated pre-examination search and on revising time allocations for examiners, the latter of which is the subject of a current request for comment. Because the amount of time needed to carry out efficient, correct examination varies from application to application, these initiatives are critical for improving quality.
One important question that remains to be fully addressed is the extent to which examination should be a “one size fits all” enterprise. In 2011, the USPTO established a separate Track 1 process for applicants who need decisions made quickly. Small and micro-sized firms have filed more than fifty percent of Track 1 applications, even though such entities represented only twenty percent of applicants in 2015. (The heaviest individual users of the system are large firms, however.) For other applicants, it may be appropriate to offer options for deferred, “Track 3” examination.
With a go-ahead from Congress, the USPTO might also offer varied intensity of examination. Thus, applications that are more commercially valuable might be subject to heightened review, by peers or others, in exchange for greater protection from post-grant challenges. Conversely, applicants filing for defensive reasons only could pay lower fees in exchange for a promise to limit patent use.
Another way to ensure that the quality conversation continues in public is to support continued transparency and measurement of progress on patent quality. The focus should be on objective, independently verifiable metrics such as the percentage of cases with examiner-cited non-patent literature, or the percentage of cases resolved through compact prosecution – “once and done.” Facilitating tracking of such metrics not only by examiner but also by Art Unit or Technology Center could stimulate some healthy competition and help identify best practices.
Two USPTO administrators in particular deserve credit for the new focus on metrics and quality: the Office of the Chief Economist and the Deputy Commissioner for Patent Quality. Together, these positions and personnel, newly created and appointed during the last eight years, may indeed end up being some of this Administration’s most enduring legacies on patent quality.
= = = =
For an alternate viewpoint, read Gene Quinn’s post: Patently Surreal.