Office Actions per Grant Ratio (OGR): A New Metric for Patent Examiner Activity

By Prof. Sean Tu (WVU) with Chris Holt (VP at LexisNexis IP)

Previously, I published a study that set out to determine how examiners issue patents. That original study focused only on issued patents, coding over 1.5 million patents issued from January 2001 to July 2011. The main limitation of that study was its exclusive focus on allowed patents; accordingly, it suffered from a “denominator” problem. Specifically, it was difficult to determine how examiners actually behaved without knowing how many applications were pending on their dockets or how many applications went abandoned.

Six years later, I have followed up on that study by collecting information on not only each examiner’s issued patents, but also their abandoned applications and the total number of office actions issued by every examiner with an active docket as of June 2017. Using LexisNexis PatentAdvisor, we were able to pull the dockets of all active examiners (every examiner with a pending application as of June 8, 2017). Accordingly, this study captures 8,537,660 office actions, 2,812,177 granted patents and 1,255,552 abandonments from 9,535 examiners, covering January 1, 2001 to June 8, 2017.

This new dataset is both more detailed and narrower than my previous one. I believe, however, that this information is more useful to practitioners today because it shows what is going on at the Office at this moment in time. It excludes examiners who have retired or are no longer at the PTO. These data also exclude examiners with any pending training academy cases (Technology Center 4000) in their active docket, thereby filtering out the most junior examiners.

From this dataset, we create two new examiner metrics: (1) the Office Actions per Grant Ratio (OGR) and (2) the Office Actions per Disposal Ratio (ODR). The OGR score simply divides the examiner’s total number of Office Actions by the number of issued patents. The ODR score divides the examiner’s total number of Office Actions by the sum of issued patents and abandonments. From these two scores, we can determine whether examiners are spending their time writing Office Actions or granting patents.
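
To make the definitions concrete, here is a minimal sketch of the two calculations in Python. The function names and the zero-disposal handling are my own illustration, not taken from the paper:

    def ogr(office_actions, grants):
        """Office Actions per Grant Ratio: total OAs / issued patents."""
        return office_actions / grants if grants else float("inf")

    def odr(office_actions, grants, abandonments):
        """Office Actions per Disposal Ratio: total OAs / (grants + abandonments)."""
        disposals = grants + abandonments
        return office_actions / disposals if disposals else float("inf")

    # Example: an examiner with 900 office actions, 100 grants and 200
    # abandonments has an OGR of 9.0 but an ODR of 3.0.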

As shown in Figure 1, there is a wide range of OGR scores across the United States Patent & Trademark Office (USPTO), commonly ranging from approximately 0.2 to 23. Furthermore, most examiners have an OGR of 3.0 or below. Examiners with training academy cases (Technology Center 4000) have been filtered from these data.

As shown in Figure 2, OGR scores roughly correlate with allowance rates, but a significant number of examiners have allowance rates that do not correspond with their OGR scores. Predictably, examiners with a low OGR have a high allowance rate (examiners who grant patents within one Office Action will almost necessarily grant more patents). In contrast, examiners with a high OGR score have a low allowance rate; that is, examiners who write many office actions before allowance will allow fewer cases. Accordingly, at the extremes, allowance rate and OGR correlate fairly well. The relationship between OGR and allowance rate, however, is not perfectly linear, especially for examiners with an OGR score between 2.0 and 4.9.

Furthermore, there are more examiners with high OGR scores in Technology Centers 1600 and 1700, which may reflect the complexity of biotechnology and chemical patents. Specifically, of the examiners with an OGR score of 10 or more, 17.6% and 12.8% come from Technology Centers 1700 and 1600, respectively. In contrast, there is a higher number of examiners with low OGR scores in Technology Center 2800, which corresponds to Semiconductors, Electrical and Optical Systems and Components. Specifically, 64% of examiners with an OGR score below 1.0 come from Technology Center 2800.

Interestingly, when broken down into Workgroups, this study finds large variation in OGR scores. For example, Figure 3 shows that in Technology Center 1700 (Chemical and Materials Engineering), Workgroup 1780 (Food, Miscellaneous Articles, Stock Material) has a disproportionate number of examiners with high OGR scores when compared with other Workgroups within 1700. In contrast, Workgroup 1750 (Solar Cells and Electrochemistry) has a disproportionately high number of examiners with low OGR scores. Similarly, in Technology Center 3600, Workgroups 3620 and 3680 have many examiners with high OGR scores, which is unsurprising since both Workgroups encompass “Data Processing: Financial, Business Practice, Management, or Cost/Price Determination,” i.e., business-method-type applications.


This paper confirms many of the findings from my previous study. Specifically, junior examiners have a much lower allowance rate and a much higher OGR score than their more experienced counterparts. This is unsurprising since junior examiners will have written only a few Office Actions (usually fewer than 500) and allowed only a few cases. However, this study also goes much further than my previous one. By looking at allowances and abandonments as a function of Office Actions written, we can get an idea of the general rate of patenting at the patent office and how that rate differs among technology centers, workgroups and art units. This study shows how examiners spend their time: either writing office actions or allowing cases.

This study focuses on overall patent office trends as well as trends at the technology center and workgroup levels. PatentAdvisor’s “Examiner Time Allocation” metric, which is based on each specific examiner’s body of work, can also be used to forecast the time and expense required to obtain a patent.

As with my previous study, I note that there is no “ideal” patent allowance rate. It is possible that both populations of examiners, those with low and those with high OGR scores, are doing an excellent job of rejecting “bad” patents and allowing “good” ones. This study gives insight into how “average” examiners behave in a particular technology group. One argument may be that examiners who are two or three standard deviations from this “average” should receive a higher degree of scrutiny.
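
As a rough illustration of that suggestion, outliers could be flagged with a z-score screen along the following lines. This is a sketch under my own assumptions, not a method from the paper; the log transform reflects the skewed 0.2-to-23 range noted above:

    from math import log
    from statistics import mean, stdev

    def flag_outliers(ogr_by_examiner, k=2.0):
        """Return examiners whose log-OGR is more than k standard
        deviations from their technology group's mean."""
        logs = {ex: log(s) for ex, s in ogr_by_examiner.items()}
        mu, sigma = mean(logs.values()), stdev(logs.values())
        return [ex for ex, s in logs.items() if abs(s - mu) > k * sigma]

    # Usage (hypothetical): flag_outliers(ogr_scores_for_workgroup, k=3.0)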

A full draft of “Office Actions per Grant Ratio (OGR): A New Metric for Patent Examiner Activity” is available on SSRN at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3100326.

51 thoughts on “Office Actions per Grant Ratio (OGR): A New Metric for Patent Examiner Activity”

  1. 10

    Unless the lines are in the wrong place, Fig. 1 shows that approximately 41% of examiners, not “most,” have an OGR of 3.0 or below. Is the text or the graph incorrect?

  2. 9

    There are far too many dots in that top left corner. What exactly are those examiners doing all day?

    Nit: If law professors don’t use ‘that’ and ‘which’ correctly, how can we expect practitioners to ever learn?

    1. 9.1

      re: your “nit,” I’ve pretty much given up hope, especially among patent practitioners. Other pet peeves of mine: “peruse” does not mean “browse,” and in fact means something very different. Sounding similar is no excuse to swap those words in an effort to “sound smart.” “Could care less” means that you are NOT at the bottom of your “caring” scale. I’ll stop there before I get into full rant mode 🙂

      1. 9.1.1

        Peruse actually has two definitions according to Merriam-Webster, one of which seems to be the same as “browse.” Perhaps the usage of this word is evolving?

        a : to examine or consider with attention and in detail : study
        b : to look over or through in a casual or cursory manner

        1. 9.1.1.1

          Hmm, having those definitions doesn’t make for very clear communication. For example, say your partner tells you to “peruse” some documents and then get back to him. Does he want you to bill 0.5 and give him a quick overview, or bill 4 hours and give him a detailed report?

          1. 9.1.1.1.1

            Not really that difficult – there is almost always context (including history), and if there is any doubt, one asks for clarification.

          1. 9.1.1.2.1

            “Dictionaries, in general, just report how words are used.”

            Also known as “the definition”. Remember, all words are made up.

            1. 9.1.1.2.1.1

              Also known as “the definition”. Remember, all words are made up.

              Even the words within the definition.

              Turtles all the way down

      2. 9.1.2

        “Could care less” does mean that you aren’t at the bottom of your “caring” scale, but people say, “I couldn’t care less,” which means you are at the bottom of your “caring” scale.

      3. 9.1.3

        see 2nd definition:

        Definition of peruse

        perused; perusing
        transitive verb
        1 a : to examine or consider with attention and in detail : study
        b : to look over or through in a casual or cursory manner

    2. 9.2

      use ‘that’ and ‘which’ correctly

      News flash: language evolves and, presently, the “correct” use of “that” and “which” is a subject of some debate.

  3. 8

    Back up a sec, Scooter. I don’t know how close a professor of patent law is to the actual practice of patent law, so allow me to educamate you. I have never conducted a candid interview with a junior examiner who did not confess that his SPE would countenance only X allowances per quarter, with X usually being arbitrarily defined as around 3. X, of course, is therefore utterly divorced from legal merits. It is a sledgehammer ad hoc measure assumed to correspond to the reality of legal merit, without the nuisance of supervisory input informed by an actual understanding of the legal merits involved.

    Had enough of watching how sausage gets made?

    1. 8.1

      Tourbillon is basically right. One of the first things I look at is whether the examiner has signing authority. If not, I cringe.

      One way I have found that helps is to insist that the person who signs the OA be present at the meeting. If it is in person, look them right in the eyes and explain why it is a bad rejection. Often this helps.

      (Actually, I hate to bring this up, but you know me, I can never resist stirring the pot. Consider a study to see whether companies are being favored by having primary examiners assigned to their patent applications. I think you might find a bias.)

      1. 8.1.1

        re: assignment of applications to examiners

        The few art units I’m aware of have a pretty neutral way to divide up the work. For example, an art unit with 20 examiners might divide up the work based on application number, where any applications ending in 00-04 go to the first examiner, 05-09 go to the second, 10-14 go to the third, etc., with primary examiners having a larger range (because of their larger required output) than junior examiners. But basically there is no real way to game a system like that.
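
        A toy sketch of that kind of routing scheme in Python (the roster and ranges are made up for illustration; I have no knowledge of the PTO’s actual tooling):

            # Route by the last two digits of the application serial number.
            # Primaries get wider ranges to reflect their larger required output.
            RANGES = [
                (0, 7, "primary_1"),   # 00-07
                (8, 12, "junior_1"),   # 08-12
                (13, 17, "junior_2"),  # 13-17
                # ... and so on through 99
            ]

            def assign(application_number):
                last_two = int(str(application_number)[-2:])
                for low, high, examiner in RANGES:
                    if low <= last_two <= high:
                        return examiner
                raise ValueError("no examiner assigned to this range")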

        1. 8.1.1.1

          Interesting. I would think one would want to divide them by the subject matter disclosed or claimed.

          1. 8.1.1.1.1

            Not sure if you’re being intentionally obtuse, but yes, the applications are divided into the art units first (which are divided by subject matter / technology), THEN are divided pseudo-randomly amongst the examiners.

              1. 8.1.1.1.1.1.1

                Classification and routing are handled by contractors. I have no idea of their qualifications; I’d give them an 80% accuracy rating, which is fine, depending on how much the Office is paying them. There are also specialists (examiners who decide to become good at classification, with no financial reward) in each art unit who help transfer cases that were misclassified.

                1. Thanks curious,

                  Contractors are being given free rein to actually assign cases to individual examiners?

                  That seems like a gross dereliction of managerial responsibility to me. So much so that I find myself doubting it. Perhaps the distribution of a case to an art unit may be done by a contractor, but I cannot see the assignment to an individual examiner, once in an art unit, being anything but a pure management decision, and one that would not be outsourced.

            1. 8.1.1.1.1.2

              curious, I am talking about finer classification where examiners can become experts in small fields.

              I think there should be a preliminary search and classification process to get the applications to the right examiner.

              I don’t think just being in the right AU is enough.

              1. 8.1.1.1.1.2.1

                That might be a good idea, but it would require more managing on the part of management. So it’s dead on arrival.

                1. would require more managing on the part of management.

                  If this critical element of “setting the table” is not a part of the current management duties, then what exactly do those management duties include?

                  How is any examiner expected to meet what amounts to a random “pure widget number” type of examination level?

                  Is “management” really only (merely) a “numbers” game of “X” assigned widgets?

                  How is the examiner union “OK” with this?

    2. 8.2

      Yup Tour, so when I finally get around to the verbiage, I am greeted with the usual Lemley-Lemmingness of:

      Coverage of patent examiners who allow “bad” patents has been pervasive in the news. This issue has been exacerbated by the concern over non-practicing entities (NPEs). Issuing patents that do not meet the patentability requirements acts as a windfall to these patentees. Additionally, “bad” patents can hinder innovation by increasing transaction costs for competitors and harm the public with increased product costs.

      Translate “pervasive in the news” as your typical propaganda machine rhetoric…

        1. 7.1.1.1

          Per Examiner Ninja, there are exactly zero examiners with the name of Einstein.

          Is this “Bruce” merely in your dreams?

  4. 6

    “Accordingly, the OGR reflects the average number of office actions it takes before an examiner grants a patent.”

    While your metric may “reflect” that average, it is not a good measure of it.

    Say an examiner produces final actions which in 25% of cases indicate allowable material that ends up allowed in an after-final examiner’s amendment, and in 75% of cases properly convince the applicant that there is no (or no worthwhile) patentable scope in the application and to abandon the case. This examiner would have a steady-state OGR of 9, despite taking only 3 actions prior to each allowance.

    The “average number of office actions it takes before an examiner grants a patent” is much more meaningful on a ‘per application’ basis than a ‘per docket’ basis.

    1. 6.1

      Is this “per docket” thing in the paper itself, Ben? How does your steady state OGR of 9 come about?

      (and where does this “proper” 75% abandon rate come from?)

    2. 6.2

      1. The paper doesn’t phrase it like that, but it’s what they’re doing when the calculation goes ALL office actions / ALL grants.
      2. 25% of cases require 3 actions to terminate with grant, and 75% of cases require 2 actions to terminate in abandonment. So for every grant there are 3 actions leading to allowance and 6 actions leading to abandonments: 9 actions/grant. (See the sketch after this list.)
      3. It’s a hypothetical.
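
      A quick arithmetic check of that hypothetical (my own sketch, in Python):

          # Per 4 disposals: 1 grant taking 3 actions, and 3 abandonments
          # taking 2 actions each.
          grants, abandonments = 1, 3
          actions = grants * 3 + abandonments * 2   # 3 + 6 = 9
          ogr = actions / grants                    # 9.0, as stated above
          odr = actions / (grants + abandonments)   # 2.25, for comparison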

      1. 6.2.2

        Upon further reflection, the tactic of taking all examiner office actions and all grants in gross removes any “per docket” aspect.

        Might it not be more advantageous (or at least more reflective of reality) to attempt to “put back in” some type of normalization factor for the various docket factors (e.g., particular art unit, experience level and the like)…?

  5. 5

    Like a lot of these metrics, I am not sure what I am to do with this information. Is this just a metric for metrics’ sake, or is there a useful purpose to it?

  6. 4

    More than 20 years ago, it was explained to me that volume filing corporations in Japan monitor “Quality” amongst the firms in Europe they use as outside patent counsel for prosecution work at the EPO. In particular, they monitor number of Office Actions per case, so they can determine an average number for each firm.

    Assuming that, for any particular industrial conglomerate bulk filer, each EPO firm gets a comparable spread of cases, in the same technical field, this performance/quality metric strikes me as meaningful. Woe betide the firm that (according to the collected stats) needs more Office Actions than average to get to issue.

    Is it not more meaningful to compare prosecution firms this way than to compare PTO Examiners? Who knows more than me? Who can update me?

    1. 4.1

      I think both would be valuable – especially if you happen to be assigned one of the outlier examiners, since that fact provides a cogent rationale, outside the control of counsel, for the “woe, my number is higher than some other firm’s number” result.

    2. 4.2

      “Assuming that, for any particular industrial conglomerate bulk filer, each EPO firm gets a comparable spread of cases, in the same technical field, this performance/quality metric strikes me as meaningful. ”

      It is meaningful, but what does it mean?

      For one thing, a low OGR could mean a firm gives up the fight too quickly and amends to make the claims narrower than they need to be. Is that a good thing?

        1. 4.2.1.1

          A good indicator could be how frequently a law firm wins after filing a response without amendment (i.e., the response is followed by a second non-final action or an allowance, or the examiner is reversed by the PTAB). That law firm picks the good fights.
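
          A sketch of how that indicator might be computed, assuming one already has (response, next-event) pairs mined from prosecution histories; the event labels here are hypothetical, not a real data schema:

              # A "win" = a response filed without amendment followed by a
              # second non-final action, an allowance, or a PTAB reversal.
              WINS = {"second_non_final", "allowance", "ptab_reversal"}

              def no_amendment_win_rate(events):
                  """events: (response_type, next_event) pairs for one firm."""
                  fights = [e for r, e in events if r == "response_no_amendment"]
                  return sum(e in WINS for e in fights) / len(fights) if fights else 0.0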

          1. 4.2.1.1.1

            I agree. A law firm astute enough to “win” more often than the average “win rate” is worth a lot of effort to find.

            There are stats available, from mining the EPO Register, on which EPO firms “win” more inter partes post-issue oppositions than the average firm. Acting for the opponent, they wipe the patent out more often than average. Acting for the patent owner, they see off the opposition more often than the average firm does.

            Are such stats available for IPRs yet?

          2. 4.2.1.1.2

            PiKa,

            But a win could come in other forms, and you have not accounted for the examiner variable – which is one of the major points of this thread (and its predecessor).

            1. 4.2.1.1.2.1

              “But a win could come in other forms,”

              That is right. A “win” is getting the client the claims they want under the budget they have. There are many ways to skin a cat.

              1. 4.2.1.1.2.1.1

                Many seek to forget that client decisions have a lot to do with the types of win metrics being bandied about.

                Clients dictate many final decisions as well as many operational ones (e.g., my mix of clients includes those that absolutely refuse to go the path of appeal). Other clients I have known have been literally “worn out” by recalcitrant examiners (as noted a few times here, this study shows a very real “per examiner” curse of the random draw).

                The futility of some of these metrics might be better understood if error bars (or error bubbles) were added – they would quickly show that most any “trend” would be swallowed in the noise.

                Further, and perhaps even more importantly, any “benefit” of such (questionable) data should be declined by any patent firm for its promotion efforts. Such use (to me) would draw more negative inferences than any positive spin attempted. I would gladly engage a client in explaining this were a client to offer this “metric” in an engagement negotiation.

          3. 4.2.1.1.3

            PiKa, I think it is more complex than that.

            There is the rejection, the claims, and the client’s goals. In an important patent application, the client sometimes wants a fight for every inch. In a case that doesn’t matter much, the client often just wants an amendment and allowance.

            I think the client’s goals matter a lot in this.

            1. 4.2.1.1.3.2

              I guess too few clients file cases that don’t matter much, at least not initially, and when the cases don’t matter anymore, an allowance is rarely desired. I need some of these clients.

              1. 4.2.1.1.3.2.1

                PiKa, it is a matter of what budget they want to spend on prosecution. I work a lot with claim charts so if an application is not going to map, then the client is less interested in it.

                But, if an application can map to a product/standard, then the client is willing to spend more on prosecution and will want us to put up a big fight for every inch.

  7. 3

    Should we take “grant” seriously? Are these stats really restricted to applications that eventually resulted in the grant of a patent?

  8. 2

    While extensive, there still remains a denominator problem insofar as examiner work may not be captured for the whole sector of applications filed with non-publication requests.

    As I have put forth in the past, the non-publication request should be the primary aim of all initial filings (except those cases with known conditions prohibiting a non-publication request). EVEN IF later publication is highly likely, the initial default condition should be non-publication.

    Because non-publication is involved, the scope of this “denominator problem” is unknowable (absent some further intensive algorithmic processing).

  9. 1

    Interestingly enough, an Actions per Disposal metric is used internally as well (although every type of disposal is counted, including abandonments and allowances as well as RCE disposals and examiner’s answers). It’s viewed as somewhat difficult to extract meaning from. A value somewhere in the mid-2s is usually considered nominal, but values can go higher or lower than that for a variety of reasons, both good and bad.
