78 thoughts on “Patent Quality”

  1. 9

    Dennis –

    I’m sure the recent downturn is due to the ever-moving goalposts regarding 101. I’m sure the examiners don’t know how to do the Alice analysis and are issuing boilerplate rejections on anything that claims a method that might be implemented with a processor, and that the randomly sampled Office Actions are being recognized as deficient by the random-sampling Quality inspectors.

    1. 9.1

      You think reviewers are calling a lot of 101 errors? I’m doubting that bro. That may be why quality is actually suffering to some minor extent, but I doubt seriously that is why reviewers are calling errors.

      1. 9.1.1

        Well, if they are actually reviewing, then they should be calling a lot of 101 errors. We are seeing a significant number of attempts at Alice type 101 rejections and none of them provide the analysis called for in the training materials.

  2. 8

    Off topic: according to Law360, earlier this week SCOTUS approved the Judicial Conference’s change to the pleading requirement in patent infringement complaints. When this comes into force in December, it will be yet another hurdle for serial patent infringement plaintiffs. It remains to be seen if it will help stem the tide of calls for further patent law “reform”, despite logic dictating that it should.

    1. 8.1

      I don’t know if it will stem the tide sufficiently to reverse the inevitability of much-needed patent reform, but I’m sure that opponents such as yourself will continue making the argument that it should.

  3. 7

    All of this misses the most important single quality question, which is: Is there much better patent or publication prior art available which was not considered in the application examination? That is not and never has been a PTO quality test.
    Yet folks wonder why such a high percentage of IPRs, reexaminations and patent suits shoot down patent claims on such unconsidered prior art? How well an examiner applied inferior prior art becomes completely irrelevant when the patent is actually asserted.

    I also agree with comments here that PTAB reversals of examiner rejections should count as quality indicators of such rejections, not just be ignored.

    1. 7.1

      “All of this misses the most important single quality question, which is: Is there much better patent or publication prior art available which was not considered in the application examination? That is not and never has been a PTO quality test.”

      Mmmm, that is not the best quality question. Bro, hate to break this to you, but we only offer a search; we don’t offer a scorched-earth “find everything” campaign.

        1. 7.1.1.1

          I am not sure if this is still the case – and 6 can chime in – but in past discussions, 6 believed that mere claim keyword searches, without first reading and understanding the specification, were entirely appropriate and sufficient to perform the search. Of course, I did point out with particularity why this view was in error.

          1. 7.1.1.1.1

            “but in past discussions, 6 believed that mere claim keyword searches, without first reading and understanding the specification, were entirely appropriate and sufficient to perform the search.”

            That’s the policy now. I generally still do more, though as the subclasses in the CPC grow to gigantic proportions I can no longer maintain even the old style of search. You just can’t be searching, in detail, 4k+ patents for every app.

            1. 7.1.1.1.1.1

              That’s the policy now.

              Cite please (you already have the rule and guidance from me that directly contradicts this supposed guidance of yours).

              Is this guidance of yours in some double secret SAWS type program?

              1. 7.1.1.1.1.1.1

                “you already have the rule and guidance from me”

                The rule and guidance from you? lol!

                Tell you what bro, when you’re my SPE or director or the undersec I’ll take rules and guidance from you. Until then, nah.

                1. Don’t be pedantic, 6 – clearly I meant that I provided the official Office rule and guidance.

                  Still waiting for your policy cite…

            2. 7.1.1.1.1.2

              It is not the policy to just do text searches. Some examiners & SPEs think that just doing a text search using terms from the claims is sufficient. It is not. See MPEP 904.02 & 904.02(a), for example.

              The examiner needs to understand what the invention is before they can ever do a proper search. So at least read enough of the spec to understand the invention. And learn the art so you have an idea what synonyms there may be. In a few arts, just doing a (good) text search may be sufficient (though not by official policy) & in some arts it may not be practical to do a subclass search due to the size of some subs. I understand that. I am just talking about what appears to be official policy in the MPEP.

              Of course, even the above MPEP cites are wishy-washy. 904.02 states “It is rare that a text search alone will constitute a thorough search of patent documents. Some combination of text search with other criteria, in particular classification, would be a normal expectation in most technologies.”

              Personally, in my opinion, these lines sound contradictory. Why would a text search by itself not be sufficient but a text search limited to certain classes or subs be sufficient?? This says the more limited search is preferable to the more comprehensive one, which makes no sense. Maybe they mean doing a text search AND a class/subclass search is the normal procedure, which would make more sense. It’s poorly worded and needs to be written more clearly.

              1. 7.1.1.1.1.2.1

                “It is not the policy to just do text searches.”

                Yep it is. It’s called “hybrid searches” now. But it’s just a fancy way of saying a text search limited to certain subclasses. I guess some bozo in the EU came up with the term “hybrid” when we did the switchover and it has caught on here.

                “I am just talking about what appears to be official policy in the MPEP.”

                What appears to be official policy may well differ from what is official policy that is constantly trumpeted internally.

                “Some combination of text search with other criteria, in particular classification, would be a normal expectation in most technologies.”

                ^that’s the new “hybrid” search. It used to be text searches separate from and in addition to classification searches. But now the subs are so large it’s increasingly rare that people are doing the whole subs. Because otherwise the searches take FOREVER. And the CPC was supposed to alleviate this by being “maintained,” but they’re not breaking down the subs sufficiently, as was supposed to happen.
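
                To make that concrete, here is a minimal sketch of the difference between a plain text search and a “hybrid” search (the same text query, restricted to chosen CPC subclasses). The corpus, field names, and CPC symbols below are invented purely for illustration; this is not any internal PTO tool.

                  # Toy comparison of full-text vs. "hybrid" (subclass-limited) searching.
                  # Documents and CPC symbols are hypothetical.
                  documents = [
                      {"id": "US1", "cpc": {"G06F 16/174"}, "text": "fingerprint based data deduplication"},
                      {"id": "US2", "cpc": {"H04L 9/08"}, "text": "identity based encryption key exchange"},
                      {"id": "US3", "cpc": {"G06F 16/174"}, "text": "chunking and fingerprinting for backup storage"},
                  ]

                  def text_search(docs, keywords):
                      """Plain text search: every keyword must appear in the document text."""
                      return [d["id"] for d in docs if all(k in d["text"] for k in keywords)]

                  def hybrid_search(docs, keywords, cpc_subclasses):
                      """The same text query, but limited to documents in the chosen subclasses."""
                      in_class = [d for d in docs if d["cpc"] & cpc_subclasses]
                      return text_search(in_class, keywords)

                  print(text_search(documents, ["fingerprint"]))                     # ['US1', 'US3']
                  print(hybrid_search(documents, ["fingerprint"], {"G06F 16/174"}))  # ['US1', 'US3']
                  print(hybrid_search(documents, ["fingerprint"], {"H04L 9/08"}))    # []

                In the old style, the classification search (going through everything in the subclass) was a separate pass on top of the text search; the “hybrid” version collapses them into the single filtered query above.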

                “Personally, in my opinion, these lines sound contradictory. Why would a text search by itself not be sufficient but a text search limited to certain classes or subs be sufficient??”

                Because you’re absolutely right and it makes no sense, that’s why. The whole search policy is in shambles. Though searches aren’t too bad these days thanks to the excellent actual classification work done in the CPC.

                “Maybe they mean doing a text search AND a class/subclass search is the normal procedure, ”

                Nope, they USED to say that. But not anymore. It’s all “hybrid” now, and, just as you said, it doesn’t make any sense.

                1. 6,
                  Can you point to anything – MPEP, memo, etc. – that actually talks about a “hybrid search” being the search that is to be performed? I never heard of that and would appreciate something concrete rather than just word of mouth, which usually isn’t worth the paper it’s printed on 🙂

                2. It’s plastered all over these internal search things we get for CPC. I’m guessing it’s coming from the EPO people. Idk if I have anything promulgated outside just yet. It’ll surely trickle forth in not too long.

                3. may well differ

                  I SAWS that…

                  (still waiting for your citation to back up your statement, 6)

        2. 7.1.1.2

          Why? In a nutshell, because the USPTO searches to knock out the independent claim whereas EPO Examiners proceed to figure out what subject matter Applicant will have to narrow down to, and then they do their best to knock out that subject matter.

          That’s in turn because, in the EPO, Examiners get only one go at searching patentability, in an exercise separate to, and preceding, examination on the merits.

          1. 7.1.1.2.1

            It is a good question and certainly search goes to the very heart of what the USPTO should excel at.

          2. 7.1.1.2.2

            Also, because in Europe being a patent examiner is a prestigious, hard-to-get job, often performed by PhDs. And have you looked at their fees? Probably 3x higher? We get what we pay for…

            1. 7.1.1.2.2.1

              Their fees?

              At the EPO? EUR 775 for a Post Grant Review. EUR 1860 for an Appeal.

              Remind me, what is the PGR fee at the USPTO? USD 14,400 or so, isn’t it?

              At the EPO, every case is examined by a 3-member “Examining Division”. Signatures of all three Examiners required, on every Notice of Allowance.

              Three Office languages at the EPO. Every Examiner has to be able to examine in all three. And mostly, EPO Examiners have as first language one that is not an EPO Office language.

              Indeed, it’s a high level job.

              1. 7.1.1.2.2.1.1

                …and yet MaxDrei, as I read the IPKat page, management there seems intent on wrecking that esteem…

                What’s up with that?

                1. Well, anon, EPO management serves the EPO President. The President reports to the governing body, the Administrative Council. Who sits on that Council? The representatives of the 38 EPC Member States. What do they want of the President? More profit? Advantage to their own national Patent Offices? Management is always rational, always aiming to deliver what is required of it by its master. So, “intent on wrecking”? In its own eyes, not.

                  Is this a viable way to run the EPO though? You might well ask. But what can one do, as a voter in an EU Member State? Write to one’s Member of Parliament perhaps. But which Parliament, national or EU, or both?

                2. You are asking me questions of a jurisprudence in which you should be the one supplying answers, not questions.

                  Would it be snarky to suppose that you just don’t have those answers?

                3. (MaxDrei, I think the “gist” of the situation is that EP management is serving itself)

        3. 7.1.1.3

          Lol, the searches I see out of the EPO produce art nowhere near as nice as my regular art. Though you should see a bit of an increase in finding good art on all sides now that we’re all using the CPC.

          1. 7.1.1.3.1

            It may be that you are not representative of the population under consideration…

    2. 7.2

      “All of this misses the most important single quality question, which is: Is there much better patent or publication prior art available which was not considered in the application examination? ”

      Excellent point, Paul.

      1. 7.2.1

        Absolutely.

        Too much examination time is being burned over gamesmanship.

        Much too often, the same unreasonable office action gets punted out repeatedly, relying on unreasonable extensions under BRI as a crutch – because they can, and no one is forcing them to do more searching.

        Too much examination time, including appeals, gets burned by citing references that are superficially a bit similar – but if you take more than 20 seconds to read the reference, you see that it’s totally unrelated and/or mutually exclusive with the claimed invention. Or, where a reference is cited in a manner that is just flat-out technically and literally incorrect.

        Too much examination time is being burned on immaterial matters – objections to the title (not based on any formal requirement, but just because the examiner didn’t like it), nitpicky and implausible objections to grammar or whitespace that the examiner just doesn’t like…

        The proposal that I submitted to the PTO for improving patent quality (by improving examination quality) recommends a reduction of these time-wasting activities – specifically so that examiners can spend more time searching a wider and deeper cross-section of the art.

        1. 7.2.1.1

          objections to the title (not based on any formal requirement, but just because the examiner didn’t like it)

          If you all would stop submitting useless titles to us, we wouldn’t need to make this objection.

          1. 7.2.1.1.1

            OK, I’ll tell my story.

            The applications that prompted a title objection – only a handful, but not the same examiner – had titles like this (these are hypothetical, but approximately the same level of detail): “Fingerprint-Based Object De-Duplication,” or “Identity-Based Data Encryption.” Enough to convey both the general field (de-duplication, encryption, etc.), and the interesting aspect of the claimed techniques.

            The objections did not cite any statute. They simply asserted that the examiner found them “non-descriptive.”

            My reply asserted that 37 CFR sets forth all of the requirements of the title, and that my title met all of them. It then listed a few dozen titles of recently issued patents that were far worse, like: “Data Processing Apparatus” and “System and Method for Authentication” and “Compute Module,” and asserted that the title of my application was far more specific.

            Invariably, the objection was tacitly withdrawn. But my point is: What in the world was the point of that exercise? Why was the examiner wasting time and ink formulating this objection, instead of a substantive examination?!

            This is hardly the only example of such nonsense: I’ve encountered similarly meritless objections to the figures, the abstract, to typographical errors in the specification, to pedantic claim-drafting preferences like “I don’t like ‘at least one,’ please change every instance to ‘one or more'”… meritless restriction requirements, meritless objections to IDSes… all of which take precious time and energy away from *examining the application*.

            If the patent office is serious about improving examination quality, these inefficiencies are an easy target.

    3. 7.3

      I agree Paul. One indication of quality is the percentage of claims found to be unpatentable at the PTAB.

      I will say it again, though, that the most powerful way to increase patent quality is to go back to TSM. Make it the basic rejection, with a few exceptions. TSM would enable a two-step process of the examiner formulating a search and then outsourcing a search for the TSM. It is the sort of process that would vastly improve patent quality.

    4. 7.4

      Great point, Paul. Look at the EPO! At the EPO just now, Examiners are in ever more fear of the in-house Quality Police, to the extent that your sort of “Quality” is at risk.

      How so? Well, the Quality Police monitor ferociously what is easily monitorable. Too hard for them is to monitor whether the Examiner found the best art.

      Examiners plead that finding the closest art takes much time. Managers do not like such pleading, so they are inclined to close their ears to it. I see problems coming in maintaining the EPO’s high reputation, built up over 30+ years, for doing a top-class “quality” search.

    5. 7.5

      Respectfully Paul, the presence or absence of appropriate prior art during the (do the Fn job right the first time) examination is only one part of the examination.

      Yes it is important.

      No it is not the only important thing.

      Yes it is perhaps more easily and “objectively” measured and yes, it is one traditional thing that those post-grant with unlimited (or effectively so) resources WILL pursue. But please do not let this be the only (or even the most prominent) focal point, as that just perpetuates the notion that the Office is to merely check against prior art and move on. After all, the examination is to the law and not just to the prior art.

  4. 6

    Please be sure to let me know when there is remotely as much worthwhile “innovation” going unprotected due to “examiner error” as there is junk flowing out of the PTO. Last time I checked, the PTO still hadn’t put “information is ineligible for protection with patents” at the top of its List of Stuff Every Examiner Should Know.

    As it stands, most of the “appeal appeal appeal!” crud that is taken up to the Federal Circuit gets hammered under Rule 36, although occasionally some of the sillier “non-analogous” art arguments are documented for posterity (e.g., In re Holness, CAFC May 20, 2015).

      1. 6.1.1

        You mean these people are shocked that with “garbage in” you can get “garbage out”? Of course their applications are never garbage… well, until the client doesn’t pay. LOL

        1. 6.1.1.1

          Shocked…?

          You have not been paying attention: rubber stamping of either Reject-Reject-Reject or Accept-Accept-Accept is just not acceptable.

          Do the Fn job right.
          The first time.

  5. 5

    One thing that has resulted in a lot of errors, and has probably skewed the error rate some, has been the changes to 101 & the abstract idea rejections. Many people still miss them, and that results in both allowance errors & IPR errors.
    Probably other things as well, but that has had an impact.

    1. 5.1

      I’m still, today, dealing with Office actions that present 101 rejections citing the machine-or-transformation test, or even a lack of significant post-solution activity. Alice was issued 11 months ago – there is no excuse for that sort of misstatement of the law.

      1. 5.1.1

        I agree with you completely. Too many times people do not apply the most current laws/rules/guidelines. And they are wrong when they do so. Of course, I have occasionally seen lawyers cite out-of-date guidelines in responses also 🙂 Though not nearly as often.

  6. 4

    BINGO! Filings have been down, so the new hot-button issue is quality – but not necessarily the quality of issued patents, rather of the Office actions. Expect better “quality” Office actions, but if you’re measuring “quality” as valid patents, well… how many attorneys, when they get a notice of allowance, will refuse it because it’s not a “quality” allowance? Also, the Office does expect the time to disposal for each application to increase.

  7. 3

    Very interesting.

    First, note that most of the changes are quite minor. The ones that have really taken a hit are the “External Quality Survey” (6.4 to 5.6) and the “Internal Quality Survey” (6.1 to 5). And before we read too much into that – these metrics are accustomed to sharp swings, going from 5.1 (2012 Q2) to 9.4 (2012 Q4) back to 5.1 (2013 Q2).

    Note, also, that those quality surveys have absolutely no effect on the “Quality Index Reporting” metric, which has hovered around the 90% mark without significant changes since 2010.

    None of that is surprising, based on what we learned from (1) the Office of the Inspector General (OIG) Report about how these metrics are obtained, and (2) the presentation by the OPQA during the Patent Quality Summit. These metrics are:

    1) Based entirely on opinion surveys with absolutely no basis of reference – literally: “rate this office action on a scale from 1 to 5.” If that seems like a joke, well, it’s not.

    2) Completely unrepresentative of the USPTO’s work product. The total review process of the OPQA covers about 0.1% of the USPTO’s work product – less than one office action per examiner per year.

    3) Reported as an aggregate metric with a formula that attributes weights to different inputs, based on… nothing concrete. It’s just a mishmash aggregation.

    4) Skewed by arbitrary choices to flag problems as “Needs Attention” instead of “Error,” and unwritten policies privately developed within the OPQA, such as “if an examiner fails to follow a policy within six months of its enactment, we don’t count that as an error.”

    And as a result:

    5) Completely out of sync with reality, even with the USPTO’s other metrics – such as “compliance” and “quality” scores between 90% and 97%, while Patent Trial and Appeal Board metrics reveal that 44% of appealed cases end with the reversal of at least one examiner rejection.

    6) Not taken seriously by anyone else within the USPTO – e.g.: findings of errors by the OPQA are intentionally disconnected from examiners’ performance ratings and bonuses.

    I’ve written extensively about the problems with these quality metrics, and my submission to the USPTO about improving patent quality provided specific recommendations that I hope the USPTO will consider.

    1. 3.1

      Completely out of sync with reality, even with the USPTO’s other metrics – such as “compliance” and “quality” scores between 90% and 97%, while Patent Trial and Appeal Board metrics reveal that 44% of appealed cases end with the reversal of at least one examiner rejection.

      The PTAB metrics include unquantified selection bias, since applicants tend to appeal cases that they think they have a chance of winning. Also, examination practice under compact prosecution requires making both a 112(b) rejection and any prior art rejections that can be made based on a best guess of the claim’s intended scope, so you end up with this situation where the PTAB reverses a prior art rejection because the claim is also indefinite and therefore can’t be construed.

      So, while I’m sure the quality metrics need some work, I don’t think it tells you anything to look at the reversal-in-part rate at the PTAB and directly compare it to the quality metrics without substantial further analysis.

      1. 3.1.1

        The PTAB metrics include unquantified selection bias, since applicants tend to appeal cases that they think they have a chance of winning.

        Well, conversely, examiners and SPEs only allow cases to complete the appeal process if they think they have a chance of winning. When they don’t, they can choose to reopen prosecution instead of filing an examiner’s answer (or earlier, i.e., in response to the notice of appeal or during the pre-appeal conference).

        Moreover, the appeal process favors the examiner – the question is not: “is the applicant’s argument more persuasive than the examiner’s argument over the interpretation of this reference?”, but “is there a clear error in the examiner’s argument?”

        So the fact that despite these constraints, PTAB appeals have a 44% reversal rate on at least one issue – that’s an astounding metric. And comparing it with “quality” self-assessments of 90% reveals a wide chasm of contradiction.

        1. 3.1.1.1

          “but “is there a clear error in the examiner’s argument?””

          Nah bro, it is just plain ol “has the applicant shown reversible error in the action taken by the office?” It need not be clear.

          1. 3.1.1.1.1

            On fact-based issues – such as whether a reference qualifies as prior art – I agree with you.

            But those issues aren’t usually appealed; they’re more issues like broadest reasonable interpretation (“is it reasonable for an examiner to cite a shovel as equivalent to a baseball bat?”) and obviousness (“is it reasonable for the examiner to combine 47 references to reject a 100-word claim?”) and, now, abstractness. These determinations are purely discretionary, and the PTAB will only reverse them if the examiner’s rationale is ridiculously, indefensibly bad… which, apparently, occurs in 44% of cases that reach an appeal decision.

            1. 3.1.1.1.1.1

              My suggestion to you is to focus on the facts as much as possible in 103’s and try to cite art showing your claim constructions. But idk if you’re right about them all the time upholding rejections that aren’t plainly unambiguously indefensibly bad. I see a lot of rejections reversed over at Karen’s blog. Some of which I would not have reversed. Most of those hinge on facts though.

        2. 3.1.1.2

          “Well, conversely, examiners and SPEs only allow cases to complete the appeal process if they think they have a chance of winning. When they don’t, they can choose to reopen prosecution instead of filing an examiner’s answer (or earlier, i.e., in response to the notice of appeal or during the pre-appeal conference).”

          Thank you, thank you, thank you, for pointing out the fallacy in that tired old argument from the examiners.

          “Moreover, the appeal process favors the examiner…”

          True. The “affirm at all costs” mentality of the lifer APJs has, surprisingly and disappointingly, been transplanted to most of the new hires of the PTAB.

        3. 3.1.1.3

          Well, conversely, examiners and SPEs only allow cases to complete the appeal process if they think they have a chance of winning.

          Sure. I agree completely. But the result is that the affirmance rate statistic is even less useful, because there are two types of unquantified selection bias involved.
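
          A toy simulation may make the point concrete. All of the numbers below are invented (they are not real PTO or PTAB figures); the sketch only shows that once both filters operate, the reversal rate observed among decided appeals can sit well away from the underlying error rate, and nothing in the published statistic says how far:

            import random

            random.seed(0)

            # Hypothetical inputs, purely to illustrate the two selection effects.
            BASE_ERROR_RATE = 0.15  # assumed fraction of final rejections with a reversible error
            N = 100_000

            decided = reversals = 0
            for _ in range(N):
                has_error = random.random() < BASE_ERROR_RATE

                # Applicant-side filter: appeals are filed mostly where an error is suspected.
                if random.random() >= (0.60 if has_error else 0.10):
                    continue

                # Examiner/SPE-side filter: shaky rejections often get reopened before decision.
                if has_error and random.random() < 0.50:
                    continue

                decided += 1
                if has_error:  # simplification: the Board catches every real error
                    reversals += 1

            print(f"assumed underlying error rate: {BASE_ERROR_RATE:.0%}")
            print(f"observed reversal rate:        {reversals / decided:.0%}")

          With these made-up inputs the observed rate comes out near 35% against an assumed 15% underlying rate; change the four filter probabilities and the gap moves, which is exactly why the 44% figure is hard to read on its own.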

      2. 3.1.2

        “…so you end up with this situation where the PTAB reverses a prior art rejection because the claim is also indefinite and therefore can’t be construed.”

        Can’t say I’ve ever seen this. As you noted, “examination practice under compact prosecution requires making both a 112(b) rejection and any prior art rejections that can be made based on a best guess of the claim’s intended scope” so if there’s a 112, 2nd/112(b) rejection that should be made, the examiner should make it. If the examiner hasn’t made the 112, 2nd/112(b) rejection, and has only made prior art rejections, the assumption, by both the Applicant and the Board, is that there is no indefiniteness.

        1. 3.1.2.1

          No, my point is that the examiner makes both the 112(b) and the 102/103 rejections because of compact prosecution. The 112(b) gets affirmed and the 102/103 gets reversed because the claim is indefinite.

          I’ve also seen where the PTAB sua sponte adds a 112(b) rejection and reverses the prior art rejection. See, for instance, Ex parte Cadarso.

          1. 3.1.2.1.1

            I’m sure that those scenarios have occurred at some point – but I’m skeptical that they occur often enough to influence the metrics. That’s a very specific pattern that probably arises in a small handful of appealed cases.

          2. 3.1.2.1.2

            One would think that if they’re using the “new” (they’ve been around a while) rules for appeal briefs, they would’ve had to put in a description of what the “means” could be. Putting in that description can take a while.

            Perhaps this is old, though, and they didn’t have to put in the description?

      3. 3.1.3

        so you end up with this situation where the PTAB reverses a prior art rejection because the claim is also indefinite and therefore can’t be construed.

        Now that’s something I’ve never seen, and I find it difficult to believe that it has any significant impact on the 44% reversal rate. Can you point me to an example?

  8. 2

    Does anyone have any idea how the PTO is going to measure the overall quality of their product, valid patents? All they do now is measure the quality of their process.

    1. 2.1

      Not really, Ned; measuring the quality of the process gives you a pretty good idea of the “valid patents.” It isn’t perfect, but what is? I doubt they have the resources to double-examine all of the million patents that were just issued not long ago.

    2. 2.2

      First – the “product” of the USPTO is not the issued patents themselves, since the USPTO has almost no input into them.

      95% of the “quality” of a patent is determined by the significance of the invention and the skill of the drafter. The PTO has no input into either aspect – indeed, they’re both set in stone well before the PTO receives the application.

      The USPTO’s contribution to “quality” is (1) confirming that the application satisfies the legal requirements, and (2) fancy paper and a shiny ribbon. If an application passes all of the criteria of 35 USC and 37 CFR, the USPTO is legally obligated to allow it – even if the application is otherwise deemed to be of low “quality” for other reasons.

      Second – the way that the PTO measures “quality” of issuing patents is by handing allowed applications to the OPQA, and asking if they agree that the application should have been allowed. Agreement in the result = acceptable “quality,” as far as the OPQA is concerned. The OPQA doesn’t even look at the quality of office actions: it only looks at the result.

      You’re going to reply that this is a terrible measurement of patent “quality,” and I completely agree with you – but that is, in fact, the OPQA’s yardstick.

      1. 2.2.1

        Well, David, you do have a point. Simply complying with the statutory requirements is not really “quality.” That is like saying we will give the space shuttle contract to the lowest bidder. You get what you pay for — 40% blew up.

        Schindler really knew how to pump out those artillery shells — just that they were defective in one way or another. But I suspect his factory got high ratings on process quality.

        Then we have the story of Hiawatha, the Indian brave who could shoot more arrows in a given time than any other brave. Just, he did not aim; that took too much time.

        1. 2.2.1.1

          Simply complying with the statutory requirements is not really “quality.” That is like saying we will give the space shuttle contract to the lowest bidder.

          This is partly the fault of the patent community. I am routinely astonished at the poor quality of patent applications that get transferred to me from other firms, often including some big-name and prestigious firms – specifications that wholly miss the point of the invention, that reflect a poor understanding of the technology space, that feature claims with a host of technical issues… it’s disheartening. In this field, there is a very poor correlation between the prestige and wages garnered by any particular firm, and the quality of work from that firm. Statistically, it approaches sheer randomness.

          But the other half of the problem is the PTO: vast amounts of time-wasting activities that are conducted in lieu of legitimate examination. The PTO’s administrative apparatus doesn’t just tolerate many of these activities – it actively encourages and financially rewards them.

          1. 2.2.1.1.1

            Big firms charge too much, the prosecutors can’t bill like the litigators, so prosecution takes second (or third or fourth) priority. They charge more, but the attorneys have less time to do the work. And if you really want to do prosecution, you go to a small shop where people are better and you are appreciated. So that leaves big law firms with either unsatisfied and overworked prosecutors, or incompetent prosecutors. I would much rather go to a 5-10 person shop with a good reputation than any big law firm. (Well, I’d actually do it myself, but hypothetically if I didn’t know what I were doing . . . )

      2. 2.2.2

        David,
        What you say re OPQA & reviewing allowed applications is not completely correct.

        OPQA does 2 types of reviews – allowances & in-process (IPR – non-finals & finals). For the allowance reviews, you are correct that the earlier actions are not reviewed by OPQA. Just the allowance itself, which would involve looking at 101 and 112 as well as prior art issues. For IPR cases, the most recent action is what is reviewed for the correctness of any rejection made, as well as any that were not made but should have been.

        This is why there are 2 different sets of basic results – allowance error rate & IPR error rate. Generally, the allowance error rate is better than the IPR rate. There are also FAOM reviews, which are a separate category on the page Dennis cited.

        1. 2.2.2.1

          For IPR cases, the most recent action is what is reviewed for the correctness of any rejection made, as well as any that were not made but should have been.

          That is completely consistent with my description – for the following reason.

          The OPQA in-process review that you described, and that everyone else has described, looks like this:

          1) Look at the application. Decide the grounds on which the claims should have been rejected: 101, 102, 103, 112p1, 112p2, 112p6, etc.

          2) Look at the office action. Identify which grounds of rejection have been cited.

          3) Compare the list of expected rejections and the list of issued rejections. If they match, declare it high-quality examination. If not, determine that some type of problem (“Needs Attention”) or more serious error occurred, and kick it back to the examiner.
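
          As a toy rendering of step 3 (the grounds below are hypothetical, and this reflects the process as described above, not any official OPQA tool), the check reduces to comparing two sets of statutory grounds, with no look at how any rejection was articulated:

            # Hypothetical grounds-only comparison, per the three steps above.
            expected_grounds = {"101", "103", "112(b)"}   # what the reviewer thinks should issue
            issued_grounds = {"101", "103", "112(b)"}     # what the office action actually cited

            missing = expected_grounds - issued_grounds   # rejections that should have been made
            spurious = issued_grounds - expected_grounds  # rejections that should not have been made

            verdict = "compliant" if not (missing or spurious) else "needs attention / error"
            print(verdict)  # "compliant" -- even if the 103 combined incompatible references

          Nothing in that comparison reflects how well the rejection was articulated, which is the gap being pointed out here.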

          And my point is this: “Quality” is not adequately determined by looking at which grounds of rejection were issued. It also depends on how well the examiner articulated them.

          A 103 rejection can be based on a perfectly valid combination of references. Or, it can be based on a technically inaccurate interpretation of references; or a combination of references that is literally incompatible, or that teach away from each other; or a combination of a wildly inflated number of references.

          A 101 rejection can state a fully detailed, well-reasoned Alice analysis. Or, it can be based on spurious and inarticulate reasoning, or a completely incorrect statement of the standard of law. Etc.

          The OPQA does not appear to consider any of that. In the words of high-ranking OPQA personnel at the Patent Quality Summit, the OPQA’s review involves “determining whether claims were rejected that should have been rejected.”

          1. 2.2.2.1.1

            …nothing but the stain of Reject-Reject-Reject… (and yes, we still need sunlight into the star-chamber “examination” tactics)

          2. 2.2.2.1.2

            More important than just looking at a single action should be looking at the full history of a case.

            There’s nothing ‘wrong’ with making a 103 with two references that may have some incompatibility or teach away from each other. It is the attorney’s job to point that out clearly in the response. The error should be if the examiner maintains the rejection when the attorney is correct and has rebutted the prima facie case.

            It’s hard to see if the line is pointing the right way with only a single reference point.

          3. 2.2.2.1.3

            David,
            I mostly agree with what you say. Though I think that “high quality” is a term that may mean different things to different people.

            The unfortunate thing is that what qualifies as an “error” is guided by what is in the examiner’s PAP & not by OPQA. If the proper ground of rejection is made, almost no matter how badly explained, it cannot be charged as an error. (Personally, I think that it should be an error, but that is just my opinion.) Until that changes, we are all stuck with badly written & explained rejections that rely upon the wrong things yet are still “not an error”.

            Though I do have to disagree with you that a 103 “can be based on a technically inaccurate interpretation of references; or a combination of references that is literally incompatible, or that teach away from each other; or a combination of a wildly inflated number of references.” and still not be charged as an “error”. If a reasonable rejection “could not have been made” (which is the key) then an error should be charged. If a reasonable rejection could have been made with the applied prior art, but was not properly explained (e.g., bad or no citations, bad motivation, etc.), then it is not an “error”. So if the references are “literally incompatible, or that teach away from each other”, for example, it should be an error. If a 101 should properly be made & the rejection relies upon the M or T test when it should rely upon abstract ideas, then no error (unfortunately).

            As to the term “high quality”, I think that the office & you have different interpretations (& I understand your view of “high quality” & largely agree with you on what it should be). I think that the office takes a high level of quality to mean that a high percentage of cases do not contain clear errors (errors as explained above) – that is, the cases do not reach the threshold to be charged as an error. It does NOT mean that the cases are “high quality” work. Just that a high percentage do not contain clear errors.

            The threshold for “no error” is much lower than it would be for actual “high level of quality”. Sort of the old adage of the difference between making a Chevy & a Cadillac.

            I agree that the standards need to be higher. It will just be a long, hard fight to actually get there.

            1. 2.2.2.1.3.1

              It is even longer and harder when the focus is hijacked for “pet” philosophy windmill chases…

    3. 2.3

      They use something called QIR.

      It’s based on actions per disposal (not per application); thus an examiner who does 5 extra FAOMs at the end of the quarter raises their actions per disposal without actually doing extra non-finals in any one case.

      % actions allowed

      Disposals non-RCE

      restrictions after non final

      The PTO’s quality metrics have nothing to do with actual quality. Errors, losses at the Board, etc. aren’t really part of the calculation. It’s like they confused pendency with quality or something…
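
      For what it’s worth, a quick bit of arithmetic (the counts below are made up) shows how the actions-per-disposal input moves with quarter-end behavior rather than with the quality of any individual action:

        # Hypothetical counts for one examiner, for illustration only.
        actions, disposals, applications = 25, 8, 10

        print(round(actions / disposals, 2))     # 3.12 actions per disposal (the ratio used)
        print(round(actions / applications, 2))  # 2.5  actions per application (not what is used)

        # Five extra first actions at quarter's end add actions but no disposals,
        # so the ratio rises even though no single case received extra non-finals.
        print(round((actions + 5) / disposals, 2))  # 3.75

      Nothing in that ratio reflects whether any of those actions was correct, which is the point above.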

      1. 2.3.1

        I should follow up: the actions per disposal is based on the current fiscal year, not a rolling 12-month period, so that may be one reason why Q1 “quality” numbers go down – examiners are more likely to allow a case to dispose of it in Q4 and thus have fewer of them lying around for Q1.

        Doesn’t seem like a very accurate metric.

      2. 2.3.2

        Here’s the thing with using a loss at the board as a quality metric: most of the cases that I have that go to the board are “grey area” applications. Basically, I have a pretty strong position, and applicant also has some strong points. But when I personally as an examiner “weigh the evidence”, I think the evidence weighs against patentability. Basically, for me, a case goes to the board only after I have put my best position forward and applicant has made some valid points but I don’t feel comfortable allowing the case over my position and art of record. When we get to a standstill in prosecution, I will often suggest appeal as the best option for furthering prosecution.

        I was told once during training that you should expect about a 50% affirm/reverse record at the board. If you are affirmed 100% of the time, this is an indication that your actions should be clearer. If you are reversed 100% of the time, this is an indication that your art should be better.

        My current record (when defending my own actions) is about 75% affirm, 25% reverse at least in part. I hardly ever get a total reversal but it isn’t unheard of.

        Basically, if a reversal at the board were used as a quality metric, I could see myself being hesitant to send a “grey area” application to appeal. I would worry that the board would side with applicant and I would get hit with an error and a hit on quality. Even though my art is strong and my position clear, applicant’s arguments could still be persuasive. So, I could see allowing that “grey area” application, and there would possibly be a few more “questionable” patents out there.

  9. 1

    “I have not yet followed-up, but my initial guess is that the change is a measurement change rather than a change in practice.”

    It is. Though it doesn’t appear to be one that has much basis in the actual quality guidelines.
