USPTO Survey

The PTO has been conducting a quality survey of examination. Here are some sample questions:

6. Consider all rejections you have received over the past 3 months. How often do you think the rejections made under the following statutes were reasonable in terms of being technically, legally, and logically sound with respect to:

35 U.S.C. 101 Rejections (Rarely – . . . – Always)
35 U.S.C. 102 Rejections (Rarely – . . . – Always)
35 U.S.C. 103 Rejections (Rarely – . . . – Always)
35 U.S.C. 112 Rejections, P1 (Rarely – . . . – Always)
35 U.S.C. 112 Rejections, P2 (Rarely – . . . – Always)

7. In the past 3 months, how would you rate overall examination quality?

Very Poor    
Poor         
Fair         
Good         
Excellent 

10. What should the USPTO focus on to improve examination quality? Please be specific.

We look forward to some interesting results. Since 103 is hot, let’s do our own survey.


24 thoughts on “USPTO Survey”

  1. 24

    Can anyone tell me what questions to expect in an examiner job interview? What GS level should I expect? I have a master’s degree in MIS.

  2. 23

    The poll is now showing as a black box – can you update to show us what the final results were?

  3. 22

    “How anyone can think that either all or no actions are proper is beyond me. I suspect that nearly everyone believes that they perform at a high quality level, yet no one is perfect (“all the time”). I also find it suspicious if an attorney believes that “none” of the rejections he/she receives are appropriate. I must admit to doubting either that person’s capacity for self reflection or competence.”

    I’ll chime in on this subject, because I do think it is very important. With regard to § 101 rejections, I would say that 98% of the rejections I have received are not even close. In fact, I have only received one rejection that made me think twice (and I’m appealing that rejection). The problem with § 101 rejections (based upon my very small sample size) is that mainline examiners are being forced to reject claims by the special examiners (I’ve talked with examiners who said they wouldn’t have rejected a particular claim but were being forced to). Also, after reading one of the PowerPoint presentations (which I discovered on the USPTO website) directed to the new guidelines and the MPEP (which, in my mind, does a poor job of briefing some of the case law), it is readily apparent that the “higher-ups” are feeding the mainline examiners bad interpretations of the law. There are a plethora of common mistakes I could identify, but one of my favorites is examiners’ misunderstanding of what actually constitutes a claim to “software, per se.” There are so many times, after reading a rejection, that I say to myself (or to the examiner in absentia), “you really don’t know what ‘per se’ means.”

    Another problem I find with § 101 rejections, besides the rejection being plain wrong, is the complete lack of analysis that accompanies these rejections. Actually, this is a common theme among the problems I see with rejections (of any flavor).

    When I evaluate a rejection, I look at it two ways: first, is the rejection actually correct; and second, is the rejection supported by sufficient analysis and factual support. I will rarely argue a rejection that I think is correct but lacks sufficient analysis and factual support. The problem I have with the vast majority of the rejections I experience is that the analysis and factual support are so lacking within the statement of the rejection that it is almost impossible to ascertain whether or not the rejection is actually “correct.”

    For example, when I look at a § 102 rejection, for each recited element and each recited relationship found in the claim, I look at the examiner’s rejection and try to find the element/relationship in the applied prior art. However, in the vast, vast majority of rejections that I have reviewed over the last couple of years, there is some (or in some cases, a lot) of ambiguity in the statement of the rejection which leads me to start to guess as to how the examiner is interpreting a particular phrase. Oftentimes, all I get is the citation to a particular paragraph to disclose a clause that may contain, for example, 7 elements and a couple of claimed relationships, and I have to guess as to what elements in the reference are disclosing these claimed features.

    In certain art groups this isn’t a problem because certain elements are so well-defined that they cannot be confused with anything else. However, over the last 2 years, I’ve been working primarily in an art group in which the same element can have several different names and new names for old elements are constantly being introduced into the lexicon of the art. Because of this, it is critical that the examiner make perfectly clear what elements in the applied prior art are being relied upon in the rejection.
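    The element-by-element review this commenter describes amounts to a mapping check: every recited element and relationship should point at something specific in the applied reference. The following sketch is purely illustrative; the claim elements and paragraph citations are invented, and the real analysis is legal judgment, not a script:

    ```python
    # Purely illustrative: model the element-by-element review of a rejection
    # as a mapping from claim elements to the examiner's citations.
    # All element names and citations below are hypothetical.

    claim_elements = {
        "first sensor": "para. [0032]",
        "controller coupled to the sensor": "para. [0032]",
        "threshold comparison step": None,  # examiner cited nothing specific
    }

    def unmapped(elements):
        """Return the claim elements for which no prior-art citation was given."""
        return [name for name, cite in elements.items() if cite is None]

    for name in unmapped(claim_elements):
        print(f"No citation given for: {name}")
    ```

    Any element left uncited is exactly the kind of ambiguity the commenter says forces guessing about the examiner’s interpretation.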

    What I find really disturbing is that I’ve even been told by an SPE that a particular art unit has been directed not to explain its rejections in detail (until an appeal brief has been filed) because doing so negatively affects the number of rejections, on average, an examiner can issue over a given time period. When I heard that a couple of months ago, I was appalled but not surprised. This is not a symptom of bad management unique to the PTO. Instead, it is a symptom of bad management in general, which looks to maximize short-term results while actually making things worse in the long term.

    Although I may get irritated with an examiner for producing rejections that I don’t feel are justified, I understand that this irritation with a particular examiner exists solely because the examiner happens to be, in my eyes, the immediate face of the USPTO. I recognize the root of the problem is not with the examining corps. Instead, the root of the problem lies with a system that is underfunded and understaffed, which leads management both to not invest the time to properly train examiners and to not give examiners reasonable deadlines to perform examinations.

  4. 21

    an examiner – Thanks for the comments.

    This question was taken virtually verbatim from the PTO’s own survey. The extreme responses can be explained by the fact that the question is limited to the past three months.

    I’ll work on some more surveys.

  5. 20

    How anyone can think that either all or no actions are proper is beyond me. I suspect that nearly everyone believes that they perform at a high quality level, yet no one is perfect (“all the time”). I also find it suspicious if an attorney believes that “none” of the rejections he/she receives are appropriate. I must admit to doubting either that person’s capacity for self reflection or competence.

    Quality of examination is one of the most difficult aspects for the USPTO to analyze. The major approach taken, having a separate individual review the allowance for errors, generally will only find mistakes of one sort: the issuance of patents that have prior art or other problems. This review will not find patents that issue with claims that have been unduly limited during prosecution, or applications that are abandoned due to improper rejections.

    However, this survey approach clearly brings out the whiners in the attorney crowd. I think Dennis should create two more surveys in order to fairly address the complete quality question. The first survey would ask what percentage of patents people believe, based on their simple reading without analysis, to be valid. If the number is low, then either most attorneys are doing a terrible job in obtaining valid patents (along with a poor job by patent examiners) or arbitrary opinions are not particularly useful.

    The second survey would be a 360 type survey, to permit examiners to rate the work of patent attorneys. As with the survey of examiner quality, I know that “all” attorneys do not do bad jobs, nor do “no” attorneys present poor quality claims and arguments. At least “some” attorneys, however, for reasons of money, time, effort or ability (or reasons beyond my ken), write arguments that are absurd. I find these attorneys are the most likely to complain, perhaps because they are unable to persuade in actual prosecution.

  6. 19

    There is a lot of bashing of examiners here!!! Anyway, as an examiner [not from the USPTO], there is a lot of pressure on the examiner. As for the RCE mentioned above, I don’t know what it is, but it looks similar to what we have. It becomes difficult for an examiner to go through the whole file completely and understand it fully, since the time allotted is not sufficient and most of the applications are in frontier technologies. And raising a doubt about the application is always better than giving the benefit of the doubt to the applicant: if there is not much hassle, the applicant’s reply is always there in case of appeal, so it is better to raise as many objections as possible before the final decision, since it is better to go through the hard way first rather than take chances with a further possible appeal.

  8. 17

    I’m not sure that there would be a need to make the feedback part of the prosecution record. In fact, it would be important that the feedback not be associated with a particular office action so that the feedback doesn’t influence (positively or negatively) the prosecution of an application.

    What I had in mind was communicating feedback to an “office of quality control” that need only know the particular examiner and art unit to which the feedback applied. Any information associating the feedback with a particular application or a particular practitioner could easily be stripped by the EFS software and thus never stored. That would appear to solve the problem of making undesirable admissions on the record.

    By including the opportunity to give feedback as part of EFS the USPTO could also vary the feedback questionnaire as needed. For example, within the last year the USPTO appears to have been encouraging restriction requirements. A random distribution of questions relating to this particular topic could provide information on how well training is going on this subject.
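    The identifier-stripping idea above could work roughly as in the sketch below. Everything here is hypothetical: the field names are invented for illustration, and nothing like this exists in EFS today.

    ```python
    # Hypothetical sketch of stripping identifying fields from feedback before
    # storage, keeping only the examiner and art unit the feedback concerns.
    # Field names are invented; this is not a real EFS interface.

    ANONYMOUS_FIELDS = {"examiner", "art_unit", "rating", "comments"}

    def strip_identifiers(feedback):
        """Drop any field that could tie feedback to an application or filer."""
        return {k: v for k, v in feedback.items() if k in ANONYMOUS_FIELDS}

    raw = {
        "application_no": "11/123,456",    # would identify the application
        "practitioner_reg_no": "99,999",   # would identify the filer
        "examiner": "J. Doe",
        "art_unit": "2622",
        "rating": 2,
        "comments": "One paragraph cited for seven claimed elements.",
    }

    stored = strip_identifiers(raw)  # only the anonymous fields survive
    ```

    Because the identifying fields are dropped before anything is stored, the feedback could not later be tied to a particular application or used as an admission on the record.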

  9. 16

    Dennis,
    Survey responses that are initiated by those offering the opinions are typically biased. Those who are unhappy are the most likely to complain. I know I would never admit that a rejection was good.

    I think the quality of office actions would improve if there were some way to provide feedback that would reflect on the examiner’s performance (other than an appeal). Unfortunately, talking with the SPE is sometimes talking with the person that creates/authorizes/condones the problem.

  10. 15

    It’s annual review season around here. Customer service is one of the things we are rated on. Usually this means the small stuff (don’t be mean to people on the phone), but if you frequently get an examiner telling you he/she is always right and will never listen to arguments, that might be something you would want to bring to the SPE’s attention. It might also raise the question, in the SPE’s eyes, of whether the examiner is doing a good job on patent examining functions, which are a large part of our annual reviews.

    Just an idea from An Examiner who believes that being reasonable, fair, and intelligible is a crucial part of this job and who will remain anonymous.

  11. 14

    The individualized reviews would provide important feedback, but they simply would not work because the feedback would later be used in litigation against the patentee (as an admission that the rejection was good).

  12. 13

    SMC has a great idea. I would like to be able to rate each office action individually. With EFS-WEB, the PTO has the mechanism in place for implementing such a feedback system.

    I’ve had good office actions and I’ve had lousy ones. The good ones are articulate and cite good references. The bad ones, well . . . During an interview, one examiner told me that there is nothing that I could say that would change his mind. (After I filed the appeal brief, I guess someone did tell him something that changed his mind!) That examiner’s office actions were terrible.

  13. 12

    I agree with the above comments that the survey does not appear to be designed to capture the substantial variations between art units. I have on my desk office actions from two examiners in art unit 2622 (both supervised by Ngoc-Yen Vu) that include very accurate and well written rejections. On the other hand, I also have a final rejection from another art unit in which a primary examiner rejects roughly 60 claims using a total of just two sentences. This final rejection is an improvement over the first office action in that the two sentences vaguely refer to a single 70 page reference rather than several longer references. In the first case are two examiners who clearly put a lot of effort and thought into their work, while the second case includes an experienced examiner who has found that he can earn points by doing less than the minimum. It is not clear to me how I would answer the above survey questions given such wide variations between individual examiners and art units.

    Rather than a one time survey, I would prefer to see a continuous feedback mechanism through which I could provide brief comments on an office action by office action basis. Such feedback from many practitioners could be analyzed on an art unit by art unit, or examiner by examiner, basis. It would also be more likely to produce a more representative distribution of positive and negative comments. The USPTO could provide the feedback to the art units and examiners in a summary form such that a particular examiner-practitioner relationship is not “influenced” by the feedback.

  14. 11

    Examiners have an incentive to make weak and often clearly baseless rejections in an effort to extort an RCE, as Claire states. Examiners receive an extra point for an RCE. Although examiners never allow a case on an advisory action based on a two-month final response, they will quickly allow the case about two thirds of the time in response to an RCE containing the exact same arguments and amendments as in the two-month final response!! This is a huge waste of time and money; but the examiner gets a point for dragging the process on. Eliminating the extra point for an RCE is a start. I am curious: what is the rate of allowance after an RCE for others? Sounds like the basis of a study….

  15. 10

    So, did the USPTO randomly select some users to survey – or is this an open survey we can all access?
    I am less frustrated by the improper 103’s than I am by the improper restrictions, improper refusal to enter an amendment after final to force me into an RCE/appeal, refusal of appeal on a technicality, improper first action after RCE which is just a copy of the first action I got initially, improper 112 P1, yada-yada, etc. 102’s & 103’s are at least based on some objective criteria of dates, and the pseudo-objective “what is taught/disclosed” – at least I know what the basis of the rejection is!

  16. 9

    There is a huge variation between examiners. The good ones can be very good, whilst the bad ones are laughable. Probably most of them are recent hires, or at least I hope so!

    English proficiency is also a problem, particularly amongst Vietnamese examiners. I don’t know why they have more problems learning English than all the multitude of ethnic groups in the PTO, but for some reason they do. They are all US citizens, but the citizenship test requires very little English. Still, one wonders how they get through a hiring interview.

    I’m biased (being a Brit) but I think they should drop the citizenship requirement and recruit more people from English speaking countries. I would get more intelligible Office Actions from, say, an Indian who was not a US citizen than from a Vietnamese who was, all else being equal.

    I am pleased to see that single reference 103s seem to be abating, but we still have lots of 103s that have very dubious motivation to combine. Maybe they are hoping that KSR v Teleflex will retrospectively justify this.

  17. 8

    The problem is the point system. If examiners can squeak an RCE out of an application, they get an additional point for very little work on the first office action that follows.

    Naturally, then, what follows are borderline-to-ridiculous rejections, or throwing everything including the kitchen sink at the applicant and hoping something sticks. My experience says that once in a while you get a quality examiner, but by and large the quality is substandard, and rather than judging patentability on its face, examiners are more worried about racking up points.

  18. 7

    I have the dubious pleasure of working on both biotech and mechanical (agriculture) applications. The 103 rejections I see from the biotech units are far less reasonable, in my opinion, than those I see from the mechanical art units. The inconsistency between examiners in the biotech art units in how they use 103 is also startling. The results of the survey should be interesting, but unless they are correlated with art units, it will be difficult to interpret them. I would need to answer the survey completely differently based on these two separate technologies I work on.

  19. 6

    In my many years of patent prosecution practice, I do not believe I have ever had patent claims rejected except on an occasional 102 basis and mostly a 103 basis. More often than not, a simple amendment to the claims has resulted in allowance of all claims without any undue narrowing of the overall invention. Thereafter, a number of these patents and their claims have been tested in litigation and successfully upheld for the happy clients. If the claims are clearly drawn, I find the patent examiners do a great job in their examinations.

  20. 5

    The Office is going to get crushed on Qs 6 and 7 – for the principal reason that those who are disappointed with quality are more likely to respond than those who are satisfied. This is a major flaw with all such quality control surveys. It will be interesting to see if the Office actually publishes the results b/c they are going to be very embarrassing.

  21. 4

    I think the quality varies significantly from one art unit to another. I’ve been fortunate to deal mostly with art units with reasonable examiners, but I’ve heard horror stories from other art units. I also think that the quality varies from examiner to examiner more than from one action to another.

  22. 3

    I’d like to elaborate on my “some of the time” vote. I think the most prominent characteristic of PTO examination is how uneven the quality is. Perhaps half the time the rejections are right on or pretty close. The other half of the rejections range from iffy to totally ridiculous. Perhaps a quarter to a third deserve failing grades. This comment applies to 102 rejections as well as 103.

    As an aside, the level of chickens*** formal shenanigans seems to be rising, particularly with respect to appeals.

  23. 2

    As a current examiner, I have been told by SPEs to adhere to the 102/103 system complained about by PA above. This was after I had attempted to give 112 2nd paragraph rejections in hopes of having the applicant narrow the scope of the language.
    As I have been instructed, the claims that I rejected under 112 are definite and understandable, technically, so no 112 issues apply. However, the claim wording I rejected should just be considered outrageously broad and, using the broadest reasonable interpretation, should simply be rejected under 103.

  24. 1

    What is considered reasonable? If the criterion is “did I, the patent agent, agree with the rejection?” then I would have to believe that “never” would be the most common answer.

    On the other hand, I believe most rejections are articulated clearly, but demonstrate a weakness in the way the invention is claimed. This can be fixed by a clarifying amendment that doesn’t affect the scope of the invention in any meaningful way…(except of course to provide fodder for litigators, a good reason to make your claims clear from the start)…. I guess my feeling is that many 103 rejections are really a disguised form of 112 2d P. The claim is technically definite under the 112 standard, but too easily susceptible to an overly broad interpretation that is not encompassed by the actual invention.
