Report: Examiner Billing Fraud at the USPTO

The report is here. This is the summary:

For the 9-month period, the OIG reviewed specific work activities of approximately 8,100 patent examiners and identified 137,622 unsupported hours. This equates to a one-year average of nearly 180,000 unsupported hours. For the 15-month period, the OIG analyzed work activities for roughly 8,400 examiners and identified 288,479 unsupported hours.

The OIG adopted a conservative approach in considering the evidence. These considerations resulted in the OIG excluding a significant amount of unsupported hours in order to ensure that the methodology did not assume unfairly that a particular examiner was not working. Based on certain examiner records, however, the OIG found that the total unsupported hours over the 9- and 15-month periods could be twice as high as reported in this investigation.

The OIG’s analysis further determined that for the 9-month period:

  •   The 137,622 unsupported hours equate to nearly $8.8 million in potential waste.
  •   Approximately 28% of the total unsupported time consisted of overtime hours. The overtime hours equate to over $2.1 million in potential waste.
  •   296 of all examiners covered in this analysis had 10% or more unsupported hours and accounted for 39% of the total unsupported hours. The USPTO paid over $1.4 million in bonuses to these examiners.
  •   226 of those 296 examiners accounted for over 42,000 unsupported hours and also received above-average annual performance ratings.
  •   36 of the same 296 examiners claimed unsupported hours equivalent to three days for every 80 hours of computer-related work time.
  •   The total unsupported hours could have reduced the patent application backlog by 7,530 cases.

For the 15-month period:

  •   The 288,479 unsupported hours equate to over $18.3 million in potential waste.
  •   Approximately 28.5% of the total unsupported time consisted of overtime hours. The overtime hours equate to over $5.4 million in potential waste.
  •   415 of all examiners covered in this analysis had 10% or more unsupported hours and accounted for 43% of the total unsupported hours. The USPTO paid approximately $7.8 million in bonuses to the 415 examiners.
  •   310 of those 415 examiners received above-average annual performance ratings and accounted for nearly 98,000 unsupported hours.
  •   56 of the same 415 examiners claimed unsupported hours equivalent to three days for every 80 hours of computer-related work time.
  •   The total unsupported hours could have reduced the patent application backlog by approximately 15,990 cases.

The OIG also found that USPTO policies limit the agency’s ability to prevent and detect time and attendance abuse. For example:

  •   The USPTO does not require teleworkers to log in to their computers on workdays if they do not telework full-time.
  •   Although the majority of examiners with unsupported hours received average or better performance ratings, the USPTO requires that only poor performers provide their supervisors with work schedules.
  •   The USPTO does not require that on-campus examiners use their USPTO-issued ID badges to exit through the access control turnstiles during weekday working hours.
  •   The data suggest that USPTO’s production goals are out of date and do not reflect current efficiencies.

20 thoughts on “Report: Examiner Billing Fraud at the USPTO”

  1. Patent examiners aren’t lawyers and are not subject to bar discipline, so how is this report related to legal ethics (in contrast to “ethics” as normal people understand and use the term)?

  2. While I agree that this sort of report could be useful, and is the type of thing that Congress should be monitoring, the tone and sentiment of where it is likely coming from will probably undermine the likelihood of the outcome being anything truly useful and positive. Perhaps the culture at the US PTO needs to be improved or modified to be more honest, or perhaps it is just fine the way it is. Or perhaps some accommodations need to be made that improve examiners’ working conditions. (For example, do we really care when the examiners work their hours so long as they work the hours they are being paid for?)

    If you demonize examiners, they will become the demons that you feared; if you honor them, they will become honorable (that is at least what I believe other studies show about other workplace environments).

    Note that the report normalizes the data for the nine-month period to a one-year period (inflating the number of hours unaccounted for), but does not similarly normalize the hours for the 15-month period, indicating (although not conclusively) that its authors have a bias. (Other indications of bias are mentioned elsewhere in my comments, in what the report leaves unmentioned.)
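
    The commenter's normalization point can be checked with quick arithmetic (a sketch in Python; the hour totals come from the report summary above):

```python
# Unsupported-hour totals from the OIG report summary.
nine_month_hours = 137_622      # 9-month review period
fifteen_month_hours = 288_479   # 15-month review period

# The report annualizes only the first figure ("nearly 180,000" per year).
# Annualizing both makes the two periods directly comparable.
print(round(nine_month_hours / 9 * 12))      # annualized 9-month figure
print(round(fifteen_month_hours / 15 * 12))  # annualized 15-month figure
```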

    As mentioned above (in my initial comment), the report acknowledges that there are times it considered it not “possible” for the examiner to have been working when, in fact, cases were indeed checked out and work activities were apparently occurring.

    The time sheets are signed under penalty of perjury, and studies seem to have shown that most people who sign that they are abiding by a code of ethics (even when they do not know what that code says) are usually honest about what they report (unless they are operating in a culture of corruption). Consequently, many of the unaccounted hours (although probably not all) can likely be attributed to flaws in the methodology and/or honest human errors.

    The report doesn’t attempt to compute how many hours could have been worked beyond those that the examiners claimed to have worked, which would give us a better idea of how inaccurate its methodology is.

    The report doesn’t attempt to figure out how many hours examiners worked beyond those required (which may be in part determinable from the hours that were claimed to have been worked).

    For example, when I was an examiner, I worked somewhere around 240 hours per year beyond those required (which is way more than the 22/27 hours per year that went unaccounted for in this report).

    According to the report’s methodology, if an examiner was approved for working from home, and is reading a reference – to determine whether to use the reference in a rejection – but with their computer off, the time would not be counted, even though the examiner was indeed working.

    There was no effort to determine the culture at the Patent Office – is it one of honesty or not? If the culture is one of honesty, then likely the 22/27 hours per year are attributable to flaws in the methodology and/or honest human errors. If the culture is one of corruption, then likely even the report’s high estimates are way too low (after all, the data only indicate when examiners clocked in or turned on their computers – for all we know, every examiner turned around and went home after clocking in, or went to sleep after turning on their computer).

    For context, the US PTO is a money-making operation and makes way more money than may have been lost due to inaccuracies in the time reported.

    When I was an examiner, I met another examiner from a small town in the South who told me that when he was hired, he was so proud and honored to have been hired by the US government, because that was the attitude of the people where he came from. Of course, once he saw the way government workers are viewed in DC, that all changed. Although the culture this examiner came from may have been unusual, what we need is an environment at the US Patent and Trademark Office, in Congress, and in the country that fosters the type of idealism, patriotism, and commitment to the job that that patent examiner came to work with – not the kind of nonsense that is likely behind this report, which killed that examiner’s view of what it means to be a government employee.

        1. Absolutely correct, temprand.

          But maybe read the report yourself and see that it:
          (1) makes his ideas go away,
          (2) shows them to be wrong, and
          (3) rebuts and discredits them.

          For a quick example, the USPTO is NOT a “money making operation” – and cannot be – by law.

            1. Except not, Atari Man. Take a look at the USPTO budgeting process and its charter as an executive agency.

              Being too loose with the description of “money making” certainly does not help either the topic or anyone else here.

                1. Does not matter Atari Man, as you are simply not correct in either regard.

                  Do you not understand the way THIS executive agency operates?

  3. “The data suggest that USPTO’s production goals are out of date and do not reflect current efficiencies.”

    That last point is not at all supported by the data collected by the OIG, which failed to even consider the underreporting of examining hours by examiners. The fact of the matter is that though there are examiners who produce work products faster than expected, there are certainly many examiners who produce work products slower than expected and work unreported, unpaid overtime to make their baseline quotas. Not all examiners are equally efficient or skilled.

    Moreover, at least in my technology, the examiner’s job has become significantly more difficult with the advent of electronic search tools. When it was just the art you could get your hands on to consider (and prior to KSR), it was simple to say that an invention was novel and non-obvious. Now the references on an IDS for a single case are often more voluminous than the entirety of a 1976 (or 1986) shoe box for an entire class of art, and the results of an electronic text search or subclass search are far greater still. That’s not to mention the explosion in the complexity of the claims under examination and the thorny issues around 35 USC 101 that were not even imagined at the time the production quotas were set.

    (FYI: This time is not being claimed. I’m actually on leave all day today.)

    1. a,

      An interesting point, that if indeed valid, is also part of the problem with the (internal) Office “quality” and efficiency metrics.

      If indeed there is a substantial portion of examiners underreporting so as to simply “make the cut,” then this perpetuates the unreasonableness that ALL examiners have to deal with pertaining to an unrealistic allotment of time in order to perform what applicants have paid for.

      The “better” path would be to fully and accurately report ALL time that was expended (and if you need “protection” for the so-called underperformance due to unrealistic times, then THAT is what you have a union for).

    2. See also the preferred thought experiment on the main blog concerning the normal distribution and the fact that the “players” acting on the low side of the distribution are not going to be the same players acting on the high side of the distribution.

  4. It seems to me the report makes a big deal out of essentially nothing, especially in regard to the examiners that received productivity awards for exceeding their quota. The examiners have a quota that is tied to the number of hours that the examiner claims were worked (or at least that is the way it was when I was an examiner). If the examiner says she/he worked more hours, she/he needs to examine more applications to meet the quota. If the report is correct, and the examiners really did skip work at times they said they were working, they still met their quota, so I am not sure I see the harm.

    Additionally, at least at a glance, the report seems like it may be flawed. The report acknowledges that cases were checked in and/or out of the system outside of the hours that the examiners claimed to have been working, and goes on to state that these are discrete events; no attempt was made to figure out what hours the examiners may have been working outside of the hours when the report considered it “possible” for them to be working, but clearly at least some activity was occurring outside of those hours. Although the report states that this time is only 0.3%, as it acknowledges elsewhere regarding the PALM records, the 0.3% apparently reflects discrete events (checking a case in or out), which would likely be only a small fraction of the time that the examiner actually worked (and, as I mentioned before, the report acknowledges that it did not try to reconstruct that time). On average, we are talking about 22 hours (for the 9-month period) to 27 hours per year (for the 15-month period) per examiner studied, and it would seem that a fair amount of that may be accounted for by examiners possibly recording the wrong time of day, but the correct number of hours.

    I think that the numerous ways of quantifying what supposedly could be accomplished with the hours that supposedly were not worked are inappropriate, when it is not so clear that these hours were indeed not worked.

    I think that to know what is truly happening with examiners’ work hours, the PTO needs tighter controls on record keeping. However, how many engineers and scientists actually sign in and out every day worked, or have such tight controls over records of exactly when they worked? Examiners’ pay may be competitive with that of an engineer having the same background, but it is definitely not very competitive with that of an IP professional, and therefore examiners should have a work environment that is similar to that of a scientist/engineer. Do we really want to treat examiners as workers that cannot be trusted? That does not seem like it would be good for the quality of the work that the examiners produce, nor for their productivity.

    Although it is popular among Republican lawmakers (who do not necessarily have stellar ethics records, and who do not always have exactly stellar attendance records, either) to bash government employees, perhaps a better tactic would be to see what can be done to better support the examiners and other public employees in doing their jobs more efficiently, so as to create an environment that fosters a sense of loyalty and dedication rather than a sense of being mistrusted, outraged, and mistreated. Generally, when you treat your employees better, they do better work.

    1. I really hate this type of report. I do agree with you that it’s really a lot of hoopla about nothing. Let’s look at the math: $8.8 (or $18.3) million in potential fraud money compared with the USPTO budget of $3,224 million. That’s equal to 0.27% (or 0.57%) of the USPTO budget. I do agree with the notion that we need to combat waste. But I fear this is more political fodder than anything else.
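
    The commenter's percentages can be reproduced directly (a sketch in Python, using the budget figure cited in this comment; note that the 15-month figure rounds to 0.57%):

```python
budget_millions = 3_224      # USPTO budget as cited in the comment (USD millions)
waste_9mo_millions = 8.8     # 9-month potential waste
waste_15mo_millions = 18.3   # 15-month potential waste

# Share of the budget represented by each potential-waste figure.
print(f"{waste_9mo_millions / budget_millions:.2%}")
print(f"{waste_15mo_millions / budget_millions:.2%}")
```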

      One thing that people keep forgetting is that “unsubstantiated hours” do not equal fraud. There may be other explanations for why this is the case. It’s a shame that the report does not go into detail. It leaves a lot to a person’s imagination.

      1. Once again, the logical fallacy of attempting to pooh-pooh the problem by spreading out the millions stolen over the entirety of the budget or over all examiners…

        Wake up people.

    2. Fraud IS a big deal, and the chorus of examiners seeking to minimize the fact that this IS a big deal is merely a sign of how decrepit the Office culture has become.

      1. Does anyone else find these numbers highly suspect? The OIG is claiming $7.8 million in bonuses were paid to 415 examiners over a 15-month period. That works out to $18,795 each. Even if all of them are getting the gainsharing and the Tier II workflow bonuses, I’m thinking that number is significantly inflated.
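
      The per-examiner division behind the $18,795 figure can be checked the same way (a sketch in Python; the $7.8 million total is the 15-month bonus figure from the report summary above):

```python
bonuses_total = 7_800_000   # report's "approximately $7.8 million" in bonuses
examiners = 415             # examiners with 10% or more unsupported hours

# Average bonus per examiner, as computed in the comment.
print(round(bonuses_total / examiners))
```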

          1. I did read it, Anon. They took supposedly conservative estimates on hours (which are actually debatable), but this wasn’t claimed overtime they’re talking about. My post was about the bonuses part of the report, i.e., gainsharing and workflow bonuses. Even if you assume that all 415 examiners they’re talking about were primaries, and they hit the highest bonuses possible, it’s pretty close to mathematically impossible for those 415 examiners to have received $7.8 million in bonuses. I’d have to sit down and run the numbers on a GS-14, step-10 primary to see, but I’m fairly certain that at least that number was inflated. That’s not the only one that seems to inflate the dollar amounts we’re talking about, but I digress.

            I’d also point out that they made absolutely no attempt to calculate the number of voluntary overtime hours that most examiners work at least occasionally to meet their quotas. When you look at the report in depth, it’s pretty clear that (a) the authors never sat down with any examiner to find out what the job entails, and (b) the conclusions were decided before any supposed investigation took place.

            The quality of office actions is already a significant issue, and if the ignorant recommendations suggested in this report were implemented, resulting in less time being spent per application, the resulting further drop in quality is guaranteed.

            1. You MISread the report.

              The report is a large-scale summary and does not say that the sum total of hits for the bonuses was STRICTLY limited to the smaller number of examiners. Read the report again – note the twenty-some percent figure used at the beginning of the section.

              As to the voluntary overtime hours – you too are mixing apples and oranges. See the link provided to the main blog and the example provided about the distribution curve. You are attempting to mix the two very different tails of that distribution curve to somehow “even out” the two very different “offenses.”

              You really need to stop trying to spread out the fraudulent actions in order to make those actions less fraudulent. It simply does not work that way.

              Further, as your other examiner friends also state on the main blog, there are two metrics in play here – time and quality – and while they are indeed interrelated, you cannot merely assume, as you do, that you have the situation you think you do. More likely, what you have is a steep learning curve, with those at the start naturally having poor quality (but earnestly trying, in most cases), and then more “seasoned” players who game the system and just don’t give a rip about “quality” but have “enough” not to worry about the schlock they put out – for these people, “time” already being misrepresented and gamed has little one-to-one correlation with quality.

Comments are closed.