15 thoughts on “Adjusting to Alice: USPTO patent examination outcomes after Alice Corp v. CLS Bank International”

  1. 3

    While it's nice to be able to get more patents allowed for clients, this graph is a rather significant red herring as to the problem with 101. The Federal Circuit made clear that it owes no deference to the patent eligibility guidelines issued by the USPTO that are followed by Examiners and the PTAB. They even found one of the USPTO Examples ineligible when it was listed as eligible. Getting a patent means very little if the courts will just turn around and rule the claims ineligible.

    1. 3.1

      They even found one of the USPTO Examples ineligible when it was listed as eligible

      Cleveland Clinic case — notably on an example from BEFORE the 2019 Patent Eligibility Guidelines were published.

      1. 3.1.1

        Definitely true, but it puts into doubt all of the examples the USPTO has generated whose facts don’t exactly parallel a specific court decision. They’ve already proven they can’t create reliable examples and the courts will give them zero deference, so I’m very leery about relying on them ever (and I’ve heard that from other patent attorneys too). The PEG will work with an Examiner to get claims allowed, but that won’t mean much if the claims aren’t defensible if challenged or asserted.

        1. 3.1.1.1

          whose facts don’t exactly parallel a specific court decision.

          I posit that it is worse: the court actions (and the veritable Gordian Knot of 101 jurisprudence) put into doubt all examples the USPTO has generated whose facts DO exactly parallel a specific court decision.

          It is NOT the fault of the USPTO in attempting to generate reliable examples.

          It is the fault of the Court (and the flow down to the lower courts) that has created a situation of irreconcilable conflicts, and it is VERY MUCH dependent on the panel draw as to which piecemeal parsing of decisions is put together to justify WHATEVER ends are reached.

          In essence, what the Court has done is unleash a torrent of Constitutional violations: separation of powers (absolutely incorrect maladjustment of Common Law law writing to REwrite 101), void for vagueness (because the extensive use of improper Common Law law writing has created irreconcilable conflicts), prospective advisory opinions (based on the notion that something should be ineligible because of what MAY happen in the future), and on and on.

          The problem is NOT that the Office is trying to deal with this mess.
          The problem is that the Court has generated this mess.

    2. 3.2

      It seems like you’re assuming that the purpose of the Iancu test was to improve the patent system and the strength of patents rather than to simply produce more grants. I think the Iancu test has achieved exactly what it set out to achieve.

      /to the tune of The Time Warp, from The Rocky Horror Picture Show/

      Iancu: It’s just a jump to Diehr’s precedent…

      IPO Chorus: And then a step to Rich’s intent!

      Iancu: With examiners’ hands tied…

      IPO Chorus: You bring the grants up nice!

      Let’s do the time-warp again!
      Let’s do the time-warp again!

      1. 3.2.1

        Ben,

        You say far more than you realize with your jibe here.

        And you wonder how I can connect you to the same Ben that upvoted nearly everything that like-minded anti-software Malcolm posted in the days in which DISQUS was used on this site…

    1. 2.1

      Your ‘math’ skills and conclusions drawn from your graph reading say far more about you than you may realize…

  2. 1

    Why does this image need to be animated?!

    The definition of “examination uncertainty”:

    Our second patent examination outcome metric, called “Section 101 first action examination uncertainty,” captures the variation across examiners in the proportion of rejections for patent-ineligible subject matter. This metric is calculated using data for each examiner within specific technologies at the first action stage of patent examination. To compute this measure, we calculated the rate of first office action rejections for subject matter eligibility for each examiner in a USPC technology and for a specified time period. That rate is defined as the number of first office actions containing a rejection for patent subject matter eligibility divided by the overall number of first office actions by that examiner in the USPC and time period. The variance was computed across those examiner rates in each USPC using half-year time periods (January–June; July–December for Alice, and Feb.–July; Aug.–Jan. for the Berkheimer memorandum and 2019 PEG). The Section 101 first action examination uncertainty metric for each interval of time is an average of the variance across the USPCs in the Alice-affected technologies and likewise, an average of the variance across USPCs in the control technologies.
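    For concreteness, the computation the quoted definition describes can be sketched in Python. This is my own illustrative reconstruction, not the authors' code; the function name and the input format (a list of per-first-action records) are assumptions:

    ```python
    from collections import defaultdict
    from statistics import mean, pvariance

    def uncertainty_metric(first_actions):
        """Sketch of the "Section 101 first action examination uncertainty"
        metric as quoted above, for one half-year period and one group of
        technologies. `first_actions` is a list of
        (uspc_class, examiner_id, has_101_rejection) tuples."""
        # Tally first actions and 101 rejections per (USPC class, examiner).
        totals = defaultdict(int)
        rejections = defaultdict(int)
        for uspc, examiner, has_101 in first_actions:
            totals[(uspc, examiner)] += 1
            rejections[(uspc, examiner)] += int(has_101)

        # Per-examiner 101 rejection rate, grouped by USPC class.
        rates_by_class = defaultdict(list)
        for (uspc, examiner), n in totals.items():
            rates_by_class[uspc].append(rejections[(uspc, examiner)] / n)

        # Variance of examiner rates within each class (needs at least
        # two examiners), then averaged across the classes in the group.
        variances = [pvariance(rates)
                     for rates in rates_by_class.values()
                     if len(rates) > 1]
        return mean(variances) if variances else 0.0
    ```

    On this reading, a value of 0 means every examiner in each class rejects at the same rate; larger values mean outcomes depend more on which examiner an application draws.
    
    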

    1. 1.1

      Ben, both of your questions are exactly the same two (in exactly the same order) that came to mind when I first saw this graphic. I thank you for providing the “uncertainty” definition, although I am not really sure that I understand what that Y-axis is meant to represent even after I have now read the definition.

      Do these numbers measure anything actually meaningful?

      1. 1.1.1

        >> I am not really sure that I understand what that Y-axis is meant to represent even after I have now read the definition.

        They’re basically looking at every USPC class, and aggregating the difference between %_101_rejection_for_examiner_i and %_101_rejection_for_USPC_on_average, and then averaging the metric across all USPC classes. As Paul alludes to below, this is basically a measure of examiner consistency within USPC class.

        I understand why Iancu likes the metric, but I am not confident about how meaningful it is. I’m not sure whether we should expect a given USPC grouping of examiners to have similar percentages of 101 rejections over a 6-month period. I suppose it depends in large part on how many examiners are in a given USPC grouping on average, but I have no idea what that number is.

      2. 1.1.2

        This measure could mean that the new applications are distributed in such a way that applications with claims whose eligibility is more questionable are preferentially given to Examiners who tend not to make 101 rejections, and vice versa.

        1. 1.1.2.1

          I do not see how one would arrive at that interpretation from the data presented.

          Are you including some (unpublished) additional information to make that leap, PiKa?

    2. 1.2

      If the vertical axis of this chart is “the variation across examiners in the proportion of rejections for patent-ineligible subject matter,” that sounds merely like an examiner consistency measure. That is, it is not a measure of the % of applications with first action 101 rejections, which is what patent application prosecutors would be more interested in?
