Automated Analysis of 101 Eligibility

This is something I hadn’t seen before. You go to this web page, enter a claim, and get a prediction (based, it says, on analysis of more than 30,000 applications and office actions) of whether your claim is eligible or not.

Update: the creator of the service has written a piece explaining the analytics, which is available here.

21 thoughts on “Automated Analysis of 101 Eligibility”

  1. I wrote about as complicated a diagnostic claim as I could think up (multivariate regression analysis of 3 biomarkers compared to a negative control to detect a type of cancer) and got 97% patent ineligible. So the program seems to strip the claims down to a relatively small number of key terms and go from there (like an Examiner). Just sayin’….

    1. Interesting result; a few things could be going on. Without seeing the claim, it sounds similar to many claims that are rejected because they are doing analysis without any linkage to the physical world. See for example the “PageRank patent,” No. 6,285,999, the claims of which the tool characterizes as ineligible, presumably because all that is going on is “scoring” … (link to alice.cebollita.org:8000)

      Also, the training set was pretty heavily skewed toward computer/engineering-related cases, with fewer examples in the medical/biotech space. Not sure if the smaller number of training examples would make a difference in your case, but possible …

  2. When I pasted in the claims from the latest case against the Chicago Transit Authority (Fed. Cir. No. 16-1233),

    I got the result that it’s ELIGIBLE.

    WRONG!

    1. Since it seems to be a favorite sport here to discredit the project based on a single data point, I am going to weigh in. First of all, you didn’t actually check all four of the claims analyzed in that case, because if you had, you would have seen the following results:

      ’003 patent, claim 14 – Eligible
      ’617 patent, claim 13 – Eligible
      ’816 patent, claim 1 – Not eligible
      ’390 patent, claim 1 – Not eligible

      What I find interesting is that the tool came to the same result as the dissent in that case, which characterized the first two claims as patent eligible. In other words, the tool (like Judge Linn) reached different conclusions for the two groups of claims based on differences in their language.

      Of course, the dissent is just that, so in this case, the tool only got it right 50% of the time. That’s not great, but it’s also a small sample set. I know it’s not as fun as shouting in all caps, but feel free to read the paper if you want to see how it performs on a large set of test claims.

  3. A perfect example of GIGO.

  4. Who would be careless enough to upload claims to a website before filing with the PTO?

  5. I tried the Enfish claim, which came back eligible, and then I added the word “financial” at the end and it was then ineligible, LOL.

    1. LOL

      Nobody could have predicted that “artificial intelligence” would make a joke out of a 101 analysis.

      1. Reminds me: we still need Prof. Crouch or one of his academic friends to do a write up on the rise, fall, and re-zombification of the Mental Steps Doctrine.

        (They can use my fave word anthropomorphication as much as they want)

  6. Just a quick check:

    Benson, claim 8: 44% (not eligible)
    Flook, claim 1: 100% eligible
    Diehr: 100% eligible

  7. Remember, all, you’re looking at a predictor based on prior office actions/issuances… so the possibilities are (a) the 101 actions of the Office are incomprehensible, or (b) the data or algorithm/black box analysis is…

    Sigh… if only Congress had fixed this in 1946.

    1. Congress can still fix this. But the proposals I’ve seen so far (AIPLA, IPO) have (IMO) zero chance of passing (is Congress really going to allow patents on golf swings?). I support the patentability of software, but even I cannot support those proposals. Are you aware of any proposed changes to 101 that can be taken seriously?

      1. I agree, patent leather. Someone needs to step up and make a proposal that has a chance of passage.

  8. I tried “A drink comprising gin and vermouth” and it passed. Then I tried “A drink comprising gin and vermouth and a computer.” Failed. Then I tried “A drink comprising gin and vermouth and a circuit.” The score shot up to better than the first one! Moral: When James Bond asks for “a martini, real, not abstract,” the bartender is likely to toss in a circuit or two.

  9. Computers replace federal judges. Ha.

  10. Hahah, what a joke.

    A system comprising:
    A processor configured to compare two numbers and output a result.

    77% Eligible.

    1. Why would that be ineligible under 101?

      Maybe you, like the courts, sometimes confuse 101 with 102 and 103.

      1. Particularly because that claim could easily apply to a hardware logic gate, and I don’t think anyone has suggested that such circuitry would be ineligible subject matter.
        Current 101 jurisprudence seems to be based in “I know in my gut that this is obvious or not novel, but finding prior art is hard, so I’m going to just declare that this is routine and conventional in the art.”

        I wonder if anyone has tried attacking a 101 rejection under the Administrative Procedure Act as a conclusion that is “arbitrary” and “capricious”. Probably not going to be a successful argument, even if it is entirely true.

        1. and I don’t think anyone has suggested that such circuitry would be ineligible subject matter.

          Definitely not true.

          See Alice, wherein claims stipulated by both sides to be “directed to” hardware (i.e., falling safely within the statutory class of hardware) nonetheless were deemed by the Court to be “abstract.”

          This is not a new thought sitting on the table of discussion.

  11. Cool, and the beauty is that the analysis methods seem to be very similar to the USPTO’s methods (Sarcasm). For example, in the linked article, the author states:

    “How can we rely on a system that first reduces an English language sentence, with all of its syntactic structure and semantic import, to a bag of stemmed tokens, and then feeds those tokens to a mathematical model that computes a thumbs up or down, without any understanding of language and without any explicit instruction in the rules of the Alice test?”

    Not sure that the USPTO or the court system does any better here.
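    For readers curious what “a bag of stemmed tokens fed to a mathematical model” looks like in practice, here is a minimal sketch in Python. Everything in it is illustrative: the crude suffix-stripper, the hand-picked token weights, and the linear scoring function are stand-ins for whatever stemmer and trained model the actual tool uses.

```python
import re
from collections import Counter

def stem(token: str) -> str:
    """Crude suffix-stripping stemmer (illustrative, not a real Porter stemmer)."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def bag_of_stems(claim: str) -> Counter:
    """Reduce a claim to a bag of stemmed tokens, discarding all syntax."""
    tokens = re.findall(r"[a-z]+", claim.lower())
    return Counter(stem(t) for t in tokens)

# Hypothetical per-token weights a trained model might have learned:
# positive values push toward "eligible," negative toward "ineligible."
WEIGHTS = {"circuit": 1.2, "processor": 0.3, "financial": -1.5, "scor": -0.8}

def score(claim: str) -> float:
    """Linear score over the bag of stems; a positive total means thumbs up."""
    bag = bag_of_stems(claim)
    return sum(WEIGHTS.get(tok, 0.0) * count for tok, count in bag.items())
```

    Note how such a model reproduces the quirks reported in the comments above: appending “circuit” or “financial” to an otherwise identical claim shifts the score, because the model sees only token counts, not meaning.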

    1. Not sure that the USPTO or the court system does any better here.

      If you’re “not sure,” then you have no business getting anywhere near the patent system.

      Truly the patent maximalists are the least intelligent attorneys on the planet, bar none.
