New Checksum Generator: Patent Eligible

by Dennis Crouch

KPN v. Gemalto M2M (Fed. Cir. 2019)

Judge Stark (D. Del.) sided with the defendants in this case — finding the claims of KPN’s U.S. Patent No. 6,212,662 ineligible under 35 U.S.C. § 101.  On appeal, the Federal Circuit has reversed — holding that the claims are not directed toward an abstract idea:

Rather than being merely directed to the abstract idea of data manipulation, these claims are directed to an improved check data generating device that enables a data transmission error detection system to detect a specific type of error that prior art systems could not.

Slip Op.

KPN is a large Dutch telecom.  The claims here are directed to a “device” to create “check data” (checksum or hash) for error checking digital signals as they pass through the lines.

The basic approach here is to create a checksum before transmitting the data and then recreate the checksum at the other end — if the two match, then we know the data was (probably) not corrupted.
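That round trip can be sketched in a few lines of Python. The XOR-of-bytes checksum below is only an illustrative stand-in, not the algorithm from the patent:

```python
def checksum(data: bytes) -> int:
    """Toy check-data generator: fold all bytes together with XOR."""
    c = 0
    for b in data:
        c ^= b
    return c

payload = b"hello"
sent_check = checksum(payload)          # computed before transmission

# ... transmission happens here ...

received = payload                      # pretend this arrived over the wire
ok = checksum(received) == sent_check   # a mismatch would signal corruption
```

If a bit flips in transit, the recomputed checksum will usually differ — though simple schemes like this one miss certain error patterns, which is the problem the patent addresses.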

The invention here is directed to improving the chance that the data was “probably not corrupted” by somewhat randomly mixing up the original data before calculating the checksum.  That approach helps catch systematic errors that go undetected by a less nuanced checksum algorithm.  The claims use a “varying device” to do the mixing-up, changing bit positions relative to one another, and require that the mixing be varied “in time.”  Although the claims do not spell out the meaning of “in time” – I assume that we’re talking about something like a pseudo-random number generator function that uses the current time as its repeatable initiating point.
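Under that reading (my assumption, not anything spelled out in the claims), the “varying device” might look something like the following sketch, where a shared time-derived value seeds a repeatable bit-position shuffle:

```python
import random

BITS_PER_BLOCK = 8

def permutation_for(epoch: int) -> list:
    """Hypothetical 'varying device': derive a bit-position shuffle from a
    shared time-derived value, so both ends can reproduce the same order."""
    order = list(range(BITS_PER_BLOCK))
    random.Random(epoch).shuffle(order)   # seeded, hence repeatable
    return order

def permute_bits(block: int, order: list) -> int:
    """Move each bit of the block from its old position to its new one."""
    out = 0
    for new_pos, old_pos in enumerate(order):
        out |= ((block >> old_pos) & 1) << new_pos
    return out

# Sender and receiver derive the same permutation from the same epoch:
assert permutation_for(42) == permutation_for(42)
```

Because the permutation changes from epoch to epoch, an error that always corrupts the same bit position lands in a different place in each permuted block — which is what lets the checksum catch it.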

Note here that all the claims require is two devices working in tandem: (1) one to mix up (vary) the incoming data in a way that changes over time; and (2) one to generate check data.

The district court dismissed the case – finding that the claims were directed toward the abstract idea of “reordering data and generating additional data.”  The court noted that the specification might have disclosed an eligible invention, but that the claims were drafted at too abstract a level.

On appeal, the Federal Circuit reversed — focusing on the court’s regular distinction between inventions that improve computer capabilities vs those that invoke computers as tools.

An improved result, without more stated in the claim, is not enough to confer eligibility to an otherwise abstract idea. . . . To be patent-eligible, the claims must recite a specific means or method that solves a problem in an existing technological process.

Here, the court concluded that requiring mixing-up of data that varied in-time “recites a specific implementation of varying the way check data is generated that improves the ability of prior art error detection systems to detect systematic errors.”

[T]he claimed invention is … directed to a non-abstract improvement because it employs a new way of generating check data that enables the detection of persistent systematic errors in data transmissions that prior art systems were previously not equipped to detect.

The defendant also argued that the claims were ineligible because the result was simply data — the claims did not require any particular use. “According to Appellees, without this last step tying the claims to a ‘concrete application,’ the claims are doomed to abstraction.”  The Federal Circuit rejected that argument — holding that the claims are directed toward an improved tool — a checksum creator.  The claims need not “recite how that tool is applied in the overall system.”

Rather, to determine whether the claims here are non-abstract, the more relevant inquiry is “whether the claims in th[is] patent[ ] focus on a specific means or method that improves the relevant technology or are instead directed to a result or effect that itself is the abstract idea and merely invoke processes and machinery.” Quoting McRO; Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016) (finding claims to be directed to a patent-ineligible abstract idea because “the focus of the claims [wa]s not on such an improvement in computers as tools, but on certain independently abstract ideas that use computers as tools”).

Id.

= = = =

One element of the decision that I see as important is that the court focused on what the patentee claimed to be their improvement within the patent document:

The claims sufficiently capture the inventors’ asserted technical contribution to the prior art by reciting how the solution specifically improves the function of prior art error detection systems.

Note that in this case, the specification identifies the prior art approach and its deficiencies and then explains how the invention is an improvement over the prior art. That approach, which had fallen out of favor in U.S. patent practice, appears to be what saved the day for the patentee in this case.

 

= = = = =

Dependent Claim 2 is at issue:

1. A device for producing error checking based on original data provided in blocks with each block having plural bits in a particular ordered sequence, comprising:

a generating device configured to generate check data; and

a varying device configured to vary original data prior to supplying said original data to the generating device as varied data;

wherein said varying device includes a permutating device configured to perform a permutation of bit position relative to said particular ordered sequence for at least some of the bits in each of said blocks making up said original data without reordering any blocks of original data.

2. The device according to claim 1, wherein the varying device is further configured to modify the permutation in time.

 

61 thoughts on “New Checksum Generator: Patent Eligible”

  1. 11

Because it will be fun to watch the inevitable (and unfortunately delayed) de-@th of these junky claims, let’s see if we can get the blog’s resident “experts” on “information theory” to talk frankly and clearly about what is being claimed here.

My first attempt to bring iDan back down to earth failed because iDan wanted to pretend that these incredibly junky claims didn’t apply to (or cover) situations where relatively small amounts of information are transmitted (note: I’m assuming these claims don’t cover error detection in information related to real estate availability or banking information because, as we all know from the CAFC’s insightful jurisprudence, that kind of information is very, very special). The fact is the claims cover the application of very basic logic to the transmission of data — any data — in “blocks”. How do we know that? Because we can read.

    So let’s try again to see if we can get our resident expert iDan to educate the readers out there about the salient facts. iDan (or anyone else) could start by simply:

    1) Showing us all how to apply these claim elements to the transmission of the following information:

    010 010 (already divided into blocks for convenience)

    2) Provide an example of how two of the closest prior art error detection methods would logically detect an error in this transmitted information

    3) Explain how the claimed method, as applied to this data, represents an improvement over those prior art methods.

    Reminder again: these claims aren’t limited to any amount of data, or any rate or accuracy of error detection.

    Thanks in advance, “experts”! This is a very serious blog for very serious people, after all. It’s not just another echo chamber where rich bros gather together to weep and screech about how it’s sooooooo unfair to humanity that their logic claims are getting tanked under 101.

    1. 11.1

      Mal’s bumping this topic up because he’s hoping no one who reads his comments actually scrolls down. For example, he says here:
      My first attempt to bring iDan back down to earth failed because iDan wanted to pretend that these incredibly junky claims didn’t apply to (or cover) situations where relatively small amounts information are transmitted

      But of course, anyone who scrolls down will see my post saying:
      Certain error checking schemes can’t detect systematic errors, even for packets as short as two bits.

      Now, I could leave the libe1 there, but I like pointing out that Mal’s technical knowledge is nonexistent, so:
      So let’s try again to see if we can get our resident expert iDan to educate the readers out there about the salient facts. iDan (or anyone else) could start by simply:

      1) Showing us all how to apply these claim elements to the transmission of the following information:

      010 010 (already divided into blocks for convenience)

      One form of error checking would be to NOR the 1st and 2nd bit, then NAND the result with the 3rd bit, with a result of 1 for the first block. This would be referred to as “check data” (e.g. “a generating device configured to generate check data”).
      For the second block, the device can rotate the bits by one to the right (resulting in 001) (e.g. “a varying device configured to vary original data… configured to perform a permutation [modified in time] of bit position relative to said particular ordered sequence for at least some of the bits in each of said blocks making up said original data without reordering any blocks of original data.”), and make the same calculation. The check data for the second block would be 0.
      The recipient can calculate the same XORs and generate corresponding check data, compare it, and if there’s a difference, can detect that the data has been corrupted.
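iDan’s steps for the first question can be transcribed directly (the function names here are mine, chosen for illustration):

```python
def check_bit(b0: int, b1: int, b2: int) -> int:
    """iDan's scheme: NOR the 1st and 2nd bits, then NAND with the 3rd."""
    nor = 1 - (b0 | b1)
    return 1 - (nor & b2)

def rotate_right(bits: list) -> list:
    """The 'varying device': rotate bit positions by one to the right."""
    return [bits[-1]] + bits[:-1]

block1 = [0, 1, 0]
block2 = [0, 1, 0]

c1 = check_bit(*block1)                 # first block as-is -> check data 1
c2 = check_bit(*rotate_right(block2))   # second block rotated to 001 -> 0
```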

      2) Provide an example of how two of the closest prior art error detection methods would logically detect an error in this transmitted information

      One example would be to do an OR of the first and second bits, then an OR of the result with the third bit. This would give a result of 1. If the second bit is corrupted (e.g. high value is corrupted by noise), the result would be 0, allowing detection of that error.
      Another example would be to AND the first and second bits and then AND the result with the third bit. That would give a result of 0. If the low bits are corrupted (e.g. low values corrupted by noise), the result would be 1, allowing detection of that error.
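Those two prior-art examples, in code (again, toy illustrations rather than any actual prior art reference):

```python
def or_check(b0: int, b1: int, b2: int) -> int:
    # prior-art example 1: OR the first two bits, then OR with the third
    return (b0 | b1) | b2

def and_check(b0: int, b1: int, b2: int) -> int:
    # prior-art example 2: AND the first two bits, then AND with the third
    return (b0 & b1) & b2

good = (0, 1, 0)
assert or_check(*good) == 1     # baseline check data
assert or_check(0, 0, 0) == 0   # high bit corrupted low: detected
assert and_check(*good) == 0    # baseline check data
assert and_check(1, 1, 1) == 1  # low bits corrupted high: detected
```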

      3) Explain how the claimed method, as applied to this data, represents an improvement over those prior art methods.

      Flipping the value of the first bit will result in the same initial values for both of those prior art methods (e.g. 1 for the ORs method, and 0 for the ANDs method).

      However, for the permutation and check data I describe above, the check data for the first block will be unchanged, but the check data for the second block will be different.

      Accordingly, this implementation, which is covered by the claims, can detect an error that is undetectable by an unpermutated version, and by both of those example prior art versions, and is thus a patentable improvement in the technology.
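Putting the three answers together, a short script confirms iDan’s arithmetic: the systematic first-bit flip is invisible to both example prior-art checks, but shows up in the permuted scheme’s check data (all function names here are illustrative):

```python
def or_check(b):
    return b[0] | b[1] | b[2]          # prior-art example 1

def and_check(b):
    return b[0] & b[1] & b[2]          # prior-art example 2

def nor_nand(b):
    # iDan's check: NOR of the first two bits, NANDed with the third
    return 1 - ((1 - (b[0] | b[1])) & b[2])

def rot(b):
    # rotate bit positions one to the right (the time-varying permutation)
    return [b[2], b[0], b[1]]

good = [[0, 1, 0], [0, 1, 0]]
bad  = [[1, 1, 0], [1, 1, 0]]   # systematic error: first bit of each block flipped

# Prior-art checks give identical results for good and bad: error invisible
assert [or_check(b) for b in good] == [or_check(b) for b in bad]
assert [and_check(b) for b in good] == [and_check(b) for b in bad]

def check(blocks):
    # permute only the second block (the permutation varies over time)
    return [nor_nand(blocks[0]), nor_nand(rot(blocks[1]))]

assert check(good) != check(bad)   # mismatch exposes the systematic error
```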

      Reminder again: these claims aren’t limited to any amount of data, or any rate or accuracy of error detection.

      Good thing I used your data and provided 0% and 100% accuracy rates, eh?

      Thanks in advance, “experts”! This is a very serious blog for very serious people, after all.

      Happy to help. And since this is a very serious blog for serious people, I’ll expect you to apologize for your false claims about what I said, and admit that the above shows that you were completely wrong.

      1. 11.1.1

        I’ll expect you to apologize for your false claims about what I said, and admit that the above shows that you were completely wrong.

        I’m kidding of course. More likely is that Mal will just never post again in this thread.

        1. 11.1.1.1

          AiD,

          You mean, Malcolm will conduct himself in his typical “drive-by monologue” mode (and — no doubt — continue to post the same screed on an upcoming thread as if no one ever took issue with drive-by monologue previously)…

          Shockers.

      2. 11.1.2

        iDan, repeating his earlier non-response: Certain error checking schemes can’t detect systematic errors, even for packets as short as two bits.

        That’s nice. “Certain machines” can’t detect anything at all, much less errors. That statement is true of the vast majority of machines! But how on earth (or why) should that fact – in 2019 – affect the patentability of a detection machine? Surely you’re not suggesting that this “new machine” enables the world for the first time to detect every data transmission error with 100% accuracy. Or was the rubicon just crossed?

The point is that your statement about “certain machines” is completely non-responsive to the issue I was perfectly clear about resolving, i.e., what were the existing prior art methods for detecting errors in the transmission of data, including transmission in blocks — at the lower limits encompassed by these claims (which are extremely low!) — and exactly what is the alleged “improvement” in error detection at those low limits.

        Surely you are not going to tell me that the prior art failed to disclose any methods for detecting errors that occurred during the transmission of two bits of information? Oh wait … reading further in your post I see that you’ve again chosen to describe two error detection methods that completely fail to result in detecting any error. Those are the closest prior art examples that you can think of for detecting an erroneous data transmission? Really? And both result in 0% ability to detect errors in transmission of the data that I provided in my example. That’s… convenient.

        One example would be to do an OR of the first and second bits, then an OR of the result with the third bit. Another example would be to AND the first and second bits and then AND the result with the third bit.

        LOL Feel the essence of the structure, folks. Very very serious stuff here! How could anyone characterize these logical manipulations of bits as an abstraction? I mean, how dare they.

Let’s set aside the laughs about Federal Circuit judges arguing about this stuff with two “zealous” attorneys. Okay, let’s not. LOLOLOL! Yes, we definitely need so much more of this in our patent system. It’s a very healthy situation to have judges wading into the “non-obviousness” of “permuting” strings of “bits” for the purpose of … comparing the moved “bits”. Nothing quite smells as wonderful as comparing abstract legal concepts to abstract logical concepts. Plus iDan has blessed us all with the silly jargon that appears nowhere in the claim and is neither helpful nor necessary to understand what is actually “happening” when this “device” (actually three “devices”) is … doing what it does LOL (i.e., some math, and some moving of bits).

Okay, with the laughs out of the way, let’s return to iDan’s miserable attempt at “claim construction”:

        One form of error checking would be to NOR the 1st and 2nd bit, then NAND the result with the 3rd bit, with a result of 1 for the first block. This would be referred to as “check data” (e.g. “a generating device configured to generate check data”).

        Nothing at all new there, of course. We can all agree on that much.

        For the second block, the device can rotate the bits by one to the right (resulting in 001) (e.g. “a varying device configured to vary original data… configured to perform a permutation [modified in time]

        Modified in time is a limitation in claim 2. There is no such limitation in claim 1. Let’s discuss claim 1 first, okay? Thanks.

        [perform a permutation]… of bit position relative to said particular ordered sequence for at least some of the bits in each of said blocks making up said original data without reordering any blocks of original data.”), and make the same calculation. The check data for the second block would be 0.

Okay, so three distinct devices: the checksum generating device, the varying device, and somewhere inside the varying device there is a distinct sub-device that “permutes” (a subset of “varying”) some bits. So that’s it. And that’s an improvement over, say, a machine that simply sends two or three copies of the same six bits of information and lets you compare the copies (just one example of many) because … why? What’s the difference in error rate detection there?

        This was tossed in by iDan at the end of his “claim construction”:

        The recipient can calculate the same XORs and generate corresponding check data, compare it, and if there’s a difference, can detect that the data has been corrupted.

        None of that is in the claim, or required to infringe the claim. Those steps do seem necessary, however, in order to achieve any “improvement” over the prior art error checking methods. Prior art bit processing machines, after all, were perfectly capable of sending “permuted” data, along with checksum info. In fact, for many decades now computers have been capable of doing all kinds of far more complicated manipulations of data. So … how is this an “improved computer”? (Answer: it’s not).

        iDan’s major achievement here is to show that the district court did a far better job of understanding the technology than the Federal Circuit. Congrats?

Like I said: the point here is to help everyone understand the (LOL) “technology” that is claimed. It’s possible that iDan helped but it’s pretty clear that his love for these claims (surely not the self-interested kind of love LOL!) and his fondness for jargon have obscured his ability to discuss the claims in a clarifying manner. Any other takers out there? This is a remarkable and fundamental improvement in error detection “technology” that simply MUST be patent-worthy, after all. Or so we’ve been told. So let’s hear from someone who can write better than iDan. Remember: the claims aren’t limited to large chunks of data …

  2. 10

    Dennis: The basic approach here is to create a checksum before transmitting the data and then recreate the checksum at the other end — if the two match then we know the data was (probably) not corrupted.

    According to iDan, below, the method results in a 0% error rate over the best known alternative error-checking schemes which achieved nothing at all.

    So either one of you doesn’t understand the (LOL) “technology” (it’s “information theory” — which is exactly like rocket science except there’s no rocket and no science), or (far more likely) iDan is obfuscating the issues because he doesn’t want to recognize the ridiculous breadth of these incredibly junky claims (gee, I wonder why).

    1. 10.1

      According to iDan…

      Actually, that’s according to the inventors, the court, and anyone who reads either the patent or the decision and understands the technology. That explains why you’re not included, Mal.

      1. 10.1.1

        News flash for iDan: the number of times in which “the inventors” and “the court” have completely botched the basics to achieve a desired result … is a very big number. It’s especially common when the claim is directed to the (LOL) “technical” “art” of applying logic to data (and it’s not difficult to understand why …).

        The “you don’t understand the tech so s-h-u-t up” rhetoric you employ is just you being an obfuscating @-h-0-le. These claims aren’t complicated. Just read them. It’s doing math and moving some bits around. You were asked to break the manipulations down so we could see how simple the manipulations are and instead you l-@-rded up your description with jargon.

        The purpose of the claim (although not achieved by the claim, as written) is to check for errors. You really can’t expect anyone to accept the proposition that NO PRIOR ART METHOD was capable of checking for errors in the transmission of information of the sort I described above (i.e., “010 010”). So get over yourself, get off your high horse, and discuss the issues honestly. It’s not my fault that the claims were written absurdly broadly and it’s not my fault that the “inventors” and the “court” want to pretend that they aren’t written absurdly broadly.

        Try also to keep in mind that “comparing permuted data” to permuted data or to non-permuted data is something so fundamental and basic that you and I can literally invent dozens of new ways of doing it in a matter of minutes. And “in certain contexts” every one of those methods will be “useful.” Are we “promoting progress” when we claim ownership of those manipulations? Or are we just being selfish p-r-1-c-ks? I’m quite sure I know the answer. Maybe look in the mirror, put down the patent pipe, and see if you can figure the answer out for yourself.

        1. 10.1.1.1

          The “you don’t understand the tech so s-h-u-t up” rhetoric you employ is just you being an obfuscating @-h-0-le.

          And yet, it’s true. See, e.g., your statement that the examples I provided “result in 0% ability to detect errors in transmission of the data that I provided in my example.” That’s incorrect.

          You were asked to break the manipulations down so we could see how simple the manipulations are and instead you l-@-rded up your description with jargon.

          Actually, you asked for specific examples of error detection techniques that were improved by this invention. I provided them. That you think logic gates are “jargon” is your problem, not mine.

          1. 10.1.1.1.1

            See, e.g., your statement that the examples I provided “result in 0% ability to detect errors in transmission of the data that I provided in my example.” That’s incorrect.

            That statement was merely repeating what you yourself wrote, iDan.

            you asked for specific examples of error detection techniques that were improved by this invention.

            No. I asked for the following:

            Provide an example of how two of the closest prior art error detection methods would logically detect an error in this transmitted information [six bits, organized into two blocks]

Try again, please. The game you’re playing is pretty transparent. I guess the one positive thing about that transparency is that it helps show everyone again how absurd it is to inject this kind of abstract g@rbage into the patent system in the first place. Congrats? Maybe just a little bit is due. After all, it’s not the first time someone has played your game here in defense of ridiculously broad abstract “innovations” like those in claim 1.

            1. 10.1.1.1.1.1

              See, e.g., your statement that the examples I provided “result in 0% ability to detect errors in transmission of the data that I provided in my example.” That’s incorrect.

              That statement was merely repeating what you yourself wrote, iDan.

              Nope. I said it results in 0% ability to detect a specific type of error. You turned that into “any” error. Amusingly, you also claimed that if something wasn’t 100% perfect and couldn’t be improved on in any way, that it was unpatentable. That rules out any industry, from pharmaceuticals (“why, they can only cure some diseases, and not others with 100% effectiveness?! Balderdash!”) to machines (“Only 50 mpg for your efficient car? Why not 500?!”) to material science (“Only as strong as steel while a tenth the weight? Why not ten times as strong?!”). S00per serious person, you are.

              Provide an example of how two of the closest prior art error detection methods would logically detect an error in this transmitted information [six bits, organized into two blocks]

              Try again, please.

              Asked and answered, with explicit examples. That you don’t understand them isn’t my problem.

              1. 10.1.1.1.1.1.1

                Asked and answered, with explicit examples. That you don’t understand them isn’t my problem.

                Almost correct. The lack of understanding should not be your problem, but you (mysteriously) continue to make it your problem by choosing to engage. When someone is arguing in such transparently bad faith, the only sane response is to disengage.

                You have carried your point by now, and all good faith readers will acknowledge as much. You have nothing left to prove here.

1. Where is the “bad faith,” Greg?

I mean other than iDan’s l-a-m-e attempt to construe the claim to limit it to error detection in certain contexts where there are no such limitations.

This junk claim and the junkier opinion are actually a great poster child for expunging logic patents from the system.

That’s why we are going to talk about it. If you have some real insights, share them.

            1. 10.1.1.1.2.1

              You do realize that claims are read in light of the specification in view of a Person Having Ordinary Skill In The Art, eh?

  3. 9

    This is not the first time Chief Judge Stark has been reversed by the Fed Cir on 101. The District of Delaware ought to take this, yet again, as a clear message that too many cases are being dismissed on the pleadings due to an alleged “abstract idea.”

  4. 8

    This is the right result here. Who knows whether the disclosure here adequately enabled the full scope of the claims, but these claims are surely §101-eligible.

    1. 8.1

      these claims are surely §101-eligible.

      Very compelling, Greg!

      Who knows whether the disclosure here adequately enabled the full scope of the claims

      Exactly what part of this “improvement claim” is “inventive” over the prior art, Greg? You’re a very serious person.

    2. 8.2

      Who knows whether the disclosure here adequately enabled the full scope of the claims

      I’m sure the disclosure includes examples of working code that functions in every major operating system that existed at least as of the filing date. And numerous examples proving substantial improvements over each of the best-known error checking algorithms in the prior art.

      1. 8.2.1

        examples of working code

        I’m sure that you know that such is not — nor ever has been — actually required under patent law to meet the requirements as viewed by a Person Having Ordinary Skill In The Art.

        examples proving substantial improvements over each of the best-known error checking algorithms in the prior art

LOL – you are aware, right, that Jepson claiming (much less having such as you [merely] want in the specification) is NOT how the law is written, eh?

        That you know this and that you can control your feelings as to what you would rather the law to be — well, those are two very different things, eh Malcolm?

        1. 8.2.1.1

          Bildo: such is not — nor ever has been — actually required under patent law

          So … no claim has ever been invalidated on the basis that the specification failed to provide a single example of a working embodiment?

          That’s your assertion?

          LOL

          I mean, **maybe** that’s true in the “computer arts”, where numerous exceptions to patent law have been created because the practitioners in those arts were (and remain) so incompetent that they could barely write a complete sentence and whose entire disclosure consisted of describing ancient prior art advertising schemes taking place “over the Internet.” Isn’t that your “specialty”, Bildo?

          LOL

          In any event, I’m not interested in discussing the super special exceptions granted to the worst lawyers on earth and their equally intellectually crippled clients (Rich Whitey needs those special exceptions, after all!). All I was doing is responding to Greg’s comment, which evinced a level of credulousness that I found highly amusing.

          1. 8.2.1.1.1

            So … no claim has ever been invalidated on the basis that the specification failed to provide a single example of a working embodiment?

            That’s your assertion?

            That is SO clearly not my assertion that I have to wonder if you even recognize your moving of the goalposts so entirely as to be non-responsive to what I actually stated.

Of course you are “not interested” in THAT – and that you retreat (in extreme haste) to play the “R” card is ALSO not that surprising.

            Come back when you actually care to be on topic.

    3. 8.3

“Who knows whether the disclosure here adequately enabled the full scope of the claims”

      The question is who cares and the answer is too few to get 112(a) enforced. In a perfect world 101 wouldn’t be a grotesque bandaid for 112(a). In this world, without that bandaid the “stakeholders” will prevent the 112(a) wound from being treated.

      1. 8.3.1

        In this world, without that bandaid the “stakeholders” will prevent the 112(a) wound from being treated.

        Thank you for your feelings Ben.

        I would posit that it is simply MORE likely that the actual proper application of 112(a) is simply not how you would like 112(a) to be applied.

  5. 7

    The problem with the USSC’s idiotic “abstract idea” jurisprudence is that the Fed Cir could have ruled either way in this case based on what side of the bed they got out of this morning and no one would have been any the wiser.

    1. 7.2

It gets worse – the clear presence of contradictory Common Law scrivening can only be reflected in a hazy and “couched” directive to the Executive branch administrative agency, and thus, in the hands of examiners (untrained in legal thinking), the “which side of the bed” problem is even more pronounced.

  6. 6

    “Note that in this case, the specification identifies the prior art approach and its deficiencies and then explains how the invention is an improvement over the prior art. That approach had fallen out of favor in U.S. patent practice appears to be what saved-the-day for the patentee in this case.”

    I mean, the fact that patent eligibility in software is based on how an invention improves computer-related technology would seem to imply that U.S. software specifications should be drafted to identify the prior art and how the invention improves the prior art. A patentee should not worry about admitted prior art if a patent can be easily invalidated under 101.

    1. 6.1

      The problem doesn’t even have to be drafted as admitted prior art: “Implementations not utilizing the methods and systems described herein suffer from…”

  7. 5

    Another easy one in my scheme.

Is it a method? Yes.

Is the useful result of the method only an item of information? Yes.

Does the utility arise from human consumption of the information? No, the signal processing equipment consumes the information.

    Eligible.

    Probably obvious as sunrise to PHOSITA, as many of these are…. but maybe not.

The district court’s characterization WAS pretty weak….

    1. 5.1

Is my understanding of checksum too rudimentary?

      01011100 Checksum 4 (4 bits valued at 1)

      Call that a block and vary the order of the bits

      11001100 Checksum is still 4

So, what is the invention?

      1. 5.1.1

        It is a bit rudimentary, Les. That’s a fine definition of a simple checksum, but you’ll note that if there’s an error that causes two of those bits to flip (say, from 01011100 to 10011100), the checksum will still come out at 4, even though the data is corrupted. More complex error checking algorithms can detect that sort of error. However, they may be vulnerable to other errors. This invention addressed one type of those “other errors” – systematic errors that cause the same corruption in successive data blocks. By applying varying permutations to each block, they can detect systematic errors.
        Hence, improvement in the technology.
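iDan’s two-bit-flip example is easy to verify (a toy sketch, not anything from the patent):

```python
def popcount_checksum(bits: str) -> int:
    # simple checksum from Les's example: the count of 1-bits
    return bits.count("1")

original  = "01011100"
corrupted = "10011100"   # the first two bits flipped in transit

# Same checksum for different data: the double flip goes undetected
assert popcount_checksum(original) == popcount_checksum(corrupted) == 4
```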

        1. 5.1.1.1

          I understand that there are more sophisticated error checking algorithms. There are parity checks, and I have a vague recollection of Hamming coding, though that might not be related to error checking. In any event, I don’t think these other techniques are called checksums, are they? And how does anything involving a summation arrive at a different result from changing the order of the elements being summed?
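          For what it’s worth, the check data need not be a plain sum. A simple checksum is permutation-invariant, but order-sensitive check functions such as a CRC are not, so reordering the bits does change the result. A quick hedged sketch (CRC-32 is just a convenient stand-in here, not the check used in the patent):

```python
import zlib

data = bytes([0b01011100])       # the block from the comment above
reordered = bytes([0b11001100])  # same bits, same popcount, new order

# A plain bit-count checksum cannot distinguish the two orderings,
# but an order-sensitive check like CRC-32 can.
popcount = lambda b: bin(b[0]).count("1")
print(popcount(data) == popcount(reordered))      # True: a sum can't tell
print(zlib.crc32(data) == zlib.crc32(reordered))  # False: a CRC can
```

          Any order-sensitive check behaves this way; the varying permutation only pays off when the check function cares about bit position.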

    2. 5.2

      Another easy one in my scheme.

      “Easy” or not with your scheme is not the issue with your scheme.

      Your issue remains the lack of patent fundamentals (starting with utility in the patent sense of the word).

    3. 5.3

      Yes, Martin, this could have been claimed as a “method” to pass the usual first step of a 101 analysis to see if it fits one of the allowed 101 categories. But it wasn’t – it was claimed as “A device.” Does ignoring that bother any of the 101 statutory literalists on this blog?
      [P.S. Many of us will strongly agree that improved data transmission error detection systems are important and should be patentable subject matter if novel, unobvious, and 112 enabled. Even perhaps via or within chips, since insulative spacings and bit charges keep shrinking?]

      1. 5.3.1

        [i]t wasn’t [claimed as a method] – it was claimed as “A device.” Does ignoring that bother any of the 101 statutory literalists on this blog?

        I cannot speak for any other statutory literalist around here, but from a §101 perspective, this bothers me not even one little bit.

        If you invent a machine, describe a machine, enable a machine, and then (clearly and definitely) claim a composition of matter, your claim will fail and deserve to fail, but under §112, not §101.

        Not every grounds for invalidity must be adjudicated under §101, no matter how strenuously some might try to squeeze the whole of patent validity into that single statutory section.

        1. 5.3.1.1

          In other words, there might be a problem with claiming the wrong category of invention, but not a §101 problem. If one has claimed a new and useful quid falling into one of the four categories, that is really as much as §101 requires. Such a claim ought not to fail §101, even if it may turn out—on further inspection—to fail §112.

      2. 5.3.2

        Does ignoring that bother any of the 101 statutory literalists on this blog?

        You seem to be pre-supposing that “101 literalists” should be bothered, but it is not apparent (at all) why you would think so.

        Any “101 literalist” worth their salt can easily recognize that Congress granted the authority/responsibility of defining the innovation to the applicant and it is quite common to “scriven” the innovation across several of the statutory categories.

        You seem to be wanting this to be a bug while it is instead a feature.

      3. 5.3.3

        Does ignoring that bother any of the 101 statutory literalists on this blog?

        Depending on how the method claim is drafted, it would have read on activities that can be practiced in the mind. So, an apparatus claim could have helped the patentee avoid invalidity under 101.

        1. 5.3.3.1

          [A]n apparatus claim could have helped the patentee avoid invalidity under 101.

          What do you mean “could have”? The patentee did assert an apparatus claim, and did (eventually) avoid §101-invalidation.

          1. 5.3.3.1.1

            I just meant that I think claiming an apparatus has helped the patentee, but I did not see a clear passage in the CAFC opinion confirming my thinking.

      4. 5.3.4

        Many of us will strongly agree that improved data transmission error detection systems are important and should be patentable subject matter if novel, unobvious and 112 enabled

        That’s the crux of the political and philosophical problem here Paul. I tend to think MPEG, encryption, & machine consuming algos of many sorts are the sorts of things that the patent system was intended to protect.

        That’s why I believe that the natural, practical compromise is the universe of difference between a person using information for their human purposes, and a non-human using information for their machine purposes. The former is absolutely abstract and the latter can never be.

        No human mind = no abstraction.

        Nothing about this compromise is forbidden by current or customary patent law- in fact, various doctrines such as printed matter are essentially eligibility tests and walk right up to my scheme.

  8. 4

    inventions that improve computer capabilities

    Nothing about this junk patent improves “computer capabilities”.

    Computers could be instructed to compare two sets of data and follow instructions based on the results of that comparison long before this g@-rb@ge application was filed. What’s the distinction here? The source of the data.

    Even the CAFC decision contradicts itself:

    the solution specifically improves the function of prior art error detection systems.

    Really? Prior art “error detection systems” couldn’t accurately determine whether two original “blocks” of bits

    001
    010

    were transmitted accurately? Somehow I doubt that. Note the breadth of these claims. But I’m guessing that the CAFC oral arguments were filled with discussions about streaming high resolution 3D videos of twenty hour space flights and the like.

    1. 4.1

      Prior art “error detection systems” couldn’t accurately determine whether two original “blocks” of bits

      001
      010

      were transmitted accurately?
      Nope (for much more complicated and longer sequences of bits). Error detection has been a long running problem in computer science.
      Your question is like incredulously asking why any alleged disease diagnosing system or method is new because we’ve known for millennia how to tell if someone was dead or not. It doesn’t even begin to approach the problem and really highlights your lack of bona fide scientific or technical experience.

      1. 4.1.2

        for much more complicated and longer sequences of bits).

        The claim isn’t limited to any bit length other than “a plurality”. That includes “two”. So please try again.

        Your question is like incredulously asking

        I’m not “incredulous” about anything. It’s your answer that is “incredulous” because you seem incapable of accepting the simplicity of the abstract concept that is being claimed here.

        Error detection has been a long running problem in the art of communicating via coded information.

        Fixed for improved accuracy and to highlight the abstraction at issue, which is not limited to “computing using a digital computing machine”.

        Error detection has been a long running problem throughout human history.

        Fixed again to make it even better. Note that this fact is not relevant to the question of whether a claim directed to a logic-based method of “error detection” is eligible or not.

        Error detection has been a long running problem in computer science.

        Again, nobody has disputed this (and nobody ever will). On the contrary, it was implicitly acknowledged in my original comment. Error detection will always be a “problem” in every field as long as the communication, transmission, receipt or coding of information is involved somewhere in the chain of events that begins with “desired result” and ends with “actual result”.

        The “technical” issue (if we may call it that) is exactly what “error rates” were observed in the prior art during the transmission of information lengths of the sort that I noted in my comment, and what “improved” error rates are observed when this allegedly inventive (LOL) method is implemented instead on those information lengths.

        You’re a very serious person so I’m sure you’ve thought about this already and you will have a well-written thoughtful response. Certainly you know enough patent law to understand the issues that arise when claims are written very broadly (as these were).

        Looking forward to the reply. The worse thing that could happen is for claims like these to not be granted, of course, in which case nobody will invest the enormous amount of money and research it takes to come up with such “inventions” … like comparing one piece of data to another for the purpose of determining accuracy. Why, that’s almost as amazing as a table cell that references another table cell … which references yet another table cell (I might be able to innovate further but, golly, it would require millions of dollars and so much risk).

        1. 4.1.2.1

          The claim isn’t limited to any bit length other than “a plurality”. That includes “two”. So please try again.

          I’ll happily try making fun of you again. You’re still exposing your ignorance of information theory. Certain error checking schemes can’t detect systematic errors, even for packets as short as two bits. That you don’t understand the technology doesn’t make the claim automatically abstract.
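          The two-bit case is easy to see concretely. A hedged illustration, using parity as the simplest scheme that is blind to this kind of error:

```python
# Parity ignores bit order, so a systematic bit-swap in even a 2-bit
# packet passes the check undetected.
parity = lambda bits: sum(bits) % 2

packet = [0, 1]
swapped = [1, 0]  # systematic error: the two bits transposed in transit
print(parity(packet) == parity(swapped))  # True: corruption goes unseen
```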

          The “technical” issue (if we may call it that) is exactly what “error rates” were observed in the prior art during the transmission of information lengths of the sort that I noted in my comment, and what “improved” error rates are observed when this allegedly inventive (LOL) method is implemented instead on those information lengths.

          Uh, 100% and 0%? That’s clear to anyone who understands what this patent is discussing, including the learned Judges.

          The worse thing that could happen is for claims like these to not be granted, of course, in which case nobody will invest the enormous amount of money and research it takes to come up with such “inventions” … like comparing one piece of data to another for the purpose of determining accuracy.

          No, the worst thing that could happen is that claims like this are held to be unpatentable, in which case people will lock down their inventions with restrictive contracts, keeping everything as trade secrets, exactly the sort of thing the patent system was created to address.

          1. 4.1.2.1.2

            For Malcolm, it is ALWAYS the “worst thing ever” for patents to be granted.

            That should be clear to everyone given Malcolm’s rants over the last 14 and 3/4 years.

  9. 3

    DC: That approach [puffery in the background section regarding the alleged improvement over the prior art][, which] had fallen out of favor in U.S. patent practice[,] appears to be what saved the day for the patentee in this case.

    “Saving the day” is a bit different than saving the patent, of course.

    The CAFC’s reliance on McRo (?!) as an example of an instance of an eligible (LOL) claim that “improves the relevant technology” is a bit dishonest. As everyone reading this blog knows (or should know), McRo’s claims were invalidated on summary judgment as non-enabled cr-@p. In other words, McRo didn’t actually invent what they claimed to have invented (or if they did, they never bothered to disclose that invention). On top of that, like the instant claims here, McRo’s claims were also painfully obvious. There was no “improvement” of any “technology” … unless you believe that using a computer to implement basic logical steps that animators had relied on for almost a century is an “improvement”. And if that’s the case, well, I have a super awesome “app” to sell you that will dramatically improve the life of you AND your dog.

    I haven’t looked yet but probably the CAFC cited Enfish, too (claims to a super awesome “improvement” that just had to be eligible … turned out to be obvious, surprising nobody with the slightest familiarity with the technology) [insert sad trombone noise].

  10. 1

    Instant comment: somewhat uncomfortable for the Fed Ct to hold that the invention is abstract, given the wrapper at the EPO and given the notion, held by some US observers, that “useful arts” has a wider reach than the EPO’s “technical character.”

    The EPO granted patent 751 643 and LG (represented by Fish & Richardson) opposed it under all of the statutory grounds offered by Art 100 of the EPC. Three months later, LG withdrew its opposition.

    1. 1.1

      Instant comment: somewhat uncomfortable for you to be pretending that some type of comparative law approach in any way dictates the proper application of US Sovereign law.

      Comfort drawn from anything EPO-related is not, should not, and cannot be a driver for the application of US patent law.

      1. 1.2.1

        Agreed, but think on this.

        LG got what it wanted, freedom to operate. That’s the usual reason to withdraw. Because once you have got your licence, it makes sense to let the patent survive.

        Ask yourself: how come the patentee folded so fast, before the EPO could express any opinion on the claims?

        1. 1.2.1.1

          Are there facts missing from your story, MaxDrei?

          You appear to want to paint this as some type of total win by the challenger and a total loss by the patentee (or at least a tie between the patentee and LG), but the patentee’s patent surviving because the challenger withdrew, and the patentee obtaining license fees from the challenger, do not necessarily paint such a “win/loss” situation on its face.

          1. 1.2.1.1.1

            Who said anything about LG having to pay any fees for a licence? Not me. The typical “deal”, the “consideration” for the grant of a licence, is withdrawal of the opposition. Ever heard of a “free” licence, anon?

            The essence of the deal is a win-win outcome. LG gets its freedom to operate and patentee is freed of the burden of fighting the opposition.

            But of course there are “facts missing”. I don’t have access to any of the relevant facts. My comments are just to offer a plausible explanation of such facts as we both have.

            1. 1.2.1.1.1.1

              Who said anything about…

              As I said, MaxDrei, your post seemed to omit facts necessary for a full consideration of your conclusion.

              Was there a free license? Unknown.

              So yes, while this additional point you posit may be present, it also is NOT KNOWN to be present.

              “Plausible” is not so when you posit conclusions and do not even recognize that you need additional facts to MAKE the conclusion plausible. That’s just slinging brown stuff on the wall to see what sticks.

              As noted elsewhere, merely “giving reasons” does NOT make such to be reasonable.
