by Dennis Crouch
A new petition for writ of certiorari focuses attention again on patent eligibility and the law-fact interplay. Real Estate Alliance Ltd. v. Move, Inc., SCT Docket No. 18-252.
The original focus of patent law is to “promote the Progress of . . . useful Arts.” In that vein, patents have long been awarded for inventions with concrete and practical uses — and barred to invention claims that are merely abstract ideas.
In Mayo and Alice, the Supreme Court defined a two-step process for determining when a claimed invention is patent eligible:
Step 1: Ask whether the patent claims are directed to a patent ineligible concept, such as a law of nature, natural phenomenon, or abstract idea.
Step 2: If so, ask whether the claimed invention includes an “inventive concept” sufficient to transform the ineligible concept into a patent eligible invention.
In Step 2, the Supreme Court also suggested an inquiry into whether the claims present “something more” beyond a combination of “well understood, routine, and conventional” elements.
In this case, Real Estate Alliance Ltd. focuses on this second part of the Alice test and the “proper role of fact-finding.” The question presented is:
Is whether an ordered combination of elements in a patent claim is “well-understood, routine and conventional” to a skilled artisan in the relevant field under Alice step two a question of fact?
In this particular case, the courts have treated this issue as a question of law and have not really considered any hard evidence. The patent at issue is directed to a zoomable user interface that shows the geographic location of for-sale properties. Although this idea might seem well understood today — the application claims priority back to 1986 — graphics were not so easy back then. (See Conan – my favorite game back then). U.S. Patent No. 5,032,989.
The difficulty for the patentee – I expect – is that the patent claims just seem so obvious. Consider representative claim 1 below:
1. A method using a computer for locating available real estate properties comprising the steps of:
a) creating a database of the available real estate properties;
b) displaying a map of a desired geographic area;
c) selecting a first area having boundaries within the geographic area;
d) zooming in on the first area of the displayed map to about the boundaries of the first area to display a higher level of detail than the displayed map;
e) displaying the zoomed first area;
f) selecting a second area having boundaries within the zoomed first area;
g) displaying the second area and a plurality of points within the second area, each point representing the appropriate geographic location of an available real estate property; and
h) identifying available real estate properties within the database which are located within the second area.
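For concreteness, the claimed steps reduce to a short sketch. This is purely illustrative: the property names, coordinates, and function names below are invented, not taken from the '989 patent, and systems of the era used vector-graphics hardware rather than anything like Python.

```python
# Illustrative sketch of claim 1's steps (all data invented):
# a) a "database" of available properties as (name, lon, lat)
properties = [
    ("12 Elm St", -71.06, 42.36),
    ("9 Oak Ave", -71.10, 42.38),
    ("3 Pine Rd", -70.95, 42.30),
]

def zoom(view, selection):
    """Steps c)-e): clamp a selected sub-area to the current view,
    yielding a new view at a higher level of detail."""
    w, s, e, n = view
    sw, ss, se, sn = selection
    return (max(w, sw), max(s, ss), min(e, se), min(n, sn))

def properties_within(view, props):
    """Steps g)-h): the points (available properties) whose
    geographic locations fall inside the displayed area."""
    w, s, e, n = view
    return [p for p in props if w <= p[1] <= e and s <= p[2] <= n]

view = (-71.2, 42.2, -70.9, 42.5)                  # b) whole map
view = zoom(view, (-71.15, 42.34, -71.0, 42.4))    # c)-e) first area
hits = properties_within(view, properties)         # f)-h) find listings
```

Whatever one thinks of such steps under 101, the sketch makes the petitioner's factual question concrete: whether this ordered combination was "well-understood, routine and conventional" to a skilled artisan in 1986.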
display a map; zoom in on a selected region; display an overlay with points. Those are all in the prior art, so why go to 101 rather than 102 or 103?
At least two reasons:
(1) judicial economy – 101 is easier to apply than 103 in many instances because the Supreme Court created (out of thin air) a bunch of “tests” that must be performed under 103.
(2) 103 is easy to avoid if you de-fang subject matter eligibility principles; all you need to do is recite, e.g., the address of a new house or its new ownership or its new status and, voila, no more 103 problem. Note that this problem has been recognized by the courts for eons, which is why we have an entire body of case law dedicated to ignoring non-structural (i.e., abstract) limitations when evaluating otherwise structural claims for obviousness.
As the “count” filter appears stuck (for some), let’s try an abbreviated version:
“Ease” is never a proper “means” to whatever desired “ends.”
As to the Court pulling things out of thin air, you might take a gander at the Gordian Knot of the 101 mess. And by that, I mean an intellectually honest look.
As to your “already recognized,” are you ready yet for the simple set theory explication?
Yeah, that’s what I thought.
Maybe the claims should have been held as obvious. Maybe. But this ineligible stuff is just nonsense.
Notice too that the companies/people that are pushing this ineligible nonsense are part of huge international companies (or getting their money from them) that have become near monopolies. MM’s claim to be part of the left and standing up for the little guy is a joke. MM is MAGA man to be sure.
The fact is that most people in the USA now are paid to process information, not to move things about as in the iron age, and to say that processing information should not be eligible for patentability is to say that we should not have patents.
Plus, Martin, the whole point of innovation is NEW things. The 1952 Patent Act implicitly allowed patent eligibility for such inventions by not saying they weren’t eligible. New things are meant to be patent eligible. Innovation. Progress.
I have to mention that all the database operations described in the claim can be found individually and in ordered combinations in Chris Date’s textbooks entitled Introduction to Database Systems Vol 1 & 2 (1981 & 1983).
I did not have much opportunity to experiment with the Tektronix graphics terminals at Harvard when I was an undergrad from 74-78, but I certainly saw maps drawn on them with zoom-in capability in lectures before I graduated.
If I had been the examiner, I would certainly have given the claims completely valid obviousness rejections even by the obviousness standards of March 19, 1986.
Industrial rubber-curing presses were first developed in the 1870s.
The general-purpose computer dates back to the 1940s.
The Arrhenius equation dates back to 1889.
Would you believe that somebody had the nerve to file a patent application on the combination of those things in 1975? Some wiseguy tried to patent the use of a general-purpose computer to calculate the Arrhenius equation and to control a rubber-curing press.
All of the individual components were known. All of them functioned exactly according to their conventional uses:
* The rubber-curing press cured the rubber just as it always does.
* The Arrhenius equation was used to model rubber curing times, which was its original use in 1889.
* The computer computed things, because that’s what computers do.
But get this: They allowed the patent.
That examiner really dropped the ball on the obviousness analysis. Somebody should have given that examiner a bunch of treatises from the 1880s to prove that each of those elements was really, really old.
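For readers who have not seen it, the "conventional use" of the Arrhenius equation referred to above can be sketched in a few lines. The constants here are invented for illustration; they are not Diehr's values.

```python
import math

# Assumed, illustrative Arrhenius parameters (not Diehr's values)
A = 1.0e9     # pre-exponential factor, 1/s
EA = 8.0e4    # activation energy, J/mol
R = 8.314     # gas constant, J/(mol*K)

def rate(temp_k):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-EA / (R * temp_k))

def cure_time(temp_k):
    """Treat full cure as a fixed amount of reaction, so cure
    time scales as 1/k at the measured mold temperature."""
    return 1.0 / rate(temp_k)
```

The point of Diehr's claimed system was to recompute this continuously from a live temperature probe rather than assuming a constant mold temperature.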
David Stein just discovered that the PTO makes mistakes and sometimes those mistakes lead to Supreme Court decisions that are difficult to understand.
Diehr’s “invention” was always dubious. The Supreme Court’s opinion in Diehr was poorly written.
But one aspect of the decision (the only aspect) was never in doubt: the mere recitation of ineligible subject matter in a claim does not render that claim ineligible. Somewhere along the line some short-sighted lawyers and judges decided (for themselves, without any clear reasoning) that Diehr stood for the proposition that you can’t look at the individual elements of a claim (for any reason!) when you are determining the claim’s eligibility. That misguided concept was chucked in the trash by the Supreme Court in Prometheus v. Mayo, a decision that will never, ever be overturned.
Malcolm’s “the mere recitation of ineligible subject matter in a claim does not render that claim ineligible.” needs to be bookended with the late Ned Heller’s “Point of Novelty” (for eligibility) to truly see how BOTH have/had it so very wrong.
And yes, Malcolm, eligibility is to the claim as a whole – no matter how much you want to employ a parse and ignore protocol.
Your beloved “Mayo” is the decision that is self-conflicting.
And by their own ham-fisted writing and NOT throwing out Diehr — as you screamed for — the Court has left itself in a Gordian Knot of its own scrivening.
It’s too bad your lack of intellectual honesty blinds you to the plain facts here. You continue to celebrate an Ends, purposefully blinding yourself to the fact that the Means to those Ends is a broken scoreboard.
In 1973-1974 successful computer control of 50-60 rubber vulcanization molds was certainly non-obvious.
The system Diehr put together was continuously monitoring and responsible for opening the molds automatically — tasks which were challenges back in 1973-74. This type of industrial electronic control was rare back then. Related industries did not use it because it was perilous to allow computers to control too many devices when a failure might result in a massive economic loss.
The big developer of electronic control back then was AT&T which wanted to switch from electromechanical switches to fully electronically controlled switching, in which the mechanicals were eliminated.
Diehr had no way of eliminating the mechanicals, had to deal with temperature extremes, and a fairly high interrupt rate for those days. Also, Diehr did not know it could be done.
The AT&T engineers that built the 1ESS (1965) knew that a transistor was itself a non-mechanical switch as were the triodes and pentodes before the transistor. It only stood to reason that with sufficient use of transistors, it should be possible to eliminate the mechanical component of telephone switching completely.
In the 70s successful computer control of industrial processes was still speculative. AT&T only began deploying more 1ESS’s in 1975. Inventors still had to invent much of the basics of digital industrial control.
“Of 50-60…”
Rubber is cured once – NOT 50 to 60 times.
Diehr parallelized the control of 50-60 separate rubber molds.
Rubber remains only cured once.
If “the claim” is to curing 50-60 pieces together at one time (it is not), then your statement may carry some weight.
It’s not in the claims.
(that was my point, Ben)
Back in the 1973-4 time frame, because of cost, it was not obvious to try to achieve this sort of electronic control over the industrial vulcanization process unless 50-60 devices could be controlled from a single controller. The MITS Altair 8800 personal computer (the first) did not appear (as a kit) until 1975, and it used an Intel 8080. I doubt it had the speed or memory capacity to achieve industrial control at the targeted capacity. As I remember, it had neither sufficient I/O capacity nor the ruggedness for the industrial environment.
The claims use a/an in the sense of one or more. Independent claim 5 refers to a plurality.
By the obviousness standards of the 70s the claimed invention probably passes 103. Even if we applied the standards of KSR v. Teleflex, the Diehr claims are probably non-obvious. Automobile mechanicals are highly constrained as the KSR decision indicates, and there already had been a good deal of work in positioning sensors on pedals. Diehr seems to have made the very first effort at digital computer control of the vulcanization process, and he seems to have been the first to overcome a lot of problems. He did not have a body of knowledge to increase the likelihood of success. Now he might have found useful knowledge in a related area, but I doubt it. Diehr’s claims are probably okay as written even if I don’t like the punctuation.
“Even if we applied the standards of KSR v. Teleflex, the Diehr claims are probably non-obvious. Automobile mechanicals are highly constrained as the KSR decision indicates,”
Actually KSR stood for the opposite and the “but automobiles are highly constrained” was a losing position.
(but I do appreciate you going that extra mile and showing the plurality aspect in at least a dependent claim)
The constraints of the auto environment and the existence of knowledge base for the placement of sensors was a winning position for the side arguing obviousness in KSR v. Teleflex.
Diehr was not nearly so constrained, and there was no existing base of knowledge for the digital control of rubber molding at the time.
Guidance from KSR v. Teleflex might have helped Diehr argue for non-obviousness if such guidance had existed.
No Joachim – you have the winner and loser interchanged.
Further Joachim, while you provide claim 5, the case as decided by the Court was for claim 1 – and there is NO sense of plurality there.
I “get” what you want to do with the “plurality” aspect, but that is not the point at law.
…the bigger takeaway from your comments though is a truism that the anti’s refuse to acknowledge:
software is patent-equivalent to firmware, and is patent-equivalent to hardware.
The late Ned Heller used to play that game, always missing the point that in order to “just use,” the machine FIRST had to be changed and configured with the different software.
Malcolm plays this same game by his refusal to use the actual proper patent doctrine of inherency. He wants to ignore that critical first step and actual change to a machine that happens when software – as a patent manufacture – is ADDED to a pre-existing machine. He (purposefully) mistakes the fact that software has been engineered to be a quick-exchange in place of dedicated hardware and instead wants to treat ALL software as “functionally equivalent.”
The fallacy of that position is evident to anyone who gives a damn.
I appreciate the explanation. And you’ve just resoundingly proven my point.
Your understanding of the significance and merit of Diehr’s combination is informed by your understanding of the background. People who do not have that background can easily wave away your explanation – for any of the following reasons:
“The background doesn’t matter, because the invention looks simple to me.”
“The solution here involves conventional programming of a computer. A first-year computer science student could implement it in a weekend.”
“The individual parts are known, and old, and all of them function as originally intended.”
“It doesn’t matter that a computer can provide better control here, such as by calculating faster or more accurately – because that’s what computers do.”
“The claims merely recite the calculation of an equation by a computer and the use of that equation in a particular field.”
“The background isn’t in the claims or even the specification, and so is entitled to no weight.”
“The calculation here is not new. And the invention is merely its application in a particular field of use. Field-of-use limitations do not confer patent-eligibility on a generic idea.”
Joachim, all of those criticisms could have been leveled at the Diehr patent, even – indeed, especially – when supplemented by your explanation. You object based on your technical understanding of the context.
Not coincidentally – many of these same arguments are routinely leveled at software inventions. Many of us object for exactly the same reason: based on our technical understanding of the context. By the same token, many of the responses you would present as to why those arguments do not apply to Diehr are the same responses we would present for why those arguments are not valid for many software inventions.
Below, at 7.2.2.24, I’ve presented a fictitious specification for this invention that places it in the context of circa-1986 software. My explanation, like yours here for Diehr, addresses (a) important technical limitations in existing software user interfaces in this field, (b) important technical details in implementing the invention using circa-1986 hardware, and (c) a set of implementation-specific features that could be discussed for several aspects of the UI.
Just as you can explain why Diehr’s claimed invention was not an obvious combination of known parts – I have explained above, in 7.2.2.24, the possible significance of the database UI in this particular case. That’s my full response to your comment.
our technical understanding
There is nothing “technical” recited in this claim, David.
Stop smoking your own crack and step out of the bubble into the real world where lawyers like me routinely make mincemeat out of question-begging know-nothings like you.
Shorter version: grow up. Computers weren’t new in 1986. Not everyone was born yesterday. Some of us have been programming for a long, long, long time.
Malcolm,
Why do you keep on insisting that “programming” is some single item?
It’s as if you want to treat one program to be automatically exactly identical to another program, without care as to any different functional relationship.
That’s simply an inappropriate attempted view. That’s like saying people have been doing chemistry for centuries, so all “doing chemistry” must be the same and there can be no innovation in doing chemistry.
👍
I have both backgrounds because I was trained as electronics engineers were trained back in the 60s and 70s.
I was building digital electronics systems as a hobby when I was a cub scout. At prep school I got to write programs for the IBM 7090 series computers. As an undergrad I received formal training in engineering. Then I worked on a PET system at MGH and the 4ESS (and a little on the 5ESS) at Bell Labs.
I understand what you are saying about the real estate database system, but the real estate content looks to me like an issue of printed matter doctrine.
Computer Aided Design predates my cub scout hobbies, but parts libraries and databases certainly existed when I was a cub scout. The CAD systems used vector graphics terminals and could zoom in on parts. Scaling and orientation are all effectively calculable mathematical operations (pencil and paper calculations).
Tektronix supplied software libraries for all these operations in the 1974-8 time frame.
If I were the examiner, I would have certainly considered the arguments, but it would have been a hard sell.
I know it appears I am harder from the standpoint of obviousness on Diehr than I am on Tornetta 5,032,989, but Diehr’s implementation problem was simply much harder than Tornetta’s. CAD systems very similar to Tornetta’s real estate system were known to have existed in 1986 and had existed for at least a decade.
Now it is possible I am being too easy on Diehr because I understand the problems of the PET scanner (1974-8), which also required the building of special probes and a lot of custom mechanicals, but I don’t think I’m too far off.
If I had a copy of Saucedo and Schiring’s Introduction to Continuous and Digital Control Systems, I would certainly review it, for it has editions, if I am not mistaken, from 1968 through 1986.
The “printed matter” doctrine would cover an attempt to patent any kind of storage medium that encodes real-estate data.
Please read my fictitious specification again. Nothing in my post below is about the actual numbers. It’s about the challenges with the user interface for a meaningful, user-friendly presentation of data of that type. It’s also about the technical challenges in implementing that type of user interface on circa-1986 hardware.
The invention is a specific UI that is well-adapted for presenting a certain type of data. “Printed matter” doesn’t apply.
Today UIs with a well-defined structure seem patentable, and I could probably justify that position. The display output in this claim seems to be three maps, which are not really so structured. Maps are usually copyrightable and not patentable. The rest of the method seems to consist of standard, effectively calculable database operations (pencil-and-paper calculations). Was this combination patentable back in March 1986? I could be wrong, but I think the patent prosecutors at Bell Labs would not have spent much time on this application, even if they would have copyrighted the UI.
David Stein’s hypothetical application with more structure in the claims would have been closer to the sort of application that was prosecuted by Bell Labs during the middle 80s.
I don’t think it’s fair to judge Bell Labs’ patent efforts as the norm for patenting in the 1980’s. That’s like holding every basketball player to LeBron James, as if he’s the standard of competency.
Bell put forth a singularly strong, concerted, enduring – and most importantly, well-funded! – effort, the kind that elevates the game. Would Bell Labs have drafted claims like this? No, but that’s an awfully demanding yardstick for 1980’s patenting. I suspect that the only patents issued in the 1980’s that meet that standard… are those filed by Bell Labs.
“In 1973-1974 successful computer control of 50-60 rubber vulcanization molds was certainly non-obvious.”
What about unsuccessful computer control? Moderately successful?
“What about unsuccessful computer control? Moderately successful?”
Probably not made public all that much.
Your point is way more compelling the way you leave out computer controlled curing processes from the 1960s.
US3718721:
“A method for controlling the state of cure of at least a part of a curable article during curing in a mould, in which a temperature sensing probe is inserted into a predetermined site in the article and the temperature of the site is monitored as a function of time. The state of cure of the site is computed from the temperature measurements and heating is discontinued when a predetermined state of cure has been reached, of which the following is a specification.”
I am having some difficulty with commenting.
That said, Gould 3,718,721 does not look like a computer-controlled method. It looks more like a better method of determining cure-state.
For better or worse, automatically opening the mold at the right moment is not an effectively calculable operation, or maybe I should say it this way.
One cannot open a door with a pencil and paper.
Clearly you haven’t watched enough MacGyver.
The first two results in Gould for “computer”:
“the local temperature at said site is monitored at set intervals of time and the corresponding increments of cure summated for example by means of a digital computer.”
“At the end of the cure cycle i.e. after the article has cooled sufficiently the mould is opened either manually or automatically. The latter may be carried out by actuation by the computer of a suitable device operably attached to the mould to open the mould”
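The "increments of cure summated" language quoted above amounts to a simple accumulation loop. The following is my reading of the excerpt, sketched with invented constants and names; it is not code from either patent.

```python
import math

A, EA, R = 1.0e9, 8.0e4, 8.314   # assumed Arrhenius parameters
DT = 60.0                         # probe sampling interval, seconds

def increment(temp_k):
    """Cure accrued over one interval at the probed temperature."""
    return A * math.exp(-EA / (R * temp_k)) * DT

def run_cycle(readings, target):
    """Summate the increments; return the sample index at which
    the computer would actuate the mold-opening device, or None
    if the target cure is never reached."""
    cure = 0.0
    for i, temp_k in enumerate(readings):
        cure += increment(temp_k)
        if cure >= target:
            return i   # "actuation by the computer of a suitable device"
    return None
```

Whether the final actuation step is autonomous or operator-commanded is exactly the ambiguity being debated in this thread.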
I read those paragraphs, and I had the impression that Gould was envisioning something like the catalytic cracker of Flook. The computer might output an indication that curing had been achieved at which point the operator could manually open the door or give the computer a command to open the door. I have to admit that I would have liked to have seen the phrase “autonomous digital control” in both Gould’s and also Diehr’s specifications to tell me that the computer was doing the opening. (I always prefer precise and accurate words in a written description.)
Applying BRI (we are talking about examination) to Diehr’s and to Gould’s respective claim sets in light of the respective written descriptions and respective drawing sets (Gould only has one drawing, which shows probe placement), Diehr’s computer really directly (and autonomously) controls opening (closing is manual) while Gould’s computer provides indications and seems to be under operator control.
People did not trust computers as much back then as they do today. Sometimes I think they were more correct in that time period than we are today.
It is remarkable the contortions you’ll go through to defend Diehr as non-obvious while asserting the map-zooming-database patent of the post is obvious.
Did you notice claim 2? “A method according to claim 1 in which the number of cure units is computed using the Arrhenius equation”.
“The computer might output an indication that curing had been achieved at which point the operator could manually open the door or give the computer a command to open the door”
But Gould says “the mould is opened either manually or automatically. The latter may be carried out by actuation by the computer of a suitable device operably attached to the mould to open the mould, when the temperature sensing probe registers a suitable temperature.” Please don’t hurt your back.
The applicant is his own lexicographer.
When I read Diehr’s written specification and look at the diagrams, automatically seems to mean autonomously (or at least not manually). Look at Diehr’s second drawing.
Gould says the automatic opening of the mold “may be carried out by actuation by a computer of a suitable device operably attached to the mould to open the mould, when the temperature sensing probe registers a suitable temperature.”
The modality appears to be deontic (possibility).
Maybe there could be a button on the console which activates a motor when it is pressed by the operator. Or maybe the computer has an RS-232 asynchronous terminal (back then) at which the operator types “open” and the computer actuates the motor.
The written description does not say, the automatic opening of the mold “will be carried out by actuation by a computer of a suitable device operably attached to the mould to open the mould, when the temperature sensing probe registers a suitable temperature.”
The modality would also be deontic, but this time necessity or requirement would be indicated (and thus maybe no operator involvement).
To tell the truth, even “will” is somewhat ambiguous.
In this case I’d probably be inclined to look at the prosecution record.
You’re reading a broad disclosure to exclude scope because of its breadth.
You’re injecting hypotheticals to distract from the full scope of the plain meaning of “actuation by the computer … to open the mould, when the temperature sensing probe registers a suitable temperature”.
I am uninterested in watching further contortions. Your sophistry is wasted in prep/pros.
I reread Gould’s written description. Gould has two computers. A digital computer that does the math and an analogue computer that controls the vulcanization hardware. You are so hot to show that Gould anticipated or rendered Diehr obvious that you missed an important aspect of Gould’s invention.
Back in those days one used an analogue computer when one could not get sufficient horsepower out of a digital computer.
Gould had to use a digital and an analogue computer.
Diehr invented a way to use a single digital computer.
Diehr solved a problem that Gould could not. In those days Diehr’s invention was almost certainly non-obvious.
It is funny how little smarty pants people like Ben just think they are so smart. Did little smarty pants Ben ever do anything in his life but criticize others? Probably not.
Little smarty pants Ben is just so smart that everything to little smarty pants Ben is obvious. Little smarty pants Ben hasn’t figured out what hindsight reasoning means, but we are hopeful that he will before retirement.
Pointing out that there is prior art vastly more relevant than “treatises from the 1880s” isn’t hindsight reasoning. You seem to have given yourself a concussion with that knee-jerk.
Except there are gaps in the claim you posted and Diehr claim. And you have a pattern of acting as if every claim you see is just so obvious ’cause little smarty pants Ben is just so smart.
What. A. Joke. Go out and do real development (like me) and see how hard it is and then come back and talk to us.
1) By definition every reference used in an obviousness rejection has “gaps”.
2) I never said that Diehr’s claims were obvious in view of that reference or any other. I was simply pointing out the strawman Stein was thrashing.
Ben,
You badly missed David’s point: the greater age of the earlier references speaks more to his point than closer references would.
While your references may be more impactful for a different point (due to both their temporal and contextual proximity), THAT was not the point of providing the older items.
In attempting to be a smart-@$$, you left out the smart.
Why do you bother to post MM and Martin? It is all just information. Why are you paid to process information? Why are the judges paid to process information?
The fact is that most people in the USA now are paid to process information, not to move things about as in the iron age, and to say that processing information should not be eligible for patentability is to say that we should not have patents.
Plus, Martin, the whole point of innovation is NEW things. The 1952 Patent Act implicitly allowed patent eligibility for such inventions by not saying they weren’t eligible. New things are meant to be patent eligible. Innovation. Progress.
And I think that both Martin and MM should squarely answer the question of why they bother to post (data is just data according to them) and why should people be paid for information processing.
The ignorance is just astounding.
It is also interesting the game they play which they got from the SCOTUS. And that is to make up a word and then not provide a definition for it, but say that your claims don’t meet the requirement for this new word.
What. A. Joke. The powerful have decimated the patent system, but don’t ever think that you won the hearts and minds of innovators. We know what you did and how you did it. You are just barbarians–brutal and ignorant.
Night Writer, there’s an aphorism about the internet. It’s as old as the internet. It’s about feeding, or more specifically not feeding, certain… behaviors.
Noise filters are very handy devices in electrical engineering. As it happens, they’re also very useful mechanisms for internet discussions. And unlike many denigrated software inventions, this type of noise filter you actually can implement completely in your mind.
Think of PatO as a communication channel that’s contaminated by noise on a certain frequency. Think of a bandstop or notch filter that selectively excludes that frequency. Implement it by looking at the information to the left of each post, and deliberately skip over those that reflect a certain… frequency.
I’ve been using this technique for years – literally years. It has substantially improved the signal-to-noise ratio of this communication channel.
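Stripped of the filtering analogy, the technique is just set-membership filtering on the author field of each post. A toy version, with all names invented:

```python
# Toy "notch filter" for a comment stream: drop posts from a
# chosen set of authors before reading. All names are invented.
posts = [
    ("JoachimM", "Diehr solved a real control problem."),
    ("NoiseSrc", "You are all wrong!!!"),
    ("DavidS", "Consider the 1986 hardware constraints."),
]

def notch_filter(stream, blocked):
    """Pass every post whose author is not in the blocked set."""
    return [(author, body) for author, body in stream
            if author not in blocked]

readable = notch_filter(posts, {"NoiseSrc"})
```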
Alas, David, that may work for you but does nothing but increase noise on the channel.
Look at the likes of the spawn from the intellectual DIShonesty of Malcolm. Casual readers (including, unfortunately, MANY examiners) come away from unrebutted statements feeling as though “they must be true since I read it on Patently-O, and Patently-O says it is the leading source of patent law.”
As I recall as well, David, it was YOUR interacting with Malcolm that provided that prized admission against his interests of him knowing and understanding the controlling law regarding the important exceptions to the judicial doctrine of printed matter.
Think: had you had your noise filter on then, we all would have been deprived of some of the best evidence of Malcolm’s explicit intellectual dishonesty.
I get your point, anon – but my capacity (and appetite) for conversation fluctuates. I’m willing to fight those battles when those quantities are, well, at high tide.
Currently, I’m running about average, which means that I selectively engage users who have a different, interesting viewpoint and who can provide a good conversation about it – people like RandomGuy, Greg DeLassus, Joachim Martillo – and Ned Heller was certainly included. I often don’t get around to responding to people who share my viewpoint, like you and NWPA, just for lack of time.
The point being that the internet aphorism (feeding or not feeding) is not appropriate.
Yes, I do understand that an appetite for having a conversation (an actual dialogue) may vary.
But engaging in dialogue is the important aspect. Otherwise what happens is the “Driveby Monologue” model, which leads to the aphorism of “Internet Shoutdown,” wherein dialogue is avoided and pertinent counterpoints are simply ignored (by the sAme ones).
By the way, your “list” includes a spectrum of those willing to engage. Greg will typically NOT engage past his “comfort” of his view not being challenged, Random is not capable of understanding counterpoints and thus typically only engages up to his “viewpoint limit,” and the late Ned Heller was famous for disappearing rather than accept conclusions against his interests (even as he might engage further into dialogue than others). Only Joachim on your list has shown the ability to absorb a counterpoint from dialogues and grow.
And yes, I do recognize that you and I are in the same choir (we are “cohorts” per Malcolm), so I do not expect you to have to respond to counterpoints in certain dialogues. But that is not the same as those practicing Driveby Monologuing.
Well, I have decided that I would like to be admitted to practice law. (I would like to be able to argue my own patent application case before SCOTUS. Because I don’t have an interest in the application any more, I can’t represent myself pro se.)
That desire means that I have to absorb a lot of material and that I have to find a job in a state where I can take the bar exam without attending law school. The job has to be remote because I can't leave Boston (child-custody issues). For the opportunity to read the law, I would accept a very low hourly rate. I do have unique attributes that would make me a valuable asset to any legal team despite my age.
I am an inventor, an engineer (several fields), and a patent agent.
I have a good background in history of science, history of engineering, philosophy, and linguistics.
I seem to be developing increasing hyperthymesia, which is technology-focused.
Why not go to law school? Most direct route.
I read faster than most people. The hyperthymesia was less then than it is now, but on the whole undergraduate study was a waste of time. I particularly hated lectures. Such a low information transfer rate! I don’t know how even slow learners tolerate them. I don’t think I have the ability to tolerate them today. Reading the law seems to be a better solution if I did not have to live in Boston.
True David.
If indeed true, Night Writer, can we expect at most one post per thread from you?
(I certainly hope not)
It’s been a while since I posted the whole enchilada, but this looks like a good thread for it.
First and foremost, this is not a mainly legal problem.
It’s a political problem, and the political question can be stated like this:
The 1952 act is silent as to the eligibility of information inventions. Do we want to allow patenting of logic and instructions under the statutory category of processes, or do we not?
If we don’t want to do that, as MM advocates, we should immediately say so in statute and common law, and just stop doing it.
If we do want to do that, it leads to a second question.
It very much appears that we do want to do that, because too many judges, lawyers, and patent-system stakeholders find the purposes of a patent system aligned with patenting certain new, useful, and disclosed information inventions.
What those certain inventions are remains entirely in the eye of the beholder, and those beholders are usually PTAB or District Court judges. This situation is unsustainable without a new and useful (!) way to handle these kinds of inventions.
I propose to modify Section 100(b) to read: a process which results in information consumed by human beings may not be patented, excepting processes improving information processing without regard to the particular content or meaning of the processed information.
It seems simple enough to me. If a human mind processes something, that thing is abstract. That should solve the statutory interpretation of “Process” simply and cleanly. No human mind, no statutory abstraction.
From link to merriam-webster.com:
The Crisscrossing Histories of Abstract and Extract
Abstract is most frequently used as an adjective (“abstract ideas”) and a noun (“an abstract of the article”), but its somewhat less common use as a verb in English helps to clarify its Latin roots. The verb abstract is used to mean “summarize,” as in “abstracting an academic paper.” This meaning is a figurative derivative of the verb’s meanings “to remove” or “to separate.”
We trace the origins of abstract to the combination of the Latin roots ab-, a prefix meaning “from” or “away,” with the verb trahere, meaning “to pull” or “to draw.” The result was the Latin verb abstrahere, which meant “to remove forcibly” or “to drag away.” Its past participle abstractus had the meanings “removed,” “secluded,” “incorporeal,” and, ultimately, “summarized,” meanings which came to English from Medieval Latin.
Interestingly, the word passed from Latin into French with competing spellings as both abstract (closer to the Latin) and abstrait (which reflected the French form of abstrahere, abstraire), the spelling retained in modern French.
The idea of “removing” or “pulling away” connects abstract to extract, which stems from Latin through the combination of trahere with the prefix ex-, meaning “out of” or “away from.” Extract forms a kind of mirror image of abstract: more common as a verb, but also used as a noun and adjective. The adjective, meaning “derived or descended,” is now obsolete, as is a sense of the noun that overlapped with abstract, “summary.” The words intersected and have separated in modern English, but it’s easy to see that abstract applies to something that has been summarized, and summarized means “extracted from a larger work.”
As to the second critical meaning (as a summary, condensation, or outline), abstraction is a patentability issue connected with a 103 or 112 inquiry. If an invention is not claimed as an invention, but rather as an abstraction of an invention or of a concept that itself is not inventive, you can’t haz a patent.
Eligibility for abstraction relates to each major part of the Patent Act, both separately and in combination, depending on the facts. When the claims recite no actual invention because they are aspirational or functional, that can be a 112 problem, or at least require a 112 quasi-analysis. When the claims recite no actual invention because they are beyond obvious (i.e., utterly conventional), it’s a 103 problem, or at least a 103 quasi-analysis.
Only in the instance where claims recite no actual invention because they lie outside the four statutory categories should a 12(b)(6) motion be the proper method of determining eligibility, i.e., as a pure matter of law.
The law says the inventor must say what the invention is. Equity demands that at some point the accused have their say as to what the invention is, too. An adversarial procedural construction step is required. The invention should be construed in a Markman-like process to resolve this non-statutory abstraction.
Just as the words of claims are matters of law that must be construed when in dispute, what the invention actually is must be a matter of law, construed when in dispute. Like claim construction, this inquiry is a mix of fact and law, and by allowing motions, tutorials, and limited testimony, a fairer, cheaper process becomes available, one that needs to happen if eligibility cannot be contested via IPR.
So far, nobody has substantially explained why my scheme is not better than what we have, is out of step with current law, or is unworkable for any other reason.
Step right up!
Try understanding utility first – ALL utility ends up being the type of thing that you want to constrain. ALL utility is measured by its human effect.
This is NOT a “political problem.”
Further, you botch the Merriam-Webster treatment as to “summary” when you fail to realize that most all** claims climb at least a rung or two on the ladders of abstraction.
** It is rare indeed when an excessively particular picture claim is present. Most all claims (to get what you want) would clearly fail, as most all claims can easily be made into more exacting picture claims.
You continue to want to play in an area without a clue as to the domain, Marty.
You have deluded yourself into thinking that your special “pet theory” has insight into either innovation or the patent realm, when all that it is IS your pet theory.
MS: the political question can be stated like this:
The 1952 act is silent as to the eligibility of information inventions. Do we want to allow patenting of logic and instructions under the statutory category of processes, or do we not?
If we don’t want to do that, as MM advocates, we should immediately say so in statute and common law, and just stop doing it.
This is the best solution to the problem although arguably the First Amendment could get us to the same place (as it nearly already has and might get all the way, eventually).
nobody has substantially explained why my scheme …unworkable
Well, it’s not clear exactly what it is that you’re trying to protect (from being declared ineligible) or why. I don’t see why it should matter whether data is “consumed by human beings” or consumed by a proxy for a human being (e.g., a computer). It’s still data. Does your eligible claim become ineligible if it can be shown that a computer can take the non-human-consumable data and turn it into data that can be consumed by a human? Just one of many, many questions that one might ask if the proposition weren’t so strange to begin with.
The way to promote progress in the so-called computer arts is to grant patents on improved programmable computing machines that are distinguished from prior art programmable computing machines on the same basis that new machines have always been distinguished from prior art machines: by reciting objective defined differences in their physical structures.
Granting patents on the recited “new functionalities” of programmable machines is a farce that does nothing for progress. Instead it craters the patent system by lowering its credibility, makes the rich game players even richer, and turns a useful tool into a liability. Just expunge it all from the system and the system will return to something resembling normal instead of a free-for-all for a certain class of patent attorney who spends all of his/her time scrambling around trying to figure out which are the latest, hottest “magic words” to use in order to get a junky ineligible do-it-on-a-computer claim through the system. Because that’s really all that certain class of attorneys does. And they’re quite open about it.
Translation: “Wah.”
Consumed by a proxy….
Anthropomorphication
All of my points are made in depth in this paper:
link to papers.ssrn.com
I am trying to eliminate all of the mischief engendered by patenting information meaningful to people. Stock trading software. Diagnostic correlations. Genetic instructions. Map displays. Technical diagrams.
Does your eligible claim become ineligible if it can be shown that a computer can take the non-human-consumable data and turn into data that can be consumed by a human?
No, what matters is which acts would infringe.
The analysis is analogous to finding proximate cause. If human consumption is not where the utility of the method arises, then the point of the method must be something else.
If the method can be used without human consumption of the information, it may be eligible (but still not patentable.)
For instance, a speedometer is a machine used to tell a person how fast they are going. A new, non-obvious construction of a speedometer should be completely eligible. A piece of software using common sensors configured to tell the speed, and claimed as a method, should not be eligible, and it should not take an Alice inquiry to make that determination.
anon, your impossible-to-understand point that all utility arises from human consumption has no weight. Consuming a hamburger, consuming a factoid, and consuming the benefit of a snow plow are totally different relationships.
Utility has but one final measurement.
That you do not understand this does not change this.
Utility has but one final measurement
Can you run that through some kind of doofus translator so I can understand it?
Do you mean that legally utility is a binary yes/no? That no matter the character of some given purported utility, the patent law must ignore it?
Nonsense. Equity would not allow for that, and there is no reason that the nature of an invention’s utility could not be a matter of consideration for eligibility or patentability. Section 287(c) is essentially entirely about the character of utility excluded from infringement damages as a policy preference by Congress.
Modifying 100(b) as I suggest is fully in accord with how utility is understood today.
Translating for a doofus such as yourself won’t help.
Your statements are nonsense.
You cannot claim that your idea is in accord with how utility is understood when you evidence such a lack of understanding of the term.
And you keep on wanting to use the term “equity,” but you don’t appear to understand that either. Equity is not a catch-all “change” or allowance to avoid ‘Law.’ It is not your feeling of “but that’s not fair.”
You continue to confuse what you feel the law (and equity) should be as somehow being what law (and equity) are.
Martin, I instinctively recoil from your proposal because it has an “except” in it. That tells me that you haven’t got your finger on the exact watershed between eligibility and non-eligibility. Your test, if implemented, would decrease rather than increase the amount of legal certainty as to eligibility, I suspect.
The definition needs to be robust and future-proof. Who knows where the borderline between human and computer will be, in the future.
Patents promote the progress of useful arts. Outside the useful arts they impede progress. I want to know what is the flaw you see in the caselaw of the Boards of Appeal of the EPO, established in thousands of cases, during 40 years of civil law evolution, which requires the subject matter claimed to exhibit “technical character” and for the contribution to the art to be a solution in technology to a problem in technology. Clever solutions, indisputably non-obvious, still fail at the EPO when the problem they solve is not one within technology or (to use the words of the Patents Clause as intended) the “useful arts”.
If SCOTUS were to drop onto that line, it would open up the potential to solve all the egregious problems of uncertainty in the USA as to the patentability of any particular piece of computer-based innovation. Tough, though, for a Supreme Court (and its acolytes, officers, and servants) to acknowledge that a mere Patent Office has a clearer vision of where to draw the line in the sand.
MaxDrei,
As usual (and clearly intentional), you fail yet again with your attempt to make equal our Sovereign’s “Useful Arts” and the term “technical.”
Your “not one within technology or (to use the words of the Patents Clause as intended) the ‘useful arts’” is false.
It does not — cannot — become true no matter how many times you repeat it.