In the end, the Patent Office was able to ramp up production enough to end FY2016 with the most utility patents issued in any fiscal year in history – 304,500 utility patents! About 53% of the patents have a foreign origin.
“In the end, the Patent Office was able to ramp-up production enough…”
I wonder how much overtime was involved…
(and how much of that was actually worked)
😉
Question for the Swinging Pendulum Fan Club:
Are we in some kind of new “reject reject reject” era now? So when we get out of this era in, say, 2018 we’re going to see “catching up” and a quick spike up to 400k-450k patents/year, and steadily upwards from there?
Place your bets, o serious ones.
It would be wonderful if we did.
Or are you saying that the mere idea of even properly granted patents (anyway you want to define that) as ever increasing on a year to year basis is somehow “bad”…?
Do you see yet why your views ARE anti-patent?
The editing here serves to protect the guilty.
Malcolm’s as1n1ne comments should be left to stand as they show just how much he is indeed anti-patent.
It would be wonderful if we did.
Because the system is doing such an excellent job with the glut of logic patents now. What’s another million over five years? What could go wrong?
Oh, and the pendulum isn’t ever “swinging back.” And, yes, telling your clients otherwise is probably malpractice.
Lol – look what was rescued from the commode…
And to answer the question that you directly asked, yes, in fact we are in such a “Reject Reject Reject era” – courtesy of the “magic” of the “Gist/Abstract” sword.
“anon” yes, in fact we are in such a “Reject Reject Reject era” – courtesy of the “magic” of the “Gist/Abstract” sword.
So in the future we won’t be able to “gist away” terms like “electric bicycle horn preference data” and “high definition video likeability index attribute”? Because of their different “structures”?
Bets on that?
What do you think, Dennis?
Yay – another rescue from the commode.
If you eliminate the Junk Bubble from 2008-2014 (the period of max ins@nity before the effects of Mayo/Alice) and draw a line from the prior slope, you would see that the PTO would otherwise have reached the present rate of grant sometime around 2025.
Heckuva job, Kappos.
Fyi, more appealed claims tanked under 101 today by the CAFC, including a precedential opinion:
The claim recites a method of changing one description of a level sensitive latch (i.e., a functional description) into another description of the level sensitive latch (i.e., a hardware component description) by way of a third description of that very same level sensitive latch (i.e., assignment conditions). …
The limited, straightforward nature of the steps involved in the claimed method make evident that a skilled artisan could perform the steps mentally. The inventors of the Gregory Patents confirmed this point when they admitted to performing the steps mentally themselves …
Query what percentage of the logic instructions that human beings write for computers was first “performed mentally” (or with the assistance of a pencil and paper, or with the assistance of a calculator, or — oh noes! — with the assistance of a super shiny computer) by a human being.
Probably pretty close to 100%.
Animating cartoon character mouths, of course, is the big exception. There was nothing logical about that prior to computers. It was totally “subjective”. Or so the CAFC would have everyone believe.
LOL,
You have your math wrong.
Primarily because you are trying to take the slope from the “bite” out of patent grants during the “Reject Reject Reject” era (wherein the grant rate fell from the historical rate of about 70% down to near 35%, iirc, with NO external driver). This, of course, is known to you – in part because I know that I have informed you.
Funny how selective you are with the facts, eh Malcolm?
When was this “era” you’re referring to when the PTO actually managed to not rubber stamp all the “do it on a computer” junk that was being shoved at it?
It’d be fun to take a look at the awesome claims that were sooooooooo unfairly rejected during that era.
What do you suppose we’d find?
Dennis, do you have any thoughts on that subject? Have you ever looked into it?
And the trifecta of the commode rescues…
From the same decision referred to above (Synopsys, Inc.), the panel drives a truck through McRO’s “limited rules” test:
“While preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility.” Ariosa Diagnostics, Inc. v. Sequenom, Inc.
“Where a patent’s claims are deemed only to disclose patent ineligible subject matter under the Mayo framework, as they are in this case, preemption concerns are fully addressed and made moot.”
Is everybody keeping up? LOL
Wrong thread (as if that has ever mattered for thread h1j@cking…).
And for the lighter part of a lesson given to you time and time again, Google “XKCD” and “tasks.”
As much as you may feel otherwise, [Old Box] simply does not have all future improvements “already in there.”
MM, the method (a way of inferring a specific circuit for HDL compilers that use high level language to generate circuit designs) did not result in an inventive circuit, and was performable entirely mentally. Importantly, the method did not require a computer or any improvement in computer processing.
Pure logic.
SYNOPSYS, INC. v. MENTOR GRAPHICS CORPORATION
link to cafc.uscourts.gov
Exactly. Pure logic. Today’s Synopsys decision seems like the sort of decision on which we should all be able to agree. Even people like myself (that is to say, people who do not approve of the Alice case) should be comfortable with the idea that if you really can practice the whole claimed method in your head, then it is not 101-eligible.
Usually, the challenger says “you can do this whole method in your head” and the patentee responds “no, you cannot, because the claims require a computer.” Then both sides have to argue about whether do-it-on-a-computer is enough to get through the 101-filter. Here, however, there is no do-it-on-a-computer limitation. The claims really do read on doing the method in one’s head. Even before Alice this was not 101-compliant.
One wonders how the patent attorney who prosecuted this one got it past the examiner. Was this a very smooth talking attorney, or just a very lackadaisical examiner?
Even people like myself (that is to say, people who do not approve of the Alice case) should be comfortable with the idea that if you really can practice the whole claimed method in your head, then it is not 101-eligible.
“Can” or “did”.
Who’s the relevant actor here whose brainpower we need to compare versus the computer? The inventor? The “PHOSITA”? The targeted consumer? The best mathematician in the world? Rain Man?
Or just what the invention – as claimed (you know, the claim as a whole) – details.
Gee, that kind of means that the “Gisting” that the Court wants to do is not in accord with what Congress actually wrote…
Considering “claims as a whole” doesn’t mean that every word in a claim must be treated equally.
All it means is that you can’t point to some element (e.g., “tire”) and say “Everything with a tire is obvious. End of inquiry!”
I think that’s pretty straightforward but I’m happy to explain it in more detail if you need more detail.
And the commode continues its outpourings…
if you really can practice the whole claimed method in your head, then it is not 101-eligible.
What if you can do it all in your head except for one calculation that you need a calculator for?
Try answering your own question (while remembering your absurd accusation from just last week).
Importantly, the method did not require a computer
If the claims had recited a computer (thereby “requiring” it) the result would not have changed.
What if the inventors had dropped “a computer” in the claim and put in their spec that their invention made it possible to perform 1000s of these new logic determinations per second, something that no human or computer had ever done before?
Now it’s an eligible claim? Or is it just “eligible enough” to get past summary judgment (which is really all the East Texas vermin care about)?
“If the claims had recited a computer (thereby “requiring” it) the result would not have changed.”
Absolutely false.
And just last week Malcolm accused me of making up a “strawman” of arguing about claims “TOTALLY in the mind,” and yet here, HE is doing exactly that – taking a claim “TOTALLY in the mind” and trying to make it equal to a claim that is NOT “TOTALLY in the mind.”
Complete lack of anything resembling ethics with that one.
I wrote: “If the claims had recited a computer (thereby “requiring” it) the result would not have changed.”
And “anon” replied Absolutely false.
You think that reciting “on a computer” would have changed the result in this case? Why?
Do you really need that explained to you?
Do you fail to see how your accusation of just last week is also a failure?
Please go ahead and explain how reciting “on a computer” in the claim would have changed the result here.
Or maybe Dennis can explain it to everyone when he gets around to writing up this precedential case.
And more commode flow.
As far as I can tell, the advancement in cartoon mouth art was to use multiple sounds to pick the shape. For example, mapping ‘fu’ to two different mouth shapes depending on whether it is preceded by ‘sna’ or by ‘tau’. The court decided that mouth shape picking was the technical field and there were lots of ways to accomplish it, and using multiple sounds was an advancement.
the advancement in cartoon mouth art was to use multiple sounds to pick the shape. For example mapping ‘fu’ to two different mouth shapes depending on if it is preceded by ‘sna’ or by ‘tau’.
Because prior to that nobody knew that the shapes your mouth makes when you say “fu” depends on what your mouth is doing before and after?
Here’s a news flash: the shapes your mouth makes when you say words also depends on whether you (or the “character”) is tired or cold. It also varies depending on whether you are shouting or whispering, or whether you are talking to a baby or person hard of hearing, or whether you are on stage or have food in your mouth. This is incredibly “technical” stuff!
The court decided that mouth shape picking was the technical field and there were lots of ways to accomplish it
But lo and behold, every one of those ways involves collecting information and using it to make a “determination”. Nobody could have predicted that!
Dennis, has the PTAB’s reduction of its large prior backlog of ex parte appeals had any significant effect on these recent-year issuance numbers? [Even claims rejected on appeal may lead to prompt issuance of patents on unrejected claims of the application.]
Good question Paul.
This also leads to consideration of each of the “decks of the Titanic” that “in process” applications may be stored upon.
For example, I am curious as to what the state of those applications that have been “put on the back burner” when the Office refocused the efforts of its examiners for quicker “first decisions,” and lessened the emphasis on “gravy train” processing.
Oops – add the following to 2):
…as a numerator. This would serve as a more meaningful normalization effect if people are then going to discuss enforcement of those obtained property rights.
Please disregard the placement here of this comment – comment now placed inline with the conversation below.
What does a chart of patents issued to American inventors per year on a per capita basis look like? Is the slope increasing relative to the growth in the American population?
The number of patents issued per year has risen at a rate of about 5% per year for the past 30 years (28 years shown in the chart). During that same time, US growth in population has been just about 1% annually. So, patents are increasing per capita. Focusing only on U.S. origin patents, in FY1988 there were about 1.6 patents per 10,000 Americans while in FY2016 there were about 4.4 patents per 10,000.
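The arithmetic above can be sanity-checked with a short sketch. The endpoint figures (1.6 per 10,000 Americans in FY1988, 4.4 per 10,000 in FY2016, 28 years apart) are taken from the comment; everything else is just compound-growth algebra, not real data beyond those quoted numbers.

```python
# Sanity check of the per-capita figures quoted above.
# Assumed inputs (from the comment): 1.6 patents per 10,000 Americans
# in FY1988, 4.4 per 10,000 in FY2016, 28 years apart.

def annual_growth(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

# Implied growth in US-origin patents per capita: about 3.7% per year.
per_capita = annual_growth(1.6, 4.4, 28)

# Consistency check: ~5% grant growth against ~1% population growth
# implies roughly 1.05 / 1.01 - 1, i.e. about 4% per-capita growth.
implied = 1.05 / 1.01 - 1.0

print(f"per-capita CAGR from endpoints: {per_capita:.3f}")
print(f"implied by 5% vs 1% growth:     {implied:.3f}")
```

The two numbers land within a few tenths of a percent of each other, so the quoted endpoints are at least internally consistent with the 5%-versus-1% growth claim.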
As would be expected.
Since innovation begets innovation, a nonlinear growth should be a welcome sign, n’est-ce pas?
That’s only counting one kind of person. How about the corporations? They’re just folk like you and me; what has the rate of increase in the number of corporations been over 30 years?
Patents (typically) STILL inure to inventors and not to the juristic person of corporations.
I would love to see two things:
1) a chart of the number of possible claims (not just patents) out there for which a lawsuit could be filed (would have to take away the lapsed items and add in the roughly six years of potential past damage items).
2) then with that chart as the denominator, use the actual number of distinct claims** that HAVE been brought in a lawsuit.
I would lay good money that such a chart would show a declining trend – quite the opposite of the angst against enforcing such personal property rights.
**the reason for “distinct” is to clear away some of the artificial spike induced by the AIA.
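The normalization proposed in 1) and 2) can be sketched as a simple ratio. Everything below is hypothetical: the function name and all the numbers are placeholders for illustration, since the comment asks for data that has not actually been compiled.

```python
# Sketch of the normalization proposed above: for each year, divide the
# number of distinct claims actually asserted in lawsuits by the pool of
# claims that could be asserted (in-force claims plus the roughly six
# years of potential past-damages items). All names and figures here are
# hypothetical placeholders, not real data.

def litigation_ratio(asserted_distinct_claims, enforceable_claims):
    """Fraction of the enforceable-claim pool actually litigated."""
    return asserted_distinct_claims / enforceable_claims

# Illustrative only: if the enforceable pool grows faster than filings,
# the ratio declines even while the raw suit count rises.
year1 = litigation_ratio(5_000, 2_000_000)   # 0.25% of the pool
year2 = litigation_ratio(6_000, 3_000_000)   # 0.20% of the pool
print(year1 > year2)   # True: more suits, yet a declining ratio
```

The point of the sketch is only the shape of the metric: a rising raw count and a falling normalized ratio are not contradictory, which is the distinction the proposed chart would make visible.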
Do you also get to divide the number of claims by three for “computer configured to”, “method” and “system for” all reciting the same elements?
Oops, that was meant to be in response to 1.1.1.
Your implication being that more litigation means our patent system is working better?
Quite the opposite, patentcat.
The takeaway from a declining ratio would be that there is less litigation.
Maybe instead of seeing “anon” and reacting defensively, you should take the time to understand what you are jumping at.
For clarity’s sake, the takeaway is the declining ratio: there is less litigation in the properly normed data.
So are you suggesting there are no systemic problems with our patent system (e.g. people utilizing poor quality patents for litigation trolling) because normalized lawsuits are down?
Also, to be comprehensive, these statistics would need to take into account people paying up without going to trial, yes? I’m assuming these aren’t counted under litigation.
I am NOT suggesting that there are ZERO problems.
I AM suggesting that the tail of the flea wagging the dog IS going on.
And it is going on for very real and calculated reasons to make the patent system weaker.
Do I need to remind you of who exactly coined the term “Patent Tr011,” and exactly why they did so?
Actually, I’ve heard conflicting etymologies. What is the origin story to which you are referring?
Also, words change meaning over time. Sure, some use the term to refer to any NPEs, but more use it to refer to the practice of wielding patents of dubious validity, and taking advantage of the fact that defending is much costlier than settling.
The “etymology” story has to do with the smear job by a Big Corp for the sole benefit of Big Corp.
The “tie” to shakedown has long been part of that dust-kicking, but as the effort by Ron Katznelson with his request of the Executive Branch yellow-paper on “Tr011s” showed, the term is very much a propaganda term.
Hmm, it seems the report is biased, as it does not consider the benefits of the patent system, only the costs, as required, but I’m not sure rk isn’t engaging in some propagandizing of his own.
Your statement once again reflects a basic anti-patent bias.
Check your animus my friend.
And you, yours (wrt biases).
Except not – but thanks anyway.
You also seem to have gotten sidetracked on the “etymology” and avoided the point that preceded it: the larger point about the declining ratio.
Was that on purpose?
And no, I didn’t get sidetracked, you didn’t answer whether or not the normalized metric you invented is actually a good metric (see 1.2.2.1.2).
Also, you have no evidence that this metric is actually declining, no?
Was this on purpose?
I DID answer – it is a good metric.
What part of the answer did you not understand?
And no, you did not return to the main point – just as I indicated.
And my wanting the data shown is there for a reason – you AGAIN are jumping without thinking here, patentcat.
No, it’s a metric that’s subject to many systematic errors.
Nonsense patentcat.
Show me a metric that is more on point.
Again, you’re assuming there is a metric that is on point. I’m not sure that one exists.
But for the reasons I’ve listed below, your metric is flawed.
You want to throw the baby out with the bath water.
By your “reasoning,” the information itself is flawed and should be thrown out. My metric only serves to put that information in proper light – no matter how “flawed” you may think it is, it remains better than even just the raw data.
Also, what do you mean by a weaker patent system? Are you referring to one where many poor quality (likely obvious, e.g.) patents are granted?
Weaker as in the strength of patents is weaker.
And no – your version is just more kool-aid of a desired response to the perception that “too many bad patents have been granted.”
The answer to ANY such “bad granting” is better work by YOU examiners, and NOT degrading the strength of granted patents.
So even though there are bad examiners who incorrectly issue(d) bad patents, you want the badly issued patents to be strong.
Well, that seems like the kind of system that would encourage bad actors/gaming of the system.
But I’m sure you see that as a feature, not a bug… Plus something something “ends don’t justify the means” to make it seem like you have some philosophical/principled reason for opposing such measures.
I want the system to be strong AND the bad examining fixed at the root level.
This is in stark contrast to ignoring the root level and weakening ALL patents.
Your “suppositions” do not fit, and you are trying to cast stones at a blameless position.
Once again – examine your animus.
I agree, if there were no problems at the root level (i.e. perfect examination), we could have strong patents. I’d like to have such a system.
But it’s simply not economical based on the current fee/rule structure.
And until we have such a system, we can’t have strong patents without encouraging bad actors.