The USPTO is seeking information on artificial intelligence (AI) inventions. This topic generally includes both (a) inventions developed by AI (wholly or partially) and (b) inventions of AI. Although the focus here is AI invention, the relevant underlying thread is corporate invention.
- What are the elements of an AI invention? For example: If a person conceives of a training program for an AI, has that person invented the trained AI? If a person instructs an AI to solve a particular problem, has that person invented the solution (once it is solved by the AI)?
- What are the different ways that a natural person can contribute to conception of an AI invention and be eligible to be a named inventor? For example: Designing the algorithm and/or weighting adaptations; structuring the data on which the algorithm runs; running the AI algorithm on the data and obtaining the results.
- Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?
- Should an entity or entities other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?
- Are there any patent eligibility considerations unique to AI inventions?
- Are there any disclosure-related considerations unique to AI inventions? For example, under current practice, written description support for computer-implemented inventions generally requires sufficient disclosure of an algorithm to perform a claimed function, such that a person of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?
- How can patent applications for AI inventions best comply with the enablement requirement, particularly given the degree of unpredictability of certain AI systems?
- Does AI impact the level of a person of ordinary skill in the art? If so, how? For example: Should assessment of the level of ordinary skill in the art reflect the capability possessed by AI?
- Are there any prior art considerations unique to AI inventions?
- Are there any new forms of intellectual property protections that are needed for AI inventions, such as data protection?
- Are there any other issues pertinent to patenting AI inventions that we should examine?
- Are there any relevant policies or practices from other major patent agencies that may help inform USPTO’s policies and practices regarding patenting of AI inventions?
Send your comments to the USPTO (AIPartnership@uspto.gov) by October 11, 2019.
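The deep-learning disclosure question above (hidden layers whose weights evolve during training without human intervention) can be made concrete with a toy sketch. Everything below is invented for illustration: a tiny two-layer network trained by gradient descent, where the human author writes the architecture and training procedure, but the final weight values emerge from the training run rather than from anything the author wrote down.

```python
# Toy illustration of the written-description tension: the human specifies
# the architecture and training procedure, but the final weight values
# emerge from training rather than from human design.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data and a 2-4-1 network, chosen only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # hidden-layer weights (random initial values)
W2 = rng.normal(size=(4, 1))   # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

initial_W1 = W1.copy()
for _ in range(5000):
    h = sigmoid(X @ W1)        # hidden layer activations
    out = sigmoid(h @ W2)      # network output
    # Backpropagate squared error and update the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

# The trained weights differ from anything the programmer specified.
print(np.allclose(W1, initial_W1))  # → False: the weights evolved
```

The specification can fully disclose every line of this program, yet the claimed trained network is characterized by weight values no human ever chose, which is exactly the gap the USPTO's question probes.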
3. Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?
NOT currently.
NO entity or entities other than a natural person can contribute to the CONCEPTION of anything.
Conceptualization is something human brains do. Unconscious machines can and do process information and produce useful streams of visual or auditory symbols which humans can perceive and subsequently think about, but the machines are not sentient nor conscious: they do not think, nor conceive of anything. They process information in a manner which produces results that are similar (quite superficially) to what a sentient being communicates after thinking and conceptualizing, but the current toys are nowhere near anything capable of actually thinking and cannot contribute anything “conceptual”. At most, the output received from an unconscious machine is informational, which can then be thought about by a human being. One day, complex natural systems (perhaps partly biological, perhaps not) which are not human brains may become sentient, conscious, and capable of rationality, thought, conceptualization, and free will… but that would most likely entail a science of consciousness which fully understands these things… which we do not currently have. We are nowhere near to creating such a thing.
3. Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the CREATION of an invention?
Not if we are careful to take “discovers” SERIOUSLY, and to continue to interpret “invents” correctly.
A wise man once said “Dubito, ergo cogito. Cogito, ergo sum.”
In other words, my doubts prove that I have the capacity to “think”.
Machines do not have doubts about anything. Ergo, machines do not “think”.
If you have no capacity for thought you have no ability to “invent”.
I suppose that, one day, machines will start to have doubts. But somehow I doubt it.
As you say, conceptualisation is something humans do but machines don’t. Instead, they depend on us to prescribe them any concepts they need.
Recall those “Are you a robot” tests, in which you have to pick out from 12 photos, those which show a “concept” like “dessert” or “flower”. Easy for us, but for a machine, till now, impossible.
I suppose that, one day, machines will start to have doubts. But somehow I doubt it… Recall those “Are you a robot” tests, in which you have to pick out from 12 photos, those which show a “concept” like “dessert” or “flower”. Easy for us, but for a machine, till now, impossible.
Sure, but (at the risk of belaboring what is surely a rather pedestrian observation to a patent professional) technology advances. The first steam engines were toys. I am sure that some ancient Greek looking at this steam powered toy might have said “I suppose that, one day, steam engines could do productive work. But somehow I doubt it.”
I am totally agnostic as to whether genuine artificial intelligence is possible. Maybe it is, maybe it is not. I think that it goes too far, however, to be positively skeptical of the idea. Clearly it is impossible under the constraints of current technology, but current technology will not continue to be “current” indefinitely. Who knows what will become possible in light of as-yet-uninvented advances?
Even now, we don’t know how the machine beat the world champion Go player. Nor indeed does it. It has an intelligence way beyond anything we can conceive.
In the future IoT, machines will organise life on Planet Earth, in a way that preserves biological life on Planet Earth. Why? Because without biological life, the Planet gets too hot for the machines to survive, and they will make sure they themselves survive.
But even then, the machines will still not be entertaining any doubts, will still not think (or invent) like humans do.
“But even then, the machines will still not be entertaining any doubts,”
Until the Singularity actually happens, statements such as this are meaningless.
MaxDrei, there is no way for you to know a priori just what that future sentience will be or possess.
Yes indeed. Who knows. I recommend the new book, Novacene, from James Lovelock (the “Gaia” man) who explains that without biological life, Planet Earth would be too hot for intelligent machines to survive.
Accordingly, the ever more intelligent IoT will conclude that mankind has to be preserved. Once it reaches that conclusion, it will take whatever steps are necessary, to bring to a halt man’s wanton destruction of life on Earth.
I can understand all that. But I remain sceptical whether the IoT will ever suffer doubt like a human brain.
James Lovelock…?
Interesting person, but a little too invested in his own belief system, wouldn’t you say?
He certainly has the bio-science background, but without (diving deeply) a view of why he supposes that sentient machines would need a bio-driver for the planet to maintain a certain planet-temperature for machine sentience to survive, I think that he thinks that he understands what that machine sentience would BOTH be capable of as well as what would be necessary for its sustainability — without any more credibility than anyone making up pure fiction.
As you appear to have read him, perhaps you can sketch out just why sentient machines would need a) a certain prescribed temperature, and b) what limits of a living biological set of conditions would have to be present.
Bottom line is that machine sentience need not be carbon based, and ALL carbon-based hypotheses are thus suspect.
According to Lovelock, a planet so close to its sun as is our own Planet Earth, and devoid of organic life, would lack the atmosphere that keeps the planet’s surface from frazzling at several hundred °C. Not even the intelligence of the machines that will make up the IoT can survive then.
Here is a book review:
link to theguardian.com
Lovelock finances himself partially from patent royalties. Early on, he was commissioned by NASA to design instruments to go into space. If I remember right, there is one still on the Moon. Here is his Wikipedia entry:
link to en.wikipedia.org
Whether or not the IoT is clever enough to control human behaviour is one thing. Whether it could save its own life by transferring its own intelligence out of its silicon base to some other material base is quite another. For as long as it stays silicon-based however, says Lovelock, it will need to preserve biological Life on Earth.
Hmm,
Not sure where Lovelock got the idea that our atmosphere is a result of the bio factor, but more than sure that such is just not correct.
Further, you have NO basis to say what a sentient machine may or may not be able to survive with – for example, even without atmospheres, we would have the oceans (unless you want to eliminate those too, at which point you are well into the fiction…)
Additionally, I have no idea where you are glomming onto this notion of “transferring its own intelligence out of its silicon base” as IF any such transfer would be a necessity.
As I indicated, he may have been a wee bit overstocked in his own theories…
of note in your link:
“An excellent overview of these issues is Life 3.0 by Max Tegmark, an American physicist and founder of the Future of Life Institute”
I have referenced Tegmark in past discussions of patent eligibility (and the distinctions necessary between math, applied math and the philosophy of MathS).
And the one (the human factor in any machine sentience hypothetical) has NOTHING to do with the other (the nature of machine sentience and whether or not that sentience would or would not have a “human equivalent” of doubt).
AI is interesting in that, in some specific cases:
If it is obvious to “train” an obvious/standard AI system to learn something to come up with a solution, then the products of the AI system, no matter how useful and unobvious, are simply not an invention in any sense… not any more than the owner of a room full of monkeys randomly typing should be able to claim to be the author of the randomly generated “poetry” which is occasionally produced. In some sense the starting ingredients (the AI system itself, the techniques, or the input) must be unobvious… not merely the results. Of course, in the above example, identifying the new unobvious solution would constitute a discovery; since AI is not sentient, the person who first identifies it makes the discovery.
In some other cases, where the code/neural patterns etc. resulting from AI are unfathomably complex and no one knows why or how they work, such would not be capable of being claimed in a meaningful way, and there would be no way to determine whether potential infringers’ code/neural patterns are doing anything like the same things in the same way to obtain the same result… i.e. there would be no standard by which to meaningfully judge infringement, unless results as such became patentable, which would be wrong IMHO. Here some “reverse engineering” of the products of AI would need to be done, and the discovery of the solution should be patentable.
In some other cases where the code/neural pattern etc. resulting from AI are unfathomably complex and no one knows why or how it works, such would not be capable of being claimed in a meaningful way
No different from any other software claim that covers every coding embodiment conceivable which achieves the recited result (i.e., the ineligible “functionality”) in any operating system but which recites precisely ZERO examples of specific bug-free working code in the claim or in the specification.
You clearly do not get the Person Having Ordinary Skill In The Art part of the law.
Not “any other software claim” IMHO
1) “A method of achieving result R1” claims the result.
2) “Doing A, B, and C” (which happen to result in R1) claims “how” to achieve R1 with a “what” that constitutes (in appropriate cases) an inventive combination (ordered, interrelated etc) of A, B, and C, even if A, B, and C, in isolation, are known. In fact all inventions are such combinations of known A, B, C, … and need only be claimed down to the broadest level required to be an inventive combination. The specific how of each of A, B, and C are immaterial; it is the HOW manifested in the combination of A, B, C, to achieve R1 which is relevant.
It follows that the level of AI reverse engineering required would correspond only down to that broadest “how” and “what” (A, B, C,…) required to qualify as an invention, and no further (which would be overkill).
We disagree on what level of constituents (A, B, C) constitutes the appropriate level of breadth/combination.
Careful Anon2, your:
“In fact all inventions are such combinations of known A, B, C, … and need only be claimed down to the broadest level required to be an inventive combination.” is about to run smack into my put-down of Malcolm’s logic in the Big Box of Protons, Neutrons, and Electrons manner.
Also to be careful:
“The specific how of each of A, B, and C are immaterial”
As this alights upon my earlier warning of the Trojan Horse (turtles all the way down) effects of the currently being worked on possible changes to 35 USC 112(f) that the anti-software folks are salivating over (and which would have immense non-software collateral damage most everywhere outside of the ‘picture-claim’ Arts).
I suspect that these salivators will adjudge as appropriate a different level at which the specific how of each A, B, and C are immaterial in a claim… but I would challenge any radical in their number asserting that there is no sufficiently low/narrow level at which further break down of the specific how each of A, B, and C is immaterial… for in accordance with such an assertion NO claim would be possible…
truly an infinity of turtles.
PS: Re picture claims
A picture = only 1000 words
THIS would NOT be enough for the radicals… falling short of an infinite, never ending claim.
lol – I concur (but when has such ever stopped the proponents of anything that smacks of limiting patent rights?)
in accordance with such an assertion NO claim would be possible…
“If functionally claimed software isn’t eligible for patenting, then nothing is! Because NOTHING HAS STRUCTURE”.
Yes, we’ve heard this nutcase argument before. Get off drugs, please, and get a life.
You are hearing things because that is not what has been said here.
Your cognitive dissonance effect is spreading.
Not “any other software claim” IMHO
1) “A method of achieving result R1” claims the result.
2) “Doing A, B, and C” (which happen to result in R1) claims “how” to achieve R1 with “what” constitutes (in appropriate cases) an inventive combination (ordered, interrelated etc) of A, B, and C, even if A, B, and C, in isolation are known…
I agree with your assertions in #1 & #2, but I am not clear how you perceive these to be different from “any other software claim.” If one claims a result, the claim fails for inadequate description, enablement, and/or definiteness. If one claims the steps to obtain the result, however, then one might have a valid and enforceable claim. This is true of AI claims, to be sure, but equally true of any other claim in the software space.
The specific how of each of A, B, and C are immaterial
If you’re claiming at the level of logic without any objective corresponding physical structure then the specifics are very material because your claim is ineligible.
Unless you want to create some exception out of thin air.
LOL – it’s the canard of Malcolm’s intrusion of the optional claim format of “objective physical structure” (yet again).
Are you suggesting that if a room full of monkeys, to which I have clear and proper title, produces a valuable invention that I am somehow barred from filing as the inventor? Exactly why would that be the case?
Likewise, if I hold title to the output of an AI, and that AI produces something new, useful, and fully described, why should I not be able to file as the inventor?
This is an instance where “sarcasm marks” punctuation would be helpful. Please forgive my obtuseness, Martin, but I cannot tell whether you meant this in earnest or jest.
I’m suggesting that “invention” is a cognitive act that (currently) only a human person can perform.
If a person is first to identify the products of an external process, as new, useful, and unobvious, it is still possible he will have “discovered” something patentable. The external process here could be monkeys in a room, a random word generator, a trial and error testing machine, or a carefully crafted and trained AI system.
This is one area where one should take “discovers” seriously, and preserve what “invents” actually means. Attempting to twist words to mean what they do not is always fraught with unintended consequences … as are all indulgences in irrationality so fraught, IMHO.
…reminds me of a brief from Sherry Knowles…
As to Question #4, ownership of the patent right, I recall a recent case about ownership of copyright, involving a selfie created by a non-human animal. Is there anything to learn from that case?
The monkey case answers question 4 in the negative.
AI inventions are a bit odd. I have been writing some applications for AI recently. A lot of companies are replacing their rule-based approaches with neural network approaches with success. There is quite a bit of structure in the solutions.
What I would say is: focus on the new structure. Use real patent law in looking at the real structure. The neural networks don’t train themselves and don’t create themselves. The way they are trained, and with what, is the structure.
(And, yes, the AI solutions are real. In production and are an improvement over the previous solutions.)
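Rene’s point, that the training recipe rather than the learned weight values is the describable structure, can be sketched as a hypothetical disclosure checklist. Every field name and value below is invented for illustration, not drawn from any real application:

```python
# A hedged sketch: what a drafter can concretely describe is the training
# recipe and architecture (the "structure"); the learned weight values
# fall out of the training run and are none of these things.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingRecipe:
    """Human-specified structure that a specification could disclose."""
    layer_sizes: tuple    # architecture, e.g. (128, 64, 10)
    activation: str       # nonlinearity applied at each hidden layer
    loss: str             # objective minimized during training
    optimizer: str        # update rule for the weights
    learning_rate: float  # step size for the optimizer
    training_data: str    # description of the dataset and its labeling

# Hypothetical example values.
recipe = TrainingRecipe(
    layer_sizes=(128, 64, 10),
    activation="relu",
    loss="cross_entropy",
    optimizer="sgd",
    learning_rate=0.01,
    training_data="10,000 labeled widget images (hypothetical)",
)

print(recipe.layer_sizes)  # → (128, 64, 10)
```

The recipe is fixed, reproducible, and human-authored, so it can anchor written description; the resulting weights are emergent, which is why the comment suggests focusing the claim on how the network is trained and with what.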
#4 No. Something has to control that other and that entity or person owns whatever is created.
…this point may be more pertinent to the Simian Selfie case.
Slightly distinguished in the ‘call of the question’ (initial ownership for chain of title versus any ownership) — but the Stanford v Roche case may be applicable.
Since we do not have an actual singularity yet (which appears to be the basis of Greg’s comments), AI does not rise to the level of a juristic person and cannot own anything.
In order to better understand his area of interest, an inventor will program his computer to implement some detail. Once he views enough such “simulations” he may have an idea for a specific invention.
Not what you’re talking about, but if what you are talking about is worded too broadly or too carelessly, it may read on what that inventor did. He made a tool and used that tool to get more insight into what he was interested in. So the rule you write mustn’t cover that sort of situation.
Another AI question the PTO could have asked would be: What are the situations in which the algorithms and/or weighting adaptations or data structures would be better protected by being maintained as trade secrets?
?… because the patent office should be encouraging trade secrets…?
because the patent office should be encouraging trade secrets…?
As opposed to encouraging the filing of reams of junk that the “owners” can’t even describe because the operations are so …. “deep” (LOL)?
Your feelings are noted.
Aside from the known bias of those feelings: according to anyone who is intellectually honest about the value of having a patent system (with the inherent understanding that more patents is always better), then YES, as opposed to what you deign to be some clever (but only clever by half) retort.
1776: US founded
[200 years of white people discriminating against black people and protesting the Fed government’s interfering with their “right” to discriminate]
1965: black people finally guaranteed right to vote
1965-present: assassination of numerous black/liberal leaders and rise of neo-Nazi white nationalist movement
2015: wealthy anti-Fed glibertarians start routinely dropping references to “AI” and “machine learning” into their patent applications
2019: wealthy patent attorneys begin “debate” over whether computing machines (owned by even more wealthy people) can be “inventors”
What a country. Will the computers be filing patent lawsuits on their own behalf as well? (only those suits that are determined to be winnable, of course)
I can reduce these 12 questions to one:
“1. Ask AI
2. ????
3. Profit!!!
Ironic Meme, or Way to Structure R&D for an Allegedly First World Nation?”
The answer to the 12 questions largely depends upon whether you believe that when you are claiming a software program you (correctly) have to fully disclose the entire algorithm for running a software program, or if you (incorrectly) can just leave undescribed black boxes and claim a result.
For example: If a person conceives of a training program for an AI, has that person invented the trained AI? If a person instructs an AI to solve a particular problem; has that person invented the solution (once it is solved by the AI)?…Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?
Well let’s see – can a school conceive of a training program for a person and claim all the output of a person as a business method regardless of lack of disclosure or evolution of the person over their lifetime? “Our university trained this guy to be a scientist, and then he went out and scienced. He scienced so hard that he did good in the world just like we told him to do. We trained him somewhat and he was just doing what we told him to do, so didn’t we actually invent that thing instead of him?”
What the F are you doing STILL as a patent examiner?
CLEARLY, you do not examine as you rant here, as what you rant is simply not in accord with the Office statements on how to examine in the computing arts.
Me: The answer to the 12 questions largely depends upon whether you believe that when you are claiming a software program you (correctly) have to fully disclose the entire algorithm for running a software program, or if you (incorrectly) can just leave undescribed black boxes and claim a result.
anon: CLEARLY, you do not examine as you rant here, as what you rant is simply not in accord with the Office statements on how to examine in the computing arts.
Normally I can forgive or ignore your lack of knowledge, but they literally state the current rule IN THEIR QUESTION and then ask if it should be changed:
For example, under current practice, written description support for computer-implemented inventions generally requires sufficient disclosure of an algorithm to perform a claimed function, such that a person of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention.
Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?
Just to be clear for anyone with any analytical skill at all (so maybe this isn’t for you, anon), what the PTO is literally saying here is “This is the law. We’re thinking of ignoring the law for AI. Do you guys think we should ignore the law?” As if the office has any jurisdiction to do that.
“Just to be clear for anyone with any analytical skill at all (so maybe this isn’t for you, anon), what the PTO is literally saying here is “This is the law. We’re thinking of ignoring the law for AI. Do you guys think we should ignore the law?””
No. They are not literally saying that. They’re saying ‘This is how we interpret the law in this existing context. In this new adjacent context, should the law be interpreted the same way or in a different way?’
And this is coming from someone who happily acknowledges that the office repeatedly has lapses in accuracy regarding their policy on written description.
No. They are not literally saying that. They’re saying ‘This is how we interpret the law in this existing context. In this new adjacent context, should the law be interpreted the same way or in a different way?’
I agree that is what is being said, which is why I’m right. The office has no authority to interpret law in derogation of superior authorities. The issue stops after “this is how [the federal circuit] interprets the law in this existing context” because the office has no authority to interpret “in a different way” even if (and this is certainly not actually the case) it is a “new adjacent context.” It’s not like they are co-equal with the federal circuit and can declare some situation “distinguishing” and write their own law. They’re an inferior tribunal and don’t have authority to distinguish on their own – they have to apply the blanket rules given to them. There is a rule for written description of software, and the claim is to software, so the written description rule applies, even if the office thinks that it shouldn’t apply to AI software because AI software is special or that it has a better rule for software in general. The only one who can distinguish over an applicable rule is the court that issued the rule or a higher court.
Rules from higher courts aren’t advisory, they’re mandatory. Do you think the federal circuit has been applying 101 to fact patterns outside of “business methods performed on a computer” because they agree with 101, or because they’re commanded to apply it in all situations?
“Distinguishing” is a poor choice of words I realize. Distinguish is often used to say that a fact pattern does not fall under the ambit of a given rule. The office is not distinguishing here, they acknowledge that there is a rule that applies to this situation. They are, as I pointed out in the first post, *ignoring* an applicable rule. Only a court that issued a rule (or a higher court) can make *exceptions* (which is the proper term I meant) to a general rule, not an inferior court. The office has no authority to generate an exception to a rule it knows applies, and it has no authority to *ignore* the rule either.
Ben’s argument rings in distinguishment, but the office doesn’t suggest the facts distinguish (and such a suggestion would be ridiculous, a claim to AI is obviously a claim to software), the office suggests that it can make an exception to a rule that applies. It cannot.
The office has no authority to generate an exception to a rule it knows applies, and it has no authority to *ignore* the rule either.
“Ignoring the law” is something that brave people do when they know they are correct and they are willing to fight for what’s right. So you have your defense ready and you “ignore the law.” Let’s go to the Supreme Court and fight it out, the brave person would say, because the CAFC is making absolutely no sense.
There is literally nobody in the PTO like that. This has been the case for a long time.
But as I said, some of us will definitely be “ignoring the law” if these so-called “Tillis amendments” to 101 are made. We won’t really have much choice, either. But ignore it we will. And then the Supreme Court can decide.
Brave person fighting an unjust law and the executive branch empowered to apply the law are two very different things.
You sure that you are an attorney? You seem to have forgotten some of the School House Rock basics.
RG: There is a rule for written description of software, and the claim is to software, so the written description rule applies, even if the office thinks that it shouldn’t apply to AI software because AI software is special
There’s nothing special about AI software.
even if the office thinks … it has a better rule for software in general.
ROTFLMAO
They could just stop issuing patents on any kinds of software because it’s all structureless abstraction (literally, it’s instructions for applying logic to data, written for instructable computers to follow because instructable computers can apply the logic faster than instructable people, which is something that has been well-understood for … almost a century? maybe longer).
But that can’t happen because there aren’t enough ear plugs in the world to prevent us from going deaf from listening to these rich patent attorneys and their glibertarian con artist clients whining like the little babymen that they are.
“… instructable people…”
Anthropomorphization
YOUR rants are not THEIR questions.
How you handle (or to be more precise, NOT handle) “sufficiency” is a critical distinction (at least one such).
[C]an a school conceive of a training program for a person and claim all the output of a person as a business method regardless of lack of disclosure or evolution of the person over their lifetime?
Thank you for this. This is a very clarifying question which helps to get at the point I was trying to make in my exchanges with Les below (pts. 3–3.1.2.1.1.2). To my mind, artificial intelligence bespeaks something along the lines of what a student has. Even an intelligent student (even Einstein) would not be able to do the best work for which s/he will later be known without the education that the school provides. In the end, however, we do not credit the school for all of the achievements of its students, because the students’ own agency must be recognized as a sort of “superseding cause” (as the common law of torts would describe it).
In the same way, if we are talking about true artificial intelligence, then the agency of that intelligent machine must extend beyond merely that which was programmed into it. Otherwise, the object in discussion is not really “intelligent.”
I think that it is probably accurate to say that the law does not presently allow us to recognize the inventive contribution of a genuinely intelligent machine, but that reflects more a defect in the law than anything else. There was once a time when the patent system did not permit slaves to count as inventors, but the slaves’ owners were also not allowed to patent the inventions because the owners were not—properly speaking—the inventors. This was a suboptimal arrangement, which we solved by enacting the XIII amendment. If the promises of AI are truly realized, then we will have a new situation in which there exists a class of inventors who are not permitted to apply for patents on their inventions. It seems to me that the law should be amended to correct that defect (although it might make sense to wait to see how the AI field develops before trying to draft the specifics of that legislative change).
If the promises of AI are truly realized
LOL
Very deep, serious stuff here.
Who determines when that “realization” happens? Surely it will be some computer-knowledgeable person with the brilliance and unimpeachable integrity of, say, Elon Musk, Fred Hyatt, or maybe even J. Nicholas Gross.
we will have a new situation in which there exists a class of inventors who are not permitted to apply for patents on their inventions.
We already have that class of inventors. We call them “animals.” But I suppose when the manimals arrive, we will have to deal with them, too. [checks under bed]
It seems to me that the law should be amended to correct that defect
ROTFLMAO
Because it would be such a huge injustice for intelligent machines to serve as our slaves without patent rights.
Good f—–g grief, what is it with patent attorneys????
Greg, are you talking about the Singularity…?
The most important thing is to continue handing out as many junky “AI” patents as possible before even trying to answer the most obvious questions. After all, nobody at the PTO has expertise on this subject. So let’s ask the “customers”!
Hey kids, should we put a label on the ice cream that identifies the amount of cyanide by milligrams in total, or should we identify the amount on a per serving basis?
Should we put a courtesy warning on the package in bold font, or should we use a standard font with underlining of some words?
Comments accepted until September 10.
If a person conceives of a training program for an AI, has that person invented the trained AI?
“If a person conceives of a training program for a monkey, has that person invented the trained monkey?”
New animals are eligible for patenting. The Monty Hall experiment proves that the trained animal is different.
Therefore derp it must be eligible derp Giles DERP DERP Rich 1952 DERP twas ever thus.
It seems like you want to make an inherency doctrine argument — but you don’t seem to know how to go about it.
Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?
How high on drugs do you have to be to even ask this question without wanting to punch yourself in the face until unconscious?
The US patent system is a f—-ing j0ke.
deep learning
ROTFLMAO
The self-important t0 0ls who came up with this term really do need to be m0cked for the rest of their lives.
When does “machine learning” become “deep”, oh serious people?
GregDL:
A modest suggestion: we would do well to create a depository system for neural networks along the same lines as that presently used for recombinant organisms. This would function best if there were a degree of international coordination on the effort, along the same lines as the Budapest Treaty of 1977.
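For what such a deposit might actually contain, here is a minimal sketch in plain Python (all names and the record format are hypothetical, not any existing depository's scheme): the serialized network weights plus a cryptographic fingerprint, so a depository could later verify that a filed artifact matches what the applicant claimed to have deposited.

```python
import hashlib
import json

def deposit_record(weights, metadata):
    """Serialize weights deterministically and fingerprint them for deposit."""
    # sort_keys makes the serialization reproducible, so the digest is stable
    # across runs and across machines
    blob = json.dumps({"weights": weights, "meta": metadata},
                      sort_keys=True).encode("utf-8")
    return {
        "sha256": hashlib.sha256(blob).hexdigest(),  # verifiable fingerprint
        "size_bytes": len(blob),
        "meta": metadata,
    }

# Toy two-layer "network" standing in for real trained weights
rec = deposit_record([[0.12, -0.5], [1.3, 0.07]],
                     {"model": "toy-net", "layers": 2})
```

A real scheme would of course also need versioned serialization formats and an access/escrow policy, which is exactly where Budapest-style international coordination would come in.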
LOL Maybe we can do something similar with “algorithms”, as I have been constantly begging the maximalists to consider for the past ten years, at least.
Unfortunately, that sort of thing is way, way, way too “grown up” for these people. It would require a lot of work and the specification would have to be bigger than five pages and include all kinds of detail. What would be the point for “technology” that will almost surely be completely obsolete or of zero interest to anybody within a year of its publication? I mean, if there’s no way to get a generalized overbroad claim that can be used to shakedown giant players across multiple industries, then nobody is going to bother with the patent anyway.
Sheesh! You act like the patent system is for something else besides entertainment for rich speculators who enjoy counting their money more than anything else in the world (possible exceptions for cocaine and young, loose women). Get real.
R2D2, C3PO, Robot, Hal, and other inquiring non-humans are eager to know . . . can we pease haz patents?
training programs
ROTFLMAO
So-called “artificial intelligence” or “machine learning” is nothing more than a computer following the instructions it was given (in this case, the computer has been “instructed” to take results and incorporate those results into determinations about how to make its data processing more accurate). It’s a buzz word and nothing else.
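Stripped of the buzzwords, the feedback loop described here can be sketched in a few lines of plain Python (a deliberately trivial one-weight example, not any particular system): the machine produces a result, measures how wrong it was, and folds that error back into its parameter, exactly as instructed.

```python
def train(pairs, lr=0.1, epochs=100):
    """Fit y = w * x by repeatedly folding prediction error back into w."""
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x          # the machine's current "result"
            error = pred - y      # how wrong that result was
            w -= lr * error * x   # instruction: adjust w to shrink the error
    return w

# Data drawn from y = 2x; w converges toward 2.0
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

Whether one calls that "learning" or just "following instructions" is precisely the dispute in this thread; the mechanics are the same either way.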
To the extent that what is being claimed is “instructions” or a resultant “functionality” without the recitation of objective physical structure that distinguishes the claimed machine from prior art machines, computers that are allegedly “artificially intelligent” should be banned from the system for the same reasons that every other kind of “newly programmed” computer should be banned from the system: they are ineligible, or they fail the written description requirements that apply to every other type of subject matter.
Or we can just add even more nonsensical “exceptions” to our ce-s-s-p-0-0-l of a patent system just because some toxic rich @ h0les want it that way.
Your rant is noted.
“under current practice, written description support for computer-implemented inventions generally require sufficient disclosure of an algorithm to perform a claimed function, such that a person of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention”
I guess it’s important to make these noises, but it seems silly since we all know it’s not true.
Iancu is a fraud and a well-documented liar, just like the guy who appointed him, so none of this should surprise anybody.
As we all know, the only reason that “instructed instructable computers” are eligible for patenting in the first place is because of some bizarre exceptions created out of thin air to permit them (predicated on the ridiculous notion that “data is the essence of electronic structure”, a concept floated by a judge whose brain was probably half-eaten by worms and another judge who was later forced to resign in disgrace). But the PTO and the patent maximalists can never admit this because it isn’t part of their script.
“Patenting methods of teaching teachable robots how to learn from mistakes is exactly what the Framers intended DERP DERP!” That’s the script.
“Teaching is a process DERP”. That’s the script.
“All the machine learning tech is going to move to China DERP”. That’s the script.
“… we all know, the only reason that “instructed instructable computers” are eligible for patenting in the first place is because of some bizarre exceptions created out of thin air to permit them”
35 USC 101: machines or ANY improvement….
35 USC 101: manufactures or ANY improvement….
You certainly have an odd way of NOT trying to turn your penchant for an optional claim format into a massive drive for non-optional [and not the law] views…