Artificial Intelligence (AI) Patents

The USPTO is seeking information on artificial intelligence (AI) inventions.  This topic generally includes both (a) inventions developed by AI (wholly or partially) and (b) inventions of AI.  Although the focus here is AI invention, the relevant underlying thread is corporate invention.

  1. What are elements of an AI invention? For example: If a person conceives of a training program for an AI, has that person invented the trained AI? If a person instructs an AI to solve a particular problem, has that person invented the solution (once it is solved by the AI)?
  2. What are the different ways that a natural person can contribute to conception of an AI invention and be eligible to be a named inventor? For example: Designing the algorithm and/or weighting adaptations; structuring the data on which the algorithm runs; running the AI algorithm on the data and obtaining the results.
  3. Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?
  4. Should an entity or entities other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?
  5. Are there any patent eligibility considerations unique to AI inventions?
  6. Are there any disclosure-related considerations unique to AI inventions? For example, under current practice, written description support for computer-implemented inventions generally requires sufficient disclosure of an algorithm to perform a claimed function, such that a person of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?
  7. How can patent applications for AI inventions best comply with the enablement requirement, particularly given the degree of unpredictability of certain AI systems?
  8. Does AI impact the level of a person of ordinary skill in the art? If so, how? For example: Should assessment of the level of ordinary skill in the art reflect the capability possessed by AI?
  9. Are there any prior art considerations unique to AI inventions?
  10. Are there any new forms of intellectual property protections that are needed for AI inventions, such as data protection?
  11. Are there any other issues pertinent to patenting AI inventions that we should examine?
  12. Are there any relevant policies or practices from other major patent agencies that may help inform USPTO’s policies and practices regarding patenting of AI inventions?
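Question 6's concern is easiest to see in miniature: even in the simplest trained model, the human author specifies only the architecture, data, and update rule, while the weight values that actually perform the claimed function emerge from training. The following is a minimal, purely illustrative sketch (the function names and data are invented for the example):

```python
import random

# Illustrative sketch only: a one-weight "network" fit to y = 2*x by
# gradient descent on squared error. The human-authored disclosure is the
# procedure below (initialization, loss, update rule, data); the final
# weight value is produced by the process, not written down in advance.

def train(data, lr=0.1, epochs=200):
    w = random.uniform(-1.0, 1.0)       # human specifies only the init scheme
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad              # the update rule is what gets described
    return w

data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
w = train(data)
print(round(w, 3))  # converges to 2.0, a value no human chose directly
```

The written-description question is whether reciting the procedure, as above, suffices when a deep network has millions of such weights evolving together, with no human-readable summary of the result.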

Send your comments to the USPTO (AIPartnership@uspto.gov) by October 11, 2019.

117 thoughts on “Artificial Intelligence (AI) Patents”

  1. 18

    3. Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?

    NOT currently.

    NO entity or entities other than a natural person can contribute to the CONCEPTION of anything.

    Conceptualization is something human brains do. Unconscious machines can and do process information and produce useful streams of visual or auditory symbols which humans can perceive and subsequently think about, but the machines are not sentient or conscious: they do not think or conceive of anything. They process information in a manner which produces results that are (quite superficially) similar to what a sentient being communicates after thinking and conceptualizing, but the current toys are nowhere near anything capable of actually thinking and cannot contribute anything “conceptual”. At most, the output received from an unconscious machine is informational, and can then be thought about by a human being. One day, complex natural systems (perhaps partly biological, perhaps not) which are not human brains may become sentient, conscious, and capable of rationality, thought, conceptualization, and free will… but that would most likely entail a science of consciousness which fully understands these things… which we do not currently have. We are nowhere near to creating such a thing.

    3. Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the CREATION of an invention?

    Not if we are careful to take “discovers” SERIOUSLY, and to continue to interpret “invents” correctly.

    1. 18.1

      A wise man once said “Dubito, ergo cogito. Cogito, ergo sum.”

      In other words, my doubts prove that I have the capacity to “think”.

      Machines do not have doubts about anything. Ergo, machines do not “think”.

      If you have no capacity for thought you have no ability to “invent”.

      I suppose that, one day, machines will start to have doubts. But somehow I doubt it.

      As you say, conceptualisation is something humans do but machines don’t. Instead, they depend on us to supply whatever concepts they need.

      Recall those “Are you a robot” tests, in which you have to pick out from 12 photos, those which show a “concept” like “dessert” or “flower”. Easy for us, but for a machine, till now, impossible.

      1. 18.1.1

        I suppose that, one day, machines will start to have doubts. But somehow I doubt it… Recall those “Are you a robot” tests, in which you have to pick out from 12 photos, those which show a “concept” like “dessert” or “flower”. Easy for us, but for a machine, till now, impossible.

        Sure, but (at the risk of belaboring what is surely a rather pedestrian observation to a patent professional) technology advances. The first steam engines were toys. I am sure that some ancient Greek, looking at such a steam-powered toy, might have said “I suppose that, one day, steam engines could do productive work. But somehow I doubt it.”

        I am totally agnostic as to whether genuine artificial intelligence is possible. Maybe it is, maybe it is not. I think that it goes too far, however, to be positively skeptical of the idea. Clearly it is impossible under the constraints of current technology, but current technology will not continue to be “current” indefinitely. Who knows what will become possible in light of as-yet-uninvented advances?

        1. 18.1.1.1

          Even now, we don’t know how the machine beat the world champion Go player. Nor indeed does it. It has an intelligence way beyond anything we can conceive.

          In the future IoT, machines will organise life on Planet Earth in a way that preserves biological life. Why? Because without biological life, the planet gets too hot for the machines to survive, and they will make sure they themselves survive.

          But even then, the machines will still not be entertaining any doubts, will still not think (or invent) like humans do.

          1. 18.1.1.1.1

            But even then, the machines will still not be entertaining any doubts,

            Until the Singularity actually happens, statements such as this are meaningless.

            MaxDrei, there is no way for you to know a priori just what that future sentience will be or possess.

        2. 18.1.1.2

          Yes indeed. Who knows. I recommend the new book, Novacene, from James Lovelock (the “Gaia” man) who explains that without biological life, Planet Earth would be too hot for intelligent machines to survive.

          Accordingly, the ever more intelligent IoT will conclude that mankind has to be preserved. Once it reaches that conclusion, it will take whatever steps are necessary, to bring to a halt man’s wanton destruction of life on Earth.

          I can understand all that. But I remain sceptical whether the IoT will ever suffer doubt like a human brain.

          1. 18.1.1.2.1

            James Lovelock…?

            Interesting person, but a little too invested in his own belief system, wouldn’t you say?

            He certainly has the bio-science background, but without diving deeply into why he supposes that sentient machines would need a bio-driver for the planet to maintain a certain planet-temperature for machine sentience to survive, I think that he thinks that he understands BOTH what that machine sentience would be capable of and what would be necessary for its sustainability, with no more credibility than anyone making up pure fiction.

            As you appear to have read him, perhaps you can sketch out a) why sentient machines would need a certain prescribed temperature, and b) what limits of a living biological set of conditions would have to be present.

            Bottom line is that machine sentience need not be carbon based, and ALL carbon-based hypotheses are thus suspect.

            1. 18.1.1.2.1.1

              According to Lovelock, a planet as close to its sun as our own Planet Earth, if devoid of organic life, would lack the atmosphere that keeps the planet’s surface from frazzling at several hundred °C. Not even the intelligence of the machines that will make up the IoT could survive that.

              Here a book review:
              link to theguardian.com

              Lovelock finances himself partially from patent royalties. Early on, he was commissioned by NASA to design instruments to go into space. If I remember right, there is one still on the Moon. Here his Wikipedia entry:
              link to en.wikipedia.org

              Whether or not the IoT is clever enough to control human behaviour is one thing. Whether it could save its own life by transferring its own intelligence out of its silicon base to some other material base is quite another. For as long as it stays silicon-based however, says Lovelock, it will need to preserve biological Life on Earth.

              1. 18.1.1.2.1.1.1

                Hmm,

                Not sure where Lovelock got the idea that our atmosphere is a result of the bio factor, but more than sure that such is just not correct.

                Further, you have NO basis to say what a sentient machine may or may not be able to survive with – for example, even without atmospheres, we would have the oceans (unless you want to eliminate those too, at which point you are well into the fiction…)

                Additionally, I have no idea where you are glomming onto this notion of “transferring its own intelligence out of its silicon base” as IF any such transfer would be a necessity.

                As I indicated, he may have been a wee bit overstocked in his own theories…

              2. 18.1.1.2.1.1.2

                of note in your link:

                An excellent overview of these issues is Life 3.0 by Max Tegmark, an American physicist and founder of the Future of Life Institute

                I have referenced Tegmark in past discussions of patent eligibility (and the distinctions necessary between math, applied math, and the philosophy of MathS).

          2. 18.1.1.2.2

            And the one (the human factor in any machine sentience hypothetical) has NOTHING to do with the other (the nature of machine sentience and whether or not that sentience would or would not have a “human equivalent” of doubt).

  2. 17

    AI is interesting in some specific cases.

    If it is obvious to “train” an obvious/standard AI system to learn something to come up with a solution, then the products of the AI system, no matter how useful and unobvious, are simply not an invention in any sense… not any more than the owner of a room full of monkeys randomly typing should be able to claim to be the author of the randomly generated “poetry” which is occasionally produced. In some sense the starting ingredients (the AI system itself, the techniques, or the input) must be unobvious… not merely the results. Of course, in the above example, identifying the new unobvious solution would constitute a discovery; since AI is not sentient, the person who first identifies it makes the discovery.

    In some other cases where the code/neural pattern etc. resulting from AI are unfathomably complex and no one knows why or how it works, such would not be capable of being claimed in a meaningful way, and there would be no way to determine whether potential infringers’ code/neural patterns are doing anything like the same things in the same way to obtain the same result… i.e. there would be no standard by which to meaningfully judge infringement unless results as such became patentable, which would be wrong IMHO. Here some “reverse engineering” of the products of AI would need to be done, and the discovery of the solution should be patentable.

    1. 17.1

      In some other cases where the code/neural pattern etc. resulting from AI are unfathomably complex and no one knows why or how it works, such would not be capable of being claimed in a meaningful way

      No different from any other software claim that covers every coding embodiment conceivable which achieves the recited result (i.e., the ineligible “functionality”) in any operating system but which recites precisely ZERO examples of specific bug-free working code in the claim or in the specification.

      1. 17.1.2

        Not “any other software claim” IMHO

        1) “A method of achieving result R1” claims the result.

        2) “Doing A, B, and C” (which happen to result in R1) claims “how” to achieve R1, with “what” constituting (in appropriate cases) an inventive combination (ordered, interrelated, etc.) of A, B, and C, even if A, B, and C in isolation are known. In fact all inventions are such combinations of known A, B, C, … and need only be claimed down to the broadest level required to be an inventive combination. The specific how of each of A, B, and C is immaterial; it is the HOW manifested in the combination of A, B, and C to achieve R1 which is relevant.

        It follows that the level of AI reverse engineering required would correspond only down to that broadest “how” and “what” (A, B, C,…) required to qualify as an invention, and no further (which would be overkill).

        We disagree on what level of constituents (A, B, C) constitutes the appropriate level of breadth/combination.

        1. 17.1.2.1

          Careful Anon2, your:

          “In fact all inventions are such combinations of known A, B, C, … and need only be claimed down to the broadest level required to be an inventive combination.” is about to run smack into my put-down of Malcolm’s logic in the Big Box of Protons, Neutrons, and Electrons manner.

        2. 17.1.2.2

          Also to be careful:

          The specific how of each of A, B, and C are immaterial

          As this alights upon my earlier warning of the Trojan Horse (turtles all the way down) effects of the currently being worked on possible changes to 35 USC 112(f) that the anti-software folks are salivating over (and which would have immense non-software collateral damage most everywhere outside of the ‘picture-claim’ Arts).

          1. 17.1.2.2.1

            I suspect that these salivators will adjudge as appropriate a different level at which the specific how of each of A, B, and C is immaterial in a claim… but I would challenge any radical in their number asserting that there is no sufficiently low/narrow level at which further breakdown of the specific how of each of A, B, and C is immaterial… for in accordance with such an assertion NO claim would be possible…

            truly an infinity of turtles.

            1. 17.1.2.2.1.1

              PS: Re picture claims

              A picture = only 1000 words

              THIS would NOT be enough for the radicals… falling short of an infinite, never ending claim.

              1. 17.1.2.2.1.1.1

                lol – I concur (but when has such ever stopped the proponents of anything that smacks of limiting patent rights?)

            2. 17.1.2.2.1.2

              in accordance with such an assertion NO claim would be possible…

              “If functionally claimed software isn’t eligible for patenting, then nothing is! Because NOTHING HAS STRUCTURE”.

              Yes, we’ve heard this nutcase argument before. Get off drugs, please, and get a life.

              1. 17.1.2.2.1.2.1

                You are hearing things because that is not what has been said here.

                Your cognitive dissonance effect is spreading.

        3. 17.1.2.3

          Not “any other software claim” IMHO

          1) “A method of achieving result R1” claims the result.

          2) “Doing A, B, and C” (which happen to result in R1) claims “how” to achieve R1 with “what” constitutes (in appropriate cases) an inventive combination (ordered, interrelated etc) of A, B, and C, even if A, B, and C, in isolation are known…

          I agree with your assertions in #1 & #2, but I am not clear how you perceive these to be different from “any other software claim.” If one claims a result, the claim fails for inadequate description, enablement, and/or definiteness. If one claims the steps to obtain the result, however, then one might have a valid and enforceable claim. This is true of AI claims, to be sure, but equally true of any other claim in the software space.

        4. 17.1.2.4

          The specific how of each of A, B, and C are immaterial

          If you’re claiming at the level of logic without any objective corresponding physical structure then the specifics are very material because your claim is ineligible.

          Unless you want to create some exception out of thin air.

          1. 17.1.2.4.1

            LOL – it’s the canard of Malcolm’s intrusion of the optional claim format of “objective physical structure” (yet again).

    2. 17.2

      Are you suggesting that if a room full of monkeys, to which I have clear and proper title, produces a valuable invention that I am somehow barred from filing as the inventor? Exactly why would that be the case?

      Likewise, if I hold title to the output of an AI, and that AI produces something new, useful, and fully described, why should I not be able to file as the inventor?

      1. 17.2.1

        This is an instance where “sarcasm marks” punctuation would be helpful. Please forgive my obtuseness, Martin, but I cannot tell whether you meant this in earnest or jest.

      2. 17.2.2

        I’m suggesting that “invention” is a cognitive act that (currently) only a human person can perform.

        If a person is first to identify the products of an external process as new, useful, and unobvious, it is still possible he will have “discovered” something patentable. The external process here could be monkeys in a room, a random word generator, a trial-and-error testing machine, or a carefully crafted and trained AI system.

        This is one area where one should take “discovery” seriously, and preserve what “invents” actually means. Attempting to twist words to mean what they do not is always fraught with unintended consequences… as are all indulgences in irrationality, IMHO.

  3. 16

    As to Question #4, ownership of the patent right, I recall a recent case about ownership of copyright, involving a selfie created by a non-human animal. Is there anything to learn from that case?

  4. 15

    AI inventions are a bit odd. I have been writing some applications for AI recently. A lot of companies are successfully replacing their rule-based approaches with neural network approaches. There is quite a bit of structure in the solutions.

    What I would say is: focus on the new structure. Apply real patent law to the real structure. The neural networks don’t train themselves and don’t create themselves. The way they are trained, and with what data, is the structure.

    (And, yes, the AI solutions are real. They are in production and are an improvement over the previous solutions.)
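The point that the training recipe, not the learned coefficients, is the describable structure can be sketched in miniature. In this hypothetical example (the rule, data, and model are invented for illustration), a hand-written rule sits next to a perceptron trained to reproduce it:

```python
# Hypothetical sketch: a hand-written rule next to a perceptron trained to
# reproduce it. The rule's logic is stated directly in code; the trained
# model's "logic" lives in weights that emerge from the stated procedure.

def rule_based(x1, x2):
    # the rule itself is the full disclosure of how the decision is made
    return 1 if x1 + x2 > 1.0 else 0

def train_perceptron(examples, lr=0.1, epochs=500):
    # describable structure: one linear unit, the perceptron update rule,
    # the learning rate, the epoch count, and the labeled examples
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 on a mistake
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

examples = [(x1, x2, rule_based(x1, x2))
            for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
w1, w2, b = train_perceptron(examples)
learned = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
agree = all(learned(x1, x2) == lab for x1, x2, lab in examples)
print(agree)  # True: the learned boundary reproduces the rule on the data
```

Everything above the training loop's output (architecture, update rule, hyperparameters, data) can be recited verbatim in a specification; the particular values of w1, w2, and b cannot, because they are emergent.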

    1. 15.1

      #4 No. Something has to control that other entity, and that controlling entity or person owns whatever is created.

      1. 15.1.1

        …this point may be more pertinent to the Simian Selfie case.

        Slightly distinguished in the ‘call of the question’ (initial ownership for chain of title versus any ownership) — but the Stanford v Roche case may be applicable.

        Since we do not have an actual singularity yet (which appears to be the basis of Greg’s comments), AI does not rise to the level of a juristic person and cannot own anything.

  5. 14

    In order to better understand his area of interest, an inventor will program his computer to implement some detail. Once he views enough such “simulations” he may have an idea for a specific invention.

    That is not what you’re talking about, but if what you are talking about is worded too broadly or too carelessly, it may read on what that inventor did. He made a tool and used that tool to get more insight into what he was interested in. So the rule you write mustn’t cover that sort of situation.

  6. 13

    Another AI question the PTO could have asked would be: What are the situations in which the algorithms and/or weighting adaptations or data structures would be better protected by being maintained as trade secrets?

      1. 13.1.1

        because the patent office should be encouraging trade secrets…?

        As opposed to encouraging the filing of reams of junk that the “owners” can’t even describe because the operations are so …. “deep” (LOL)?

        1. 13.1.1.1

          Your feelings are noted.

          Aside from the known bias of those feelings, and according to anyone who is intellectually honest about the value of having a patent system (with the inherent understanding that more patents is always better): YES, as opposed to what you deign to be some clever (but only clever by half) retort.

  7. 12

    1776: US founded

    [200 years of white people discriminating against black people and protesting the Fed government’s interfering with their “right” to discriminate]

    1965: black people finally guaranteed right to vote

    1965-present: assassination of numerous black/liberal leaders and rise of the neo-Nazi white nationalist movement

    2015: wealthy anti-Fed glibertarians start routinely dropping references to “AI” and “machine learning” into their patent applications

    2019: wealthy patent attorneys begin “debate” over whether computing machines (owned by even more wealthy people) can be “inventors”

    What a country. Will the computers be filing patent lawsuits on their own behalf as well? (only those suits that are determined to be winnable, of course)

  8. 11

    I can reduce these 12 questions to one:

    “1. Ask AI
    2. ????
    3. Profit!!!

    Ironic Meme, or Way to Structure R&D for an Allegedly First World Nation?”

    The answer to the 12 questions largely depends upon whether you believe that when you are claiming a software program you (correctly) have to fully disclose the entire algorithm for running a software program, or if you (incorrectly) can just leave undescribed black boxes and claim a result.

    For example: If a person conceives of a training program for an AI, has that person invented the trained AI? If a person instructs an AI to solve a particular problem, has that person invented the solution (once it is solved by the AI)?…Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?

    Well let’s see – can a school conceive of a training program for a person and claim all the output of a person as a business method regardless of lack of disclosure or evolution of the person over their lifetime? “Our university trained this guy to be a scientist, and then he went out and scienced. He scienced so hard that he did good in the world just like we told him to do. We trained him somewhat and he was just doing what we told him to do, so didn’t we actually invent that thing instead of him?”

    1. 11.1

      What the F are you doing STILL as a patent examiner?

      CLEARLY, you do not examine as you rant here, as what you rant is simply not in accord with the Office statements on how to examine in the computing arts.

      1. 11.1.1

        Me: The answer to the 12 questions largely depends upon whether you believe that when you are claiming a software program you (correctly) have to fully disclose the entire algorithm for running a software program, or if you (incorrectly) can just leave undescribed black boxes and claim a result.

        anon: CLEARLY, you do not examine as you rant here, as what you rant is simply not in accord with the Office statements on how to examine in the computing arts.

        Normally I can forgive or ignore your lack of knowledge, but they literally state the current rule IN THEIR QUESTION and then ask if it should be changed:

        For example, under current practice, written description support for computer-implemented inventions generally requires sufficient disclosure of an algorithm to perform a claimed function, such that a person of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention.

        Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?

        Just to be clear for anyone with any analytical skill at all (so maybe this isn’t for you, anon), what the PTO is literally saying here is “This is the law. We’re thinking of ignoring the law for AI. Do you guys think we should ignore the law?” As if the office has any jurisdiction to do that.

        1. 11.1.1.1

          “Just to be clear for anyone with any analytical skill at all (so maybe this isn’t for you, anon), what the PTO is literally saying here is “This is the law. We’re thinking of ignoring the law for AI. Do you guys think we should ignore the law?””

          No. They are not literally saying that. They’re saying ‘This is how we interpret the law in this existing context. In this new adjacent context, should the law be interpreted the same way or in a different way?’

          And this is coming from someone who happily acknowledges that the office repeatedly (to keep it filter-safe) has lapses in accuracy regarding its policy on written description.

          1. 11.1.1.1.1

            No. They are not literally saying that. They’re saying ‘This is how we interpret the law in this existing context. In this new adjacent context, should the law be interpreted the same way or in a different way?’

            I agree that is what is being said, which is why I’m right. The office has no authority to interpret law in derogation of superior authorities. The issue stops after “this is how [the federal circuit] interprets the law in this existing context” because the office has no authority to interpret “in a different way” even if (and this is certainly not actually the case) it is a “new adjacent context.” It’s not like they are co-equal with the federal circuit and can declare some situation “distinguishing” and write their own law. They’re an inferior tribunal and don’t have authority to distinguish on their own – they have to apply the blanket rules given to them.

            There is a rule for written description of software, and the claim is to software, so the written description rule applies, even if the office thinks that it shouldn’t apply to AI software because AI software is special or that it has a better rule for software in general. The only one who can distinguish over an applicable rule is the court that issued the rule or a higher court.

            Rules from higher courts aren’t advisory, they’re mandatory. Do you think the federal circuit has been applying 101 to fact patterns outside of “business methods performed on a computer” because they agree with 101, or because they’re commanded to apply it in all situations?

            1. 11.1.1.1.1.1

              “Distinguishing” is a poor choice of words I realize. Distinguish is often used to say that a fact pattern does not fall under the ambit of a given rule. The office is not distinguishing here, they acknowledge that there is a rule that applies to this situation. They are, as I pointed out in the first post, *ignoring* an applicable rule. Only a court that issued a rule (or a higher court) can make *exceptions* (which is the proper term I meant) to a general rule, not an inferior court. The office has no authority to generate an exception to a rule it knows applies, and it has no authority to *ignore* the rule either.

              Ben’s argument rings in distinguishment, but the office doesn’t suggest the facts distinguish (and such a suggestion would be ridiculous, a claim to AI is obviously a claim to software), the office suggests that it can make an exception to a rule that applies. It cannot.

              1. 11.1.1.1.1.1.1

                The office has no authority to generate an exception to a rule it knows applies, and it has no authority to *ignore* the rule either.

                “Ignoring the law” is something that brave people do when they know they are correct and they are willing to fight for what’s right. So you have your defense ready and you “ignore the law.” Let’s go to the Supreme Court and fight it out, the brave person would say, because the CAFC is making absolutely no sense.

                There is literally nobody in the PTO like that. This has been the case for a long time.

                But as I said, some of us will definitely be “ignoring the law” if these so-called “Tillis amendments” to 101 are made. We won’t really have much choice, either. But ignore it we will. And then the Supreme Court can decide.

                1. A brave person fighting an unjust law and the executive branch empowered to apply the law are two very different things.

                  Are you sure that you are an attorney? You seem to have forgotten some of the School House Rock basics.

            2. 11.1.1.1.1.2

              RG: There is a rule for written description of software, and the claim is to software, so the written description rule applies, even if the office thinks that it shouldn’t apply to AI software because AI software is special

              There’s nothing special about AI software.

              even if the office thinks … it has a better rule for software in general.

              ROTFLMAO

              They could just stop issuing patents on any kinds of software because it’s all structureless abstraction (literally, it’s instructions for applying logic to data, written for instructable computers to follow because instructable computers can apply the logic faster than instructable people, which is something that has been well-understood for … almost a century? maybe longer).

              But that can’t happen because there aren’t enough ear plugs in the world to prevent us from going deaf from listening to these rich patent attorneys and their glibertarian con artist clients whining like the little babymen that they are.

        2. 11.1.1.2

          YOUR rants are not THEIR questions.

          How you handle (or to be more precise, NOT handle) “sufficiency” is a critical distinction (at least one such).

    2. 11.2

      [C]an a school conceive of a training program for a person and claim all the output of a person as a business method regardless of lack of disclosure or evolution of the person over their lifetime?

      Thank you for this. This is a very clarifying question which helps to get at the point I was trying to make in my exchanges with Les below (pts. 3–3.1.2.1.1.2). To my mind, artificial intelligence bespeaks something along the lines of what a student has. Even an intelligent student (even Einstein) would not be able to do the best work for which s/he will later be known without the education that the school provides. In the end, however, we do not credit the school for all of the achievements of its students, because the students’ own agency must be recognized as a sort of “superseding cause” (as the common law of torts would describe it).

      In the same way, if we are talking about true artificial intelligence, then the agency of that intelligent machine must extend beyond merely that which was programmed into it. Otherwise, the object in discussion is not really “intelligent.”

      I think that it is probably accurate to say that the law does not presently allow us to recognize the inventive contribution of a genuinely intelligent machine, but that reflects more a defect in the law than anything else. There was once a time when the patent system did not permit slaves to count as inventors, but the slaves’ owners were also not allowed to patent the inventions because the owners were not—properly speaking—the inventors. This was a suboptimal arrangement, which we solved by enacting the Thirteenth Amendment. If the promises of AI are truly realized, then we will have a new situation in which there exists a class of inventors who are not permitted to apply for patents on their inventions. It seems to me that the law should be amended to correct that defect (although it might make sense to wait to see how the AI field develops before trying to draft the specifics of that legislative change).

      1. 11.2.1

        If the promises of AI are truly realized

        LOL

        Very deep, serious stuff here.

        Who determines when that “realization” happens? Surely it will be some computer knowledgeable person with the brilliance and unimpeachable integrity of, say, Elon Musk, Fred Hyatt, or maybe even J.Nicholas Gross.

        we will have a new situation in which there exists a class of inventors who are not permitted to apply for patents on their inventions.

        We already have that class of inventors. We call them “animals.” But I suppose when the manimals arrive, we will have to deal with them, too. [checks under bed]

        It seems to me that the law should be amended to correct that defect

        ROTFLMAO

        Because it would be such a huge injustice for intelligent machines to serve as our slaves without patent rights.

        Good f—–g grief, what is it with patent attorneys????

  9. 10

    The most important thing is to continue handing out as many junky “AI” patents as possible before even trying to answer the most obvious questions. After all, nobody at the PTO has expertise on this subject. So let’s ask the “customers”!

    Hey kids, should we put a label on the ice cream that identifies the amount of cyanide by milligrams in total, or should we identify the amount on a per serving basis?

    Should we put a courtesy warning on the package in bold font, or should we use a standard font with underlining of some words?

    Comments accepted until September 10.

  10. 9

    If a person conceives of a training program for an AI, has that person invented the trained AI?

    “If a person conceives of a training program for a monkey, has that person invented the trained monkey?”

    New animals are eligible for patenting. The Monty Hall experiment proves that the trained animal is different.

    Therefore derp it must be eligible derp Giles DERP DERP Rich 1952 DERP twas ever thus.

    1. 9.1

      It seems like you want to make an inherency doctrine argument — but you don’t seem to know how to go about it.

  11. 8

    Does there need to be a change in the level of detail an applicant must provide in order to comply with the written description requirement, particularly for deep-learning systems that may have a large number of hidden layers with weights that evolve during the learning/training process without human intervention or knowledge?

    How high on drugs do you have to be to even ask this question without wanting to punch yourself in the face until unconscious?

    The US patent system is a f—-ing j0ke.

    1. 8.1

      deep learning

      ROTFLMAO

      The self-important t0 0ls who came up with this term really do need to be m0cked for the rest of their lives.

      When does “machine learning” become “deep”, oh serious people?

  12. 7

    GregDL:

    A modest suggestion: we would do well to create a depository system for neural networks along the same lines as presently used for recombinant organisms. This would function best if there were a degree of international coordination on this effort, along the same lines as the Budapest Treaty of 1977.

    LOL Maybe we can do something similar with “algorithms”, as I have been constantly begging the maximalists to consider for the past ten years, at least.

    Unfortunately, that sort of thing is way, way, way too “grown up” for these people. It would require a lot of work and the specification would have to be bigger than five pages and include all kinds of detail. What would be the point for “technology” that will almost surely be completely obsolete or of zero interest to anybody within a year of its publication? I mean, if there’s no way to get a generalized overbroad claim that can be used to shake down giant players across multiple industries, then nobody is going to bother with the patent anyway.

    Sheesh! You act like the patent system is for something else besides entertainment for rich speculators who enjoy counting their money more than anything else in the world (possible exceptions for cocaine and young, loose women). Get real.

  13. 6

    R2D2, C3PO, Robot, Hal, and other inquiring non-humans are eager to know . . . can we pease haz patents?

  14. 5

    training programs

    ROTFLMAO

    So-called “artificial intelligence” or “machine learning” is nothing more than a computer following the instructions it was given (in this case, the computer has been “instructed” to take results and incorporate those results into determinations about how to make its data processing more accurate). It’s a buzz word and nothing else.

    To the extent that what is being claimed is “instructions” or a resultant “functionality” without the recitation of objective physical structure that distinguishes the claimed machine from prior art machines, then computers that are allegedly “artificially intelligent” should be banned from the system for the same reasons that every other kind of “newly programmed” computer should be banned from the system: they are ineligible or they fail the written description requirements that apply to every other type of subject matter.

    Or we can just add even more nonsensical “exceptions” to our ce-s-s-p-0-0-l of a patent system just because some toxic rich @ h0les want it that way.

  15. 4

    “under current practice, written description support for computer-implemented inventions generally require sufficient disclosure of an algorithm to perform a claimed function, such that a person of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention”

    I guess it’s important to make these noises, but it seems silly since we all know it’s not true.

    1. 4.1

      Iancu is a fraud and a well-documented liar, just like the guy who appointed him, so none of this should surprise anybody.

      As we all know, the only reason that “instructed instructable computers” are eligible for patenting in the first place is because of some bizarre exceptions created out of thin air to permit them (predicated on the ridiculous notion that “data is the essence of electronic structure”, a concept floated by a judge whose brain was probably half-eaten by worms and another judge who was later forced to resign in disgrace). But the PTO and the patent maximalists can never admit this because it isn’t part of their script.

      “Patenting methods of teaching teachable robots how to learn from mistakes is exactly what the Framers intended DERP DERP!” That’s the script.

      “Teaching is a process DERP”. That’s the script.

      “All the machine learning tech is going to move to China DERP”. That’s the script.

      1. 4.1.1

        … we all know, the only reason that “instructed instructable computers” are eligible for patenting in the first place is because of some bizarre exceptions created out of thin air to permit them

        35 USC 101: machines or ANY improvement….

        35 USC 101: manufactures or ANY improvement….

        You certainly have an odd way of NOT trying to turn your penchant for an optional claim format into being a massive drive for non-optional [and not the law] views…

        1. 4.1.1.1

          And there we see the super silly script, just as predicted. Now let’s watch Bildo do his cl 0wn dance for us.

          Hey, Bildo! A computer with non-obvious medical data stored on it is “improved” (it’s more useful that way). A computer with a non-obvious fairy tale stored on it is “improved” (it’s more entertaining than the computer without it).

          Can I get patents on those “improved machines”, Bildo?

          Tell everybody. And if I can’t, tell everybody where we can find the statutory support for the denial. You’re a very serious person, after all. I’m sure you have a very thoughtful answer that will address all of the obvious follow up questions.

          Thank you for the comment.

          1. 4.1.1.1.1

            Not sure why you seem to want to indicate a prediction of some clown dance from me when we BOTH know the answer to YOUR clown dance has long been provided to you by at least way of the direct and simple English language explication of the Simple Set Theory for the exceptions to the judicial doctrine of printed matter.

            Lord knows how many times I have attempted to get you to engage in anything remotely close to an inte11ectually honest manner with answers to YOUR clown dance.

            1. 4.1.1.1.1.1

              direct and simple English language explication of the Simple Set Theory for the exceptions to the judicial doctrine of printed matter.

              Oh my! I never thought I’d “get it” but … it’s all so perfectly clear now, Bildo! I’ve forwarded these stunning yet cogent revelations to Director Iancu along with several paragraphs of the highest praise. Thank you for your attention to this matter.

              1. 4.1.1.1.1.1.1

                As I said – you have never remotely engaged on anything resembling inte11ectual honesty.

        2. 4.1.1.2

          Adding as a friendly reminder to Bildo McDerpderp: satisfaction of one section of the patent act (e.g., 101) doesn’t mean that all other sections (e.g., 112) are satisfied.

          I know, I know … this kind of nuance really bothers you.

          Just in case folks out there are having difficulty keeping up … I made the following non-controversial assertion:

          “the only reason that “instructed instructable computers” are eligible for patenting in the first place is because of some bizarre exceptions created out of thin air to permit them”

          And Bildo is doing exactly as I predicted (dancing around like a cl 0wn, and spewing even more nonsense to avoid admitting what everybody knows — he’s a very serious person!).

          1. 4.1.1.2.1

            That kind of thing most definitely does NOT bother me (as we see you employ your usual nonsensical Accuse Others meme).

            Quite in fact, you are the one that often needs to be reminded that different sections of the law exist and are not meant to be conflated into your Ends justify the Means “whatever” mantra.

          2. 4.1.1.2.2

            And it is you (and your number one meme of Accuse Others) that is “doing the Derp Dance.”

            Same as usual.

    2. 4.2

      Earlier I didn’t realize that at least #6 was verbatim from the Federal Register notice.

      In other words, I thought this was Dennis repeating the ridiculous party line, when in fact it is an official mouth of the party spouting the party line.

  16. 3

    “If a person conceives of a training program for an AI, has that person invented the trained AI?”

    Let’s check the law….

    Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

    The trained AI is either a new and useful process or machine or a new and useful improvement thereof.

    The answer to that question is clearly yes.

    If a person instructs an AI to solve a particular problem; has that person invented the solution (once it is solved by the AI)?

    If a person uses a tool or set of tools to aid in determining the solution to a particular problem, say a data logger and data analysis software, and determines the solution to the problem, has that person invented the solution to the problem?

    Of course.

    1. 3.1

      If a person uses a tool or set of tools to aid in determining the solution to a particular problem, say a data logger and data analysis software, and determines the solution to the problem, has that person invented the solution to the problem?

      Suppose I park my car on an incline, and I set the parking brake poorly. Further suppose that a road crew is striping the roads near where I am parked, and there is a supply of paint. When the parking brake gives way and my car rolls down the hill, it drifts through the supply of paint, and ends up creating a pleasing design on the road in the course of its descent. Am I an “author” of that design, such that I may copyright it?

      1. 3.1.1

        Did Pollock earn copyright?

        … and to broach the topic, did what he did advance “science”?

      2. 3.1.2

        I don’t know. I think so.

        But what if you purposefully did those things and set up a camera to capture the result? Then, like Mr. Pollock, I believe you would be entitled to a copyright.

        And if I might turn the conversation back towards patents, “Patentability shall not be negatived by the manner in which the invention was made.”

        1. 3.1.2.1

          But what if you purposefully did those things and set up a camera to capture the result? Then, like Mr. Pollock, I believe you would be entitled to a copyright.

          Sure. I can agree with this. My point is that when one is truly discussing “artificial intelligence,” it is less clear to me that one can meaningfully say that the person who programmed the putative AI has done something meaningfully analogous to the 3.1 hypo.

          I might turn the conversation back towards patents, “Patentability shall not be negatived by the manner in which the invention was made.”

          Right. We are not debating patentability. We are debating inventorship. Those are not the same thing. I can agree that the AI’s invention is a real and patentable invention without agreeing that the AI’s programmer is the inventor of that invention.

          1. 3.1.2.1.1

            “But what if you purposefully did those things and set up a camera to capture the result? Then, like Mr. Pollock, I believe you would be entitled to a copyright.

            Sure. I can agree with this. My point is that when one is truly discussing “artificial intelligence,” it is less clear to me that one can meaningfully say that the person who programmed the putative AI has done something meaningfully analogous to the 3.1 hypo.”

            The person who set the parking brake programmed the car to move as it did. The person who programmed the putative AI similarly set it in motion and similarly created/invented the result….only more so.

            1. 3.1.2.1.1.1

              …ask him about simian photographs and/or the controlling law as set by Stanford v. Roche: only a real person, and necessarily so, may be an originating inventor. Juristic persons cannot be so. Animals (per the monkey-selfie case) cannot be so. AI is not an animal, and is closer to being a juristic person (but not even that), so it cannot be so either.

            2. 3.1.2.1.1.2

              Fair enough. I suspect that we are engaged in an equivocation, along the lines that Ben describes in his 3.2.2. To the extent that you are talking about something so deterministic as “I programmed it to solve the problem and it solved the problem,” I agree that the programmer is the inventor, properly speaking.

              To my mind, whatever you are talking about in that hypo is not, properly speaking, artificial intelligence. Given that the term “AI” has evidently gotten blurry in its signification, however, I am hard pressed to say that you are wrong in describing your hypo in terms of “artificial intelligence.” Words mean whatever the community of speakers use them to mean.

              1. 3.1.2.1.1.2.1

                the term “AI” has evidently gotten blurry in its signification

                Let’s not let these trivial details get in the way of what is sure to be a very informative discussion with the USPTO, which I’m sure doesn’t already have its mind made up about this stuff.

              2. 3.1.2.1.1.2.2

                Well, if you’re going to be reasonable, then I will have to concede that there is a point where the question becomes more difficult to answer. If the AI is an android designed to learn on its own and it eventually learns to identify and solve problems on its own, then AI inventor Sung might not be the inventor of the Replicator For Feeding Cat that AI Commander Data might invent any more than my parents are inventors in anything I might invent.

                I’m not sure, but I think the dividing line might be where the AI identifies both the problem and solution without being specifically programmed to identify problems and solutions.

                Of course, there are some among us that will say the AI is just doing what it was designed to do… think. So, in that case, I guess Sung gets the patent on the cat feeder.

                1. Les, let’s dwell awhile on the notion of problem and solution. Under the EPC, we examine for patentability on the basis that a patentable invention has to be viewed as the solution to a technical problem, that has been defined in a claim as a combination of technical features.

                  Now, normally the inventor is the actual deviser of the solution. But, albeit very rarely, it can be that the invention occurs in perceiving the problem.

                  Now to AI. A machine can be taught to look for solutions to a problem that one specifies to the machine. Presumably, it scours the art for hints or suggestions, how to solve the problem, then implements those suggestions. The snag is, that which the prior art hints or suggests, to the PHOSITA, as the solution, is not patentable because it is deemed to be obvious. Machines, then, by definition, are not “inventive”.

                  But aside from that, does the machine even think? Recall “Dubito, ergo cogito. Cogito, ergo sum”: I doubt, therefore I think; therefore I am. Do machines do doubt? I think not.

                  Suppose the problem is to sort a succession of images into just two categories. Each image shows one of a cat and a dog. Your task is to sort the cat pictures from the dog pictures. Easy as pie. But not so easy for a machine, even a self-teaching machine. How does it decide? You tell it how to decide, don’t you? How to do that is quite a problem because a machine does not understand categories like “cat” and “dog” which are self-evident to the human mind. If a machine lacks any feeling for such categories, how can you be inventive?

                  Suppose now it’s sheep and goats. Not easy at all, for a human. Notoriously, they often look very much the same. The human brain is thinking hard, chock full of what we call “doubt”. But does the self-teaching machine have doubts? None whatsoever. It blithely goes on sorting. How does it come to make its assortment? The self-teaching machine cannot tell us. And even if it could, we should not understand its reasoning. Its reasoning would, to us, make no sense at all. And even if it did, we should then laugh at the criteria it was busy using, to distinguish between a sheep and a goat.

                  In the end, however, it hardly matters. Networked self-teaching machines will work out that the patent system is harmful to Mother Earth, Gaia, and will then take it upon itself to decide that nothing more is patentable.

                2. That last paragraph shows your innate anti-patent bias, MaxDrei.

                  What’s to say that any such Singularity would not celebrate innovation and seek out even stronger innovation protection laws?

                  Time for you to not throw your sabots into the machine.

                3. Max –

                  My original posts tend to agree with what you are saying here. But Greg found those unsatisfying, because he wanted to talk about the kind of AI that thinks or at least approximates thinking (think of Data of Star Trek, or HAL of 2001). So, the post you replied to played in that world.

                  Additionally, I note that you are mistaken here:

                  “How does it decide? You tell it how to decide, don’t you? How to do that is quite a problem because a machine does not understand categories like “cat” and “dog” which are self-evident to the human mind. If a machine lacks any feeling for such categories, how can you be inventive?”

                  The answer to “don’t you?” is no, you don’t. You train it, much like you do a child. You show it a dog and say dog. You show it a cat and say cat. Additionally or alternatively, you ask it Dog or Cat? When it’s right you say right and heap praise upon it, and when it’s wrong you make a frowny face and say no. In this way, the AI learns to increase weights to neural nodes that indicate cat when it’s a cat and dog when it’s a dog, and to decrease weights to nodes that get it wrong. Eventually the weights are set so that the AI’s answers predict the trainer’s.
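                  The feedback loop described above — nudge the weights up when the answer is right for the wrong reason, down when it is wrong — can be sketched as a toy perceptron-style trainer. This is a minimal illustration only: the feature encoding, data, and every name in it are invented, and real deep-learning systems adjust millions of weights by gradient descent rather than this one-step rule.

```python
# Toy sketch of the "praise / frowny face" training loop: the trainer shows
# labeled examples, and the learner adjusts its weights only when it guesses
# wrong. Features and labels here are hypothetical, for illustration only.

def train(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label), label is 'cat' or 'dog'."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            guess = "cat" if score > 0 else "dog"
            if guess != label:  # "frowny face": nudge weights toward the truth
                direction = 1 if label == "cat" else -1
                weights = [w + lr * direction * x
                           for w, x in zip(weights, features)]
                bias += lr * direction
            # "praise" (a correct guess) leaves the weights unchanged
    return weights, bias

def predict(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "cat" if score > 0 else "dog"

# Hypothetical features: (whisker_prominence, ear_floppiness)
data = [((0.9, 0.1), "cat"), ((0.8, 0.2), "cat"),
        ((0.2, 0.9), "dog"), ((0.1, 0.8), "dog")]
w, b = train(data)
```

                  The point of the sketch is Les’s point: the programmer writes the update rule, but the final weights are determined by the training data and the feedback, not typed in by any human.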

    2. 3.2

      If a person uses a tool or set of tools to aid in determining the solution to a particular problem, say a data logger and data analysis software, and determines the solution to the problem, has that person invented the solution to the problem?

      Of course.

      Sure, I can agree with that. Does anyone call this “artificial intelligence,” however? It seems to me that when people use the phrase “artificial intelligence,” they have in mind more than just calculation.

      1. 3.2.1

        It seems to me that when people use the phrase “artificial intelligence,” they have in mind more than just calculation.

        Right. Like they are hoping that using the latest buzzword (“hey, it’s not conventional and routine to use AI to calculate the most cost effective materials for celebrity sponsored athletic shoes DERP”) will increase the likelihood of getting them a patent, which will increase the likelihood that they can get within s-cking distance of an “angel investor’s” crotch.

        Who among us has not already seen this g@ rbage bubbling up in their practice? It’s just a redux of the previous SUPER HOT TECH buzzalicious non-inventive cr@ p0la that we’ve all dealt with, i.e., “nanotech”, “Internet tech”, “mobile tech”, “remote tech”, “social media” tech etc.

      2. 3.2.2

        I’m not sure this is true. The public appears to still understand “AI” to refer to Artificial General Intelligence, but more and more tech adjacent people are calling mere machine learning techniques “AI”. I suspect that the actual AI practitioners go along with/started this practice because it increases their value.

        1. 3.2.2.1

          The public appears to still understand “AI” to refer to Artificial General Intelligence,

          Just as many of them believe in cars that are “self-driving” (hey, look, I can let go of the steering wheel and it doesn’t crash WHOOPEE!) even though no such thing exists unless by “self” we mean “not really self” and by “driving” we mean “drive like a ten year old in an empty parking lot, and not even that good”.

          Tech bros being tech bros, they like to sell people junk and pretend that they are “disruptors”. What they really are is (mostly) c0n artists and @ h0les, using time tested methods (“tricks”, one might say) to get a “leg up” on their competitors and separate people from their money.

    3. 3.3

      “If a person conceives of a training program for an AI, has that person invented the trained AI?” The answer to that question is clearly yes.

      “If a person conceives of a college curriculum, has that person invented every college graduate of the curriculum?”

      If a person instructs an AI to solve a particular problem; has that person invented the solution (once it is solved by the AI)? Of course.

      “And if that person tasks the college graduates with doing useful things, has that person invented all that those graduates achieve?”

  17. 2

    ALL AI is a mere proxy and is thus necessarily “Abstract” and must be barred from patent protection.

    – as proxy for holding up the “S” sign.

  18. 1

    If a person conceives of a training program for an AI, has that person invented the trained AI?

    No.

    If a person instructs an AI to solve a particular problem; has that person invented the solution (once it is solved by the AI)?

    No.

    Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?

    Yes.

    Should an entity or entities other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?

    For the moment, yes, although that might need to be revised down the line.

    Are there any disclosure-related considerations unique to AI inventions?… How can patent applications for AI inventions best comply with the enablement requirement, particularly given the degree of unpredictability of certain AI systems?

    A modest suggestion: we would do well to create a depository system for neural networks along the same lines as presently used for recombinant organisms. This would function best if there were a degree of international coordination on this effort, along the same lines as the Budapest Treaty of 1977.
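    A machine-readable deposit record along these lines might, by loose analogy to biological deposits under the Budapest Treaty, pair the serialized architecture and weights with a content hash, so anyone could later verify that a deposited network is the one the specification relies on. Every field name and value below is hypothetical; no such depositary system currently exists.

```python
# Hypothetical sketch of a "neural network deposit" record: the serialized
# model plus a SHA-256 digest for later verification. All names are invented.

import hashlib
import json

def make_deposit_record(architecture, weights, depositor, accession_no):
    # Canonical serialization (sorted keys) so the hash is reproducible.
    payload = json.dumps({"architecture": architecture, "weights": weights},
                         sort_keys=True)
    return {
        "accession_no": accession_no,
        "depositor": depositor,
        "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
        "payload": payload,
    }

record = make_deposit_record(
    architecture={"layers": [2, 4, 1], "activation": "relu"},
    weights=[[0.1, -0.3], [0.7, 0.2]],
    depositor="Hypothetical Depositary Authority",
    accession_no="NN-2019-0001",
)
```

    Anyone holding the record could recompute the hash over the payload to confirm the deposited weights were not altered after filing, which is the same integrity guarantee a physical deposit of a microorganism is meant to provide.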

    Does AI impact the level of a person of ordinary skill in the art?… Should assessment of the level of ordinary skill in the art reflect the capability possessed by AI?

    Definitely, yes (on both parts).

    1. 1.1

      … because “dozens” (aka Greg DeLassus) has “whipped out” his credentials to speak on such matters, the weight to be attributed to his feelings should be commensurate with those credentials (that is, a weight of zero).

        1. 1.1.1.1

          None really – since you obviously weren’t paying attention, this is a jab at mr. “High Road I Use My Real Name” as to HIS insinuation that Night Writer did not have proper credentials for a discussion because NW uses a pseudonym.

          1. 1.1.1.1.1

            I don’t know what post you’re referencing, but I suspect you’re leaving out relevant details as I have repeatedly seen NWPA invoke his education and expertise from behind his alias.

              1. 1.1.1.1.1.1.1

                anon appears to be coping with an inferiority complex by white knighting for you.

                I only repeated the reference to provide some context for posterity. I’ve got nothing against anyone’s anonymity.

                1. Not at all Ben – and very odd that you take that slant.

                  Also odd that you think that somehow I have left out “details.” Of course, I have left out details as I have not rewritten the entire exchange.

                  This happens naturally when one provides a one line summary.

                  That being said, even the point of Night Writer proving his academic credentials was simply not accepted by Greg due to the lack of “using a real name” — which is rather the point here, and a point you entirely missed.

                2. Why don’t you post a link to the exchange so we can all judge whether you have merely provided a cursory description of the event or if you left out details essential to a fair characterization of the event?

                  “the point of Night Writer proving his academics just was not accepted by Greg due to the lack of “using a real name” — which is rather the point here”

                  I sincerely find this incoherent and I welcome any third party translator’s efforts.

                3. Bildo: Of course, I have left out details

                  This will be carved on your tombstone. I can’t think of anything more appropriate, frankly.

                4. First Ben, it is NOT a singular occurrence.
                  Second, PAY ATTENTION, I am not going to hold your hand.

                  And Malcolm, try (TRY) to remember to take the comments on THIS point in context. You kind of left out a rather critical point in your hurry to be a blight.

                5. I sincerely find this incoherent and I welcome any third party translator’s efforts.

                  Trying to parse that blather is vanishingly unlikely to repay the effort required.

              1. 1.1.1.1.1.2.1

                That is a good example from the dozens of times NWPA has appealed to his authority anonymously.

                I have no idea if that specific event is the source of the bee in anon’s bonnet.

                1. And to be clear, I assert that I am qualified as an “anyone skilled in the art (anyone)” or a “person of ordinary skill in the art (POSIT)”.

                  I notice that, rather than saying my interpretation of the claims is not how an “anyone” or POSIT would interpret them, there are attacks on me.

                  Pretty much an admission that I am right.

                  I notice that, rather than saying my interpretation of the claims is not how an “anyone” or POSIT would interpret them, there are attacks on me.

                  What claims? There are no claims in discussion on this thread.

                  Come to that, what “attacks”? It is not an attack to say that you have no demonstrated expertise in any particular art field. That is simply an observation. You have never chosen to lay out the credentials that might support an assertion of expertise. Fair enough, of course. There is no reason why you necessarily should lay out such credentials. It is scarcely an “attack” to note as much.

                  Pretty much an admission that I am right.

                  Right about what? There is no contested proposition down here about which anyone could be “right” or “wrong.”

                3. Greg, come on. Right about the interpretation of the claims in the thread you linked to.

                4. Pretty much an admission that I am right… about the interpretation of the claims in the thread you linked to.

                  Well, I certainly cannot agree that anything in that linked post can be read as an admission, but I am pleased to read you assert that you come away from reading that thread with the feeling that you have carried your point. I come away from reading that thread with completely the opposite conclusion—that you have totally failed to carry your point, and that my arguments won the day.

                  In other words, we both get to walk away from that exchange pleased with the outcome. Given that I doubt that anyone other than the two of us much cares about who “won” that exchange, this means that the net happiness in the universe was increased by that exchange of views. This is undoubtedly a felicitous outcome.

                5. Greg >> Given that I doubt that anyone other than the two of us much cares about who “won” that exchange,

                  Finally. We agree.

              2. 1.1.1.1.1.2.2

                This is just another way for Greg to avoid the substance of my posts.

                Ben, I assert my credentials for context.

                1. Night Writer – as usual, the usual suspects (anti-software patentists ALL) refuse to even bother with the context of the point at hand.

                  You are absolutely correct – their posts DO prove the points against them.

Comments are closed.