153 thoughts on “Who Invented What?”

  1. 14

    Why is it that the world of patent law is so focused on the inventorship issue with “AI”?

    The impact of “AI” on obviousness seems to be a much more pressing issue in patent law.

      1. 14.1.1

        How about “Ben is not wrong for once”…?

        While he has “identified” a wrinkle (one that I have identified many times previously), he really has not staked out a cogent legal position, now has he?

        1. 14.1.1.1

          Yes anon I know that you, me, and others have all made the same point.

          But Ben’s point is simply that the 103 issue is a bigger issue than inventorship, which I agree with.

          1. 14.1.1.1.1

            I suppose “bigger” might be accurate, as “State of Art” affects all, and not just any asserted (or omitted) inventorship — which may actually come to bear only later when trying to assert a patent.

            Can you imagine what the courts might do to those who purposely omit that an inventive aspect cannot be traced directly to a real live human?

            1. 14.1.1.1.1.1

              Given the CAFC’s propensity to take any issue and try to find a way to limit patents/invalidate the patent at bar, I’d say that the likely case law will be that anything found by an AI is presumptively obvious.

              Or some other type of rot that the likes of Taranto will conjure up.

              1. 14.1.1.1.1.1.1

                Well here, I am not sure that “rot” is a fair description.

                Of course, it may well be important to note that AI itself will not be static – today’s AI should be expected to be inferior to tomorrow’s AI, and the legal aspect of obviousness – already an item outside of a REAL person, being that of a legal fiction – would need to be maintained for its purpose of reflecting the State of the Art.

                IF the naysayers here do not want to deem an advance by a non-human to BE inventive, then it is only logical that any such advance BE treated as State of the Art upon which a human would then need to advance.

                In my Black Box analogy, the person in the second room who opens a black box that merely already contains the invention of another is not – and cannot be – an inventor. For THAT person to be deemed AN inventor, there would need to be FURTHER invention, and that further invention would need to be non-obvious over what is in the box.

              2. 14.1.1.1.1.1.2

                As I have already pointed out (and Greg followed later), a somewhat parallel case is the Simian Selfie – and the “right” answer there was that NO ONE got copyright.

  2. 13

    “Who Invented What?”

    I don’t know what the answer presently is, but once the question is relevant to the real world the answer is of course:

    “paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip…

  3. 12

    I’m wondering how to reconcile the court decisions in England and Germany, which admit of the possibility of recognising an AI as the “actual deviser” of the claimed subject matter while continuing to require that a human be named as “inventor”.

    Perhaps TSM offers a prism to look through.

    I mean, suppose the AI throws up all the possible solutions to a technical problem, for review by its human master, who then identifies the patentable matter. The inventive step lies in that identification. The AI merely provides a shortlist of those possibilities for which it found a hint or suggestion in its review of the published art. So, its contribution, as such, was not inventive.

    How about that?

    1. 12.1

      No.

      That is expressly NOT reconciliation.

      Further, this definitely falls flat in view of my Black Box scenario, which I know you have long been aware of.

      Your flinging of C R P here does not even make it to the wall — it remains stuck on you.

    2. 12.2

      I mean, suppose the AI throws up all the possible solutions to a technical problem, for review by its human master, who then identifies the patentable matter. The inventive step lies in that identification. The AI merely provides a shortlist of those possibilities for which it found a hint or suggestion in its review of the published art. So, its contribution, as such, was not inventive.
      This is consistent with my hypothetical at 2.1.1 — a hypothetical that no one on this blog has addressed.

      Anybody suggesting that AI is capable of doing more than that needs to present evidence of this expanded capability. The word of someone who believes: (i) AIs are sentient, (ii) AIs have emotion, (iii) AIs can have near-death experiences, and (iv) AIs are capable of both mental illness and criminality is not enough in my book.

      1. 12.2.1

        Your “book” simply does not accord with the facts presented.

        I “get” that you do not like the DABUS applications, but notwithstanding possible OTHER violations (such as 112), you simply do not get to make up your own version of what has been accepted by the courts.

        I am actually quite amazed at how diligent you have been in trying NOT to talk about the instant hypo. You have spent far more energy in avoiding something than actually discussing anything that you would deign to paint as “more realistic.”

        1. 12.2.1.1

          “…with the facts presented.”
          Are we talking about real facts or are we talking about the hypothetical? For real facts I want real evidence. The hypothetical is fantasy as far as I’m concerned until someone presents evidence otherwise.

          you are simply not able to make up your own take of what has been accepted by the courts.
          Accepted by the courts? What evidence has been presented by the courts? You should know better than most that what a court ‘accepts’ as being true does not necessarily comport with reality.

          You have spent far more energy in avoiding something than actually discussing anything that you would deign to paint as “more realistic.”
          I’m discussing the issue as it pertains to my experience with artificial intelligence (i.e., reality as I know it). If there is a reality that differs from mine, I want evidence of that reality. For someone that usually shows a healthy amount of skepticism, I am curious as to why you have bought everything Thaler’s selling — hook, line, and sinker.

          1. 12.2.1.1.1

            It is simply not I that is doing any “buying” hook, line, and sinker.

            Why are you fighting so feverishly to avoid the discussion?

            What are you afraid of?

      2. 12.2.2

        Wt, personally, I am still caught between two lines of thought, as follows:
        1. Only by “thinking” can a patentable invention be conceived. But recall from the 18th century Enlightenment the utterance: Dubito ergo cogito. Cogito ergo sum. Everybody knows the second sentence, but the first might here be important. Before you can designate some action as thinking, you have to find doubt. Computers don’t do doubt, though, do they? Therefore they don’t think. Therefore they don’t conceive inventions (however much devising they do). This might explain the decision of the appeal court in Germany in the DABUS case.
        2. In answer to Greg at 11124, for the patent drafter it is immaterial whether the communication of a patentable invention emanates from a human inventor or an AI or anon’s Black Box. Apply the Turing Test. Can the drafter tell the difference between an AI inventor and a human inventor?

        We might need philosophers to help us to mull over the repercussions of 1. and 2.

          1. 12.2.2.1.1

            The UK patent statute (unlike the EPC to which it is subordinate) includes a definition of who is to be named as “inventor,” namely the “actual deviser” of the claimed subject matter. This definition complicated the DABUS litigation in the courts of England.

            1. 12.2.2.1.1.1

              It really is no more difficult than the US version — IF people would simply be honest and admit which portions the actual humans actually met the US legal requirements for.

              Yet, THAT seems awfully difficult for a lot of people who would rather insist on some type of “Sentience Plus.”

              My black box analogy cleanly — and simply — takes care of this.

        1. 12.2.2.2

          As to “Apply the Turing Test”…

          The easy rebuttal is: Why?

          To what effect would ANY answer be as to the practitioner opening up ANY black box and proceeding to write the application?

          It simply matters not at all from that practitioner’s perspective.

          1. 12.2.2.2.1

            What? When you draft, do you not ever ask the inventor any questions, with a view to flushing out what their invention is? How is that process going to fare, when the inventor is an AI?

            1. 12.2.2.2.1.1

              What? When you draft, do you not ever ask the inventor any questions, with a view to flushing out what their invention is? How is that process going to fare, when the inventor is an AI?
              Good points. I want to see the original disclosure. IMHO, the fractal container patent application is a half-baked idea — it is written as a half-baked idea.

              My questions would involve: exactly what of this patent application did DABUS provide? How did DABUS provide it? In what form did DABUS provide it?

            2. 12.2.2.2.1.2

              MaxDrei, your questions are simply immaterial to the point at hand and obfuscate the point of the hypo.

              You are aiding those seeking to avoid discussing the legal point AT point.

      3. 12.2.3

        Come on, Wandering, you are better than this. Your (i)-(iv) are ridiculous and have nothing to do with the issues.

        The reality is that AI is here and it is going to keep getting better and better with more and more functionality.

        Your example is ridiculous as one can write AI to include evaluating the solutions. The reality is that there is nothing that people can do that AI won’t be able to do within 20 years.

        1. 12.2.3.2

          Come on Wandering you are better than this. Your (i)-(iv) are ridiculous and have nothing to do with the issues.
          (i)-(iv) are relevant because the one person (Thaler) whose activities have spurred this discussion also has those beliefs. If we are going to leave the hypothetical world and enter the realm of reality, then we are left discussing the issue framed by the purported facts presented by Thaler. As such, (i)-(iv) go to Thaler’s credibility and motivation.

          The reality is that AI is here and it is going to keep getting better and better with more and more functionality.
          I’ve drafted/prosecuted many patent applications that involve AI. I am familiar with AI’s capabilities and limitations. As such, I’m viewing AI through reality, as I know it — not the kind of reality being pushed by Thaler.

          Your example is ridiculous as one can write AI to include evaluating the solutions.
          If all AI is doing is performing some known evaluation then what the AI is doing is not inventive.

          The reality is that there is nothing that people can do that AI won’t be able to do within 20 years.
          Hardly. This is the ‘singularity’ that Anon speaks of. If AI is capable of doing everything people can do, then AI would be capable of programming another AI. Moreover, given the inherent capabilities of AI, an AI should be able to do it better/faster. If that happens, the AI can create better versions of itself, which can then create even better versions of themselves. A walk down that path leads to a frequent trope in science fiction.

          I suggest that everyone actually read the two patent applications that DABUS is the supposed inventor of. They are examples of the capabilities (or lack thereof) of DABUS. I’ve made this point before, but what has DABUS been doing for the last 4 years? Has everything capable of being invented by AI already been invented?

          1. 12.2.3.2.1

            Wandering, all fair enough I suppose.

            I would argue that you underestimate AI. It really is all based on having the processing power to perform the functions that humans perform. AI will continue to become more and more powerful as processing speeds go up.

            Not worth arguing where that is going to go. I do AI applications with some of the best in AI as well and have to draft applications based on papers submitted to the AI conferences. So, I have a pretty good idea where it is now. I am an old AI person who learned symbolic AI with Lisp back in the late ’70s/early ’80s and have been doing AI on and off the entire time.

            I took a course some 40-50 years ago from one of the top people in AI at the time: one of those MIT people who worked with McCarthy, and who was the one who came up with the “singularity” idea before it was stolen by that other guy and renamed.

            Anyway, this top AI researcher spent about half a lecture one day going over the computational speed of the brain compared with computers. He pretty much mapped out where AI would go over the next 100 years. I think that back in 1980 he said it would be 2040 before we saw AI that would match humans.

            Anyway… we’ll see how it develops. I think – just my opinion – that your problem is that you don’t understand cognitive science well enough. We really aren’t that great at processing information.

            1. 12.2.3.2.1.1

              Night Writer,

              I would also point out that it is NOT only a function of processing power; while that is important to unleash the power of neural networks, it is also HOW those neural networks can “self-align,” or otherwise act in unforeseen manners, that TAKES AWAY from an actual human that which is being designated as “inventive.”

              And just wait until Quantum computing is established — Moore’s law on steroids, so to speak.

              Be all of that as it may, I would hesitate to further pontificate about the FUTURE, as that only feeds the Ostrich effect of those not wanting to NOW explore the legal implications of non-human inventorship (and I will point out again, this does NOT require any type of *sentience-plus* as may come with The Singularity).

              Heck, Wt thrusts his head into the sand so forcibly that he cannot (will not) even contemplate any notion of how that OTHER non-human legal aspect may be impacted – the legal fiction of the Person Having Ordinary Skill In The Art.

            2. 12.2.3.2.1.2

              “AI will continue to become more and more powerful as processing speeds go up.”

              I just want to note that one can agree with this while also thinking we’re a long way from AGI. I think it’s clear our single-purpose “AIs” will be vastly better in 2030, and they will probably be performing many human tasks better than humans. But the existence of narrow “AI” excellence does not imply broad “AI”/AGI progress.

              1. 12.2.3.2.1.2.1

                Well Ben, let me give you some credit here and note that for many, the notion of “narrow AI” may well be MORE ‘buzz word’ and LESS any inventive aspect by a non-human.

                But the better discussion would come from giving the Professor’s hypo its full weight, rather than the (massive amount of) hiding from the proposed point.

            3. 12.2.3.2.1.3

              It really is all based on having the processing power to perform the functions that humans perform. AI will continue to become more and more powerful as processing speeds go up.
              AI performs certain functions wonderfully — even much better than humans. However, AI is not particularly bright at identifying hidden problems (i.e., problems that exist but most people aren’t aware of). AI also has a very poor understanding of the natural world. This is why, for example, AI can confuse a full moon with a traffic signal, or is unable to identify a stop sign that has vines growing over it, or is unable to determine the difference between a real tree and a tree painted on a truck. BTW, these are real examples. AI, as I know it, cannot identify a problem, identify multiple possible solutions to that problem (perhaps pulling from very different source material), integrate aspects of those possible solutions, and then work through them to arrive at something unique.

              On the other hand, if you give AI a haystack and tell it to “find all the needles in that haystack,” it can perform wonderfully. Once it is trained to tell the difference between hay and a needle, it’ll do a great job identifying needle candidates. Some of those needle candidates may not, in fact, be needles; it’ll take further training, and additional capability added to the AI, for it to make those determinations. Regardless, is finding a needle in a haystack inventive? Some may differ on that answer. If all it is doing is using brute force to do so, then I would say no.
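
              To make the “needle candidates” point concrete, here is a minimal sketch (mine, not from the comment) of that kind of brute-force search: a classifier trained on labeled examples scores every item in the haystack and flags the high-scoring ones, some of which will be false positives. The two features, the synthetic data, and the 0.9 threshold are all hypothetical stand-ins.

              ```python
              import numpy as np
              from sklearn.ensemble import RandomForestClassifier

              rng = np.random.default_rng(0)

              # Hypothetical training set: two made-up features per straw, label 1 = needle, 0 = hay.
              X_train = rng.normal(size=(1000, 2))
              y_train = (X_train[:, 0] + X_train[:, 1] > 1.5).astype(int)  # stand-in ground truth

              clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

              # Brute-force pass over the whole "haystack": score everything, keep the top hits.
              haystack = rng.normal(size=(100_000, 2))
              scores = clf.predict_proba(haystack)[:, 1]
              candidates = haystack[scores > 0.9]  # needle *candidates* -- some are false positives
              print(f"{len(candidates)} candidates flagged out of {len(haystack)} straws")
              ```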

              1. 12.2.3.2.1.3.1

                If all it is doing is using brute force to do so, then I would say no.

                And you would be wrong. See 35 USC 103.

                (You keep on spending energy avoiding the point AT point)

                1. And you would be wrong. See 35 USC 103.
                  103 is about obviousness — not inventorship. But you already know that.

                  You keep on spending energy avoiding the point AT point
                  Don’t wait on me to start discussing the hypothetical. My guess is that 90% of your posts to this article have been complaints about people not engaging with the hypothetical. Why don’t you write something about your thoughts regarding the hypothetical? Maybe you’ll write something that people want to comment about.

                  You aren’t the thought police on this blog. If we choose to not engage with the hypothetical, that’s our prerogative. Why are you so insistent on forcing us to engage in a discussion that we want no part of?

                2. I responded to YOUR use of the term inventive.

                  35 USC 103 IS directly on point to that point.

                3. looking for something else, I happened upon this:

                  link to patentlyo.com

                  Shockingly, this was about the same time that we last saw ANY activity over on the ethics side of this blog.

                  That space would be better put to use following my very informative postings.

              2. 12.2.3.2.1.3.2

                However, AI is not particularly bright in identifying hidden problems (i.e., problems that exist but most people aren’t aware of).

                As I mentioned, you need to check out better prior art.

                There is art in the AI space dealing directly with this.

                1. As I mentioned, you need to check out better prior art.
                  Always with generalities and never with specifics. How about YOU present that evidence.

              3. 12.2.3.2.1.3.3

                Wandering,

                The proof will be in the pudding. Your points are – to my mind – about 20 years behind the current thinking. But it doesn’t really matter.

                AI will have to prove itself and when AI exhibits a functionality, I am sure you will acknowledge it.

          2. 12.2.3.2.2

            Wt – you are letting your emotions about (and obsession with) Dr. Thaler simply overwhelm any sense of reason.

            I “get” that you think his DABUS thing is one big publicity stunt (and your quip of “Has everything capable of being invented by AI already been invented?” MISPLAYS the sense of DABUS simply not having been let loose to do other inventing).

            It is easy enough to see that DABUS is just not the be all and end all, so I really do not get why you are so afraid of letting that go and focusing on the hypos presented.

            Your quip to Night Writer in regards to what I have been calling The Singularity also evidences your too-quickly running away, as had you paused (at all), you would have recognized that THAT point of reaching The Singularity is NOT at point here. You are being distracted by a level of *sentience* NOT REQUIRED to bring about the more direct point of an inventive aspect that cannot be (legally) traced to a human as the inventor of that aspect.

            Again, my ‘black box’ hypothetical clearly – and cleanly – makes this distinction.

            I suggest that you stop kicking up dust with your obsession over DABUS, and engage on the merits of the presented hypothetical.

            1. 12.2.3.2.2.1

              I really do not get why you are so afraid of letting that go and focusing on the hypos presented.
              I have no interest in giving Thaler what he wants, which is publicity for his cause. He is using this issue as a backdoor into giving AI legal personhood. If AI can be an inventor, AI should also have other rights that are normally associated with a natural person.

              Suppose, for example, there was a serious effort put forth by serious people to have AI recognized as capable of being an inventor. The people who attack the patent system would have a field day on this issue. While you may be willing to engage in a civil philosophical discussion over “AI as an inventor,” the enemies of our patent system would use this issue against the patent system writ large.

              I could envision the arguments. AI will be taking away people’s jobs. Next, they’ll be giving AI the right to vote. Who is going to be controlling AI — yeah, you know, those big-tech liberals. We need to put serious constraints on the patent system to make sure that this doesn’t happen. The patent reform act that explicitly kills AI as an inventor will also be the one that kills all computer-implemented inventions.

              1. 12.2.3.2.2.1.1

                Not seeing where your qualm is coming from; there already exists legal personhood for non-humans in several aspects of patent law.

                To your assertion of “attacking,” I have already shown that an attack by NOT reflecting the actuality of AI as inventor is upon us with State of the Art and “must be obvious.”

                Your House of Woes (for “future” other rights) is — again — only dustkicking, and as I have pointed out, the legal point AT point simply does not need to reach the level of The Singularity.

                2. I have already shown that an attack by NOT reflecting the actuality of AI as inventor is upon us with State of the Art and “must be obvious.”
                  Sigh. You’ve shown? Much of your writing is a collection of self-referential missives that leave a reader thinking to himself/herself, ‘what the heck is he talking about?’

                  Neither I nor other readers are in any position to know what you wrote about a topic a week ago, a month ago, or a year ago. However, you write as if everyone has complete knowledge (and understanding) of everything you’ve written. Let’s set aside, for a moment, that on many topics you dance around the point without ever getting to one. We’ve been discussing AI for how long now, and I still don’t know whether you think AI being an inventor is a good thing or a bad thing, or your reasons why.

                  It doesn’t take long for one reading my comments to understand my point of view and why I have that point of view. On many topics (though not every topic), I’m still not sure what your point of view is — this one in particular. It doesn’t help when the vast majority of your posts are directed to telling people how wrong they are — regardless of the topic.

                  I’m sorry, but your writing style leaves a LOT to be desired.

                3. As off point as they are, your feelings are noted.

                  Of course, your feelings continue to cloud your judgment, or – as is plain here – preclude you from even bothering to attempt to address Prof. Crouch’s actual hypo.

                  I suggest that you put your feelings aside.

                  But you be you (and any relation of that sentiment to your own advice to Ben is indeed deliberate).

    3. 12.3

      [S]uppose the AI throws up all the possible solutions to a technical problem, for review by its human master, who then identifies the patentable matter. The inventive step lies in that identification. The AI merely provides a shortlist of those possibilities for which it found a hint or suggestion in its review of the published art. So, its contribution, as such, was not inventive.

      Suppose that a grad student throws up all the possible solutions to a technical problem, for review by the thesis advisor, who then identifies the patentable matter. Is it your contention that only the thesis advisor qualifies as an “inventor” here?

      Suppose that a slave in 1840 Alabama throws up all the possible solutions to a technical problem, for review by her owner, who then identifies the patentable matter. Is it your contention that only the owner qualifies as an “inventor” here? Is that really the implication of TSM?

      1. 12.3.1

        Someone may want to help Greg with my post directly on the slave angle.

        Oh wait – he’s already seen it, as that post is at IPWatchdog, a place where Greg does not have his “anon says” technical blinder in place.

        But hey, why give credit where credit is due?

      2. 12.3.3

        Greg, I don’t know enough about how an AI devises a solution to a technical problem, but what I was thinking is that what an AI does is react to hints or suggestions in the state of the art to come up with its solutions.

        But if we apply a TSM approach to obviousness, all those solutions are (to the imaginary omniscient PHOSITA) obvious solutions. You don’t get to be an inventor if all you do is output what was obvious (to the imaginary PHOSITA) to do to solve the problem.

        1. 12.3.3.1

          You don’t get to be an inventor if all you do is output what was obvious (to the imaginary PHOSITA) to do to solve the problem.

          MaxDrei (yet again) reveals the huge pile of horse carcasses next to the well of wisdom that I led him to.

          Greg, I don’t know enough about how an AI devises a solution…

          That is PAINFULLY obvious [pun intended], as your ‘grasp’ of legal logic is very much deficient here. You simply do NOT get to postulate that the AI contribution is nothing more than “obvious,” then insert the Euro throw-away buzzword of “TSM,” in order to find that “inventor” does not apply under the provisions of obviousness.

          The whole point here is that there IS a legitimate, patentable advance, and that advance is just not ENTIRELY traceable to human inventors.

          The hypo here is NOT that this can be broken down into a human patentable advance and a separate, obvious “help” from a non-human.

          If that were the case, then the ENTIRE scenario would never even BE advanced.

      3. 12.3.4

        Suppose that a grad student throws up all the possible solutions to a technical problem, for review by the thesis advisor, who then identifies the patentable matter. Is it your contention that only the thesis advisor qualifies as an “inventor” here?
        Like everything, it all depends upon the facts. If I put a bunch of keywords related to my problem into a search engine and it comes up with a bunch of things, does that make Google’s AI-powered search engine an inventor? Is your grad student doing more or less than a search engine?

        Suppose that a …
        Did you really have to go there? Seriously?

  4. 11

    Dennis, I’m pleased to see that your interest in this issue continues with vigor. I’ve read your brief, remarks, and comments of all those above, and see that there are many interesting points of view. But I continue to rely on my gut that tells me that use of AI—at least, current AI—doesn’t raise any issues. I particularly liked the remarks of one person who used the analogy of the microscope. Humans invent, machines only make the job easier.

    1. 11.1

      another ostrich…

      If AI were only “another tool,” this would not be a topic.

      Sad to say, but the only person who appears to even approach engaging is Greg DeLassus.

      1. 11.1.1

        Not an ostrich but a person that doesn’t understand information processing.

        Note the separation of brain/people from machines. They don’t get that we are information processors, and that what we do a machine can do, just as a bulldozer can push dirt the way our bodies can.

        They are just stuck in their education pre-information age.

        1. 11.1.1.1

          I think that we both can agree that we ALL should expect better from those involved in the field of innovation.

  5. 9

    OT but important patent litigation surprise reported:
    [“Who Owns What” instead of “Who Invented What”]
    “This past Monday, Chief Judge Connolly of D. Del issued a standing order for all pending litigation before him requiring disclosure of certain financial relationships from litigating parties. The information is due 30 days from filing of an initial pleading, and includes arrangements made between parties and third party funders. While eminently sensible in terms of identifying true decision makers for settlement purposes, or identifying potential conflicts of interest, the new requirements will surely send NPEs screaming into the night.”

    1. 9.1

      Interesting and thanks.

      Would this also be reciprocal to the business model of say Unified Patents?

    2. 9.2

      How many NPEs choose to file in D Del? I thought that NPEs hated that district like the plague. It is a high traffic venue for patent litigation, to be sure, but I was under the impression that this was mostly a function of litigation between practicing entities.

      1. 9.2.1

        Greg, I believe D. Del is still number 2 for new patent suits, but I appreciate that most PAEs are much more likely to file in Waco WDTX than Delaware nowadays, IF they can get valid venue there. But I am curious if any other district courts other than Delaware have adopted any such standing order?

  6. 8

    I am not surprised that most of the responses seem to be to resist the hypo. A lot of folks really do not want to think about this issue.

    1. 8.2

      Oxygen is absolutely essential to human life. However, through all of human history, oxygen has been abundant and ubiquitous. For this reason, it has never made sense for government to concern itself with securing an oxygen supply for its citizens, and I am unaware of any government in human history ever undertaking to do so.

      By contrast, new know-how has—at all times in human history—been both beneficial and hard to achieve. For this reason, it has made sense for various governments at various times to institute programs to smooth the way for the discovery of new know-how, and to incentivize people to busy themselves in the work of such discovery. Patent laws represent one such government program, and probably the most successful such program.

      A world of real AI, however, is (perhaps) a world in which new know-how becomes much easier to achieve, and therefore much more common. It will probably never be as common as oxygen, but it is easy to imagine a world in which new know-how becomes so cheap to obtain that the logic underlying patent law erodes. In a world in which you can get new know-how even without the incentives of patent law, does it make sense to incur the costs of such incentives?

      In other words, the future progression of AI is one that bids fair to put most of us around these parts out of work. Can it be a surprise, then, that the most common reaction around these parts to hypothetical speculation about such a day is a sort of whistling past the graveyard (e.g., “Dennis is… assuming facts that don’t exist in the real world” and “AI has far more shortcomings than most people know”)?

      1. 8.2.1

        “In a world in which you can get new know-how even without the incentives of patent law, does it make sense to incur the costs of such incentives?”

        Still need it to be disclosed, so I’m not really sure. Is everyone operating the inventive AIs in their backyard on their own personal fusion reactor?

        1. 8.2.1.1

          6,

          You alight upon one of the Libs’ fears (unstated) in this drama: AI kept under wraps.

          The pushback against a view of not granting protection is that such a view would push non-disclosure.

          There are inherent and natural difficulties already with being able to maintain “just how” a solution had been derived with AI — which, as I am certain you are aware, impedes implementation of Identity Politics.

          If AI “goes deep,” then the political control leverage is lost.

        2. 8.2.1.2

          Still need it to be disclosed…

          Will we? Part of the logic of patents is that you want to facilitate disclosure, because making a dozen different inventors re-discover the same fact is less efficient than having one make the discovery and the other eleven build on top of that publicized knowledge. In other words, the reason why you care about incentivizing disclosure is because of the costs incurred in reaching the discovery. You want to spare others from incurring those costs, so that their resources can go, instead, into further progress rather than “reinventing the wheel,” so to speak.

          In a world, however, in which discoveries become cheaper and easier to realize, the inefficiency of having the other eleven have to make the discovery independently is correspondingly less. At that point, maybe you just do not care so much about incentivizing disclosure.

          1. 8.2.1.2.1

            If it is truly an AI making “inventions,” then I will presume that all the AIs may in fact come up with different inventions, bro. You seem to be implying (not sure if you are or not) that all the AIs will converge around inventing the same thing(s).

            “In other words, the reason why you care about incentivizing disclosure is because of the costs incurred in reaching the discovery. ”

            You’re still going to have costs bro, still gotta go to the lab to see if the invention cooked up/dreamed up by the AI actually works in real life.

            “In a world, however, in which discoveries become cheaper and easier to realize,”

            We’ll just see if that “realization” happens on the cheap or not. Truthfully, innovation in my field is becoming more and more costly (as it becomes more and more complex obviously) if anything. A probably slight downward pressure on that ever ballooning increase by AIs helping out would be a welcome thing. I won’t hold my breath for that in my lifetime.

          2. 8.2.1.2.2

            An important clarification from Greg’s own words is immediately available by way of adding emphasis.

            To wit: “Part of the logic of patents…”

            Greg proceeds to rest ALL of the “logic of patents” on a mere ‘part.’

            It is not a coincidence that the part that Greg ‘relies on’ is informed by his (lack of self-awareness) Big Pharma bias (b-b-but development costs are so high!).

            His penchant for somehow making this into some type of “single-runner race” is simply missing the LARGER PART of the “logic of patents” – that the US Sovereign wants the race to be a multi-runner race, as that is the significant driver of faster promotion (be there some redundant costs or not).

            Greg’s view is the epitome of “penny wise and pound foolish.”

      2. 8.2.2

        This struck me as excessively unkind.

        I don’t consider WT an ally, and I generally don’t value his arguments.

        And it’s from that context that I say it’s not reasonable to suggest that he’s refusing to engage with the hypothetical as some sort of ego defense mechanism regarding his role in a hypothetical post-intellectual-scarcity society.

        1. 8.2.2.1

          That you regard 8.2 as unkind is useful feedback. I appreciate your candor in rebuke.

          I confess, however, that I am a little bit lost as to why you take 8.2 as uniquely directed against WT. I do not mention WT specifically, nor did I intend it to refer uniquely to WT (although he certainly is among the variety of respondents on this thread who seem unwilling to engage with the actual hypo). Why do you read 8.2 as relating particularly to WT?

      1. 8.3.1

        People all the time making these charts about how AI is going to take over jerbs and all (they always put “the takeover” 10 years away etc). Meanwhile in reality AI is doing well to make old movies have better quality and win SC2 games and other near trivialities, just like it did for 70 years. And yes, I see their explanation. Still not convinced.

        1. 8.3.1.1

          I hear you 6 — the very same type of “tomorrow is Doomsday” is bandied about for the “D” mantras such as global warming (uh, I mean Climate Change).

          That being said, the Mother Jones article is playing this a bit differently by attempting to align with Moore’s Law.

          One larger point here — much to the chagrin of the herd of ostriches — is that legal implications should be thought through NOW, before that tipping point of Moore’s Law makes it a type of “too late” discussion.

      2. 8.3.2

        I remember this article coming out in 2013. Thanks for making me feel old.

        I’m willing to forgive Drum’s exuberance because, from what I understand, there was a growth spurt in ML around 2010-2012 which didn’t persist. What may have been only extremely optimistic in 2013 (“True artificial intelligence will very likely be here within a couple of decades”) seems a bit silly today.

          1. 8.3.2.1.1

            I do not find hand-waving at exponential growth to be a persuasive argument here.

            Thinking about how to advance the discussion beyond “I disagree”, I looked at some surveys regarding what actual “AI” researchers think (one example: link to arxiv.org ). It does seem like there is a non-zero number of experts who support very short timelines, but the median expert finds it likely that “high level machine intelligence” will be achieved in the range of 2040-2060.

            I hesitated to even mention that because I don’t find it compelling, but I thought I should since it shows that a few experts believe in what I find to be preposterous (AGI in the next decade).

            There is definitely a self selection bias between people who decide to spend their lives researching “AI” and belief in “AI” progress, so I’m not putting a whole lot of weight on the 2040-2060 time-frame either (though I don’t find it totally preposterous). After all, nuclear fusion has been 20 years away for 40 years, right?

            I guess we’ll just have to leave it at “I disagree.”

            1. 8.3.2.1.1.1

              Maybe use some of that ‘examiner skill’ and find someone with more than one 2012 patent to his name (as Owain does).

          2. 8.3.2.1.2

            6,

            You should like the link presented by Greg. There are political implications abounding…

            “This is particularly true if many of the ways that AI works are ones we’d rather not know about, for example by violating people’s privacy or relying on information or generalizations that companies or governments would rather not endorse publicly but are useful algorithmically.”

    2. 8.4

      Greg, it might depend on what “folks” you allude to, whether they have any appetite to “think” creatively and philosophically about the AI as Inventor issue.

      Science used to be known as “natural philosophy”. My guess is that university departments of philosophy are crying out for interesting new areas of study, new issues about which to “think”.

      If so, it won’t be long before we see a tidal wave of literature and conferences about this topic. But not organised by patent attorneys: it’s not their field.

      1. 8.4.1

        My guess is that university departments of philosophy are crying out for interesting new areas of study, new issues about which to “think”.

        Why would you guess that?

        Have you been around academia? at all?

        1. 8.4.1.1

          Why guess that? Because of the maxim “Publish or perish” and the need not only to get something published but to publish something that is interesting enough to get cited? These threads are full of comments from patent practitioners disparaging academics for publishing stuff that isn’t insightful enough to deserve publication.

          What do you mean by “around” academia? I did only one year of post-grad research before joining a law firm. I guess that’s not enough for me to have been “around” academia. Oh well, I won’t let it get me down.

          1. 8.4.1.1.1

            Lol – I bet that you do not even realize just how backwards your post paints you.

            But you be you and keep on circling that same tiny block that you have been circling for more than four decades — that way, you can rest assured that you won’t fall off your special Peak of Mount S-T-U-P-I-D.

    3. 8.5

      But the courts in England and Germany are thinking about this issue. Below is a link (hat tip to Watchdog and James Nurton) for Germany. In England, the Thaler case provoked a 2:1 majority decision in the Court of Appeal (perhaps teeing it up for consideration by the Supreme Court).

      link to ipwatchdog.com

  7. 7

    “Who Invented What?”

    Ha! This is a trick question Dennis . . . as everything that can be invented . . . has been invented.

    (And if you don’t believe me, just go ask the PTAB, the CAFC, and SCOTUS)

  8. 5

    What is the actual policy argument against allowing corporate inventorship?

    Many inventors are compelled to assign anyway, aren’t they?

    Corporate personhood is entirely woven into our law so what’s so special about patents?

    That question is only half sarcastic.

    1. 5.1

      Marty,

      It is LESS a “policy” argument and MORE one that sounds in what the inchoate nature of MAN’S inventions is taken to be — as anyone who has studied the terrain of patent law could inform you (think Locke).

      The Supreme Court walked through this in the Stanford v. Roche case.

      That answer is zero percent sarcastic.

    2. 5.2

      What is the actual policy argument against allowing corporate inventorship?

      The arguments that I have seen so far all run to the effect that allowing the corporation to be listed as the inventor will work a detriment to individual engineers’ CVs. I am not really convinced that this is true, given that individual animators, composers, etc. are able to list “I worked on the Volkswagen ‘Think Small’ ad campaign,” even against a legal backdrop that allows Volkswagen to be listed as the “author” of those ads in the copyright registrations.

  9. 4

    I’m not sure calling it AI adds much. “Corporate owned modeling algorithm outputs optimized, non-obvious shape for a manifold. Attorney claims manifold”

  10. 3

    How can either 1 or 3 be determined with no clue as to who wrote that AI software, or if it, or at least its particular use, is novel and unobvious?

  11. 2

    Here, the devil is in the details. Your hypothetical states “AI’s contribution would be co-inventive if it were human.” What exactly would that contribution be?

    There is a considerable amount of caselaw discussing what contributions are needed to qualify as an inventor. However, that caselaw can charitably be described as murky. Here is a quote from a very old District Court case:
    “The exact parameters of what constitutes joint inventorship are quite difficult to explain. It is one of the muddiest concepts in the muddy metaphysics of the patent law.”

    Here is an interesting finding from Kimberly-Clark v. Procter & Gamble, 973 F.2d 911 (Fed. Cir. 1992):
    For persons to be joint inventors under Section 116, there must be some element of joint behavior, such as collaboration or working under common direction, one inventor seeing a relevant report and building upon it or hearing another’s suggestion at a meeting. Here there was nothing of that nature. Individuals cannot be joint inventors if they are completely ignorant of what each other has done until years after their individual independent efforts. They cannot be totally independent of each other and be joint inventors.
    In the context of what AI is capable of doing today, there is no collaboration between a human and the AI. The AI (e.g., a neural network) is unaware of what is being done by the human inventor. The AI computes an output based upon data inputs. AI is a tool — albeit a very sophisticated tool. I doubt anybody would deem a microscope to be a joint inventor because it enables some researcher to better understand the impact of a particular pharmaceutical compound on a cell. After all, a microscope is but a tool.

    Getting back to my original question (i.e., “What exactly would that contribution be?”), I don’t believe AI (in its current state) is capable of providing an inventive contribution. While I don’t purport to be an expert in AI, I have written/prosecuted a substantial number of patent applications that rely upon AI, which has led me to do a considerable amount of research into AI, how AI works, and its capabilities. While AI has some impressive abilities, AI has far more shortcomings than most people know.

    1. 2.1

      I don’t see the collaboration requirement as much of a hurdle. There is no requirement that the joint inventors know or understand that they are part of an invention process. And the invention need not be identified or recognized by all joint inventors. Rather, it is enough that Human-A provides information to Human-B, and that information provides a substantial contribution to the ultimate invention. The question here is whether the results change when “Human-A” is replaced by a non-human.

      1. 2.1.1

        Rather, it is enough that Human-A provides information to Human-B, and that information provides a substantial contribution to the ultimate invention.
        If providing information was enough, shouldn’t Google’s search engine be listed as co-inventor on 30-40% of all US Applications? In fact, Google does use artificial intelligence in providing its results.

        Hypothetical: Human A, to facilitate a particular drug interaction, wants to identify a protein that has a particular 3D shape having certain characteristics. Human A asks Human B to identify candidate amino-acid sequences that could meet that requirement. Human B then uses a computer program, coupled to an AI network, to identify these potential candidates. The AI network spits out 5 candidate amino-acid sequences that Human B relays to Human A. Human A, after testing, determines that 2 of the candidates match the requirements and produce the desired effect. A patent application is subsequently filed that includes claims reciting the 2 candidate amino-acid sequences that proved effective. (A rough code sketch of this workflow appears at the end of this comment.)
        Question: Is Human B a co-inventor? Is the AI a co-inventor?

        My answer is that neither Human B nor the AI are co-inventors. Human B merely exercised normal skill expected of one skilled in the art in preparing the search. As for the AI, it performed very sophisticated pattern matching. However, the pattern it was looking for was programmed by somebody else (i.e., Human B). It is just a very sophisticated search engine, and I don’t believe search engines should be named inventors.

        BTW — the AI that I described above is real, and its capabilities have been talked about as being a great advance. In short, it is able to accurately predict protein structures from their amino-acid sequence. For more information, do a Google search on AlphaFold.

        In my eyes, it is a very powerful tool. However, it is not an inventor. It doesn’t understand what is being looked for. It doesn’t appreciate the problem being solved. Once the invention is conceived, this particular AI doesn’t even understand it.

        To be clear, as a technical/legal matter, I don’t believe that current AIs are capable of being inventors. As a matter of policy, I think permitting an AI to be an inventor is a bad idea and not necessary based upon the current state of the technology.
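
        To illustrate the “very sophisticated search engine” characterization above, here is a minimal sketch of the hypothetical Human A / Human B workflow. Everything in it is a hypothetical stand-in of my own: predict_structure and shape_similarity are toy placeholders, not AlphaFold’s actual API, and the sequences and target are invented for illustration. The point is only that the “AI” step amounts to scoring and ranking a candidate library against a human-supplied specification.

        ```python
        from typing import List, Tuple

        def predict_structure(sequence: str) -> List[float]:
            """Hypothetical stand-in for a learned structure predictor (toy math, not a real model)."""
            return [ord(aa) % 7 / 7.0 for aa in sequence]  # fake per-residue "shape" features

        def shape_similarity(structure: List[float], target: List[float]) -> float:
            """Hypothetical score of how well a predicted structure matches Human A's target shape."""
            n = min(len(structure), len(target))
            return -sum((structure[i] - target[i]) ** 2 for i in range(n))

        def shortlist(library: List[str], target: List[float], k: int = 5) -> List[Tuple[str, float]]:
            """Human B's query: rank every candidate sequence against the target and keep the top k."""
            scored = [(seq, shape_similarity(predict_structure(seq), target)) for seq in library]
            return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

        if __name__ == "__main__":
            library = ["MKTAYIAKQR", "GAVLIMCFWP", "STNQDEKRHY"]  # hypothetical candidate sequences
            target = [0.5] * 10                                    # hypothetical target-shape spec from Human A
            print(shortlist(library, target, k=2))                 # the "spits out candidates" step
        ```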

        1. 2.1.1.1

          Is your point here to prove that it is possible for AI not to be an inventor? I think that everyone agrees that—on any given invention—it is both possible and plausible that none of the inventors to be listed are AI. I am not really clear what you hope to establish with this one.

          1. 2.1.1.1.1

            FWIW, I would almost certainly have listed B alongside A. I am not really clear from the hypo’s presentation what the “AI” was doing here, so it is harder to say whether the AI deserves a mention in the inventor list.

    2. 2.2

      Wt,

      That is an interesting “devil,” but a bit of a non sequitur (as opposed to being any type of decisive factor).

      I would take this opportunity to also note that several of the twits are also pursuing non sequiturs (such as wanting to focus on other steps outside of the inventor per se), seemingly wanting a Singularity type of presence — no such level is necessary for the legal points AT point to be discussed.

      1. 2.2.1

        That is an interesting “devil,” but a bit of a non sequitur (as opposed to being any type of decisive factor).
        The details are hardly a non sequitur. Since DABUS/Thaler first presented itself on the scene, I have reviewed dozens of Federal Circuit decisions that involve inventorship. The details are always important.

        1. 2.2.1.1

          What I meant is for the hypo HERE — you wanted to answer a different hypo, and certainly, any such different details for a different hypo would be important.

          But the thrust here was to get to a legal point with certain things to be taken as GIVEN.

          Also, since you have professed to hold a certain viewpoint — that of AI being incapable of being the inventive entity — you appear to not want to engage THIS hypo.

          You should suspend your predisposition and take the hypo as GIVEN though — if for nothing else, to appreciate just how any future advance that WOULD satisfy your view of AI as an inventive entity would play out.

          1. 2.2.1.1.1

            What I meant is for the hypo HERE — you wanted to answer a different hypo, and certainly, any such different details for a different hypo would be important.
            The problem with the hypo HERE is that it assumes a set of facts that I don’t think are possible based upon the current state of AI.

            Also, since you have professed to hold a certain viewpoint — that of AI being incapable of being the inventive entity — you appear to not want to engage THIS hypo.
            I like to engage in reality — not fantasy (or science fiction).

            You should suspend your predisposition and take the hypo as GIVEN though — if for nothing else, to appreciate just how any future advance that WOULD satisfy your view of AI as an inventive entity would play out.
            It’s not a particularly difficult endeavor to suspend my predisposition. In that instance, I am about as confident as I can be that neither the USPTO, nor the Courts, nor Congress are going to be inclined to implement, interpret, or draft the law in a way that will allow an artificial intelligence to be an inventor. As such, taking the hypothetical as given, my answer would be the first choice.

            Personally, I see nothing good coming from allowing an artificial intelligence to be an inventor. Also, I’m still waiting for someone to put forth a policy argument as to why allowing an AI to be an inventor is a good thing.

            1. 2.2.1.1.1.1

              Thanks Wt (I was working up from the bottom of the comments).

              That being said, whether or not you are confident that NONE of the USPTO, the Courts or Congress would allow a patent to grant to an AI as inventive entity, your answer of the first choice would be wrong.

              Taking the hypo as GIVEN – you would need to fire yourself, as you have knowingly perpetrated a fraud on the Office.

              (you seem to still want to address some different hypo).

              Further – as I noted – the hypo AS GIVEN generates additional interesting wrinkles.

              1. 2.2.1.1.1.1.1

                Taking the hypo as GIVEN – you would need to fire yourself, as you have knowingly perpetrated a fraud on the Office.
                No I wouldn’t. The Courts get to interpret the law as to who is the inventor. If they say an AI cannot be an inventor, then I haven’t perpetrated a fraud on the Office. That being said, where in the hypothetical does it say that I’m the prosecuting attorney? And you come down on me for changing the HYPO — tsk, tsk.

                Further – as I noted – the hypo AS GIVEN generates additional interesting wrinkles.
                We can consider a hypothetical in which every conservative member of the Supreme Court resigns and Biden gets to add another 6 members to the court. I’m sure there are a lot of wrinkles to consider there. Moreover, this hypothetical could be more likely than the HYPO presented above, so why don’t we discuss this?

                No judge, in his or her right mind, is going to issue a decision that gives a machine the same legal standing as a natural person.

                If you want to have a debate as to the policy reasons for or against having an AI as an inventor, then let’s go there. And when I say “let’s go there,” this is same type of debate that will be presented before the Court and/or in Congress before the HYPO presented by Dennis can be realistic. Let’s not put the cart before the horse.

                1. It is less any type of “policy debate” and more a mere reflection of the fundamentals of the US patent system and that system’s genesis in a Lockean view.

                  The long and short of it is that it is simply plain dishonest to pretend that an innovative advance was made by some one ELSE (the mere real person who opens a black box that contains the invention of another is KNOWN NOT to legally BE the inventor), and that this is somehow “ok” just because the innovation (or aspect thereof) was non-human generated.

                  In a parallel copyright case (of sorts), the “right answer” is that no one gets the protection.

                  See link to hrfmlaw.com.

  12. 1

    Nice – this provides a natural extension to the DABUS case (there, it was admitted that NO human inventor could be named).

    In this hypo, the extension (may) explore the legal effect of NOT naming ALL of the actual inventors.

    Not included in the hypo though is the ethical consideration impacting the prosecution attorney in knowing that not all of the actual inventors CAN be named under current US law.

    Ethics does NOT allow such a prosecuting attorney to turn a blind eye to a full capture of the inventorship (as claims would NOT be properly attributed to other inventors that may have invented only portions of the claimed invention – knowing misattribution of listed inventors to portions KNOWN not to be invented by the human inventors cannot ethically be taken).

    Also possibly unfolding here (but still not explicit) is the effect of non-human inventorship on the OTHER non-human juristic person (legal person) of the Person Having Ordinary Skill In The Art.

    That legal person is also known to be a different type of non-human, as NO human in the world can have the capabilities of the legal person of the PHOSITA (for example, instant world-wide knowledge of all published materials). This is NOT an idle factor, nor one that can wait for any enlarged view of The Singularity, as this legal notion is concerned with the State of the Art, and that State — today, right now — includes AI inventions not made by actual humans.

    1. 1.1

      anon,

      I wonder if the ethical practitioner interviewed the AI to ascertain what it contributed to the invention. Did the AI feel what it did was novel? Would the AI be open to being deposed if the patent were litigated?

      1. 1.1.1

        Interesting, but there is no requirement for the practitioner to interview any of the inventors, so the attempted point does not reach.

        1. 1.1.1.1

          Interesting, but there is no requirement for the practitioner to interview any of the inventors, so the attempted point does not reach.
          If you are implying that a US patent practitioner has the ethical duty to identify proper inventorship (i.e., “Ethics does NOT allow such a prosecuting attorney to turn a blind eye to a full capture of the inventorship”), why wouldn’t you interview all potential inventors? Merely taking the word of one person that someone(s) are inventors does not count for much of an investigation. Would you file a complaint of patent infringement just on the basis of the client’s word that the other party infringes?

          In special instances, such as this, I would perform a higher level of diligence. Another example that might require a higher level of diligence would involve inventorship that potentially spans multiple companies. Regardless, in this instance of a supposed AI inventor, I would like to know exactly what the AI contributed to the invention and how the contribution came about. I wouldn’t put my name on any filing that is (deliberately) coming right up against (and testing) the boundaries of the law involving inventorship without full knowledge of the facts on which I could base my own professional opinion as to inventorship. An attorney who abdicates a determination of inventorship to the client is not doing their job.

          Also, the example that Dennis provided was that of joint inventorship, and being a joint inventor is a lower standard than being a sole inventor. However, in the instance of DABUS, Thaler is stating that DABUS is the sole inventor. From my technical experience with AI and my knowledge of the US legal requirements for inventorship, that is not possible absent some special legal carveouts.

          1. 1.1.1.1.1

            You over read my point — which is that it is presented as KNOWING that an improper/incomplete inventorship situation is presented.

            To parallel to something you might be willing to understand (given your reticence about accepting AI as a possible inventive entity), it would be like you being provided a list of inventors and the client telling you that they are purposefully leaving out one guy who the firm wants to fire and not give proper credit.

            1. 1.1.1.1.1.1

              it would be like you being provided a list of inventors and the client telling you that they are purposefully leaving out one guy who the firm wants to fire and not give proper credit
              Unless the client is a patent attorney, it isn’t up to them to evaluate inventorship — that’s my job. If I think the client is deliberately trying to leave out an inventor, then I fire the client. If the client is deliberately trying to add an inventor that doesn’t belong, then I fire the client.

              In most instances, inventorship is pretty well set and it isn’t going to be contested. In those instances, I’m not going to do an extensive investigation. However, if I believe that accurate inventorship might become an issue down the road, then I’m going to investigate and come to my own conclusion.

              I get what Dennis is trying to do. However, I believe he is assuming facts that don’t exist in the real world.

              1. 1.1.1.1.1.1.1

                It’s called a hypo – so even if you do NOT want to accept what others accept, give the hypo a shot.

        2. 1.1.1.2

          Who is making the determination of the contribution to the claimed invention in this example? 1) Human, 2) Practitioner, 3) AI? (Setting aside that Dennis’s question is not complete. There can be no analysis of who an inventor is without a claim!)

          This is why I went down the practitioner inquiry path. The AI is not determining that it is an inventor – it does not have the capability to ascertain whether its contributions were inventive. In fact, I would also argue that AI can’t be an inventor because it lacks awareness.* Given this, the determination of whether the AI contributed to the claimed invention must be made by someone else – either the practitioner or the human.

          If practitioner, contrary to some on this board, the practitioner has an ethical duty to act in good faith before the PTO and investigate proper inventorship. What are the implications of improper inventorship? (1) requesting correction of inventorship, (2) invalidating a patent, (3) a derivation proceeding, or (4) a finding of inequitable conduct if the requisite deceptive intent is found. See, e.g., Ajinomoto Co., Inc. v. Archer-Daniels-Midland Co., 228 F.3d 1338 (Fed. Cir. 2000); Frank’s Casing Crew & Rental Tools v. PMR Techs., 292 F.3d 1363 (Fed. Cir. 2002).

          If human, does the human, being a non-practitioner, have the ability to make this legal determination? Checkpoint Systems, Inc. v. All-Tag Security S.A., 412 F.3d 1331, 1338 (Fed. Cir. 2005) (“Inventorship is a question of law with underlying factual issues.”).

          So no, AI (in its current state of self-awareness, or lack thereof) is not an inventor.

          *An entity must have contemporaneous recognition and appreciation of the invention for there to be conception. Because the AI doesn’t have this ability, it can’t form a definite and permanent idea of the complete and operable invention; thus one can’t establish that an AI conceived of an invention. Unless and until AI becomes sentient, AI can’t be an inventor.

          1. 1.1.1.2.3

            *An entity must have contemporaneous recognition and appreciation of the invention for there to be conception. Because the AI doesn’t have this ability, it can’t form a definite and permanent idea of the complete and operable invention; thus one can’t establish that an AI conceived of an invention. Unless and until AI becomes sentient, AI can’t be an inventor.

            No. You are also trying to elevate what it legally takes. As noted in the DABUS case, there WAS “contemporaneous recognition and appreciation of the invention.”

            Again – I “get” that you want to hold onto a certain belief – but it is YOU that is attempting to insert facts that are just not a part of this hypo.

            C’mon folks – the time is NOW to explore the legal ramifications. Let’s just stop trying to play ostrich, eh?

            1. 1.1.1.2.3.1

              As noted in the DABUS case, there WAS “contemporaneous recognition and appreciation of the invention.”
              Again, as someone who is familiar with how AI works, I would say that this finding is a big, smelly crock of BS.

              C’mon folks – the time is NOW to explore the legal ramifications. Let’s just stop trying to play ostrich, eh?
              Sorry — not much interest in exploring legal ramifications of an exceptionally unlikely factual situation. If you want to play this game, you have to play it yourself.

              1. 1.1.1.2.3.1.1

                Are you saying that, with a combination of controlled and uncontrolled learning, along with the relatively easy parameters of whether something fits some definition of novel and non-obvious, that “contemporaneous recognition and appreciation” is somehow unforeseeable?

                Maybe you need to work with better state of the art….
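                For what it is worth, below is a minimal toy sketch of what such a “generate, then filter for novelty” loop can look like in practice. Everything in it (the feature vocabulary, the prior-art entries, the threshold, and the function names) is invented purely for illustration; it is not DABUS and makes no claim about how Thaler’s system actually works. The only point is that a “this output is novel” flag is an ordinary programming construct, whatever legal weight one chooses to give it.

                # Hypothetical sketch only: a toy "generate and filter" loop.
                # None of this describes DABUS or any real system; the corpus,
                # the feature encoding, and the threshold are all made up.
                from itertools import product

                # Toy "prior art": each entry is a set of design features.
                PRIOR_ART = [
                    frozenset({"container", "smooth-wall", "cylinder"}),
                    frozenset({"beacon", "steady-light", "tower"}),
                ]

                # Feature vocabulary the generator may combine; no particular
                # combination is pre-programmed anywhere below.
                FEATURES = ["container", "fractal-wall", "cylinder", "beacon",
                            "pulsed-light", "tower", "smooth-wall", "steady-light"]

                def novelty(candidate, prior_art):
                    """Crude novelty score: 1.0 means no overlap with any prior-art entry."""
                    overlaps = [len(candidate & ref) / len(candidate | ref) for ref in prior_art]
                    return 1.0 - max(overlaps)

                def generate_candidates(size=3):
                    """Enumerate feature combinations; a stand-in for a generative model."""
                    for combo in product(FEATURES, repeat=size):
                        if len(set(combo)) == size:   # skip repeated features
                            yield frozenset(combo)

                if __name__ == "__main__":
                    THRESHOLD = 0.7                   # arbitrary illustrative cutoff
                    for candidate in generate_candidates():
                        score = novelty(candidate, PRIOR_ART)
                        if score >= THRESHOLD:
                            # The disputed question is whether raising this flag amounts
                            # to "recognition and appreciation" in the legal sense, or is
                            # merely a computed condition.
                            print(f"flagged candidate: {sorted(candidate)} (novelty={score:.2f})")
                            break

                Whether raising such a flag counts as “contemporaneous recognition and appreciation” in the legal sense is, of course, exactly the point being argued here; the sketch only shows that the mechanical part is unremarkable.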

                1. that “contemporaneous recognition and appreciation” is somehow unforeseeable
                  You didn’t write it was somehow foreseeable. You wrote there “WAS” contemporaneous recognition and appreciation. If ‘contemporaneous recognition and appreciation’ is being claimed, I want to know what that actually looks like.

                  Maybe you need to work with better state of the art….
                  Extraordinary claims require extraordinary evidence — Carl Sagan.

                  And BTW, Thaler admits to being described as “an outside, fringe scientist.” As best I can tell, he is the only one working on DABUS; I find no mention of other collaborators. When I reviewed both his old issued patents and his technical papers, what I noticed is a lack of extensive citations to either. I would expect someone who is producing cutting-edge technology to have his work extensively cited.

                  Let’s talk for a moment about the two patent applications that were filed. Both were filed in 2018. What has DABUS been doing the last 4 years? If you ask me, one way to convince people that AI is capable of being an inventor is to show LOTS of inventions created by the AI.

                  And let’s talk about the inventions. The fractal container patent application is, IMHO, poorly drafted and the idea itself, IMHO, is ill-conceived to put it mildly. The flickering light patent application, IMHO, is even worse.

                  Today was the first day I actually read the flickering light application. The tail end of the application includes a list of “Literature References” cited in the application. Coincidentally, 8 of the 11 references were by Thaler — yet somehow this application was all the idea of DABUS.

                  Here are two passages from the flickering light patent application:
                  Such embodiments stem from the notion of one perceiving neural net monitoring another imagining net, the so-called “Creativity Machine Paradigm” (Thaler 2013), which has been proposed as the basis of an “adjunct” religion wherein cosmic consciousness, tantamount to a deity, spontaneously forms as regions of space topology pinch off from one another to form similar ideating and perceiving pairs, each consistent of mere inorganic matter and energy. Ironically, this very neural paradigm has itself proposed an alternative use for such a flicker rate, name a religious object that integrates features of more traditional spiritual symbols such as candles and torches.
                  Moreover, in a theory of how cosmic consciousness may form from inorganic matter and energy (Thaler, 1997A, 2010, 2017), the same attentional beacons may be at work between different regions of spacetime. Thus, neuron-like, flashing elements may be used as philosophical, spiritual, or religious symbols, especially when mounted atop candle- or torch-like fixtures, celebrating what may be considered deified cosmic consciousness. Such a light source may also serve as a beacon to that very cosmic consciousness most likely operating via the same neuronal signaling mechanism.

                  It seems that L. Ron Hubbard has a serious rival. Seriously, if I’m looking to advocate for having AI as being capable of being an inventor, I’m not sure if I want this guy to be the one leading the charge.

                  I think the word “fringe” was aptly used.


            2. 1.1.1.2.3.2

              Recognition by whom? AI is not recognizing that its output is an invention.

              Programmer of the AI may or may not be an inventor depending on the facts. AI itself cannot be an inventor – inventorship requires contribution to ‘conception’. This doesn’t occur until some human looks at the output of the AI.

              1. 1.1.1.2.3.2.1

                AI is not recognizing that its output is an invention.

                This is not true (leastwise in the DABUS case).

                The larger point though is that humans CAN and DO recognize that an inventive item is not theirs.

                So many people trying so very hard to NOT engage the actual hypo….

                1. So many people trying so very hard to NOT engage the actual hypo….
                  Again, people are more interested in engaging in a discussion about things more closely tethered to reality than fantasy (or science fiction).

            3. 1.1.1.2.3.3

              How ever can an applicant comply with the disclosure duty under Rule 56 in a case where a processor (a lump of silicon, copper, aluminum, and plastic) is an inventor?

            4. 1.1.1.2.3.4

              “You are also trying to elevate what it legally takes.” Um yes…. the hypo assumes we are answering this question as it pertains to the law of the land today.

              Inventorship is a multiple-pronged test. Is the inventor an individual (the DABUS case), and can the inventor appreciate that an invention was made, a concept embedded in “conceived and reduced to practice”?

              Setting aside the first prong, the inventor must form a definite and permanent idea of the complete and operable invention to establish conception. Bosies v. Benedict, 27 F.3d 539, 543, 30 USPQ2d 1862, 1865 (Fed. Cir. 1994). There must be a contemporaneous recognition and appreciation of the invention for there to be conception. Silvestri v. Grant, 496 F.2d 593, 596, 181 USPQ 706, 708 (CCPA 1974).

              You miss the point on contemporaneous as used in the quoted case law. In Silvestri, Silvestri was trying to show prior invention in an interference. Silvestri was using notes, etc., as evidence of conception and reduction to practice. The CCPA (in an opinion authored by Rich) emphasized that more important to the Court’s decision was the board’s conclusion that at the time the invention was alleged to have been made by Silvestri, there was no contemporaneous recognition that the new form of ampicillin [the invention] defined by the count had been made. Hence, “contemporaneous” asks whether the inventor appreciated that an invention was being made in the same period of time as the documents alleged to prove conception of the invention. In other words, was there a recognition by the inventor that an invention was made? Since AI has yet to achieve this level of consciousness, AI cannot (yet) meet this prong.

              Regarding DABUS (Thaler v Hirshfeld), that court did not hold that there was “contemporaneous recognition and appreciation of the invention.” Please provide a pinpoint cite if I missed it. My read is that the DABUS case turned on the requirement that an inventor be an individual and discussed the statutory construction of what it means to be an individual, i.e., a natural person.

              If we assume (facts not in the hypo) that AI is sentient by inference from Dennis’ language of “AI’s contribution would be co-inventive if it were human,” then AI could meet the second prong of an inventor.

              1. 1.1.1.2.3.4.1

                Full sentience is not required.

                All that is required is to have some portion NOT properly allocated to a human inventor.

                And — as my “black box” example has shown — merely having some real person open up a black box containing the inventive work of another does NOT count.

                As to the court and its holdings – the (US) court not having it as a holding is not dispositive, as it was uncontested. So no, your thrust there also falls short.

          2. 1.1.1.2.4

            AI can’t be an inventor because it lacks awareness… Unless and until AI becomes sentient, A[I] can’t be an inventor.

            You do not know that AI lacks sentience, any more than you know that Elon Musk has sentience. Each of us knows that s/he is sentient, and merely assumes as much of those we meet. You merely assume that Musk is sentient and assume that AI is not. You could not prove either contention if you had to do so.

            This lack of actual knowledge—the working on unprovable assumptions—rather gives the lie to the idea that sentience is actually relevant to the determination of inventorship. Precisely because we invest zero effort in confirming the sentience of the inventors for whom we all work, it can be easily ascertained that we actually ascribe zero significance to sentience. The important thing is that an invention disclosure arrives to us. We (properly) regard the disclosure as proof that an invention has been made, and proceed apace. Occasionally we have reason to wonder whether the names listed on the disclosure really were involved, and undertake to investigate (“where and when did you first communicate this idea?”, or “did you save those e-mails?”, etc.). There is no logical reason why the endpoint of such an investigation could not be a computer. Sentience, in this circumstance, is an irrelevant red herring.

              1. 1.1.1.2.4.1.1

                Egads, I have to check myself — Greg is getting my points….
                That says a whole lot about your position, doesn’t it?

            1. 1.1.1.2.4.2

              “Each of us knows that s/he is sentient, and merely assumes as much of those we meet.”

              Ridiculously bad philosophical outlook there Greg. You being a literal solipsism adopter explains why you have other issues in correctly understanding the world around you though. That literally impacts your whole world view and leads to modern day leftist positions in many ways. It’s a real plague upon the world. Good for you to confirm that you have adopted that philosophy as now I can certainly understand many of the positions you will take as a consequence.

              Just as an FYI, no, not everyone presumes that they are sentient and merely assumes that those around them are sentient. Some doubt their own sentience and some literally consider it established that others around them are sentient (the latter being the chad view/philosophical outlook btw and upon which most good philosophical outlooks are based).

              “You could not prove either contention if you had to do so.”

              That depends entirely on your standards of proof.

            2. 1.1.1.2.4.3

              Nice contribution, Greg, on the relevance of sentience. Perhaps a sort of “Turing Test”. You, the patent attorney, receive a written communication, purporting to be a patentable invention. You can envisage a patent application based on the content of the communication.

              Can you determine whether the sender was human or an AI? Does it matter, whether you can or can’t? And what would be the correct declaration of inventorship to the PTO?

              1. 1.1.1.2.4.3.1

                Nice (polite) post from you here MaxDrei — but profoundly RUDE at the same time, in that you (again) are ignoring a rebuttal already on the table that takes care of the situation that you want to paint: my black box analogy.

              2. 1.1.1.2.4.3.2

                one reply appears to have been eaten – but the long and short of it is that your “politeness” to Greg only compounds your rudeness to others, as this post here has already been directly contemplated and is encapsulated in my Black Box analogy.

                I fully “get” that you “may not be interested” in having particular conversations with particular people, but that predilection does not change the FACT of behavior as rude or not.

              3. 1.1.1.2.4.3.3

                Does it matter, whether you can or can’t? And what would be the correct declaration of inventorship to the PTO?

                Exactly the right question. I think that (for better or worse) the current state of U.S. law means that it makes a difference if one concludes that the inventor is an AI. Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018) announces a sort of “plain statement” rule for when Congress intends to extend statutory rights to non-humans. Because there is no such “plain statement” in Title 35, the ostensible conclusion is that “whoever invents or discovers” in 35 U.S.C. §101 does not reach as far as AI “inventors.”

                As a matter of public policy and first principles, however, it is less clear to me why we should want to exclude AI. Presumably there is (at least at present) as much value in the public disclosure of ideas first articulated by AI as there is in the public disclosure of ideas first articulated by humans. Therefore, there is as much reason to want AI to participate in the patent system (or rather, not to keep such inventions as trade secrets) as there is for human participants.

                Of course, at present the way that this problem is typically handled is that some human merely pretends to have invented that which the AI produced. I think that it would be better (not only ethically, but that is also a consideration) if we were honest and upfront about such things and listed AI as an inventor (or even the inventor) where relevant.

                1. Greg, from the IPWatchdog blog item on DABUS in the German court of appeal, I take the following:

                  “….the court said that a revised designation of the inventor stating “Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention” was allowable.”

                2. ….the court said that a revised designation of the inventor stating “Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention” was allowable.”

                  Germany – as its own Sovereign – is free to do as it may so choose, but this path is BOTH a very slippery slope and one that would NOT fly under the Constitutional moorings of the United States.

                  First, the slippery slope: What the German court in essence has done is open a can of worms with this “prompt” language. While in the immediate instance, the German court is seeking to placate a “real person” into somehow being “at the onset,” a future application of this ‘reasoning’ could easily pervert that aim and turn this AGAINST a real person at the onset, and place the “revised designation of inventor” as being a NON-human “person.”

                  To wit: the legal fiction of a corporation is known to be a non-human person under the law. A corporation, through its charter, hiring and paying of real persons, can “prompt” those real persons to invent.

                  This is entirely unsurprising.

                  But what this “revised designation of inventor” permits – through the ‘prompt’ language – is that the corporation may take the INITIAL designation of “inventor” AWAY from the real person inventor.

                  Sure, there may well be recourse of challenge by the real human person inventor (and then again, there may not be) – but that is entirely a secondary factor.

                  Second, as I have long pointed out, the US basis simply will not allow this mechanism. I have pointed out that the Supreme Court’s reasoning in Stanford v. Roche explicates why the US Sovereign’s basis in Lockean theory absolutely bars this type of bypass.

                  As I have also pointed out, an invention (a TRUE invention, and not your ostrich-type MIS-setting of AI as some type of mere “provider of a list” [which – as I am certain that you are aware – does NOT satisfy the notion of devisor, so your suggestion is null from the start]) may well have inventive elements that simply cannot be LEGALLY traced to an actual human inventor, no matter how strenuously that rather simple concept is denied.

                  No matter what you may think of DABUS (crank, fringe, LIAR, or the like), the plain and inescapable situation that THAT case starts (and that Prof. Crouch here attempts to broaden with his different hypo) is that an invention WILL BE presented in which some PART of the inventive claim cannot legally (and morally) be asserted to be BY a human.

                  Now while Ben above muses about the focus on AI as inventor as opposed to AI’s invention informing the State of the Art for the other NON-HUMAN legal concern of the legal fiction of Person Having Ordinary Skill In The Art, I (and yes, no matter how “uninteresting” you may deign to view such – being from me) have been among the first to invite a legal dialogue on these multiple legal points.

            3. 1.1.1.2.4.4

              Sentience means having the capacity to have feelings. This requires a level of awareness and cognitive ability.

              I know Elon is sentient. I have seen (on TV and in his Tweets) Elon express what we, as human beings, consider to be “feelings.” I observe behaviors of Elon showing a level of awareness of his surroundings and cognitive ability. No software to date has what we as humans call sentience.

              “The important thing is that an invention disclosure arrives to us. ” Maybe this is the problem – assuming that just because someone filled out a piece of paper (an invention disclosure form), an invention was made?

              “There is no logical reason why the endpoint of such an investigation could not be a computer.” How do you query the computer to see when it made the idea? You ask the operator of the AI when s/he input the parameters that the AI used to generate the output.

              Let’s continue down your chosen path. If AI can be an inventor, what are the parameters we use to determine when to list the AI? In other words, what other machines that generate our data would need to be listed? Help me understand what rises to the level of AI. A simple tool? A simple machine? A genomic sequencing machine?

              1. 1.1.1.2.4.4.1

                Yet again xtian – as you seem to insist on missing the point — full sentience is NOT what is required for the legal point of inventorship.

                In that (limited) sense, Greg is closer to being on point than you are.

                As I have attempted to make it easy for you and Wt, you can take the converse path: the inventorship of a non-human IS implicated if an invention exists and at least some portion of that invention cannot be claimed to LEGALLY be made by a human.

                This ‘wanting the Singularity’ or expanding the legal point to other sentient acts is nothing more than dust kicking.

                Stop kicking dust — that really does not help anyone.

                1. Stop kicking dust — that really does not help anyone.
                  I’m not the one kicking up dust. I want to know EXACTLY what DABUS provided. I want transparency as it pertains to allegations of inventorship. You, on the other hand, want to cover up what DABUS did (or did not) actually provide and ASSUME that DABUS, in fact, provided something that could be called inventive. You are the one kicking up dust.

                  The legal conclusion of inventorship always rests upon the underlying facts. You want to skip over the underlying facts so you can focus your discussion on the legal conclusion. Many of us are saying — don’t put the cart before the horse. This isn’t kicking up dust — rather, this is performing the inquiry in the proper order.

                2. You ARE kicking up dust with wanting a different hypo than the one presented.

                  This was pointed out to you in the First instance.

                  You have expressed “your belief,” and have refused to open your mind to ANY legal consideration that would transgress YOUR expressed belief.

                  Not only are you kicking up dust, you protest too much about YOUR action of kicking up dust.

                  The tighter you cling to your belief, the more silly you look.

                3. I want to know EXACTLY what DABUS provided. I want transparency as it pertains to allegations of inventorship. You, on the other hand, want to cover up what DABUS did (or did not) actually provide and ASSUME that DABUS, in fact, provided something that could be called inventive. You are the one kicking up dust.

                  Prof. Crouch’s hypo is not DABUS. It is possible that DABUS did not really invent anything, but even if that were the case, it would not really affect Prof. Crouch’s hypo at all. It is not particularly helpful to clear thinking to conflate the two.

                4. The tighter you cling to your belief, the more silly you look.
                  The more silly I look? I see you haven’t addressed what Thaler and his actions are really about — for good reason, Thaler is on the fringe of the fringe.

                  What DABUS did or didn’t do is reality — yet you are running away from it.
                  What Thaler actually writes about AI is reality — yet you don’t want to address it.

                  You want to embrace a hypothetical that is grounded in fantasy — not reality. Go for it. Discuss it with whomever you want. I’m not going to stop you. You, on the other hand, are obsessed with those of us who see (and identify) the shortcomings in the hypothetical itself and are uninterested in discussing the hypothetical, as presented.

                  No one is stopping you from having a nice long conversation with Greg about the legal ramifications of the hypothetical. However, YOU don’t get to tell ME what I can or cannot discuss. I choose not to embrace the facts underlying the HYPO presented by Dennis, and I have presented my reasons why I have chosen not to embrace those underlying facts. That’s my choice — a choice you have no right to make — despite all of your haranguing.

                  If you want me to discuss your hypothetical, then present a realistic fact pattern that leads to this hypothetical.

                  And to address Greg, who wrote:
                  Prof. Crouch’s hypo is not DABUS. It is possible that DABUS did not really invent anything, but even if that were the case, it would not really affect Prof. Crouch’s hypo at all. It is not particularly helpful to clear thinking to conflate the two.
                  While Dennis presents a hypo that has the potential to be more realistic than the one being claimed by Thaler (i.e., regarding DABUS), I still see it as one that doesn’t reflect the reality of AI, as I know it. Dennis assumes that what the AI did would be inventive if done by a human. From my experience, I don’t see that happening today or anytime soon.

                  Before I get to the point of addressing what happens when an AI provides an inventive activity, I want to know:
                  1) what are the capabilities of an AI?
                  2) would any of those capabilities qualify one to be a joint inventor?

                  The answer to question 1) is technical. The answer to question 2) involves legal issues. However, I’m not going to work under the assumption that the answer to question 2) is “yes.”

              2. 1.1.1.2.4.4.2

                I know Elon is sentient. I have seen (on TV and in his Tweets) Elon express what we, as human beings, consider to be “feelings.”

                You have heard and read Musk emit words. You interpret those words as evidence of feelings.

                Computers are also capable of emitting words. If a computer emitted exactly the same words that you have heard and read from Musk, would you conclude that the computer is also expressing the same feelings as Musk?

                1. My test for sentience is to ask the AI something that it is not programmed to do. A simple question to AI: “When did you recognize that you had an invention?”

                  Anon: please address my other question. Let’s assume AI is an inventor (by whatever standards you are using). Now provide me a test to determine what other types of machines would/should be listed as an inventor. A hammer? A microscope? A genetic sequencer?

                  What happens if the AI is cloud-based and is being leased by the company? Does the company who runs the AI then own the invention? Ownership follows inventorship, right? So has the AI signed an employee contract assigning its rights to the company who programmed it?

                2. Y
                  A
                  W
                  N

                  My test for sentience is…

                  immaterial.

                  “Not programmed to do” is ALREADY reached — leastwise for those that employ unstructured learning.

                  You are STILL looking for TOO MUCH.

                  As to your rather silly question, the difference is abundantly obvious: it is the difference between using a tool to obtain a mental construct already harbored in a human’s mind, and the acknowledged LACK of having that result, and then leaning on something else to generate the actual result (unforeseen and unbidden).

                  You really need to have at least a minimal grasp of why the AI topic is reaching this level of discussion, xtian.
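                  To make that tool-versus-generator distinction concrete, here is a small illustrative sketch (the data and code are entirely hypothetical, and this is not a claim about how any real system, DABUS included, works). The first function merely evaluates a construct the human already holds in mind; the second, a toy unsupervised clustering routine, returns a grouping that is nowhere spelled out in the program itself.

                  # Hypothetical illustration only: "tool use" versus an unsupervised
                  # step whose specific output is not pre-programmed anywhere.
                  import math
                  import random

                  def tool_circumference(radius: float) -> float:
                      """Tool use: the human already holds the construct (C = 2*pi*r);
                      the machine merely executes it."""
                      return 2 * math.pi * radius

                  def unsupervised_groups(points, k=2, iterations=20, seed=0):
                      """Toy one-dimensional k-means: the program never specifies which
                      grouping will emerge; the grouping depends on the data alone."""
                      rng = random.Random(seed)
                      centers = rng.sample(points, k)
                      for _ in range(iterations):
                          clusters = [[] for _ in range(k)]
                          for p in points:
                              nearest = min(range(k), key=lambda i: abs(p - centers[i]))
                              clusters[nearest].append(p)
                          centers = [sum(c) / len(c) if c else centers[i]
                                     for i, c in enumerate(clusters)]
                      return clusters

                  if __name__ == "__main__":
                      print(tool_circumference(3.0))      # formula conceived by the human
                      data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
                      print(unsupervised_groups(data))    # grouping not written anywhere above

                  Neither snippet is “inventing” anything, of course; the point is only that “did something the programmer did not specify” is already a routine property of unsupervised methods, which is a far lower bar than the full sentience being demanded above.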

                3. ..and again, questions of ownership are SEPARATE from questions of inventorship.

                  Stop trying to run from the immediate point at hand.

                4. My test for sentience is to ask the AI something that it is not programmed to do.

                  Do you apply this same test to Elon Musk? How do you know that Musk’s remarks are not pre-programmed?

              3. 1.1.1.2.4.4.3

                “I know Elon is sentient. I have seen (on TV and in his Tweets) Elon express what we, as human beings, consider to be ‘feelings.'”

                He could be a p-zombie.
