HYPO: Human uses a corporate owned AI to generate an invention. AI's contribution would be co-inventive if it were human.
Who invented?
— Dennis Crouch (@patentlyo) April 19, 2022
Why is it that the world of patent law is so focused on the inventorship issue with “AI”?
The impact of “AI” on obviousness seems to be a much more pressing issue in patent law.
I think Ben is right for once.
How about “Ben is not wrong for once”…?
While he has “identified” a wrinkle (one that I have identified many times previously), he really has not staked out a cogent legal position, now has he?
Yes anon I know that you, me, and others have all made the same point.
But Ben’s point is simply that the 103 issue is a bigger issue than inventorship, which I agree with.
I suppose “bigger” might be accurate, as “State of Art” affects all, and not just any asserted (or omitted) inventorship — which may actually come to bear only later when trying to assert a patent.
Can you imagine what the courts might do to those who purposely omit that an inventive aspect cannot be traced directly to a real live human?
Given the CAFC’s propensity to take any issue and try to find a way to limit patents/invalidate the patent at bar, I’d say that the likely case law will be that anything found by an AI is presumptively obvious.
Or some other type of rot that the likes of Taranto will conjure up.
Well here, I am not sure that “rot” is a fair description.
Of course, it may well be important to note that AI itself will not be static – today’s AI is to be expected to be inferior to tomorrow’s AI, and the legal aspect of obviousness — already an item outside of a REAL person, and being that of a legal fiction — would need be maintained for its purpose of reflecting State of the Art.
IF the naysayers here do not want to ascribe an advance by a non-human to BE inventive, then it is only logical that any such advance BE deemed to be a State of the Art against which a human would then need to advance upon.
In my Black Box analogy, that person in the second room who opens a black box that merely already has the invention of another in it is not – and cannot be – an inventor. For THAT person to be deemed AN inventor, there need be FURTHER invention, and such would need be not obvious over what is in the box.
As I have already pointed out (and Greg followed later), a somewhat parallel case is the Simian Selfie – and the “right” answer there was that NO ONE got copyright.
“Who Invented What?”
I don’t know what the answer presently is, but once the question is relevant to the real world the answer is of course:
“paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip, paperclip…
I’m wondering how to reconcile the court decisions in England and Germany which admit of the possibility of recognising an AI as the “actual deviser” of the claimed subject matter while continuing to require that a human be named as “inventor”.
Perhaps TSM offers a prism to look through.
I mean, suppose the AI throws up all the possible solutions to a technical problem, for review by its human master, who then identifies the patentable matter. The inventive step lies in that identification. The AI merely provides a shortlist of those possibilities for which it found a hint or suggestion in its review of the published art. So, its contribution, as such, was not inventive.
How about that?
No.
That is expressly NOT reconciliation.
Further, this definitely falls flat in view of my Black Box scenario that I know that you have been long aware of.
Your flinging of C R P here does not even make it to the wall — it remains stuck on you.
I mean, suppose the AI throws up all the possible solutions to a technical problem, for review by its human master, who then identifies the patentable matter. The inventive step lies in that identification. The AI merely provides a shortlist of those possibilities for which it found a hint or suggestion in its review of the published art. So, its contribution, as such, was not inventive.
This is consistent with my hypothetical at 2.1.1 — a hypothetical that no one on this blog has addressed.
Anybody suggesting that AI is capable of doing more than that needs to present evidence of this expanded capability. The word of someone who believes: (i) AIs are sentient, (ii) AIs have emotion, (iii) AIs can have near-death experiences, and (iv) AIs are capable of both mental illness and criminality is not enough in my book.
Your “book” simply does not accord with the facts presented.
I “get” that you do not like the DABUS applications, but notwithstanding possible OTHER violations (such as 112), you are simply not able to make up your own take of what has been accepted by the courts.
I am actually quite amazed at how diligent you have been in trying NOT to talk about the instant hypo. You have spent far more energy in avoiding something than actually discussing anything that you would deign to paint as “more realistic.”
with the facts presented.
Are we talking about real facts or are we talking about the hypothetical? For real facts I want real evidence. The hypothetical is fantasy as far as I’m concerned until someone presents evidence otherwise.
you are simply not able to make up your own take of what has been accepted by the courts.
Accepted by the courts? What evidence has been presented by the courts? You should know better than most that what a court ‘accepts’ as being true does not necessarily comport with reality.
You have spent far more energy in avoiding something than actually discussing anything that you would deign to paint as “more realistic.
I’m discussing the issue as it pertains to my experience with artificial intelligence (i.e., reality as I know it). If there is a reality that differs from mine, I want evidence of that reality. For someone that usually shows a healthy amount of skepticism, I am curious as to why you have bought everything Thaler’s selling — hook, line, and sinker.
It is not simply not I that is doing any “buying” hook, line and sinker.
Why are you fighting so feverishly to avoid the discussion?
What are you afraid of?
Lol — too many “nots” there.
Let’s be simple: stop running from this hypo.
Wt, personally, I am still caught between two lines of thought, as follows:
1. Only by “thinking” can a patentable invention be conceived. But recall from the 18th century Enlightenment the utterance: Dubito ergo cogito. Cogito ergo sum. Everybody knows the second sentence but the first might here be important. Before you can designate some action as thinking, you have to find doubt. Computers don’t do doubt though, do they? Therefore they don’t think. Therefore they don’t conceive inventions (however much devising they do). This might explain the decision of the appeal court in Germany in the DABUS case.
2. In answer to Greg at 11124, for the patent drafter it is immaterial whether the communication of a patentable invention emanates from a human inventor or an AI or anon’s Black Box. Apply the Turing Test. Can the drafter tell the difference between an AI inventor and a human inventor?
We might need philosophers to help us to mull over the repercussions of 1. and 2.
“…however much devising they do).”
Trying so very hard to be an ostrich…
The UK patent statute (unlike the EPC to which it is subordinate) includes a definition of who is to be named as “inventor,” namely the “actual deviser” of the claimed subject matter. This definition complicated the DABUS litigation in the courts of England.
It really is no more difficult than the US version — IF people would simply be honest and admit which portions actual humans actually met the US legal requirements for.
Yet, THAT seems awfully difficult for a lot of people who would rather insist on some type of “Sentience Plus.”
My black box analogy cleanly — and simply — takes care of this.
As to “Apply the Turing Test”
The easy rebuttal is: Why?
To what effect would ANY answer be as to the practitioner opening up ANY black box and proceeding to write the application?
It simply matters not at all from that practitioner’s perspective.
What? When you draft, do you not ever ask the inventor any questions, with a view to flushing out what their invention is? How is that process going to fare, when the inventor is an AI?
What? When you draft, do you not ever ask the inventor any questions, with a view to flushing out what their invention is? How is that process going to fare, when the inventor is an AI?
Good points. I want to see the original disclosure. IMHO, the fractal container patent application is a half-baked idea — it is written as a half-baked idea.
My questions would involve: exactly what of this patent application did DABUS provide? How did DABUS provide it? In what form did DABUS provide it?
MaxDrei, your questions are simply immaterial to the point at hand and obfuscate the point of the hypo.
You are aiding those seeking to avoid discussing the legal point AT point.
Come on Wandering you are better than this. Your (i)-(iv) are ridiculous and have nothing to do with the issues.
The reality is that AI is here and it is going to keep getting better and better with more and more functionality.
Your example is ridiculous as one can write AI to include evaluating the solutions. The reality is that there is nothing that people can do that AI won’t be able to do within 20 years.
B-b-but Ben (in his own infinite wisdom) disbelieves that notion.
Come on Wandering you are better than this. Your (i)-(iv) are ridiculous and have nothing to do with the issues.
(i)-(iv) are relevant because the one person (Thaler) whose activities have spurred this discussion also has those beliefs. If we are going to leave the hypothetical world and enter the realm of reality, then we are left discussing the issue framed by the purported facts presented by Thaler. As such, (i)-(iv) goes to Thaler’s credibility and motivation.
The reality is that AI is here and it is going to keep getting better and better with more and more functionality.
I’ve drafted/prosecuted many patent applications that involve AI. I am familiar with AI’s capabilities and limitations. As such, I’m viewing AI through reality, as I know it — not the kind of reality being pushed by Thaler.
Your example is ridiculous as one can write AI to include evaluating the solutions.
If all AI is doing is performing some known evaluation then what the AI is doing is not inventive.
The reality is that there is nothing that people can do that AI won’t be able to do within 20 years.
Hardly. This is the ‘singularity’ that Anon speaks of. If AI is capable of doing everything people can do, then AI would be capable of programming another AI. Moreover, given the inherent capabilities of AI, an AI should be able to do it better/faster. If that happens, the AI can create better versions of itself, which can then create even better versions of itself. A walk down that path leads to a frequent trope in science fiction.
I suggest that everyone actually read the two patent applications that DABUS is the supposed inventor of. They are examples of the capabilities (or lack thereof) of DABUS. I’ve made this point before, but what has DABUS been doing the last 4 years? Has everything capable of being invented by AI already been invented?
Wandering, all fair enough I suppose.
I would argue that you underestimate AI. It really is all based on having the processing power to perform the functions that humans perform. AI will continue to become more and more powerful as processing speeds go up.
Not worth arguing where that is going to go. I do AI applications with some of the best in AI as well and have to draft applications based on papers to the AI conferences. So, I have a pretty good idea where it is now. I am an old AI person that learned symbolic AI with Lisp back in the late 70’s/early 80’s and have been doing AI on and off the entire time.
I took a course some 40-50 years ago from one of the top people in AI at the time. One of those MIT people that worked with McCarthy who was the one that came up with the “singularity” idea but it was stolen by that other guy and renamed.
Anyway, this top AI researcher spent about 1/2 a lecture one day going over the computational speed of the brain compared with computers. He pretty much mapped out where AI would go over the next 100 years. I think back in 1980, he said that he thought it would be 2040 before we saw AI that would match humans.
Anyway…..we’ll see how it develops. I think–just my opinion–that your problem is that you don’t understand cognitive science well enough. We really aren’t that great at processing information.
Night Writer,
I would also point out that it is NOT only a function of processing power, as while that is important to unleash the power of neural networks, it is also HOW those neural networks can “self-align” or otherwise act in unforeseen manners that TAKES AWAY from an actual human that which is being designated as “inventive.”
And just wait until Quantum computing is established — Moore’s law on steroids, so to speak.
Be all of that as it may, I would hesitate to further pontificate about the FUTURE, as that is only feeding the Ostrich effect of those not wanting to NOW explore the legal implications of non-human inventorship (and I will point out again, this does NOT require any type of *sentience-plus* as may come with The Singularity).
Heck, Wt thrusts his head into the sand so forcibly that he cannot (will not) even contemplate any notion of how that OTHER non-human legal aspect may be impacted – the legal fiction of the Person Having Ordinary Skill In The Art.
“AI will continue to become more and more powerful as processing speeds go up.”
I just want to note that one can agree with this while also thinking we’re a long way from AGI. I think it’s clear our single-purpose “AIs” will be vastly better in 2030, and they will probably be performing many human tasks better than humans. But the existence of narrow “AI” excellence does not imply broad “AI”/AGI processing.
Well Ben, let me give you some credit here and note that for many, the notion of “narrow AI” may well be MORE ‘buzz word’ and LESS any inventive aspect by a non-human.
But the better discussion would come from giving the Professor’s hypo its full weight, rather than the (massive amount of) hiding from the proposed point.
It really is all based on having the processing power to perform the functions that humans perform. AI will continue to become more and more powerful as processing speeds go up.
AI performs certain functions wonderfully — even much better than humans. However, AI is not particularly bright in identifying hidden problems (i.e., problems that exist but most people aren’t aware of). AI also has a very poor understanding of the natural world. This is why, for example, AI can confuse a full moon with a traffic signal, or is unable to identify a stop sign that has vines growing over it, or is unable to determine the difference between a real tree and a tree painted on a truck. BTW, these are real examples. AI, as I know it, cannot identify a problem, identify multiple possible solutions to that problem (perhaps pulling from very different source material), perhaps integrate aspects of those possible solutions, and then work through those solutions to arrive at something unique.
On the other hand, if you give AI a haystack and tell it to “find all the needles in that haystack,” it can perform wonderfully. Once it is trained to tell the difference between hay and a needle, it’ll do a great job identifying needle candidates. Some of those needle candidates may not, in fact, be needles. However, it’ll take further training and additional capability for the AI to make those determinations. Regardless, is finding a needle in a haystack inventive? Some may differ on that answer. If all it is doing is using brute force to do so, then I would say no.
“ If all it is doing is using brute force to do so, then I would say no.”
And you would be wrong. See 35 USC 103.
(You keep on spending energy avoiding the point AT point)
And you would be wrong. See 35 USC 103.
103 is about obviousness — not inventorship. But you already know that.
You keep on spending energy avoiding the point AT point
Don’t wait on me to start discussing the hypothetical. My guess is that 90% of your posts to this article have been complaining about people not engaging with the hypothetical. Why don’t you write something about your thoughts regarding the hypothetical? Maybe you’ll write something that people want to comment about.
You aren’t the thought police on this blog. If we choose to not engage with the hypothetical, that’s our prerogative. Why are you so insistent on forcing us to engage in a discussion that we want no part of?
I responded to YOUR use of the term inventive.
35 USC 103 IS directly on point to that point.
Looking for something else, I happened upon this:
link to patentlyo.com
Shockingly, this was about the same time that we last saw ANY activity over on the ethics side of this blog.
That space would be better put to use following my very informative postings.
“However, AI is not particularly bright in identifying hidden problems (i.e., problems that exist but most people aren’t aware of).”
As I mentioned, you need to check out better prior art.
There is art in the AI space dealing directly with this.
As I mentioned, you need to check out better prior art.
Always with generalities and never with specifics. How about YOU present that evidence.
Generalities suffice.
How about you stop running away from the actual hypo?
Wandering,
The proof will be in the pudding. Your points are –to my mind–about 20 years behind the current thinking. But it doesn’t really matter.
AI will have to prove itself and when AI exhibits a functionality, I am sure you will acknowledge it.
Wt – you are letting your emotions (and 0bsess10n) with Dr. Thaler simply overwhelm any sense of reason.
I “get” that you think his DABUS thing is one big publicity stunt (and your quip of “Has everything capable of being invented by AI already been invented?” MISPLAYS that sense of DABUS not having been let loose to do other inventing).
It is easy enough to see that DABUS is just not the be all and end all, so I really do not get why you are so afraid of letting that go and focusing on the hypos presented.
Your quip to Night Writer in regards to what I have been calling The Singularity also evidences your too-quickly running away, as had you paused (at all), you would have recognized that THAT point of reaching The Singularity is NOT at point here. You are being distracted by a level of *sentience* NOT REQUIRED to bring about the more direct point of an inventive aspect that cannot be (legally) traced to a human as the inventor of that aspect.
Again, my ‘black box’ hypothetical clearly – and cleanly – makes this distinction.
I suggest that you stop kicking up dust with your 0bsess10n over DABUS, and engage on the merits of the presented hypothetical.
I really do not get why you are so afraid of letting that go and focusing on the hypos presented.
I have no interest in giving Thaler what he wants, which is publicity for his cause. He is using this issue as a backdoor into giving AI legal personhood. If AI can be an inventor, AI should also have other rights that are normally associated with a natural person.
Suppose, for example, there was a serious effort put forth by serious people to have AI recognized as capable of being an inventor. The people who attack the patent system would have a field day on this issue. While you may be willing to engage in a civil philosophical discussion over “AI as an inventor,” the enemies of our patent system would use this issue against the patent system writ large.
I could envision the arguments. AI will be taking away people’s jobs. Next, they’ll be giving AI the right to vote. Who is going to be controlling AI — yeah, you know, those big-tech liberals. We need to put serious constraints on the patent system to make sure that this doesn’t happen. The patent reform act that explicitly kills AI as an inventor will also be the one that kills all computer-implemented inventions.
Not seeing where your qualm is coming from; there exists already legal personhood for non-humans in several aspects of patent law.
To your assertion of “attacking,” I have already shown that an attack by NOT reflecting the actuality of AI as inventor is upon us with State of the Art and “must be obvious.”
Your House of Woes (for “future” other rights) is — again — only dustkicking, and as I have pointed out, the legal point AT point simply does not need to reach the level of The Singularity.
I have already shown that an attack by NOT reflecting the actuality of AI as inventor is upon us with State of the Art and “must be obvious.”
Sigh. You’ve shown? Much of your writing is a collection of self-referential missives that leave a reader thinking to himself/herself: “what the heck is he talking about?”
Neither I nor other readers are in any position to know what you wrote about a topic a week ago, a month ago, or a year ago. However, you write as if everyone has complete knowledge (and understanding) of everything you’ve written. Let’s forget, for a moment, that on many topics you dance around the point without ever getting to a point. We’ve been discussing AI for how long now? And I still don’t know whether you think AI being an inventor is a good thing or a bad thing, and the reasons why.
It doesn’t take long for one reading my comments to understand my point of view and why I have that point of view. On many topics, though, I’m still not sure what your point of view is — this one in particular. It doesn’t help when the vast majority of your posts are directed to telling people how wrong they are — regardless of the topic.
I’m sorry, but your writing style leaves a LOT to be desired.
As off point as they are, your feelings are noted.
Of course, your feelings continue to cloud your judgment, or – as is plain here – preclude you from even bothering to attempt to address Prof. Crouch’s actual hypo.
I suggest that you put your feelings aside.
But you be you (and any relation of that sentiment to your own advice to Ben is indeed deliberate).
[S]uppose the AI throws up all the possible solutions to a technical problem, for review by its human master, who then identifies the patentable matter. The inventive step lies in that identification. The AI merely provides a shortlist of those possibilities for which it found a hint or suggestion in its review of the published art. So, its contribution, as such, was not inventive.
Suppose that a grad student throws up all the possible solutions to a technical problem, for review by the thesis advisor, who then identifies the patentable matter. Is it your contention that only the thesis advisor qualifies as an “inventor” here?
Suppose that a slave in 1840 Alabama throws up all the possible solutions to a technical problem, for review by her owner, who then identifies the patentable matter. Is it your contention that only the owner qualifies as an “inventor” here? Is that really the implication of TSM?
Someone may want to help Greg with my post directly on the slave angle.
Oh wait – he’s already seen it as that post as at IPWatchdog, a place that Greg does not have his “anon says” technical blinder in place.
But hey, why give credit where credit is due?
… and TSM has zero to do with the point at hand — it’s a Euro throw-away
Greg, I don’t know enough about how an AI devises a solution to a technical problem, but what I was thinking is that what an AI does is to react to hints or suggestions in the state of the art, to come up with its solutions.
But if we apply a TSM approach to obviousness, all those solutions are (to the imaginary omniscient PHOSITA) obvious solutions. You don’t get to be an inventor if all you do is output what was obvious (to the imaginary PHOSITA) to do to solve the problem.
“You don’t get to be an inventor if all you do is output what was obvious (to the imaginary PHOSITA) to do to solve the problem.”
MaxDrei (yet again) reveals the huge pile of horse carcasses next to the well of wisdom that I lead him to.
“Greg, I don’t know enough about how an AI devises a solution…”
That is PAINFULLY obvious [pun intended], as your ‘grasp’ of legal logic is very much deficient here. You simply do NOT get to postulate that the AI contribution is nothing more than “obvious,” and then insert the Euro throw-away buzzword of “TSM,” to find that “inventor” does not apply to provisions of obviousness.
The whole point here is that there IS a legitimate, patentable advance, and that advance is just not ENTIRELY traceable to human inventors.
The hypo here is NOT that this can be broken down into a Human patentable advance and a separate non-human, obvious “help” from a non-human.
If that were the case, then the ENTIRE scenario would never even BE advanced.
Suppose that a grad student throws up all the possible solutions to a technical problem, for review by the thesis advisor, who then identifies the patentable matter. Is it your contention that only the thesis advisor qualifies as an “inventor” here?
Like everything, it all depends upon the facts. If I put a bunch of keywords related to my problem into a search engine and come up with a bunch of things, does that make Google’s AI-powered search engine an inventor? Is your grad student doing more or less than a search engine?
Suppose that a …
Did you really have to go there? Seriously?
Dennis, I’m pleased to see that your interest in this issue continues with vigor. I’ve read your brief, remarks, and comments of all those above, and see that there are many interesting points of view. But I continue to rely on my gut that tells me that use of AI—at least, current AI—doesn’t raise any issues. I particularly liked the remarks of one person who used the analogy of the microscope. Humans invent, machines only make the job easier.
another ostrich…
If AI were only “another tool,” this would not be a topic.
Sad to say, but the only person who appears to even approach engaging is Greg DeLassus.
Not an ostrich but a person that doesn’t understand information processing.
Note the separation of brain/people from machines. They don’t get that we are information processors and that what we do a machine can do, just as a bulldozer can push dirt like our bodies can.
They are just stuck in their education pre-information age.
I think that we both can agree that we ALL should expect better from those involved in the field of innovation.
link to smbc-comics.com
OT but important patent litigation surprise reported:
[“Who Owns What” instead of “Who Invented What”]
“This past Monday, Chief Judge Connolly of D. Del issued a standing order for all pending litigation before him requiring disclosure of certain financial relationships from litigating parties. The information is due 30 days from filing of an initial pleading, and includes arrangements made between parties and third party funders. While eminently sensible in terms of identifying true decision makers for settlement purposes, or identifying potential conflicts of interest, the new requirements will surely send NPEs screaming into the night.”
Interesting and thanks.
Would this also be reciprocal to the business model of, say, Unified Patents?
How many NPEs choose to file in D Del? I thought that NPEs hated that district like the plague. It is a high traffic venue for patent litigation, to be sure, but I was under the impression that this was mostly a function of litigation between practicing entities.
Greg, I believe D. Del is still number 2 for new patent suits, but I appreciate that most PAEs are much more likely to file in Waco WDTX than Delaware nowadays, IF they can get valid venue there. But I am curious if any other district courts other than Delaware have adopted any such standing order?
I am not surprised that most of the responses seem to be to resist the hypo. A lot of folks really do not want to think about this issue.
Even an international organization (as provided in a link at: link to ipwatchdog.com ) does NOT want to touch “AI as inventor.”
Oxygen is absolutely essential to human life. However, through all of human history, oxygen has been abundant and ubiquitous. For this reason, it has never made sense for government to concern itself with securing an oxygen supply for its citizens, and I am unaware of any government in human history ever undertaking to do so.
By contrast, new know-how has—at all times in human history—been both beneficial and hard to achieve. For this reason, it has made sense for various governments at various times to institute programs to smooth the way for the discovery of new know-how, and to incentivize people to busy themselves in the work of such discovery. Patent laws represent one such government program, and probably the most successful such program.
A world of real AI, however, is (perhaps) a world in which new know-how becomes much easier to achieve, and therefore much more common. It will probably never be as common as oxygen, but it is easy to imagine a world in which new know-how becomes so cheap to obtain that the logic underlying patent law erodes. In a world in which you can get new know-how even without the incentives of patent law, does it make sense to incur the costs of such incentives?
In other words, the future progression of AI is one that bids fair to put most of us around these parts out of work. Can it be a surprise, then, that the most common reaction around these parts to hypothetical speculation about such a day is a sort of whistling past the graveyard (e.g., “Dennis is… assuming facts that don’t exist in the real world” and “AI has far more shortcomings than most people know”)?
“In a world in which you can get new know-how even without the incentives of patent law, does it make sense to incur the costs of such incentives?”
Still need it to be disclosed, so I’m not really sure. Is everyone operating the inventive AIs in their backyard on their own personal fusion reactor?
6,
You alight upon one of the Libs’ (unstated) fears in this drama: AI kept under wraps.
The pushback against a view of not granting protection is that such a stance would push non-disclosure.
There are inherent and natural difficulties already with being able to trace “just how” a solution had been derived with AI — which, as I am certain you are aware, impedes implementation of Identity Politics.
If AI “goes deep,” then the political control leverage is lost.
Still need it to be disclosed…
Will we? Part of the logic of patents is that you want to facilitate disclosure because making a dozen different inventors re-discover the same fact is less efficient than having one make the discovery and the other eleven build on top of that publicized knowledge. In other words, the reason why you care about incentivizing disclosure is because of the costs incurred in reaching the discovery. You want to spare others from incurring those costs, so that their resources can go, instead, into further progress rather than “reinventing the wheel,” so to speak.
In a world, however, in which discoveries become cheaper and easier to realize, the inefficiency of having the other eleven have to make the discovery independently is correspondingly less. At that point, maybe you just do not care so much about incentivizing disclosure.