The AGI Lawsuit: Elon Musk vs. OpenAI and the Quest for Artificial General Intelligence that Benefits Humanity

By Dennis Crouch

Elon Musk was instrumental in the initial creation of OpenAI as a nonprofit with the vision of responsibly developing artificial intelligence (AI) to benefit humanity and to prevent monopolistic control over the technology. After ChatGPT went viral in late 2022, the company began focusing more on revenue and profits.  It added a major for-profit subsidiary and completed a $13+ billion deal with Microsoft — entitling the industry giant to a large share of OpenAI’s future profits and a seat on the Board. 

In a new lawsuit, Elon Musk alleges that OpenAI and its CEO Sam Altman have breached the organization’s founding vision. [Musk vs OpenAI]. 

Musk contributed over $44 million to OpenAI between 2015 and 2020. He alleges that OpenAI induced these large donations through repeated promises in its founding documents and communications that it would remain a public-spirited non-profit developing artificial general intelligence (AGI) cautiously and for the broad benefit of humanity. Musk claims he relied on these assurances, including that OpenAI would not become controlled by a single corporation, when deciding to provide essential seed funding. With OpenAI now increasingly aligned with Microsoft’s commercial interests, Musk argues that his financial contributions never achieved their promised altruistic purpose. 

Perhaps the most interesting portion of the debate involves allegations that OpenAI’s latest language model, GPT-4, already constitutes AGI — meaning it has human-level intelligence across a wide range of tasks. Musk further claims that OpenAI has secretly developed an even more powerful system known as Q* that shows an ability to chain logical reasoning beyond human capability — arguably reaching artificial super intelligence (ASI), or at least strong AGI. 

The complaint discusses some of the potential risks of AGI: 

Mr. Musk has long recognized that AGI poses a grave threat to humanity—perhaps the greatest existential threat we face today. His concerns mirrored those raised before him by luminaries like Stephen Hawking and Sun Microsystems founder Bill Joy. Our entire economy is based around the fact that humans work together and come up with the best solutions to a hard task. If a machine can solve nearly any task better than we can, that machine becomes more economically useful than we are. As Mr. Joy warned, with strong AGI, “the future doesn’t need us.” Mr. Musk publicly called for a variety of measures to address the dangers of AGI, from voluntary moratoria to regulation, but his calls largely fell on deaf ears.

Complaint at paragraph 18. In other words, Musk argues that advanced AI threatens to replace and surpass humans across occupations as its intelligence becomes more generally capable. This could render many jobs and human skills obsolete, destabilizing economies and society by making people less essential than automated systems.

One note here for readers: recognize the important and fundamental differences between AGI and consciousness. AGI refers to the ability of an AI system to perform any intellectual task that a human can do, focusing on problem-solving, memory utilization, creative tasks, and decision-making. Consciousness, on the other hand, involves self-awareness, subjective experience, emotional understanding, and forms of decision-making that are not solely tied to intelligence level. AGI – the focus of the lawsuit here – poses important risks to our human societal structure. But it is relatively small potatoes compared to consciousness, which would raise serious ethical considerations as AI moves well beyond being a human tool. 

The complaint makes clear that Musk believes OpenAI has already achieved AGI with GPT-4 — but AGI is a tricky thing to measure.  Fascinatingly, whether Musk wins may hinge on a San Francisco jury deciding whether programs like GPT-4 and Q* legally constitute AGI. So how might jurors go about making this monumental determination? There are a few approaches they could take:

  1. Turing Test: This classic test, proposed by Alan Turing in 1950, involves a human judge interacting with a hidden entity, either another human or a computer program. If the judge cannot reliably distinguish the program from the human through conversation, the program is considered intelligent. While the Turing Test has limitations, it remains a starting point for evaluating an AI’s ability to exhibit human-like skills and understanding (a rough sketch of such a blind comparison appears just after this list).
  2. Employment Test: This approach, proposed by Nils Nilsson, suggests measuring an AI’s capability to perform various jobs traditionally held by humans. If an AI can demonstrably perform a wide range of tasks across diverse fields at a level comparable to or exceeding human performance, it signifies a significant step towards AGI. This test is important because it relates directly to the societal risks posed by AGI. As Musk’s complaint describes, our modern economy depends on humans exchanging their labor and talents. However, if an AGI system can outperform humans across a wide range of diverse jobs, this equilibrium becomes disrupted. Such an AGI could replace human roles, causing technological unemployment and economic instability.
  3. Reasoning and Problem-Solving: A crucial aspect of human intelligence is the ability to reason, solve problems, and make sound decisions in complex situations. An AGI should demonstrate similar capabilities, going beyond simply following predefined algorithms or mimicking observed patterns. It should be able to analyze information, identify relationships, and draw logical conclusions to solve problems creatively and independently.
  4. Social and Emotional Intelligence: While not strictly necessary for some definitions of AGI, the ability to understand and respond to human emotions and social cues is a significant aspect of human intelligence. An AGI with social and emotional intelligence could potentially interact more naturally with humans, build stronger relationships, and navigate social situations more effectively.
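
By way of illustration only, the blind comparison at the heart of the Turing Test reduces to a very small scoring exercise. The sketch below is not drawn from the complaint or from any actual evaluation of GPT-4; the trial data and function names are invented placeholders. It simply shows the arithmetic a fact-finder would be relying on: if a judge asked to pick the machine out of paired human/machine transcripts is right only about half the time, the machine is indistinguishable on that sample.

```python
# Illustrative sketch only: a tiny Turing-style blind comparison.
# Each trial pairs a human-written and a machine-written transcript on the same
# prompt; the judge must guess which one came from the machine. The data below
# is made up for demonstration.
trials = [
    {"machine_first": True,  "judge_picked_first": True},   # judge correct
    {"machine_first": False, "judge_picked_first": True},   # judge wrong
    {"machine_first": True,  "judge_picked_first": False},  # judge wrong
    {"machine_first": False, "judge_picked_first": False},  # judge correct
]

def judge_accuracy(trials):
    """Fraction of trials where the judge correctly identified the machine.

    A value near 0.5 means the judge is guessing at chance, i.e. the machine
    was indistinguishable from the human on this (tiny) sample.
    """
    correct = sum(1 for t in trials if t["judge_picked_first"] == t["machine_first"])
    return correct / len(trials)

print(f"Judge accuracy: {judge_accuracy(trials):.2f} (0.50 = indistinguishable)")
```

A real evaluation would of course need many more trials, multiple judges, and careful blinding, and even then the result speaks only to conversational indistinguishability rather than to the broader capabilities at issue in the complaint.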

A 2023 article from a group of China-based AI researchers proposes what they call the “Tong test” for assessing AGI. An important point from the article is that AGI is not a simple yes/no threshold but rather something that should be quantified across a wide range of dimensions. The article proposes five dimensions: vision, language, reasoning, motor skills, and learning. The proposal would also measure the degree to which an AI system exhibits human values in a self-driven manner.
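
To make the “quantify rather than threshold” idea concrete, here is a minimal sketch of reporting capability as a weighted profile across dimensions. It borrows only the dimension names from the summary above; the scores, weights, and aggregation rule are invented placeholders, not taken from the Tong test paper and not measurements of any real system.

```python
from dataclasses import dataclass

# Illustrative sketch only: capability reported as a graded, multi-dimensional
# profile rather than a yes/no AGI verdict. Every score and weight below is an
# invented placeholder.
@dataclass
class Dimension:
    name: str
    score: float   # 0.0 = no capability, 1.0 = human level on the benchmark
    weight: float  # relative importance assigned by the evaluator

profile = [
    Dimension("vision",          0.70, 1.0),
    Dimension("language",        0.90, 1.0),
    Dimension("reasoning",       0.55, 1.5),
    Dimension("motor skills",    0.05, 1.0),
    Dimension("learning",        0.40, 1.5),
    Dimension("value alignment", 0.30, 2.0),
]

def weighted_aggregate(dims):
    """Weighted average across dimensions, reported alongside the full profile."""
    total = sum(d.weight for d in dims)
    return sum(d.score * d.weight for d in dims) / total

for d in profile:
    print(f"{d.name:<16}{d.score:.2f}")
print(f"{'aggregate':<16}{weighted_aggregate(profile):.2f}  (a profile, not pass/fail)")
```

The point of such a profile is that a jury (or a regulator) would see where a system approaches human level and where it clearly does not, rather than being forced into a single binary verdict.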

I can imagine expert testimony in the case, with Musk’s lawyers presenting key examples showing the wide applicability of GPT-4 and OpenAI’s own lawyers showing its system repeatedly failing. Although this approach is obviously not a true measure of general intelligence or an ideal way to make such an important decision, it does highlight the challenges inherent in passing judgment on both a complex machine system and our own measures of human intelligence. At its best, the adversarial litigation process, with its proof and counterproof, reflects a form of the scientific method — with the benefit of actually arriving at a legally binding answer. 

Understanding the Inner Workings: OpenAI’s latest language models keep their internal designs largely opaque — similar to the human brain.  Because of our thick skulls and complex neural arrangement, the vast majority of human neurological and intelligence testing is functional — focusing on the skills and abilities of the individual rather than directly assessing the inner workings.  It is easy to assume a parallel form of analysis for AI intelligence and capability — especially because human results serve as the standard for measuring AGI.  But the functional approach to studying humans is a product of our particular biology and current technology level.  AI systems are designed and built by humans and do not share the natural constraints dictated by evolution. And, if transparency and understanding is a goal, it can be directly designed into the system using transparent design principles. The current black-box approach at OpenAI makes evaluating claims of attaining artificial general intelligence difficult. We cannot peer inside to judge whether displayed abilities reflect true comprehension and reasoning or mere pattern recognition.  A key benefit of the litigation system for Elon Musk in this case is that it may force OpenAI to come forward with more internal transparency in order to adequately advocate its position. 

What do you think: What should be the legal test for artificial general intelligence? 


35 thoughts on “The AGI Lawsuit: Elon Musk vs. OpenAI and the Quest for Artificial General Intelligence that Benefits Humanity”

    1. 11.1

      You are aware that that is a future stardate, so not admissible in the current time stream, right?

  1. 10

    One shouldn’t blame Elon for justifiably being displeased here.

    (Noting of course that there are many reasons to be less than pleased with Elon and his antics.)

    Yet, is it not more important — indeed critical — to consider what could happen were democracy-hating countries like China and Russia to lead / control such technologies?

    link to wsj.com

    Note to Congress: Time yet to restore unrestricted patent eligibility to all areas of innovation . . . before it’s too late?

  2. 9

    – ChatGPT is the Netscape of AI. It showed the world the possibilities, but it probably won’t be a major market winner or even survive as a stand-alone business for very long. I already switched to Gemini nee Bard and there will be thousands of specialized tools coming along.

    – Replicating a human mind would be a low bar- not because it’s easy to do, but because it would not accomplish much. The price of human life is already low & the price of an educated human life not that much higher in capex terms. Highly developed AI – in a hundred years- will far surpass our current conceptions of human ability. It will appear as magic, just as current technology would appear as magic to someone from 1924.

    – Elon Musk has warped past the crank stage to full-on weirdo, heading for Howard Hughes territory. He also lies, a lot.

    1. 9.1

      Already are thousands.

      Gemini? LOL – was that before or after the Woke snafu?

      Hundred years? – you would be on the VERY VERY far side of experts’ predictions.

      1. 9.1.1

        Are these the same “experts” who predicted that by 2025, we’d all be reading newspapers while being driven to work by robot cars? Or is it the “experts” who think global warming is a liberal hoax?

        Asking for my computer.

        1. 9.1.1.1

          For your computer, no.

          But that’s some pretty lipstick you are slathering on there…

  3. 8

    What do I think? I think you are so far out of your lane here that the highway is barely even visible through a high power telescope. This is not an IP case. This is a disgruntled-rich-white-male-wants-to-impose-his-overblown-will-on-a-whole-industry case. Poor Elon put a few pennies (relatively speaking) into an organisation he didn’t control, and then got upset when it turned out he didn’t control it! My guess is that this is really about the fact that Musk’s own AI play (Tesla) is still trying to convince the market that it’s not just an auto-maker, while OpenAI hogs the limelight – and milks the cash cow – with its plausible sentence generator.

    More pragmatically, if this is a breach of contract case, what are the maximum damages? Can Elon get back any more than the $44 million or so that he put in? If not, I can’t imagine that OpenAI would be feeling very concerned, with the tens of billions of dollars in backing it now has. It’ll give Elon the finger on principle, but pay up if it has to.

    1. 8.1

      +1

      But let’s revisit your past conversations with me on this tech – that would be fun – for me.

  4. 7

    Artificial intelligence does not mean sentience.

    If you want to create a sentient computer that mimics the human consciousness of the majority of humans it will not be based on artificial intelligence. It has to be artificial stupidity.

    1. 7.2

      It has to be artificial stupidity.
      It will need a poor short-term memory. It will need a poorer long-term memory. Its logic will be flawed in many instances. It will be slow as molasses. It should act against its own best self-interest in many instances. It will be easily fooled.

      In other words, it should resemble its creator … Elon Musk.

  5. 6

    >And, if transparency and understanding is a goal, it can be directly designed into the system using transparent design principles.

    IDK. OpenAI can probably provide information on what actions its employees took and why. But I’m skeptical that that information can provide much insight about the model itself; “model understandability” is pretty rudimentary.

    1. 6.1

      Excellent point OC – there be Billions and TRillions of relations in most all models, through tons of transformers.

      This only reinforces the notion that for this technology, transformation is critical, and that even if a listing of inputs is available (as it is in the open-source models that have been created), actual transparency of relation between individual trained elements and any output is dubious at best.

      1. 6.1.1

        “ actual transparency of relation between individual trained elements and any output is dubious at best”

        Seems a tad problematic for the patenting of so-called “AI.” But we all knew that already.

        1. 6.1.1.1

          Oh, this should be good…

          Please explain your view of what is problematic from a technical standpoint.

          Maybe your reformatting your hard drive with the Britney Spears CD has you a bit confused.

        2. 6.1.1.2

          And yet another case of Malcolm running away.

          P00r, p00r marty is going to be sad that I have “dominated” Malcolm again.

  6. 5

    So bros, who here thinks that Elon’s payday of 65 billy (approved by 80% of the board I hear) was unjust but the literal lawyers that argued the case should make 5-6 billylololol? Totally just n sheet?

    link to yahoo.com

  7. 4

    “What should be the legal test for artificial general intelligence?”

    Whatever test our computer overlords tell us will work best.

      1. 4.1.1

        lol – yet another day of Malcolm’s “the Court can reach their desired Ends by any Means” s1ap across the face return.

        Funny – as I predicted – that his reliance on that mode would not please him when it was not HIS Ends being reached.

        1. 4.2.1.1

          Well they’re near worthless because we have one party that keeps trying to buy (ahem “fortify”) the election. Doesn’t always work out, but they keep trying and it worksish sometimes. We’ll see if they can buy it a second time around. And if the American people will just keep letting people buy elections outright.

          link to yahoo.com

          1. 4.2.1.1.1

            meh – plenty of dark money on both sides of the aisle (generally speaking).

            For the Pres race, dynamics may well be different.

            1. 4.2.1.1.1.1

              Check the estimates. And not just dark money counts, even the legit numbers are way off.

  8. 3

    Yawn. Lone S k u m needs to do the world a favor and shoot himself into deep space.

    1. 3.1

      based.

      But not yet, he should be the first one to drive a car into Mars (literally). It should be the latest Cybertruck in 20 years.

  9. 2

    0h N0es – we can’t talk about this, anon has already provided views and we can’t let him “be right.”

    — pretty much any of my naysayers.

  10. 1

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at link to arxiv.org

    1. 1.1

      Theory – not a fact: “and higher order consciousness, which came to only humans with the acquisition of language”

      Truth be told, to this day, we really do not understand all there is about the human machine.
