Elon Musk Challenges OpenAI in Court, Claims Breach of Contract

Elon Musk is taking his rivalry with one of his biggest artificial intelligence (AI) competitors to the courtroom.

The Tesla CEO on Thursday filed a lawsuit against OpenAI, its co-founder and CEO Sam Altman, and other defendants on the grounds that they violated the company’s founding mission.

As CNBC reports, Musk’s lawsuit, filed in a San Francisco court, claims that OpenAI, which produces the popular ChatGPT chatbot, was founded on the proposition that it would make artificial intelligence “for the benefit of humanity broadly.”

According to Musk, Altman and OpenAI strayed from that mission with the firm’s partnership with Microsoft, perverting OpenAI’s objective of making AI for “humanity” into making AI for Microsoft’s benefit.

“To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI [artificial general intelligence] ‘benefits all of humanity.’ In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” reads Musk’s lawsuit.

Musk, the owner of social media platform X, was a co-founder of OpenAI back in 2015. His attorneys said he was approached by Altman and OpenAI co-founder Greg Brockman that year and agreed to go along with their proposition to create an AI lab whose technology would be for the “benefit of humanity.” Musk stepped down from the company’s board in 2018.

Per Musk’s legal team, OpenAI’s pursuit of profits for Microsoft constitutes a breach of contract, given the AI firm’s original mission. “Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity,” the lawsuit contends.

Musk’s lawyers also maintained that the goal of the lawsuit is “to compel OpenAI to adhere to the Founding Agreement and return to its mission to develop AGI for the benefit of humanity, not to personally benefit the individual Defendants and the largest technology company in the world.”

In the short time since ChatGPT debuted in November 2022, OpenAI has skyrocketed to become one of the most important technology firms in the world. ChatGPT remains the leader in the “large language model” field, although Big Tech giants have tried to play catch-up with chatbots of their own, including Google with its answer to ChatGPT, Gemini.

Musk stepped into the AI competition himself with the launch of xAI last July. As Reuters notes of Musk’s efforts in the artificial intelligence field:

Musk’s rival AI effort with xAI is made up of engineers hired from some of the top U.S. technology firms he hopes to challenge, such as Google and Microsoft.

The startup started rolling out its ChatGPT competitor Grok for Premium+ subscribers of social media platform X in December and aims to create what Musk has said would be a “maximum truth-seeking AI.”

According to xAI’s website, the startup is a separate company from Musk’s other businesses, but will work closely with X and Tesla.

Musk has also made waves over artificial intelligence at Tesla. In January, he stirred controversy with Tesla shareholders, saying he felt uncomfortable growing the carmaker into a leader in AI and robotics unless he had at least 25% voting control of the company. Musk, who ranked second on the Forbes Real-Time Billionaires List on Friday, at an estimated worth of $210.6 billion, currently owns about 13% of Tesla.

Musk has waded into AI despite repeatedly sounding the alarm about the dangers the technology poses. Last year, for example, he was one of a large group of tech experts who signed an open letter calling for a six-month pause in the development of any AI more powerful than GPT-4. He has also said artificial intelligence is “potentially more dangerous than nukes.”

As for Musk’s legal battle against Altman and OpenAI, legal experts who spoke with Reuters said the X owner’s breach-of-contract claim is unlikely to hold up in court.

But OpenAI has other legal ordeals on its plate. Its relationship with Microsoft is being probed by antitrust officials in both the United States and the United Kingdom.

AI is gradually becoming integrated into all aspects of society. The Pentagon, for example, is actively developing AI for military use, with applications being explored for cyberwarfare, drones, logistics, decision-making, and more.

But this raises questions about the dangers of autonomous weapons. AI systems, despite advancements, can still make mistakes or misinterpret situations. This could lead to civilian casualties, escalation of conflicts, and unpredictable outcomes in warfare that might fall outside human control — including the control of elected officials and military brass.

There are also ethical considerations to take into account. When AI makes decisions that result in harm, assigning blame and holding someone accountable becomes complex. This can blur responsibility and create situations where no one is held responsible for the consequences — even when these consequences involve massive loss of life.

Often, creating new technologies can prove to be easier than grappling with the moral dilemmas that result from those technologies.