Technology

Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers

In December, Elon Musk pushed back in anger against the development of artificial intelligence.

He knew that OpenAI, the startup behind the popular chatbot ChatGPT, had a relationship with Twitter, which he had acquired for $44 billion in October. OpenAI licensed Twitter's data — a feed of all tweets — for about $2 million a year to help build ChatGPT, two people familiar with the matter said. Musk believed the AI startup was not paying Twitter enough, they said.

So, Musk said, he cut OpenAI off from Twitter's data.

Since then, Musk has stepped up his own AI efforts even as he publicly warns about the technology's dangers. He has been in talks with Jimmy Ba, a researcher and professor at the University of Toronto, about founding a new AI company called X.AI, according to three people familiar with the matter. He has hired top AI researchers from DeepMind, which is owned by Google, to work at Twitter. And he has spoken publicly about creating a rival to ChatGPT that would generate political content without restrictions.

These moves are part of Musk's long and complicated history with AI, one marked by his conflicting views on whether the technology will ultimately benefit or destroy humanity. Even as he races to launch his own AI project, he signed an open letter last month demanding that development of the technology be paused for six months, citing "profound risks to society."

Musk, who now criticizes OpenAI and plans to compete with it, helped found the lab as a nonprofit in 2015. He has since become disillusioned with OpenAI, he has said, because it no longer operates as a nonprofit and builds technology that, in his view, takes sides in political and social debates.

The common thread in Musk's approach to AI is to do it himself. The 51-year-old billionaire, who also runs the electric carmaker Tesla and the rocket company SpaceX, has long believed that his own AI efforts would offer a better and safer alternative to those of his competitors.

"Like many others, he wonders: What are we going to do about it?" said Anthony Aguirre, a theoretical cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind the open letter.

Musk and Ba, who is known for creating a popular algorithm used to train AI systems, did not respond to requests for comment. Their talks are continuing, the three people said.

Hannah Wong, a spokeswoman for OpenAI, said that while the company now generates returns for investors, it is still controlled by a nonprofit and those returns are capped.

Musk's roots in AI go back to 2011, when he became an early investor in DeepMind, a London startup founded in 2010 to build artificial general intelligence, or A.G.I., a machine that can do anything the human brain can. Less than four years later, Google acquired the 50-person company for $650 million.

At an aerospace event at the Massachusetts Institute of Technology in 2014, Musk indicated that he was hesitant to build AI himself.

"I think we should be very careful about artificial intelligence," he said while answering audience questions. "With artificial intelligence, we are summoning the demon."

That winter, the Future of Life Institute, which studies existential risks to humanity, held a closed-door conference in Puerto Rico focused on the future of AI. Musk gave a speech there, arguing that AI could cross into dangerous territory without anyone realizing it, and announced that he would help fund the institute. He donated $10 million.

In the summer of 2015, Musk met privately with several AI researchers and entrepreneurs at a dinner at the Rosewood hotel in Menlo Park, Calif., a spot famous for Silicon Valley deal-making. By the end of that year, he and several others who attended the dinner — including Sam Altman, then the president of the startup incubator Y Combinator, and Ilya Sutskever, a top AI researcher — founded OpenAI.

OpenAI was set up as a nonprofit, with Musk and the others pledging $1 billion in donations. The lab vowed to "open source" all of its research, meaning it would share its underlying software code with the world. Musk and Altman argued that making the technology accessible to everyone, not just tech giants like Google and Facebook, would reduce the threat posed by harmful AI.

But as OpenAI began building the technology that would lead to ChatGPT, many at the lab realized that openly sharing its software could be dangerous. With AI, individuals and organizations could potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should withhold some of its ideas and code.

Musk resigned from OpenAI's board in 2018, according to two people familiar with the matter. By then, he had built his own AI project at Tesla: Autopilot, the driver-assistance technology that automatically steers, accelerates and brakes cars on highways. To build it, he poached key employees from OpenAI.

In a recent interview, Altman declined to talk specifically about Musk, saying only that Musk's split with OpenAI was one of many schisms the company has had over the years.

"There is disagreement, mistrust, and egos," Altman said. "The fiercest battles are between the people who are closest."

After ChatGPT launched in November, Musk became increasingly critical of OpenAI. "We don't want this to become a profit-maximizing devil out of hell," he said.

Musk has repeatedly insisted that AI is dangerous, even as he accelerates his own efforts to build it. At a Tesla investor event last month, he called on regulators to protect society from AI, even though his car company has used AI systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.

That same day, Musk suggested in a tweet that Twitter would use its own data to train technology along the lines of ChatGPT. Twitter has hired two researchers from DeepMind, according to two people familiar with the hires. The Information and Insider previously reported details of the hires and Twitter's AI efforts.

In an interview last week with the Fox News host Tucker Carlson, Musk said OpenAI no longer served as a check on the power of tech giants. He wanted to build TruthGPT, he said, a "maximum truth-seeking AI that seeks to understand the nature of the universe."

Last month, Musk registered X.AI. The startup is incorporated in Nevada, according to registration documents, which also list Musk and his financial manager, Jared Birchall, as the company's directors. The documents were previously reported by The Wall Street Journal.

Experts who have discussed AI with Musk believe he is sincerely concerned about the technology's dangers. Others said his stance was shaped by other motives, most notably an effort to promote and profit from his own companies.

"He says robots are going to kill us?" said Ryan Calo, a professor at the University of Washington School of Law, who has attended AI events alongside Musk. "The cars his company makes are already killing people."
