
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

A group of industry leaders warned on Tuesday that the artificial intelligence technology they are building could one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a one-sentence statement released by the nonprofit Center for AI Safety. The open letter was signed by more than 350 executives, researchers and engineers working in AI.

The signatories included top executives from three of the leading AI companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern AI movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s AI research efforts, had not signed as of Tuesday.)

The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advances in so-called large language models, the type of AI system used by ChatGPT and other chatbots, have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, AI could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by many industry leaders, putting them in the unusual position of arguing that a technology they are building, and in many cases racing to build faster than their competitors, poses grave risks and should be regulated more tightly.

This month, Altman, Hassabis and Amodei met with President Biden and Vice President Kamala Harris to talk about AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention, and he called for regulation of AI for its potential harms.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns, but only in private, about the risks of the technology they were developing.

“There’s a very common misconception, even in the AI community, that there are only a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

Some skeptics argue that AI technology is still too immature to pose an existential threat. When it comes to today’s AI systems, they worry more about short-term problems, such as biased or incorrect responses, than about longer-term dangers.

But others argue that AI is improving so rapidly that it has already surpassed human-level performance in some areas and will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or AGI, a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

In a blog post last week, Altman and two other OpenAI executives proposed several ways that powerful AI systems could be managed responsibly. They called for cooperation among the leading AI makers, more technical research into large language models and the formation of an international AI safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Altman also expressed support for a rule requiring manufacturers of large, cutting-edge AI models to register for government-issued licenses.

In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause on the development of the largest AI models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”

That letter, which was organized by the Future of Life Institute, another AI-focused nonprofit, was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading AI labs.

The brevity of the Center for AI Safety’s new statement (just 22 words in all) was meant to unite AI experts who might disagree about the nature of specific risks, or about the steps to prevent those risks from occurring, but who share general concerns about powerful AI systems, Hendrycks said.

“We didn’t want to push for a very large menu of 30 potential interventions,” Hendrycks said. “When that happens, it dilutes the message.”

He said the statement was initially shared with a few high-profile AI experts, including Hinton, who quit his job at Google this month so that he could speak more freely about the potential harms of artificial intelligence. From there, it made its way to several of the major AI labs, where some employees signed on.

The urgency of AI leaders’ warnings has increased as millions of people have turned to AI chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a rapid clip.

“I think if this technology goes wrong, it can go quite wrong,” Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”
