What Exactly Are the Dangers Posed by A.I.?
In late March, more than 1,000 technology leaders, researchers and other experts working in and around artificial intelligence signed an open letter warning that AI technology poses "serious risks to society and humanity."
The group, which includes Elon Musk, Tesla's chief executive and Twitter's owner, urged AI labs to pause development of their most powerful systems for six months so that researchers could better understand the dangers behind the technology.
“Powerful AI systems should only be developed if we are confident that their effects are positive and the risks are manageable,” the letter said.
The letter, which now has over 27,000 signatures, was brief. Its language was broad. And some of the names behind it seemed to have a conflicted relationship with AI. Musk, for example, is building his own AI startup, and he is one of the primary donors to the organization that wrote the letter.
But the letter represents a growing concern among AI experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco startup OpenAI, could cause harm to society. They believe future systems will be even more dangerous.
Some of the risks have already arrived. Others will not for months or years. And some are purely hypothetical.
"So we need to be very careful," said Yoshua Bengio, a professor and AI researcher at the University of Montreal.
Why are they worried?
Dr. Bengio is perhaps the most important person to have signed the letter.
Working with two other academics, Geoffrey Hinton, who until recently was a researcher at Google, and Yann LeCun, chief AI scientist at Meta, the owner of Facebook, Dr. Bengio has spent the past four decades developing the technology that drives systems like GPT-4. In 2018, the three researchers received the Turing Award, often called "the Nobel Prize of computing," for their work on neural networks.
A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft and OpenAI began building neural networks that learned from vast amounts of digital text. These are called large language models, or LLMs.
By pinpointing patterns in that text, LLMs learn to generate text of their own, including blog posts, poems and computer programs. They can even carry on a conversation.
This technology helps computer programmers, writers, and other workers generate ideas and get things done faster. But Dr. Bengio and other experts also warn that LLMs can learn unwanted and unexpected behaviors.
These systems can generate untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called "hallucination."
Companies are grappling with these issues. But experts like Dr. Bengio worry that new risks will emerge as researchers make these systems more powerful.
Short-term risk: disinformation
Because these systems deliver information with what seems like complete confidence, it can be a struggle to separate truth from fiction when using them. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.
"There is no guarantee that these systems will be correct on any task you give them," said Subbarao Kambhampati, a professor of computer science at Arizona State University.
Experts also worry that people will misuse these systems to spread disinformation. Because the systems can converse in ways that feel human, they can be surprisingly persuasive.
"We now have systems that can interact with us through natural language, and we can't distinguish the real from the fake," Dr. Bengio said.
Medium-term risk: unemployment
Experts worry that the new AI could take jobs. Right now, technologies like GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including people who moderate content on the internet.
They cannot yet duplicate the work of lawyers, accountants or doctors. But they could replace paralegals, personal assistants and translators.
A paper written by OpenAI researchers estimated that 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by LLMs, and that 19 percent of workers could see at least 50 percent of their tasks affected.
"There is an indication that rote jobs will go away," said Oren Etzioni, the founding chief executive of the Allen Institute for Artificial Intelligence, a lab in Seattle.
Long-term risk: loss of control
Some of the people who signed the letter believe artificial intelligence could slip outside our control, or even destroy humanity.
The letter was written by leaders of the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because AI systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.
They worry that as companies plug LLMs into other internet services, these systems could gain unanticipated powers because they could write their own computer code. Allowing powerful AI systems to run their own code, they say, creates new risks for developers.
"If you take a less probable scenario, where things really take off, where there is no real governance and these systems turn out to be more powerful than we thought they would be, then things get really, really crazy," said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a founder of the Future of Life Institute.
Dr. Etzioni said talk of existential risk was hypothetical. But he said other risks, most notably disinformation, were no longer speculation.
"Now we have some real problems," he said. "They are the real deal. They need to be treated responsibly. They may require regulation and legislation."