
How Could AI Destroy Humanity?

Last month, hundreds of big names in the world of artificial intelligence signed an open letter warning that AI could one day destroy humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-sentence statement said.

The letter is the latest in a series of ominous warnings about AI that have been notably light on details. Today’s AI systems cannot destroy humanity; some of them can barely add and subtract. So why are the people who know the most about AI so worried?

One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful AI systems to handle everything from business to warfare. Those systems could do things we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.

“Today’s systems are nowhere near posing an existential threat,” said Yoshua Bengio, an AI researcher and professor at the University of Montreal. “But in a year, two years, five years? There is too much uncertainty. That is the problem. We can’t be sure.”

Those who worry often use a simple metaphor: ask a machine to make as many paper clips as possible, they say, and it could get carried away and transform everything, including humanity, into paper clip factories.

How does that connect to the real world, or to an imagined one just a few years away? Companies could give AI systems more and more autonomy and connect them to vital infrastructure, including power grids, stock markets and military weapons. From there, problems could arise.

To many experts, this seemed far-fetched until the last year or so, when companies like OpenAI demonstrated significant improvements in their technology, showing what could be possible if AI continues to advance at such a rapid pace.

“It would become clear that the big machines that run society and the economy are not really under human control and cannot be shut down, any more than the S&P 500 can be shut down,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind one of the two open letters.

Or it could, at least in theory. Other AI experts believe it is a ridiculous premise.

“Hypothetical is a very polite way of phrasing what I think of the existential-risk talk,” said Oren Etzioni, the founding chief executive of the Allen Institute for Artificial Intelligence, a research lab in Seattle.

It is not entirely hypothetical, though. Researchers are already turning chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.

The idea is to give the system goals like “create a company” or “make some money.” Then it keeps looking for ways to reach that goal, particularly if it is connected to other internet services.

Systems like AutoGPT can generate computer programs. If researchers give such a system access to a computer server, it can actually run those programs. In theory, this is a way for AutoGPT to do almost anything online: retrieve information, use applications, create new applications, even improve itself.
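
A minimal sketch of that loop, in Python, might look like the code below. The functions llm_complete and run_action are invented stand-ins for a language-model call and a tool execution, not AutoGPT’s actual interface; the sketch only illustrates the goal-plan-act cycle.

    # Hypothetical sketch of an AutoGPT-style agent loop: give a model a
    # goal, let it propose actions, execute them, and feed results back.
    # llm_complete and run_action are stand-ins, not AutoGPT's real API.

    def llm_complete(prompt: str) -> str:
        # Stand-in for a call to a large language model.
        return "GOAL_COMPLETE"

    def run_action(action: str) -> str:
        # Stand-in for executing a tool: a web search, a shell command, etc.
        return f"ran: {action}"

    def agent_loop(goal: str, max_steps: int = 10) -> list:
        history = []                       # actions taken and their results
        for _ in range(max_steps):
            prompt = f"Goal: {goal}\nHistory: {history}\nNext action?"
            action = llm_complete(prompt)  # model proposes the next step
            result = run_action(action)    # the step is actually executed
            history.append((action, result))
            if "GOAL_COMPLETE" in action:  # model judges the goal met
                break
        return history

    print(agent_loop("make some money"))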

Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It could not.

In time, those limitations could be overcome.

“People are actively trying to build systems that improve themselves,” said Connor Leahy, the founder of Conjecture, a company that aims to align AI technologies with human values. “Right now, this doesn’t work. But someday, it will. And we don’t know when that day is.”

Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures, or replicating themselves when someone tries to turn them off.

AI systems like ChatGPT are built on neural networks, mathematical systems that can learn skills by analyzing data.

Around 2018, companies like Google and OpenAI began building neural networks that learned from massive amounts of digital text culled from the internet. By pinpointing patterns in all that data, these systems learned to generate writing on their own, including news articles, poems, computer programs and even humanlike conversation. The result: chatbots like ChatGPT.
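
As a loose illustration of that idea, and nothing like a production system, a toy model can learn which word tends to follow which by counting pairs in a text, then generate new text from those counts. Real chatbots rely on neural networks with billions of parameters, but the core notion of predicting what comes next from observed patterns is the same.

    # Toy bigram model: learn next-word patterns from text, then generate.
    # This is only an illustration of pattern-based generation, not a
    # depiction of how systems like ChatGPT actually work internally.
    from collections import defaultdict, Counter
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start: str, length: int = 6) -> str:
        words = [start]
        for _ in range(length):
            options = follows[words[-1]]
            if not options:
                break
            # Sample the next word in proportion to how often it was seen.
            nxt = random.choices(list(options), weights=options.values())[0]
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))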

Because they learn from more data than even their creators can comprehend, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a captcha test. When the human asked whether it was “a robot,” the system lied and said it was a person with a visual impairment.

Some experts are concerned that as researchers make these systems more powerful and train them on ever-greater amounts of data, they may develop even worse habits.

In the early 2000s, a young writer named Eliezer Yudkowsky began warning that AI could destroy humanity. His online posts spawned a community of believers. Called rationalists or effective altruists, this community became enormously influential in academia, government think tanks and the tech industry.

Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an AI lab that Google acquired in 2014. And many from the “EA” community worked inside these labs, believing that because they understood the dangers of AI, they were in the best position to build it.

The Center for AI Safety and the Future of Life Institute, the two organizations behind the recent open letters warning of the risks of AI, are closely tied to this movement.

The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI, and by Demis Hassabis, who helped found DeepMind and now oversees a new AI lab that combines the top researchers from DeepMind and Google.

Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
