Technology

‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead

Geoffrey Hinton was a pioneer of artificial intelligence. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.

But on Monday he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he could speak freely about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said last week during a lengthy interview in the dining room of his home in Toronto, not far from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. pioneer to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, the tech industry’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “serious risks to society and humanity.”

A few days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters, saying he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company of his resignation last month, and on Thursday he spoke by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Jeff Dean, Google’s chief scientist, said in a statement that the company remains committed to a responsible approach to A.I. and is continually learning to understand emerging risks while also innovating boldly.

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network: a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but he left the university for Canada because he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield, what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. Their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s what scares me.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google’s core business, Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and that the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will eventually upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he worries that future versions of the technology will pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons, those killer robots, become reality.

“The idea that this stuff could actually get smarter than people, a few people believed that,” he said. “But most people thought it was way off. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google, Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope, he said, is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.
