
Google Sidelines Engineer Who Claims Its A.I. Is Sentient

San Francisco — Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, surfacing yet another controversy over the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, said in an interview that he was placed on leave on Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

Mr. Lemoine had been sparring for months with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most AI experts believe the industry is a very long way from computing sentience.

While some AI researchers have long made optimistic claims about these technologies soon reaching sentience, many others are quick to dismiss such claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco.

While chasing the AI vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues’ published work. And the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow on the group.

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an AI researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?’” In the months before he was placed on leave, the company had suggested he take a mental health leave.

Yann LeCun, the head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.

Google’s technology is what scientists call a neural network: a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
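Google’s systems are not public, but the underlying idea is general. As a rough sketch — assuming the open-source PyTorch library and random stand-in tensors rather than real photos, and with no relation to Google’s actual code — a small neural network can be trained to attach a “cat or not” label to images like this:

```python
# Illustrative sketch only: a tiny neural network that learns a "cat"/"not cat"
# label by finding patterns in labeled examples. Random tensors stand in for
# real photos; this is not Google's system.
import torch
import torch.nn as nn

# Stand-in data: 64 fake 32x32 grayscale "photos" with 0/1 labels.
images = torch.randn(64, 32 * 32)
labels = torch.randint(0, 2, (64,)).float()

# A small feed-forward network: pixel values in, one "cat score" out.
model = nn.Sequential(
    nn.Linear(32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Training loop: repeatedly nudge the network's parameters so its predictions
# match the labels -- "learning a skill by analyzing data."
for step in range(100):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
```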

Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks: they can summarize articles, answer questions, generate tweets and even write blog posts.
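LaMDA itself is not publicly available, but openly released large language models can be applied to the same kinds of tasks. A minimal sketch, assuming the Hugging Face `transformers` library and its default pretrained models (not anything Google ships):

```python
# Apply an off-the-shelf large language model to several tasks.
from transformers import pipeline

article = (
    "Google placed an engineer on paid leave after he claimed that the "
    "company's conversational language model, LaMDA, was sentient. "
    "Most AI experts believe the industry is far from computing sentience."
)

# Summarize an article.
summarizer = pipeline("summarization")
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])

# Answer a question about the same text.
qa = pipeline("question-answering")
print(qa(question="What is LaMDA?", context=article)["answer"])

# Generate free-form text, e.g. the opening of a blog post.
generator = pipeline("text-generation")
print(generator("Large language models can", max_length=40)[0]["generated_text"])
```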

But they are extremely flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. They are very good at recreating patterns they have seen in the past, but they cannot reason like a human.
