A.I. Is Not Sentient. Why Do People Say It Is?

In the mid-1960s, Joseph Weizenbaum, a researcher at the Massachusetts Institute of Technology, built an automated psychotherapist he called Eliza. The chatbot was simple. When you typed a thought onto a computer screen, it asked you to expand on that thought, or it simply repeated your words back to you in the form of a question.

Even when Dr. Weizenbaum selected a conversation for the academic paper he published on the technology, it looked like this, with Eliza responding in capital letters:

Men are all alike.

IN WHAT WAY

They're always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE
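For readers curious about the mechanics, here is a minimal sketch in Python of the kind of pattern-and-echo trick Eliza relied on. It is not Weizenbaum's program (the original was written in MAD-SLIP and used a much richer keyword-ranking script); the rules, function names, and canned replies here are illustrative assumptions only.

import re

# Illustrative pronoun swaps used to reflect a statement back at the speaker.
# These rules are an assumption for this sketch, not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(statement):
    # Swap first- and second-person words so the input reads back at the speaker.
    words = re.findall(r"[a-z']+", statement.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    # Echo key phrases back as a question, or simply ask the user to go on.
    lowered = statement.lower()
    if "always" in lowered:
        return "CAN YOU THINK OF A SPECIFIC EXAMPLE"
    if any(w in lowered.split() for w in ("i", "my", "me")):
        return reflect(statement).upper()
    return "PLEASE GO ON"

print(respond("Well, my boyfriend made me come here."))
# Prints: WELL YOUR BOYFRIEND MADE YOU COME HERE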

But much to Dr. Weizenbaum’s surprise, people treated Eliza as if it were human. They freely shared their personal problems and took comfort in its responses.

“I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines,” he later wrote. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

We humans are susceptible to these feelings. When a dog, cat or other animal exhibits even a small amount of humanlike behavior, we tend to assume it is more like us than it really is. Much the same thing happens when we see hints of human behavior in a machine.

Scientists now call this the Eliza effect.

Much the same thing is happening with today’s far more powerful technology. A few months after the release of GPT-3, I received an email from the inventor and entrepreneur Philip Bosua. The subject line: “God is a machine.”

“There is no question in my mind that GPT-3 has emerged as sentient,” it read. “We all knew this would happen in the future, but it seems like that future is now.”
