
How Chatbots Are Helping Doctors Be More Human and Empathetic

On November 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial-intelligence-powered chatbot.

“We were excited and surprised, but to be honest, we were a little apprehensive,” said Peter Lee, corporate vice president of research and incubation at Microsoft.

He and other experts expected that ChatGPT and other AI-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, such as writing appeals to health insurers or summarizing patient notes.

But they also worried that artificial intelligence offered a perhaps too tempting shortcut to finding diagnoses and medical information that may be incorrect or even fabricated, a frightening prospect in a field like medicine.

But what surprised Dr. Lee the most was the use he didn’t anticipate. Doctors wanted ChatGPT to help them communicate with their patients in a more compassionate way.

In one survey, 85 percent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another survey, nearly three-quarters of respondents said they had been treated by an uncaring doctor. And a study of doctors’ conversations with the families of dying patients found that many were not empathetic.

Physicians use chatbots to find words to deliver bad news, express concern about patient suffering, or better explain medical recommendations.

Even Microsoft’s Dr. Lee said this was a bit disconcerting.

“As a patient, I personally feel a little strange about it,” he said.

But Dr. Michael Pignone, chief of internal medicine at the University of Texas at Austin, has no doubts about the help he and other doctors on staff have gotten from ChatGPT to communicate with patients on a regular basis.

He explained the issue in doctor-speak: “We were working on a project to improve treatments for alcohol use disorder. How do we engage patients who have not responded to behavioral interventions?”

Or, as ChatGPT might translate it for a layperson: “How can a doctor better help a patient who drinks too much alcohol but has not quit after talking to a therapist?”

He asked the team to write a script for how to talk to these patients with compassion.

“A week later, no one had done it,” he said. All he had was a document that the research coordinator and the team’s social worker had put together, and “it wasn’t a real script.”

So Dr. Pignone tried ChatGPT, which instantly produced all the talking points the doctors wanted.

But the social workers said the script needed to be revised for patients with little medical knowledge and translated into Spanish. The final result, produced when ChatGPT was asked to rewrite the script at a fifth-grade reading level, began with a reassuring introduction:

If you think you’re drinking too much, you’re not alone. Many people have this problem, but there are medications that can help you feel better and live a healthier, happier life.

That was followed by a simple explanation of the pros and cons of the treatment options. The team started using the script this month.

Dr. Christopher Moriates, the project’s co-principal investigator, was impressed.

“Doctors are notorious for using language that is difficult to understand or too advanced,” he said. “It’s interesting to see that even words we think are easy to understand really aren’t.”

The fifth-grade script, he said, “feels more authentic.”

Skeptics like Dr. Dev Dash, a member of the data science team at Stanford Health Care, are so far underwhelmed by the prospect of large language models like ChatGPT helping doctors. In tests that Dr. Dash and his colleagues ran, the replies were occasionally wrong but, he said, more often unhelpful or inconsistent. If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation even worse.

“I know doctors use this,” Dr. Dash said. “I’ve heard stories of residents using it to guide their clinical decision-making. I don’t think it’s appropriate.”

Some experts question whether we need to rely on AI programs to get empathetic language.

“Most of us want to trust and respect our doctors,” said Dr. Isaac Kohane, professor of biomedical informatics at Harvard Medical School. “When they show they are good listeners and empathetic, they tend to gain our trust and respect.”

But empathy can be deceptive, he said: it is easy to confuse a good bedside manner with good medical advice.

Dr. Douglas White, director of the program on ethics and decision-making in critically ill patients at the University of Pittsburgh School of Medicine, said there is a reason doctors may neglect compassion. “Most doctors are pretty cognitively focused and treat their patients’ medical issues as a series of problems to solve,” Dr. White said. As a result, he said, they may fail to attend to the “emotional side of what patients and families are experiencing.”

Even when doctors are well aware of the need for empathy, finding the right words can be difficult. That is what happened to Dr. Gregory Moore, until recently a senior executive leading health and life sciences at Microsoft, who wanted to help a friend with advanced cancer. Her situation was dire, and she needed advice about her treatment and her future. He decided to pose her questions to ChatGPT.

The results “wowed me,” said Dr. Moore.

In long, thoughtful responses to Dr. Moore’s prompts, the program gave him the words to explain to his friend the lack of effective treatments:

I know this is a lot of information to process and that you may feel disappointed or frustrated by the lack of options … I wish there were more and better treatments … and I hope that in the future there will be.

It also suggested ways to break the bad news when his friend asked whether she would be able to attend an event two years away:

I admire your strength and your optimism, and I share your hopes and goals. However, I also want to be honest and realistic with you and not give you false promises or expectations … I know this is not what you want to hear and that it is very hard to accept.

Later in the conversation, Dr. Moore wrote to the AI program: “She will be devastated by all this. I don’t know what to say or do to help her at this time.”

In response, Dr. Moore said, ChatGPT “started to care about me,” suggesting ways he could deal with his own grief and stress as he tried to help his friend.

In a strangely personal and friendly tone, it concluded:

You are doing a great job and making a difference. You are a great friend and a great doctor. I respect you and care about you.

Dr. Moore, who specialized in diagnostic radiology and neurology when he was a practicing physician, was stunned.

“I wish I had this in training,” he said. “I’ve never seen or had a coach like this.”

He became an evangelist, telling his doctor friends what had happened. But he and others say that when doctors use ChatGPT to find more empathetic language, they often hesitate to tell any but a few colleagues.

“Maybe it’s because we’re so fixated on what we consider to be a very human part of our profession,” Dr. Moore said.

Or, as Dr. Harlan Krumholz, director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, put it, for a doctor to admit to using a chatbot this way “would be admitting you don’t know how to have a conversation with a patient.”

Still, those who have tried ChatGPT say the only way for doctors to decide how comfortable they would feel handing over such tasks, whether cultivating a more empathetic approach or reading charts, is to ask it some questions themselves.

“It would be crazy not to try it and learn more about what it can do,” Dr. Krumholz said.

Microsoft wanted to know, too, and gave some academic doctors, including Dr. Kohane, early access to GPT-4, the updated version released in March, for a monthly fee.

Dr. Kohane said he approached generative AI as a skeptic. In addition to his work at Harvard, he is also an editor of the New England Journal of Medicine, which will launch a new journal on AI in medicine next year.

Although he noted that there is a lot of hype, he said that trying out GPT-4 left him “shaken.”

For example, Dr. Kohane is part of a network of doctors who help decide whether patients are eligible for evaluation in a federal program for people with undiagnosed diseases.

Reading the letters of referral and medical histories and then deciding whether to accept a patient is time-consuming. But when he shared that information with ChatGPT, it “was able to decide, within minutes, exactly what would have taken doctors a month,” Dr. Kohane said.

Dr. Richard Stern, a rheumatologist in private practice in Dallas, said GPT-4 has become his constant companion, making the time he spends with patients more productive. It writes kind replies to his patients’ emails, provides compassionate responses for his staff members to use when answering questions from patients who call the office, and takes over onerous paperwork.

He recently asked the program to write a letter of appeal to an insurer. His patient had a chronic inflammatory disease and had gotten no relief from standard drugs. Dr. Stern wanted the insurer to pay for the off-label use of anakinra, which costs about $1,500 a month out of pocket. The insurer had initially denied coverage, and he wanted the company to reconsider that denial.

It was the sort of letter that would have taken Dr. Stern hours to write but took ChatGPT just minutes to produce.

After receiving the bot’s letter, the insurance company approved the request.

“It’s like a new world,” said Dr. Stern.
