The Chatbot Will See You Now

Teddy Rosenbluth

Susan Sheridan had heard of ChatGPT but had never used it, so the first question she asked the artificial intelligence chatbot was a bit garbled: “Facial droop, facial pain and dental work.” She had turned to ChatGPT in a moment of desperation. The right side of her face was sagging, she tripped over words as she spoke, and her head hurt so much that she couldn’t rest it on the pillow.

A day earlier, when her husband first noticed the drooping, the couple drove three hours to an emergency room, only for the doctor to send her home after labeling her symptoms as benign. ChatGPT disagreed. One potential explanation for her symptoms, the chatbot told her, was Bell’s palsy, which needed urgent treatment to avoid lasting damage.

She made another trip to the emergency room, where a doctor confirmed the chatbot’s suspicions and prescribed steroids and antivirals. Her symptoms mostly resolved.

“I don’t want to replace doctors — I believe in the doctor-patient relationship, I believe in the health care system,” said Sheridan, 64, co-founder of a patient safety advocacy organization. “But it fails sometimes. When medicine was failing us, we turned to ChatGPT.”

Increasingly, people like Sheridan are reaching for AI chatbots to explain new symptoms, weigh treatment options and vet their doctors’ conclusions.

About 1 in 6 adults — and about one-quarter of adults younger than 30 — use chatbots to find medical advice and information at least once a month, according to a recent survey from KFF, a nonprofit health policy research organization.

Supporters hope AI will empower patients by giving them more comprehensive medical explanations than a simple Google search might. “Google gives you access to information. AI gives access to clinical thought,” said Dave deBronkart, a patient advocate and blogger.

Researchers know very little about how patients are using generative AI to answer their medical questions. Studies on the topic have largely focused on hypothetical medical cases.

Dr. Ateev Mehrotra, a public health researcher and professor at Brown University who studies how patients use AI chatbots, said he doesn’t think experts have grasped just how many people are already using the technology to answer health questions.

“We’ve always thought that this is something coming down the pipe, but isn’t being used in big numbers right now,” he said, adding that he was “quite struck by such a high rate” in the KFF survey.

Dr. Benjamin Tolchin, a bioethicist and neurologist at the Yale School of Medicine, said that ever since ChatGPT became publicly available in 2022, a handful of patients each month have told him that they used the chatbot to research their conditions.

For the most part, he thinks the tool has helped inform patients about their condition and available treatment options. But he has also seen the chatbot’s drawbacks.

The problem isn’t just that AI may provide wrong or incomplete medical information; anyone searching Google already faces that issue. It’s that the chatbots answer questions with an air of authority, which may give patients false confidence in their advice, Tolchin said.

“It can be a lot more persuasive and appear a lot more complete to patients than the spotty fragments that they may find by Googling,” he said.

One of his patients, for example, had used a chatbot to find treatment options for seizures that were no longer responding to medications. Among the suggested treatments was stem cell transplantation — an option that the patient hadn’t yet considered.

Tolchin had to explain to a hopeful family that stem cell transplantation was an entirely experimental treatment and not an option that any neurologist would recommend, a distinction the chatbot hadn’t mentioned.

“They had the impression that they had a pretty complete picture of all the different treatment options,” he said. “But I think it may have given a somewhat misleading picture.”

In the KFF survey, fewer than 10% of respondents said they felt “very confident” that they could tell the difference between true and false information generated by AI chatbots.

DeBronkart said that, as with any tool, there is a right way to use ChatGPT for health information: questions need to be crafted carefully, and answers reviewed with skepticism.

“These are powerful tools to help advance your thinking,” he said. “We don’t want people to think that this gives instant answers so they don’t have to think anymore.”
