ChatGPT can help doctors

May 08, 2023

ChatGPT and similar language-processing tools play a significant role in the health sector: they serve as brainstorming tools, give clinicians a guard against mistakes, and relieve some of the burden of paperwork, which eases burnout and allows more face time with patients. For patients, these tools provide more than a simple online search and explain conditions and treatments in language non-experts can understand.

Microsoft’s BioGPT and Med-PaLM, from researchers at Google and DeepMind, have achieved high marks on a range of benchmark medical tasks designed to test their prowess. However, both tools can present fabricated or hallucinated information in a superficially fluent way, a limitation their developers warn against.

Heather Mattie, a lecturer in public health at Harvard University, offers an example that supports these concerns. She asked ChatGPT for a summary of how modelling social connections has been used to study HIV, a topic she knows well. The response touched on subjects outside her expertise, and she could no longer tell whether it was factual. She was left wondering how ChatGPT reconciles two completely different or opposing conclusions drawn from medical papers, which underlines the need to always cross-check chatbot responses.

In addition, Mattie is worried about how ChatGPT handles diagnostic tools for cardiovascular disease and intensive-care injury scoring, both of which have track records of race and gender bias. She cautions against depending on ChatGPT for clinical matters because it sometimes fabricates facts and does not make clear how current the information it draws on is.

Trishan Panch, a primary care physician, recounted during a discussion panel at Harvard on the potential of AI in medicine how the chatbot gave a wrong diagnosis that many physicians initially believed to be correct, until another physician in their group chat analysed the result and set them straight. This shows that AI-generated text can influence humans in subtle ways, as a study published in January reported. That study concluded that the chatbot is an inconsistent moral adviser that can sway human decision-making even when people know the advice comes from AI software.

Robert Pearl, a professor at Stanford medical school and former CEO of Kaiser Permanente, a US medical group with more than 12 million patients, believes ChatGPT will bring positive changes to the health sector in the future. He argues that, for now, the chatbot is best likened to a medical student: capable of providing care to patients and pitching in, but with everything it does reviewed by an attending physician.