LLMs show potential as aids for clinicians during physical exams

Physical examinations are important diagnostic tools that can reveal critical insights into a patient's health, but complex conditions may be overlooked if a clinician lacks specialized training in that area. While previous research has investigated using large language models (LLMs) as tools to aid in providing diagnoses, their use in physical exams remains largely unexplored. To address this gap, researchers from Mass General Brigham prompted the LLM GPT-4 to recommend physical exam instructions based on patient symptoms. The study suggests the potential of using LLMs as aids for clinicians during physical exams. Results are published in the Journal of Medical Artificial Intelligence.

“Medical professionals early in their careers may face challenges in performing the appropriate patient-tailored physical exam because of limited experience or other context-dependent factors, such as lower-resourced settings. LLMs have the potential to serve as a bridge, supporting physicians and other medical professionals with physical exam techniques and enhancing their diagnostic abilities at the point of care.”

Marc D. Succi, MD, senior author, strategic innovation leader at Mass General Brigham Innovation, associate chair of innovation and commercialization for enterprise radiology, and executive director of the Medically Engineered Solutions in Healthcare (MESH) Incubator at Mass General Brigham

Succi and his colleagues prompted GPT-4 to recommend physical exam instructions based on the patient’s primary symptom, for example, a painful hip. GPT-4’s responses were then evaluated by three attending physicians on a scale of 1 to 5 points based on accuracy, comprehensiveness, readability and overall quality. They found that GPT-4 performed well at providing instructions, scoring at least 80% of the possible points. The highest score was for “Leg Pain Upon Exertion” and the lowest was for “Lower Abdominal Pain.”
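The article does not specify how the physicians' ratings were aggregated beyond the 1-to-5 scale per criterion. As an illustration of how a "percentage of possible points" figure like the 80% threshold can be computed, here is a minimal sketch; the function name, criteria labels, and aggregation scheme (summing all raters' scores across all criteria) are assumptions, not details from the study.

```python
# Hypothetical scoring sketch (assumed aggregation): each of three
# physicians rates a GPT-4 response on four criteria (accuracy,
# comprehensiveness, readability, overall quality), 1-5 points each.
# The percentage of possible points is total awarded / maximum attainable.

CRITERIA = ["accuracy", "comprehensiveness", "readability", "overall_quality"]
MAX_POINTS_PER_CRITERION = 5

def percent_of_possible(ratings):
    """ratings: list of dicts, one per rater, mapping criterion -> score (1-5)."""
    awarded = sum(r[c] for r in ratings for c in CRITERIA)
    possible = len(ratings) * len(CRITERIA) * MAX_POINTS_PER_CRITERION
    return 100 * awarded / possible

# Example: three raters each award 4/5 on every criterion.
example = [{c: 4 for c in CRITERIA} for _ in range(3)]
print(percent_of_possible(example))  # 80.0
```

Under this assumed scheme, a uniform 4-out-of-5 rating from every physician corresponds exactly to the 80% mark the study reports as GPT-4's floor.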

“GPT-4 performed well in many respects, yet its occasional vagueness or omissions in critical areas, like diagnostic specificity, remind us of the necessity of physician judgment to ensure comprehensive patient care,” said lead author Arya Rao, a student researcher in the MESH Incubator and a medical student at Harvard Medical School.

Although GPT-4 provided detailed responses, the researchers found that it occasionally left out key instructions or was overly vague, indicating the need for a human evaluator. According to researchers, the LLM’s strong performance suggests its potential as a tool to help fill gaps in physicians’ knowledge and aid in diagnosing medical conditions in the future.

Journal reference:

Rao, A. S., et al. (2024). A large language model-guided approach to the focused physical exam. Journal of Medical Artificial Intelligence. doi.org/10.21037/jmai-24-275.
