Several orthopaedic experts, including UPMC Orthopaedic Care physicians Jonathan D. Hughes, MD, and Volker Musahl, MD, collaborated to conduct a study examining ChatGPT as a potential supplementary tool for providing orthopaedic information.
Large language models (LLMs), such as the generative pre-trained transformer behind ChatGPT, have become some of the most popular artificial intelligence (AI) tools available. Despite their potential, LLMs have also generated controversy, as scientists have expressed concerns about possible threats to scientific transparency as well as misinformation.
The purpose of this study was to investigate the potential use of LLMs in orthopaedics by presenting queries pertinent to anterior cruciate ligament (ACL) surgery to ChatGPT (specifically, the GPT-4 model of March 14, 2023). Additionally, the study aimed to evaluate the depth of the LLM's knowledge and investigate its adaptability to different user groups. It was hypothesized that ChatGPT would be able to adapt its responses to different target groups due to its strong language understanding and processing capabilities.
In this study, ChatGPT was presented with 20 questions, and responses were requested for two distinct target audiences: patients and non-orthopaedic medical doctors. Two board-certified orthopaedic sports medicine surgeons and two expert orthopaedic sports medicine surgeons independently evaluated the responses generated by ChatGPT, and mean correctness, completeness, and adaptability to the two target audiences were determined. A three-point response scale allowed for nuanced assessment.
ChatGPT exhibited fair accuracy, with average correctness scores of 1.69 and 1.66 (on a scale where 0 = incorrect, 1 = partially correct, and 2 = correct) for patients and medical doctors, respectively. Three of the 20 questions (15.0%) were deemed incorrect by at least one of the four orthopaedic sports medicine surgeon assessors. Overall completeness was calculated to be 1.51 and 1.64 for patients and medical doctors, respectively, while overall adaptability was determined to be 1.75 and 1.73 for patients and doctors, respectively.
Overall, ChatGPT was successful in generating correct responses in approximately 65% of the cases related to ACL surgery. The findings of this study imply that LLMs offer potential as a supplementary tool for acquiring orthopaedic knowledge. However, although ChatGPT can provide guidance and effectively adapt to diverse target audiences, it cannot supplant the expertise of orthopaedic sports medicine surgeons in diagnostic and treatment planning endeavors due to its limited understanding of orthopaedic domains and its potential for erroneous responses.
ChatGPT should be used with caution for questions related to the knee and ACL. When in doubt, a sports medicine expert should be consulted.
Read more about this study on PubMed.
Other study authors include:
Collaborators not affiliated with the University of Pittsburgh:
Janina Kaarre, PhD
Sahlgrenska Academy, University of Gothenburg
Robert Feldt
Chalmers University of Technology
Bálint Zsidai, MD, PhD
Sahlgrenska Academy, University of Gothenburg
Kristian Samuelsson, MD, PhD, MSc
Sahlgrenska University Hospital