
Capabilities of GPT-4 on Medical Challenge Problems

About

Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation across various domains, including medicine. We present a comprehensive evaluation of GPT-4, a state-of-the-art LLM, on medical competency examinations and benchmark datasets. GPT-4 is a general-purpose model that is not specialized for medical problems through training or engineered to solve clinical tasks. Our analysis covers two sets of official practice materials for the USMLE, a three-step examination program used to assess clinical competency and grant licensure in the United States. We also evaluate performance on the MultiMedQA suite of benchmark datasets. Beyond measuring model performance, experiments were conducted to investigate the influence of test questions containing both text and images on model performance, probe for memorization of content during training, and study probability calibration, which is of critical importance in high-stakes applications like medicine. Our results show that GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B). In addition, GPT-4 is significantly better calibrated than GPT-3.5, demonstrating a much-improved ability to predict the likelihood that its answers are correct. We also explore the behavior of the model qualitatively through a case study that shows the ability of GPT-4 to explain medical reasoning, personalize explanations to students, and interactively craft new counterfactual scenarios around a medical case. Implications of the findings are discussed for potential uses of GPT-4 in medical education, assessment, and clinical practice, with appropriate attention to challenges of accuracy and safety.
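The abstract highlights probability calibration — how closely a model's stated confidence tracks its actual accuracy. The paper does not publish its calibration code; below is a minimal, hypothetical sketch of expected calibration error (ECE), a standard metric of this kind, with illustrative data only.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - mean confidence| over equal-width
    confidence bins, weighted by each bin's share of predictions.
    `confidences` are probabilities in [0, 1]; `correct` is 0/1."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # assign each prediction to the bin (lo, hi]; put 0.0 in the first bin
        in_bin = [i for i, c in enumerate(confidences)
                  if (lo < c <= hi) or (b == 0 and c == 0.0)]
        if not in_bin:
            continue
        acc = sum(correct[i] for i in in_bin) / len(in_bin)
        conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / n) * abs(acc - conf)
    return ece
```

A well-calibrated model (e.g. 80% confidence and 80% accuracy) scores near zero; an overconfident one (100% confidence, 50% accuracy) scores high. The paper's finding is that GPT-4 sits much closer to the first case than GPT-3.5 does.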

Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, Eric Horvitz • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Medical Question Answering | MedMCQA (test) | Accuracy | 72.3 | 134
Question Answering | MedQA-USMLE (test) | Accuracy | 86.1 | 101
Question Answering | PubMedQA (test) | Accuracy | 81 | 81
Question Answering | PubMedQA PQA-L (test) | Accuracy | 75.2 | 25
Multiple-choice Question Answering | MMLU (Massive Multitask Language Understanding) 1.0 (test) | Accuracy (Clinical knowledge) | 88.7 | 16
Multiple-choice Question Answering | MedMCQA (test) | Accuracy | 73.7 | 6
Medical Question Answering | MedQuAD-style Complete Benchmark | MedQuAD Score | 71.07 | 5
