
Towards Expert-Level Medical Question Answering with Large Language Models

About

Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score on US Medical Licensing Examination (USMLE)-style questions, with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19 percentage points and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across the MedMCQA, PubMedQA, and MMLU clinical-topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In a pairwise comparative ranking of 1,066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions designed to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
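
To make the ensemble refinement strategy named above concrete, here is a minimal, hypothetical Python sketch, not the authors' implementation. It assumes a caller-supplied `generate(prompt, temperature)` LLM completion function, an answer format ending in "Answer: X", invented prompt wording, and placeholder sample counts; the paper's actual prompts and hyperparameters may differ. The core idea is two-stage: sample several chain-of-thought answers, then condition the model on those drafts to produce refined answers, and aggregate by plurality vote.

```python
# Illustrative sketch of two-stage ensemble refinement.
# Everything here is a placeholder: `generate` is any LLM completion
# function you supply, and prompts, temperatures, and sample counts
# are assumptions, not the values used for Med-PaLM 2.
from collections import Counter
from typing import Callable

def extract_answer(generation: str) -> str:
    """Assumes each generation ends with a line like 'Answer: C'."""
    return generation.rsplit("Answer:", 1)[-1].strip()

def ensemble_refinement(
    question: str,
    generate: Callable[[str, float], str],  # (prompt, temperature) -> completion
    n_reason: int = 8,    # stage-1 reasoning samples (placeholder count)
    n_refine: int = 16,   # stage-2 refinement samples (placeholder count)
    temperature: float = 0.7,
) -> str:
    # Stage 1: sample several independent chain-of-thought answers.
    reasonings = [generate(question, temperature) for _ in range(n_reason)]

    # Stage 2: condition on the question plus all stage-1 reasonings and
    # ask the model to reconcile them into a refined answer; sample repeatedly.
    refine_prompt = (
        question
        + "\n\nCandidate reasonings:\n"
        + "\n---\n".join(reasonings)
        + "\n\nConsidering the reasonings above, give the best final answer."
    )
    refined = [generate(refine_prompt, temperature) for _ in range(n_refine)]

    # Aggregate by plurality vote over the extracted final answers.
    votes = Counter(extract_answer(g) for g in refined)
    return votes.most_common(1)[0][0]
```

Conditioning stage 2 on multiple sampled drafts lets the model weigh agreements and contradictions across its own reasoning paths, which is what distinguishes this scheme from plain self-consistency voting over independent samples.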

Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, Mike Schaekermann, Amy Wang, Mohamed Amin, Sami Lachgar, Philip Mansfield, Sushant Prakash, Bradley Green, Ewa Dominowska, Blaise Aguera y Arcas, Nenad Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster, Greg S. Corrado, Yossi Matias, Shekoofeh Azizi, Alan Karthikesalingam, Vivek Natarajan · 2023

Related benchmarks

| Task | Dataset | Metric | Result (%) | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | MedQA-USMLE (test) | Accuracy | 86.5 | 101 |
| Question Answering | PubMedQA (test) | Accuracy | 81.8 | 81 |
| Medical Knowledge Question Answering | Medical Domain (MedQA, MMLU, MedMCQA) (test) | MedQA Score | 85.4 | 45 |
| Multiple-choice Question Answering | MMLU Medical and Biological Sub-tasks | Clinical Knowledge Accuracy | 88.7 | 24 |
| Multiple-choice Question Answering | MMLU (Massive Multitask Language Understanding) 1.0 (test) | Accuracy (Clinical knowledge) | 88.7 | 16 |
| Question Answering | MedMCQA (dev) | Accuracy | 72.3 | 11 |
| Medical Question Answering | PubMedQA Reasoning Required | Accuracy | 81.8 | 10 |
| Medical Question Answering | MedQA US (4-option) | Accuracy | 86.5 | 8 |
| Multiple-choice Question Answering | MedMCQA (test) | Accuracy | 72.3 | 6 |
| Physician evaluation of long-form medical answers | MultiMedQA 140 long-form answers | Consensus Support | 91.7 | 3 |
(10 of 14 benchmark rows shown.)
