HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge
About
Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks. Nevertheless, LLMs have not yet performed optimally in biomedical domain tasks due to the need for medical expertise in the responses. In response to this challenge, we propose HuaTuo, a LLaMA-based model that has been supervised-fine-tuned with generated QA (Question-Answer) instances. The experimental results demonstrate that HuaTuo generates responses that possess more reliable medical knowledge. Our proposed HuaTuo model is accessible at https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese.
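The abstract describes supervised fine-tuning on generated QA (Question-Answer) instances. A minimal sketch of how one such instance might be formatted into an instruction-tuning example is shown below; the prompt template and field names are illustrative assumptions, not the exact format used by HuaTuo.

```python
# Hedged sketch: turning a generated medical QA pair into a supervised
# fine-tuning (SFT) example. The template below is an assumption for
# illustration, not HuaTuo's actual prompt format.

def build_sft_example(question: str, answer: str) -> dict:
    """Format one QA pair as an instruction-tuning example.

    During SFT the model is trained to continue the prompt with the
    reference answer; the loss is typically computed only on the
    response tokens.
    """
    prompt = (
        "Below is a medical question. Write a response that answers it "
        "with reliable medical knowledge.\n\n"
        f"### Question:\n{question}\n\n### Response:\n"
    )
    return {"prompt": prompt, "response": answer}

example = build_sft_example(
    "What are common symptoms of iron-deficiency anemia?",
    "Typical symptoms include fatigue, pallor, shortness of breath, "
    "and dizziness.",
)
print(example["prompt"] + example["response"])
```

Each formatted example would then be tokenized and fed to the base LLaMA model's standard causal-language-modeling objective.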
Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu • 2023
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Diagnosis | agent-CMB | Rounds | 16.56 | 25 |
| Medical Diagnosis | MedQA agent | Rounds | 16.7 | 25 |
| Terminology Understanding | Chemistry Domain Dataset | Recall@10 | 20.1 | 12 |
| Terminology Understanding | Code Domain Dataset | Recall@10 | 13 | 12 |
| Terminology Understanding | Medical Domain Dataset | Recall@10 | 0.346 | 12 |
| Medical history-taking | Avey | Precision | 28.2 | 7 |
| Medical history-taking | MIMIC | Precision | 32.2 | 7 |
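Several of the terminology-understanding results above report Recall@10. As commonly defined for retrieval-style benchmarks, Recall@K is the fraction of queries whose gold item appears among the top-K ranked candidates; the sketch below illustrates this with made-up toy data (the IDs and rankings are not from any dataset above).

```python
# Hedged sketch of Recall@K as commonly used in retrieval benchmarks:
# the fraction of queries whose gold item is in the top-K candidates.
# All data here is invented for illustration.

def recall_at_k(ranked_candidates, gold, k=10):
    """ranked_candidates: one list of candidate IDs per query, best first.
    gold: the correct ID for each query."""
    hits = sum(1 for cands, g in zip(ranked_candidates, gold) if g in cands[:k])
    return hits / len(gold)

# Two toy queries: the first has its gold term in the top 10, the second does not.
ranked = [
    ["t3", "t7", "t1"] + [f"x{i}" for i in range(7)],
    [f"y{i}" for i in range(10)],
]
gold = ["t1", "t1"]
print(recall_at_k(ranked, gold, k=10))  # → 0.5
```

Whether a result such as 20.1 is a percentage or 0.346 is a raw fraction depends on each benchmark's reporting convention.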