TY - GEN
T1 - Tailoring Large Language Models to Radiology
T2 - 14th International Workshop on Machine Learning in Medical Imaging, MLMI 2023
AU - Liu, Zhengliang
AU - Zhong, Aoxiao
AU - Li, Yiwei
AU - Yang, Longtao
AU - Ju, Chao
AU - Wu, Zihao
AU - Ma, Chong
AU - Shu, Peng
AU - Chen, Cheng
AU - Kim, Sekeun
AU - Dai, Haixing
AU - Zhao, Lin
AU - Zhu, Dajiang
AU - Liu, Jun
AU - Liu, Wei
AU - Shen, Dinggang
AU - Li, Quanzheng
AU - Liu, Tianming
AU - Li, Xiang
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2024
Y1 - 2024
N2 - In this preliminary work, we present an experimental large language model fine-tuned for radiology. Created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, the model demonstrates promising performance compared with general-purpose language models such as StableLM, Dolly, and LLaMA, and exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step towards the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The prospect of creating customized, large-scale language models catering to the distinct requirements of various medical specialties presents a thought-provoking direction. The blending of conversational prowess and domain-specific knowledge in these models kindles hope for future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned LLM for radiology can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/.
AB - In this preliminary work, we present an experimental large language model fine-tuned for radiology. Created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, the model demonstrates promising performance compared with general-purpose language models such as StableLM, Dolly, and LLaMA, and exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step towards the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The prospect of creating customized, large-scale language models catering to the distinct requirements of various medical specialties presents a thought-provoking direction. The blending of conversational prowess and domain-specific knowledge in these models kindles hope for future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned LLM for radiology can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/.
KW - Large Language Models
KW - Natural Language Processing
KW - Radiology
UR - https://www.scopus.com/pages/publications/85176017806
U2 - 10.1007/978-3-031-45673-2_46
DO - 10.1007/978-3-031-45673-2_46
M3 - Conference contribution
AN - SCOPUS:85176017806
SN - 9783031456725
T3 - Lecture Notes in Computer Science
SP - 464
EP - 473
BT - Machine Learning in Medical Imaging - 14th International Workshop, MLMI 2023, Held in Conjunction with MICCAI 2023, Proceedings
A2 - Cao, Xiaohuan
A2 - Ouyang, Xi
A2 - Xu, Xuanang
A2 - Rekik, Islem
A2 - Cui, Zhiming
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 October 2023 through 8 October 2023
ER -