TY - JOUR
T1 - Summary of ChatGPT-Related research and perspective towards the future of large language models
AU - Liu, Yiheng
AU - Han, Tianle
AU - Ma, Siyuan
AU - Zhang, Jiayue
AU - Yang, Yuanyuan
AU - Tian, Jiaming
AU - He, Hao
AU - Li, Antong
AU - He, Mengshen
AU - Liu, Zhengliang
AU - Wu, Zihao
AU - Zhao, Lin
AU - Zhu, Dajiang
AU - Li, Xiang
AU - Qiang, Ning
AU - Shen, Dinggang
AU - Liu, Tianming
AU - Ge, Bao
N1 - Publisher Copyright:
© 2023 The Authors
PY - 2023/9
Y1 - 2023/9
AB - This paper presents a comprehensive survey of ChatGPT-related (GPT-3.5 and GPT-4) research, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains. Key innovations such as large-scale pre-training that captures knowledge across the entire World Wide Web, instruction fine-tuning, and Reinforcement Learning from Human Feedback (RLHF) have played significant roles in enhancing LLMs' adaptability and performance. We performed an in-depth analysis of 194 relevant papers on arXiv, encompassing trend analysis, word cloud representation, and distribution analysis across various application domains. The findings reveal significant and increasing interest in ChatGPT-related research, predominantly centered on direct natural language processing applications, while also demonstrating considerable potential in areas ranging from education and history to mathematics, medicine, and physics. This study endeavors to furnish insights into ChatGPT's capabilities, potential implications, and ethical concerns, and to offer direction for future advancements in this field.
UR - https://www.scopus.com/pages/publications/85203135563
DO - 10.1016/j.metrad.2023.100017
M3 - Review article
AN - SCOPUS:85203135563
SN - 2950-1628
VL - 1
JO - Meta-Radiology
JF - Meta-Radiology
IS - 2
M1 - 100017
ER -