TY - GEN
T1 - Enhancing Patient Comprehension
T2 - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
AU - Dehkordi, Mahshad Koohi H.
AU - Zhou, Shuxin
AU - Perl, Yehoshua
AU - Deek, Fadi P.
AU - Einstein, Andrew J.
AU - Elhanan, Gai
AU - He, Zhe
AU - Liu, Hao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Electronic Health Record (EHR) notes often contain complex medical language, making them difficult for patients without a medical background to understand. Simplifying EHR notes to a 6th-grade reading level is recommended by the American Medical Association to enhance patient comprehension and engagement. Large Language Models (LLMs) show promise in achieving this goal but also face challenges, such as omitting key information or generating false information. In our previous work, we showed that providing LLMs with highlighted EHR notes, in which the important information is emphasized, results in more accurate summaries than summarizing unhighlighted notes. In this study, we simplify highlighted EHR notes with LLMs, specifically ChatGPT-4o, using two approaches: two-step (sequential) simplification and one-step (CoT-based) simplification. In the sequential approach, we first generate a structured summary of the highlighted EHR note and then convert this summary into language suitable for a 6th-grade reader. In the CoT-based approach, we convert the highlighted EHR note into a structured summary understandable by a 6th-grade reader in a single step. Evaluating the simplified notes produced by the two approaches, we find that the sequential approach achieves higher completeness (82.35% vs. 75.89%) and correctness, better readability scores (FKGL: 7.72 vs. 10.73; Flesch: 67.71 vs. 45.31), and higher average understandability ratings from ChatGPT-4 (3.92 vs. 3.28), demonstrating its overall superiority in simplifying notes.
KW - EHR simplification
KW - Highlighted EHR notes
KW - Large Language Models
KW - Medical text simplification
KW - Prompt engineering
UR - http://www.scopus.com/inward/record.url?scp=85217276855&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85217276855&partnerID=8YFLogxK
U2 - 10.1109/BIBM62325.2024.10822313
DO - 10.1109/BIBM62325.2024.10822313
M3 - Conference contribution
AN - SCOPUS:85217276855
T3 - Proceedings - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
SP - 6370
EP - 6377
BT - Proceedings - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
A2 - Cannataro, Mario
A2 - Zheng, Huiru
A2 - Gao, Lin
A2 - Cheng, Jianlin
A2 - de Miranda, Joao Luis
A2 - Zumpano, Ester
A2 - Hu, Xiaohua
A2 - Cho, Young-Rae
A2 - Park, Taesung
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 3 December 2024 through 6 December 2024
ER -