TY - GEN
T1 - Leveraging Professional Radiologists' Expertise to Enhance LLMs' Evaluation for AI-generated Radiology Reports
AU - Zhu, Qingqing
AU - Chen, Xiuying
AU - Jin, Qiao
AU - Hou, Benjamin
AU - Mathai, Tejas Sudharshan
AU - Mukherjee, Pritam
AU - Gao, Xin
AU - Summers, Ronald M.
AU - Lu, Zhiyong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In radiology, Artificial Intelligence (AI) has significantly advanced report generation, but automatic evaluation of these AI-produced reports remains challenging. Current metrics, such as conventional Natural Language Generation (NLG) metrics and Clinical Efficacy (CE) scores, often fall short of capturing the semantic intricacies of clinical contexts or overemphasize clinical details, undermining report clarity. To overcome these issues, our proposed method synergizes the expertise of professional radiologists with Large Language Models (LLMs) such as GPT-3.5 and GPT-4. Utilizing In-Context Instruction Learning (ICIL) and Chain of Thought (CoT) reasoning, our approach aligns LLM evaluations with radiologist standards, enabling detailed comparisons between human-written and AI-generated reports. This is further enhanced by a regression model that aggregates sentence-level evaluation scores. Experimental results show that our "Detailed GPT-4 (5-shot)" model achieves a correlation of 0.48 with expert evaluations, outperforming the METEOR metric by 0.19, while our "Regressed GPT-4" model shows even greater alignment (0.64) with expert evaluations, exceeding the best existing metric by a margin of 0.35. Moreover, the robustness of our explanations has been validated through a thorough iterative strategy. We plan to publicly release annotations from radiology experts, setting a new standard for accuracy in future assessments. These results underscore the potential of our approach to enhance the quality assessment of AI-driven medical reports.
AB - In radiology, Artificial Intelligence (AI) has significantly advanced report generation, but automatic evaluation of these AI-produced reports remains challenging. Current metrics, such as conventional Natural Language Generation (NLG) metrics and Clinical Efficacy (CE) scores, often fall short of capturing the semantic intricacies of clinical contexts or overemphasize clinical details, undermining report clarity. To overcome these issues, our proposed method synergizes the expertise of professional radiologists with Large Language Models (LLMs) such as GPT-3.5 and GPT-4. Utilizing In-Context Instruction Learning (ICIL) and Chain of Thought (CoT) reasoning, our approach aligns LLM evaluations with radiologist standards, enabling detailed comparisons between human-written and AI-generated reports. This is further enhanced by a regression model that aggregates sentence-level evaluation scores. Experimental results show that our "Detailed GPT-4 (5-shot)" model achieves a correlation of 0.48 with expert evaluations, outperforming the METEOR metric by 0.19, while our "Regressed GPT-4" model shows even greater alignment (0.64) with expert evaluations, exceeding the best existing metric by a margin of 0.35. Moreover, the robustness of our explanations has been validated through a thorough iterative strategy. We plan to publicly release annotations from radiology experts, setting a new standard for accuracy in future assessments. These results underscore the potential of our approach to enhance the quality assessment of AI-driven medical reports.
KW - Artificial Intelligence
KW - Chain of Thought (CoT) Reasoning
KW - Evaluation Metrics
KW - Large Language Models (LLMs)
KW - Radiology
UR - http://www.scopus.com/inward/record.url?scp=85203715945&partnerID=8YFLogxK
U2 - 10.1109/ICHI61247.2024.00058
DO - 10.1109/ICHI61247.2024.00058
M3 - Conference contribution
AN - SCOPUS:85203715945
T3 - Proceedings - 2024 IEEE 12th International Conference on Healthcare Informatics, ICHI 2024
SP - 402
EP - 411
BT - Proceedings - 2024 IEEE 12th International Conference on Healthcare Informatics, ICHI 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th IEEE International Conference on Healthcare Informatics, ICHI 2024
Y2 - 3 June 2024 through 6 June 2024
ER -