TY - GEN
T1 - Deriving anatomical context from 4D ultrasound
AU - Müller, M.
AU - Helljesen, L. E. S.
AU - Prevost, R.
AU - Viola, I.
AU - Nylund, K.
AU - Gilja, O. H.
AU - Navab, N.
AU - Wein, W.
N1 - Funding Information:
We would like to express our appreciation to the Med-Viz Network Bergen (medviz.uib.no) and GE Vingmed Ultrasound (www.gehealthcare.com) for providing the MRI and US datasets from the study on volunteering participants. The involvement of Ivan Viola was supported by the Vienna Science and Technology Fund (WWTF) through project VRG11-010 and by EC Marie Curie Career Integration Grant through project PCIG13-GA-2013-618680.
Publisher Copyright:
© Eurographics Workshop on Visual Computing for Biology and Medicine, VCBM 2014. All rights reserved.
PY - 2014
Y1 - 2014
N2 - Real-time three-dimensional (also known as 4D) ultrasound imaging using matrix array probes has the potential to create large-volume information of entire organs such as the liver without external tracking hardware. This information can in turn be placed into the context of a CT or MRI scan of the same patient. However, for such an approach, many image-processing challenges need to be overcome and sources of error addressed, including reconstruction drift, anatomical deformations, varying appearance of anatomy, and imaging artifacts. In this work, we present a fully automatic system including robust image-based ultrasound tracking, a novel learning-based global initialization of the anatomical context, and joint mono- and multi-modal registration. In an evaluation on 4D US sequences and MRI scans of eight volunteers, we achieve automatic reconstruction and registration without any user interaction, assess the registration errors based on physician-defined landmarks, and demonstrate real-time tracking of free-breathing sequences.
AB - Real-time three-dimensional (also known as 4D) ultrasound imaging using matrix array probes has the potential to create large-volume information of entire organs such as the liver without external tracking hardware. This information can in turn be placed into the context of a CT or MRI scan of the same patient. However, for such an approach, many image-processing challenges need to be overcome and sources of error addressed, including reconstruction drift, anatomical deformations, varying appearance of anatomy, and imaging artifacts. In this work, we present a fully automatic system including robust image-based ultrasound tracking, a novel learning-based global initialization of the anatomical context, and joint mono- and multi-modal registration. In an evaluation on 4D US sequences and MRI scans of eight volunteers, we achieve automatic reconstruction and registration without any user interaction, assess the registration errors based on physician-defined landmarks, and demonstrate real-time tracking of free-breathing sequences.
UR - http://www.scopus.com/inward/record.url?scp=85046985285&partnerID=8YFLogxK
U2 - 10.2312/vcbm.20141196
DO - 10.2312/vcbm.20141196
M3 - Conference contribution
AN - SCOPUS:85046985285
T3 - Eurographics Workshop on Visual Computing for Biology and Medicine, VCBM 2014
SP - 173
EP - 180
BT - Eurographics Workshop on Visual Computing for Biology and Medicine, VCBM 2014
A2 - Viola, Ivan
A2 - Buhler, Katja
A2 - Ropinski, Timo
PB - Eurographics Association
T2 - 2014 Eurographics Workshop on Visual Computing for Biology and Medicine, VCBM 2014
Y2 - 4 September 2014 through 5 September 2014
ER -