Abstract
Modeling low-level features up to high-level semantics is an important aspect of filtering anatomy objects in medical imaging. Bag of Visual Words (BOVW) representations have proven effective at modeling these low-level features into mid-level representations. Convolutional neural networks are learning systems that can automatically extract high-quality representations from raw images; however, their deployment in the medical field remains challenging due to the lack of training data. In this paper, the learned features obtained by training convolutional neural networks are compared with our proposed hand-crafted HSIFT features. The HSIFT feature is a symmetric fusion of the Harris corner detector and the Scale-Invariant Feature Transform (SIFT) with a BOVW representation. The SIFT process is enhanced, as is the classification technique, which adopts bagging with a surrogate-split method. Quantitative evaluation shows that our proposed hand-crafted HSIFT feature outperforms the learned features from convolutional neural networks in discriminating anatomy image classes.
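The Harris/SIFT fusion with a BOVW representation described above can be illustrated with a minimal sketch: Harris corners supply the keypoint locations, SIFT describes them, and the descriptors are quantized against a k-means visual vocabulary to form a normalized histogram. This sketch assumes OpenCV and scikit-learn; the Harris threshold, keypoint size, and vocabulary size are illustrative choices, not the paper's reported settings, and it is not the authors' exact pipeline.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def hsift_descriptors(gray, harris_thresh=0.01, keypoint_size=8.0):
    """Compute SIFT descriptors at Harris corner locations (HSIFT-style fusion)."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > harris_thresh * response.max())
    # Wrap Harris corner coordinates as keypoints so SIFT can describe them.
    keypoints = [cv2.KeyPoint(float(x), float(y), keypoint_size) for x, y in zip(xs, ys)]
    _, descriptors = cv2.SIFT_create().compute(gray, keypoints)
    return descriptors  # shape (n_keypoints, 128), or None if no corners were found

def bovw_histogram(descriptors, vocabulary):
    """Quantize descriptors against a fitted k-means vocabulary into an L1-normalized histogram."""
    if descriptors is None:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage sketch: pool descriptors over training images to build the vocabulary,
# then represent each image as a BOVW histogram for a downstream classifier.
# train_descs = np.vstack([hsift_descriptors(img) for img in training_images])
# vocabulary = KMeans(n_clusters=200, random_state=0).fit(train_descs)
# features = [bovw_histogram(hsift_descriptors(img), vocabulary) for img in training_images]
```

The classification stage (bagging with surrogate splits) is not shown: scikit-learn's decision trees do not implement surrogate splits, so that step would require a CART implementation that supports them, such as R's rpart.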
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1987 |
| Journal | Symmetry |
| Volume | 13 |
| Issue number | 11 |
| DOIs | |
| State | Published - Oct 20 2021 |
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- General Mathematics
- Physics and Astronomy (miscellaneous)
- Chemistry (miscellaneous)