Short-segment Heart Sound Classification Using an Ensemble of Deep Convolutional Neural Networks

Fuad Noman, Chee-Ming Ting, Sh-Hussain Salleh, Hernando Ombao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper proposes a framework based on deep convolutional neural networks (CNNs) for automatic heart sound classification using short segments of individual heartbeats. We design a 1D-CNN that learns features directly from raw heart-sound signals, and a 2D-CNN that takes as input two-dimensional time-frequency feature maps based on Mel-frequency cepstral coefficients (MFCCs). We further develop a time-frequency CNN ensemble (TF-ECNN) that combines the 1D-CNN and 2D-CNN via score-level fusion of their class probabilities. On the large PhysioNet CinC Challenge 2016 database, the proposed CNN models outperformed traditional classifiers based on support vector machines and hidden Markov models with various hand-crafted time- and frequency-domain features. On the test set, the TF-ECNN achieved the best classification scores of 89.22% accuracy and 89.94% sensitivity, while the 2D-CNN alone achieved 91.55% specificity and 88.82% modified accuracy.
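As a rough illustration of the score-level fusion described in the abstract, the sketch below averages the class probabilities produced by two classifiers. This is a minimal sketch under stated assumptions: the equal fusion weight, the function name, and the example probabilities are hypothetical, not the authors' exact scheme, which the abstract does not specify.

import numpy as np

def fuse_scores(p_1d: np.ndarray, p_2d: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Score-level fusion of class probabilities from two classifiers.

    p_1d, p_2d: arrays of shape (n_segments, n_classes) holding softmax
    outputs of a 1D-CNN (raw signal) and a 2D-CNN (MFCC feature maps).
    w: fusion weight; 0.5 gives simple averaging (an assumption, since the
    abstract does not state the weighting used in the TF-ECNN).
    """
    fused = w * p_1d + (1.0 - w) * p_2d
    # Predicted class index for each heartbeat segment.
    return fused.argmax(axis=1)

# Hypothetical softmax outputs for 3 segments, 2 classes
# (e.g., normal vs. abnormal heart sound).
p_1d = np.array([[0.8, 0.2], [0.4, 0.6], [0.55, 0.45]])
p_2d = np.array([[0.7, 0.3], [0.3, 0.7], [0.35, 0.65]])
print(fuse_scores(p_1d, p_2d))  # -> [0 1 1]

Averaging probabilities (rather than fusing at the feature level) lets each network keep its own input representation, raw waveform for the 1D-CNN and MFCC maps for the 2D-CNN, and combine them only at decision time.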
Original language: English (US)
Title of host publication: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
Pages: 1318-1322
Number of pages: 5
ISBN (Print): 9781479981311
DOIs
State: Published - Apr 17 2019
