TY - GEN
T1 - STCN-GR: Spatial-Temporal Convolutional Networks for Surface Electromyography-Based Gesture Recognition
T2 - 28th International Conference on Neural Information Processing, ICONIP 2021
AU - Lai, Zhiping
AU - Kang, Xiaoyang
AU - Wang, Hongbo
AU - Zhang, Weiqi
AU - Zhang, Xueze
AU - Gong, Peixian
AU - Niu, Lan
AU - Huang, Huijie
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
AB - Gesture recognition using surface electromyography (sEMG) is the technical core of the muscle-computer interface (MCI) in human-computer interaction (HCI); it aims to classify gestures from signals recorded on the human hand. Because sEMG signals are characterized by spatial relevancy and temporal nonstationarity, sEMG-based gesture recognition is a challenging task. Previous works attempt to model this structured information and extract spatial and temporal features, but the results remain unsatisfactory. To tackle this problem, we propose spatial-temporal convolutional networks for sEMG-based gesture recognition (STCN-GR). This paper introduces the concept of the sEMG graph to represent sEMG data, in place of the image and vector-sequence representations adopted by previous works, which provides a new perspective for research on sEMG-based tasks beyond gesture recognition. STCN-GR uses graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to capture spatial-temporal information. Additionally, the connectivity of the graph can be adjusted adaptively in different network layers, which makes the networks more flexible than the fixed graph structure used by original GCNs. On two high-density sEMG (HD-sEMG) datasets and a sparse armband dataset, STCN-GR outperforms previous works and achieves state-of-the-art results, demonstrating superior performance and strong generalization ability.
KW - Gesture recognition
KW - Human-computer interaction
KW - sEMG graph
KW - Spatial-temporal convolutional networks
KW - Surface electromyography
UR - http://www.scopus.com/inward/record.url?scp=85121898163&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-92238-2_3
DO - 10.1007/978-3-030-92238-2_3
M3 - Conference contribution
AN - SCOPUS:85121898163
SN - 9783030922375
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 27
EP - 39
BT - Neural Information Processing - 28th International Conference, ICONIP 2021, Proceedings
A2 - Mantoro, Teddy
A2 - Lee, Minho
A2 - Ayu, Media Anugerah
A2 - Wong, Kok Wai
A2 - Hidayanto, Achmad Nizar
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 December 2021 through 12 December 2021
ER -