Investigation of input modalities based on a spatial region array for hand-gesture interfaces

Huanwei Wu, Yi Han, Yanyin Zhou, Xiangliang Zhang, Jibin Yin, Shuoyu Wang

Research output: Contribution to journal › Article › peer-review



To improve the efficiency of computer input, extensive research has been conducted on hand movement in a spatial region. Most of it has focused on the technologies rather than on users' spatial controllability. To assess this controllability, we analyze users' common operational area through partitioning, including a one-dimensional layered array and a two-dimensional spatial region array. In addition, to determine the difference in spatial controllability between sighted and visually impaired people, we designed two experiments: target selection under a visual scenario and under a non-visual scenario. Furthermore, we explored two factors: the size and the position of the target. Results showed the following: the 5 × 5 target blocks, each 60.8 mm × 48 mm, could be easily controlled by both the sighted and the visually impaired participants; the sighted participants could most easily select the bottom-right area, whereas for the visually impaired participants the most easily selected area was the upper right. Based on these results, we propose two interaction techniques (a non-visual selection technique and a spatial gesture recognition technique for surgery) and four spatial partitioning strategies for human-computer interaction designers, which can improve users' spatial controllability.
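The 5 × 5 spatial region array described in the abstract can be sketched as a simple grid lookup. The following is a minimal illustration (not the authors' implementation), assuming the operational area's origin is at the top-left and hand positions are given in millimetres; the block dimensions come from the abstract.

```python
# Minimal sketch of a 5 x 5 spatial region array: the operational area is
# partitioned into 25 target blocks of 60.8 mm x 48 mm each, and a hand
# position (x, y) in millimetres is mapped to its (row, col) block index.
# Coordinate convention and function name are illustrative assumptions.

BLOCK_W_MM = 60.8   # block width reported in the abstract
BLOCK_H_MM = 48.0   # block height reported in the abstract
GRID_SIZE = 5       # 5 x 5 array of target blocks

def block_index(x_mm: float, y_mm: float):
    """Return the (row, col) of the block containing (x_mm, y_mm),
    or None if the point lies outside the 5 x 5 operational area."""
    col = int(x_mm // BLOCK_W_MM)
    row = int(y_mm // BLOCK_H_MM)
    if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
        return (row, col)
    return None
```

For example, a hand position of (61.0, 50.0) mm falls just past the first block boundary in both axes and maps to block (1, 1), while a point beyond the 5-block extent returns None.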
Original language: English (US)
Journal: Electronics (Switzerland)
Issue number: 24
State: Published - Dec 1 2021
Externally published: Yes


