TY - GEN
T1 - Inferring Goals with Gaze during Teleoperated Manipulation
AU - Aronson, Reuben M.
AU - Almutlak, Nadia
AU - Admoni, Henny
N1 - Acknowledgements: This work was supported by the Paralyzed Veterans of America, the National Science Foundation (IIS-1755823 and IIS-1943072), and the Tang Family Foundation Innovation Fund. Nadia AlMutlak was supported by the King Abdullah University of Science and Technology.
PY - 2021/12/16
Y1 - 2021/12/16
AB - Assistive robot manipulators help people with upper motor impairments perform tasks by themselves. However, teleoperating a robot to perform complex tasks is difficult. Shared control algorithms make this easier: these algorithms predict the user's goal, autonomously generate a plan to accomplish the goal, and fuse that plan with the user's input. To accurately predict the user's goal, these algorithms typically use the user's input command (e.g., joystick input) directly. We use another sensing modality: the user's natural eye gaze behavior, which is highly task-relevant and informative early in the task. We develop an algorithm using hidden Markov models to infer goals from natural eye gaze behavior that appears while users are teleoperating a robot. We show that gaze-based predictions outperform goal prediction based on the control input and that our sequence model improves the prediction quality relative to gaze-based aggregate models.
UR - http://hdl.handle.net/10754/679235
UR - https://ieeexplore.ieee.org/document/9636551/
UR - http://www.scopus.com/inward/record.url?scp=85124361462&partnerID=8YFLogxK
DO - 10.1109/IROS51168.2021.9636551
M3 - Conference contribution
SN - 9781665417143
SP - 7307
EP - 7314
BT - 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
PB - IEEE
ER -