Exploiting eye–hand coordination to detect grasping movements

Authors:

Highlights:

Abstract

Human beings are highly skilled at reaching for and grasping objects under many conditions, even when objects vary widely in position, location, structure and orientation. This natural ability, controlled by the human brain, is called eye–hand coordination. Understanding this behavior requires studying eye and hand movements simultaneously. This paper proposes a novel approach to detecting grasping movements by means of computer vision techniques. The solution fuses two viewpoints: one obtained from an eye-tracker capturing the user's perspective, and a second captured by a wearable camera attached to the user's wrist. Using information from these two viewpoints, multiple hand movements can be characterized in conjunction with eye-gaze movements through a Hidden Markov Model framework. The paper shows that combining these two sources makes it possible to detect hand gestures using only the objects contained in the scene, without requiring markers on the surfaces of the objects. In addition, the desired object can be identified before the user actually grasps it.
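The paper itself provides no code; the following is a minimal sketch of how an HMM-based gesture classifier over fused eye-gaze and wrist-camera features could look, using the hmmlearn library. The feature layout, dimensions, and function names are assumptions for illustration only, not the authors' implementation.

# Minimal sketch (assumed, not the authors' method): one Gaussian HMM per
# gesture class, trained on observation sequences that fuse eye-gaze
# coordinates with wrist-camera motion descriptors. Feature layout and
# state count are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

def train_gesture_models(sequences_by_gesture, n_states=4):
    """Fit one GaussianHMM per gesture label.

    sequences_by_gesture: dict mapping a gesture label to a list of
    (T_i, D) arrays, each an observation sequence of fused features.
    """
    models = {}
    for label, sequences in sequences_by_gesture.items():
        X = np.vstack(sequences)               # stack all sequences row-wise
        lengths = [len(s) for s in sequences]  # per-sequence lengths for fit()
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag",
                                n_iter=100)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Return the gesture label whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

At test time each incoming fused-feature sequence is scored against every per-gesture model and assigned to the most likely one; a per-class HMM with diagonal Gaussian emissions is a common, lightweight choice for this kind of sequential gesture data.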

Keywords: Visual system, Grasping movements, Motion analysis, Hand posture, Hand gesture, Object recognition

Article history: Received 9 September 2011, Revised 31 March 2012, Accepted 1 July 2012, Available online 25 July 2012.

Article URL: https://doi.org/10.1016/j.imavis.2012.07.001