Coordinating interactive vision behaviors for cognitive assistance

Authors:

Highlights:

Abstract

Most research in human–computer interaction (HCI) focuses on a seamless interface between a user and an application that is separated from the user in terms of working space and/or control, such as navigation in image databases, instruction of robots, or information retrieval systems. The interaction paradigm of cognitive assistance goes one step further: the application assists the user in performing everyday tasks in his or her own environment, and the user and the system share control of these tasks. This kind of tight bidirectional interaction in realistic environments demands cognitive system skills like context awareness, attention, learning, and reasoning about the external environment. The system therefore needs to integrate a wide variety of visual functions, such as localization, object tracking and recognition, action recognition, and interactive object learning. In this paper we show how different kinds of system behaviors are realized using the Active Memory Infrastructure, which provides the technical basis for distributed computation and a data- and event-driven integration approach. A running augmented reality system for cognitive assistance is presented that supports users in mixing beverages. The flexibility and generality of the system framework provide an ideal testbed for studying visual cues in human–computer interaction. We report results from first user studies.
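The abstract describes the Active Memory Infrastructure only at the level of a data- and event-driven integration approach, in which distributed vision components exchange results through a shared memory and react to insertion/update events. A minimal sketch of such an event-driven memory follows; all names here (ActiveMemory, subscribe, insert) are illustrative assumptions, not the API of the actual system.

```python
from collections import defaultdict
from typing import Any, Callable, Dict

class ActiveMemory:
    """Sketch of a data- and event-driven shared memory: components
    insert or update memory elements, and registered listeners are
    notified of the corresponding events (hypothetical API)."""

    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}
        self._listeners = defaultdict(list)  # event name -> callbacks

    def subscribe(self, event: str, callback: Callable[[str, Any], None]) -> None:
        # Register a callback for "insert" or "update" events.
        self._listeners[event].append(callback)

    def insert(self, key: str, value: Any) -> None:
        # Store the element and fire the matching event.
        event = "update" if key in self._store else "insert"
        self._store[key] = value
        for cb in self._listeners[event]:
            cb(key, value)

# Usage: an object-recognition component publishes a hypothesis,
# and an assistance behavior reacts to the resulting event.
memory = ActiveMemory()
memory.subscribe("insert", lambda k, v: print(f"new percept: {k} = {v}"))
memory.insert("object/1", {"label": "bottle", "confidence": 0.87})
```

Loose event-based coupling of this kind lets vision behaviors (tracking, recognition, learning) be added or replaced without the other components knowing about each other, which matches the flexibility the abstract attributes to the framework.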

Keywords:

Review history: Received 5 September 2005, Accepted 13 October 2006, Available online 30 January 2007.

Paper link: https://doi.org/10.1016/j.cviu.2006.10.018