Single robot – Multiple human interaction via intelligent user interfaces

Authors:

Highlights:

Abstract

This project addresses research issues concerning the design of intelligent user interfaces for improving human–robot interaction. In some critical applications, users interact with robots via Graphical User Interfaces (GUIs), which usually contain standard components intended to serve a large number of users. Some of these interface components may be redundant, and sometimes confusing, for particular users depending on their preferences, their capabilities, and the context in which the robots are used. This paper describes an adaptive system that enables a mobile robot to learn its users’ preferences and capabilities so that it can offer a dynamic, efficient GUI tailored to each user rather than a standard GUI for all users. The system predicts users’ future actions by building models from their previous interactions with the robot. The system was implemented and evaluated on a Pioneer 3-AT mobile robot. About 20 participants, assessed beforehand for spatial ability, directed the robot in simple spatial navigation tasks to evaluate the effectiveness of the adaptive interface. Time to complete the task, the number of steps, and the number of errors were recorded. The results showed that although spatial reasoning ability plays an important role in mobile robot navigation, it matters less when the robot is controlled through the adaptive interface than through the non-adaptive one.
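The abstract does not specify which modeling technique the system uses to predict user actions from interaction history. One minimal way such a predictor could be sketched, assuming a first-order Markov model over observed GUI actions (the class and action names below are hypothetical, not from the paper):

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """First-order Markov model over a user's past GUI actions.

    A sketch of one way an adaptive interface could rank which
    controls to surface next, based on transition frequencies.
    """

    def __init__(self):
        # transitions[prev][next] = how often `next` followed `prev`
        self.transitions = defaultdict(Counter)

    def observe(self, prev_action, next_action):
        # Record one observed transition from the user's history.
        self.transitions[prev_action][next_action] += 1

    def predict(self, current_action, top_k=3):
        # Most frequent successors of current_action, best first.
        counts = self.transitions[current_action]
        return [action for action, _ in counts.most_common(top_k)]

# Example history of a user who usually turns left after going forward.
predictor = ActionPredictor()
history = ["forward", "left", "forward", "left", "forward", "right"]
for prev, nxt in zip(history, history[1:]):
    predictor.observe(prev, nxt)
print(predictor.predict("forward"))  # → ['left', 'right']
```

A GUI built on such a model could then highlight or reorder the top-ranked controls for each user, while keeping the remaining controls reachable.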

Keywords: Human–robot interaction, Mobile robots, Navigation, Intelligent user interfaces

Review history: Received 11 May 2006, Revised 2 March 2008, Accepted 12 March 2008, Available online 20 March 2008.

DOI: https://doi.org/10.1016/j.knosys.2008.03.008