Abstract
The goals of this paper are twofold. First, we present our experiences with (1) fusion of vision, sonar, and contact sensory modalities in perceiving obstacles and targets, (2) arbitration of sensing and acting at the reactive and deliberative levels, and (3) integration of asynchronous instruction taking and communication. Second, following a discussion of lessons learned, we discuss architectural modules and issues that help standardize the integration of sensing and acting modalities and solutions across platforms. We have developed a robot assistant that communicates in natural language and moves about a room using vision as its primary sensory mechanism. The system takes instructions from a supervisor to go to various agents in the room and follow them in order to offer assistance. Our assistant uses a three-tiered architecture that models: (a) knowledge representation and reasoning, including natural language interactions; (b) routine interactions not explicitly controlled by the agent; and (c) reflexes. Our robotic assistant integrates (a) vision and sonar sensing in obstacle avoidance, (b) memory-based and reactive navigation (i.e., deliberative versus skill-based), (c) instruction taking and goal-driven behaviors, and (d) concurrent visual focusing behaviors.
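The three-tiered arbitration described in the abstract can be sketched as a simple priority loop in which lower tiers pre-empt higher ones. This is an illustrative sketch only, assuming hypothetical tier names and thresholds (`reflex_tier`, `skill_tier`, `deliberative_tier`, the 0.2 m and 1.0 m sonar ranges), not the paper's actual implementation:

```python
# Hypothetical sketch of a three-tiered arbitration cycle:
# (c) reflexes pre-empt (b) routine skills, which pre-empt
# (a) deliberative, goal-driven behavior.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percept:
    sonar_range_m: float         # nearest fused sonar/vision range
    vision_obstacle: bool        # obstacle reported by the vision system
    instruction: Optional[str]   # asynchronous supervisor instruction


def reflex_tier(p: Percept) -> Optional[str]:
    """Reflex: stop immediately on imminent contact (assumed threshold)."""
    return "emergency_stop" if p.sonar_range_m < 0.2 else None


def skill_tier(p: Percept) -> Optional[str]:
    """Routine reactive skill: steer around perceived obstacles."""
    if p.vision_obstacle or p.sonar_range_m < 1.0:
        return "avoid_obstacle"
    return None


def deliberative_tier(p: Percept) -> str:
    """Goal-driven behavior derived from the current instruction."""
    return f"execute:{p.instruction}" if p.instruction else "idle"


def arbitrate(p: Percept) -> str:
    """One action per control cycle; the lowest applicable tier wins."""
    return reflex_tier(p) or skill_tier(p) or deliberative_tier(p)
```

For example, a percept with a clear 3 m sonar range and the instruction "follow agent A" falls through to the deliberative tier, while any reading under 0.2 m triggers the reflex stop regardless of the instruction.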
| Original language | English (US) |
|---|---|
| Title of host publication | IEEE International Symposium on Intelligent Control - Proceedings |
| Publisher | IEEE |
| Pages | 319-324 |
| Number of pages | 6 |
| State | Published - Dec 1 1998 |
| Externally published | Yes |
| Event | Proceedings of the 1998 IEEE International Symposium on Intelligent Control, ISIC - Gaithersburg, MD, USA. Duration: Sep 14 1998 → Sep 17 1998 |
Other

| Other | Proceedings of the 1998 IEEE International Symposium on Intelligent Control, ISIC |
|---|---|
| City | Gaithersburg, MD, USA |
| Period | 9/14/98 → 9/17/98 |
All Science Journal Classification (ASJC) codes
- Hardware and Architecture
- Control and Systems Engineering