Abstract
One of the major deficiencies in current robot control schemes is the lack of high-level knowledge in the feedback loop. Typically, the sensory data acquired are fed back to the robot controller with a minimal amount of processing. However, by accumulating useful sensory data and processing them intelligently, one can obtain invaluable information about the state of the task being performed by the robot. This paper presents a method based on screw theory for interpreting position and force sensory data as high-level assembly task constraints. The position data are obtained from the joint angle encoders of the manipulator, and the force data are obtained from a wrist force sensor attached to the mounting plate of the manipulator end-effector. The interpretation of the sensory data is divided into two subproblems: the representation problem and the interpretation problem. Spatial and physical constraints based on the screw axis and force axis of the manipulator are used to represent the high-level task constraints. Algorithms that yield least-squared-error results are developed to obtain the spatial and physical constraints from the position and force data. The spatial and physical constraints obtained from the sensory data are then compared with the desired spatial and physical constraints to interpret the state of the assembly task. Computer simulation and experimental results verifying the validity of the algorithms are also presented and discussed.
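One step the abstract describes, recovering a spatial constraint (an axis) from accumulated position data by least squares, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function name `fit_axis` and the SVD-based formulation are assumptions, standing in for a generic least-squared-error axis fit.

```python
import numpy as np

def fit_axis(points):
    """Least-squares fit of a line (axis) to noisy 3-D samples.

    points: (N, 3) array of position measurements accumulated
            along a presumed constraint axis.
    Returns a point on the axis (the centroid) and a unit
    direction vector minimizing squared orthogonal distances.
    """
    centroid = points.mean(axis=0)
    # SVD of the centered data: the leading right singular vector
    # is the direction of maximum variance, i.e. the least-squares
    # axis direction through the centroid.
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    return centroid, direction
```

The fitted axis could then be compared against the desired constraint axis (e.g. via the angle between the two direction vectors) to judge the state of the assembly task, in the spirit of the interpretation step described above.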
Original language | English (US)
---|---
Pages (from-to) | 2-13
Number of pages | 12
Journal | Proceedings of SPIE - The International Society for Optical Engineering
Volume | 1193
State | Published - Mar 1 1990
All Science Journal Classification (ASJC) codes
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering