Abstract
Dynamic optimization problems (DOPs) are often encountered in target search, emergency rescue, and object tracking. Motivated by the need to perform search and rescue tasks, we formulate a DOP in a complex environment in which a target travels unpredictably and the noise is generally non-Gaussian distributed and time-varying. To solve this problem, we propose a recursive Bayesian estimation with distributed sampling (RBEDS) model. Furthermore, two cooperative communication extensions, i.e., real-time communication and communication after finding the target, are analyzed. To balance exploitation and exploration, an adaptive online co-search (AOCS) method, consisting of an online updating algorithm and a self-adaptive controller, is designed based on RBEDS. Simulation results demonstrate that searchers using AOCS with real-time communication achieve search performance comparable to a global sampling method, e.g., Markov chain Monte Carlo estimation. The local samples keep the searchers flexible and adaptive to changes in the target's motion. With both communication and cooperation, the proposed method exhibits excellent performance when tracking a target. Another attractive result is that only a few searchers and local samples are required. This insensitivity to the sample size allows the proposed method to obtain better solutions at a lower computational cost than existing methods.
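To illustrate the general idea of recursive Bayesian estimation maintained with a small local sample set, as described in the abstract, the sketch below propagates and reweights a particle set for a single searcher. The random-walk motion model, the Gaussian likelihood, and all function names are illustrative assumptions, not the authors' RBEDS formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(samples, motion_std=0.5):
    # Propagate each sample with an assumed random-walk model of the target's motion.
    return samples + rng.normal(0.0, motion_std, size=samples.shape)

def update(samples, weights, observation, obs_std=1.0):
    # Reweight samples by the likelihood of the new observation
    # (a Gaussian likelihood is used here purely for illustration).
    likelihood = np.exp(-0.5 * np.sum((samples - observation) ** 2, axis=1) / obs_std ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

def resample(samples, weights):
    # Resample locally so each searcher keeps only a small sample set.
    idx = rng.choice(len(samples), size=len(samples), p=weights)
    return samples[idx], np.full(len(samples), 1.0 / len(samples))

# Example: one searcher tracking a 2-D target position with 50 local samples.
samples = rng.uniform(-10.0, 10.0, size=(50, 2))
weights = np.full(50, 1.0 / 50)
for observation in [np.array([1.0, 2.0]), np.array([1.5, 2.4])]:
    samples = predict(samples)
    weights = update(samples, weights, observation)
    samples, weights = resample(samples, weights)
print("posterior mean estimate:", samples.mean(axis=0))
```

In a cooperative setting, the real-time communication variant described in the abstract would additionally share observations (or sample statistics) among searchers at each step, while the other variant would share information only after the target is found.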
| Original language | English (US) |
| --- | --- |
| Article number | 7867053 |
| Pages (from-to) | 439-451 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Control Systems Technology |
| Volume | 26 |
| Issue number | 2 |
| DOIs | |
| State | Published - Mar 2018 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Electrical and Electronic Engineering
Keywords
- Dynamic target tracking
- distributed sample
- multiagent
- target search