Abstract
Learning automata (LA) are powerful tools for reinforcement learning, and the discretized pursuit LA is the most popular among them. Each iteration of its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability vector. However, when the number of actions is large, learning becomes extremely slow because too many updates must be made per iteration, most of them in phases 1 and 3. A new fast discretized pursuit LA with assured ϵ-optimality is proposed that performs both phases 1 and 3 with computational complexity independent of the number of actions. Besides its low computational complexity per iteration, it also converges faster than the classical scheme in stationary environments. This work can promote the application of LA to large-scale-action domains that require efficient reinforcement learning tools with assured ϵ-optimality, fast convergence, and low per-iteration computational complexity.
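To make the three phases concrete, the following is a minimal sketch of one iteration of a *classical* discretized pursuit LA (in the style of the DP_RI scheme), not the paper's accelerated variant. All names (`dpri_step`, `env`, `delta`) are illustrative assumptions; the environment is modeled as a callable returning 1 (reward) or 0 (penalty).

```python
import random

def dpri_step(p, rewards, counts, env, delta):
    """One iteration of a classical discretized pursuit LA (DP_RI-style sketch).

    p        -- action probability vector (sums to 1)
    rewards  -- per-action reward tallies for the estimates
    counts   -- per-action selection counts
    env      -- callable: action index -> 1 (reward) or 0 (penalty)
    delta    -- discrete probability step (the resolution parameter)
    """
    n = len(p)
    # Phase 1: select the next action by sampling from p.
    a = random.choices(range(n), weights=p)[0]
    beta = env(a)                      # environment feedback
    counts[a] += 1
    rewards[a] += beta
    # Phase 2: find the action with the highest reward estimate.
    est = [rewards[i] / counts[i] if counts[i] else 0.0 for i in range(n)]
    m = max(range(n), key=lambda i: est[i])
    # Phase 3: on reward, pursue m by moving probability mass toward it
    # in discrete steps of size delta; keep the vector normalized.
    if beta == 1:
        for j in range(n):
            if j != m:
                p[j] = max(p[j] - delta, 0.0)
        p[m] = 1.0 - sum(p[j] for j in range(n) if j != m)
    return a, m
```

Note that phases 1 and 3 each touch all n entries of `p`, which is exactly the per-iteration cost the paper's fast variant removes by making both phases independent of the number of actions.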
Original language | English (US)
---|---
Article number | 6955789
Pages (from-to) | 2089-2099
Number of pages | 11
Journal | IEEE Transactions on Cybernetics
Volume | 45
Issue number | 10
DOIs |
State | Published - Oct 2015
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering
Keywords
- Discretized pursuit learning automata (LA)
- Stationary environments
- Low computational complexity