TY - GEN
T1 - Action evaluation hardware accelerator for next-generation real-time reinforcement learning in emerging IoT systems
AU - Sun, Jianchi
AU - Sharma, Nikhilesh
AU - Chakareski, Jacob
AU - Mastronarde, Nicholas
AU - Lao, Yingjie
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
N2 - Internet of Things (IoT) sensors often operate in unknown dynamic environments comprising latency-sensitive data sources, dynamic processing loads, and communication channels of unknown statistics. Such settings are a natural application domain of reinforcement learning (RL), which enables computing and learning decision policies online, with no a priori knowledge. In our previous work, we introduced a post-decision state (PDS) based RL framework that considerably accelerates the rate of learning an optimal decision policy. The present paper formulates an efficient hardware architecture for the action evaluation step, the most computationally intensive step in the PDS-based learning framework. By leveraging the unique characteristics of PDS learning, we optimize its state value expectation and known cost computation blocks to speed up the overall computation. Our experiments show that the optimized circuit is 49 times faster than its software counterpart and 6 times faster than a Q-learning hardware accelerator.
AB - Internet of Things (IoT) sensors often operate in unknown dynamic environments comprising latency-sensitive data sources, dynamic processing loads, and communication channels of unknown statistics. Such settings are a natural application domain of reinforcement learning (RL), which enables computing and learning decision policies online, with no a priori knowledge. In our previous work, we introduced a post-decision state (PDS) based RL framework that considerably accelerates the rate of learning an optimal decision policy. The present paper formulates an efficient hardware architecture for the action evaluation step, the most computationally intensive step in the PDS-based learning framework. By leveraging the unique characteristics of PDS learning, we optimize its state value expectation and known cost computation blocks to speed up the overall computation. Our experiments show that the optimized circuit is 49 times faster than its software counterpart and 6 times faster than a Q-learning hardware accelerator.
KW - Action Evaluation
KW - Hardware Acceleration
KW - Reinforcement Learning
KW - Wireless Communication
UR - http://www.scopus.com/inward/record.url?scp=85090412773&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090412773&partnerID=8YFLogxK
U2 - 10.1109/ISVLSI49217.2020.00084
DO - 10.1109/ISVLSI49217.2020.00084
M3 - Conference contribution
AN - SCOPUS:85090412773
T3 - Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
SP - 428
EP - 433
BT - Proceedings - 2020 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2020
PB - IEEE Computer Society
T2 - 19th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2020
Y2 - 6 July 2020 through 8 July 2020
ER -
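
The abstract above describes action evaluation in PDS-based RL, where the value of an action decomposes into a known immediate cost plus an expectation of the learned post-decision state value, which is what the two optimized hardware blocks compute. A minimal Python sketch of that decomposition follows; the table names, array shapes, and the greedy selection loop are assumptions for illustration, not the paper's circuit or code.

import numpy as np

def evaluate_action(s, a, known_cost, pds_transition, V):
    """Q(s, a) = known_cost[s, a] + sum_p P(p | s, a) * V[p].

    known_cost     -- known immediate cost table, shape [S, A] (assumed)
    pds_transition -- known probabilities P(pds | s, a), shape [S, A, P] (assumed)
    V              -- learned post-decision state value estimates, shape [P]
    """
    # State value expectation block: E[V(pds)] under the known transition.
    expected_value = pds_transition[s, a] @ V
    # Known cost block: deterministic immediate cost of taking a in s.
    return known_cost[s, a] + expected_value

def greedy_action(s, known_cost, pds_transition, V, num_actions):
    # Evaluate every candidate action and pick the cost-minimizing one;
    # this per-action evaluation is the step the paper accelerates in hardware.
    return min(range(num_actions),
               key=lambda a: evaluate_action(s, a, known_cost, pds_transition, V))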