TY - GEN
T1 - Learning-based Physical Layer Communications for Multiagent Collaboration
AU - Mostaani, Arsham
AU - Simeone, Osvaldo
AU - Chatzinotas, Symeon
AU - Ottersten, Bjorn
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - Consider a collaborative task carried out by two autonomous agents that can communicate over a noisy channel. Each agent is only aware of its own state, while the accomplishment of the task depends on the value of the joint state of both agents. As an example, both agents must simultaneously reach a certain location of the environment, while only being aware of their own positions. Assuming the presence of feedback in the form of a common reward to the agents, a conventional approach would apply separately: (i) an off-the-shelf coding and decoding scheme in order to enhance the reliability of the communication of the state of one agent to the other; and (ii) a standard multiagent reinforcement learning strategy to learn how to act in the resulting environment. In this work, it is argued that the performance of the collaborative task can be improved if the agents learn how to jointly communicate and act. In particular, numerical results for a baseline grid world example demonstrate that the jointly learned policy carries out compression and unequal error protection by leveraging information about the action policy.
UR - http://www.scopus.com/inward/record.url?scp=85075888091&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075888091&partnerID=8YFLogxK
U2 - 10.1109/PIMRC.2019.8904190
DO - 10.1109/PIMRC.2019.8904190
M3 - Conference contribution
AN - SCOPUS:85075888091
T3 - IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC
BT - 2019 IEEE 30th Annual International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 30th IEEE Annual International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC 2019
Y2 - 8 September 2019 through 11 September 2019
ER -