TY - GEN
T1 - Meta-Learning to Communicate
T2 - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
AU - Park, Sangwoo
AU - Simeone, Osvaldo
AU - Kang, Joonhyuk
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
AB - When a channel model is available, learning how to communicate over noisy fading channels can be formulated as the (unsupervised) training of an autoencoder consisting of the cascade of encoder, channel, and decoder. An important limitation of this approach is that training generally needs to be carried out from scratch for each new channel. To cope with this problem, prior works have considered joint training over multiple channels with the aim of finding a single encoder-decoder pair that works well on a class of channels; joint training thus ideally mimics the operation of non-coherent transmission schemes. In this paper, we propose to obviate the limitations of joint training via meta-learning: rather than training a common model for all channels, meta-learning finds a common initialization vector that enables fast training on any channel. The approach is validated via numerical results, demonstrating significant training speed-ups, with effective encoders and decoders obtained with as few as one iteration of Stochastic Gradient Descent.
KW - Machine learning
KW - autoencoder
KW - fading channels
UR - http://www.scopus.com/inward/record.url?scp=85089238519&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089238519&partnerID=8YFLogxK
U2 - 10.1109/ICASSP40776.2020.9053252
DO - 10.1109/ICASSP40776.2020.9053252
M3 - Conference contribution
AN - SCOPUS:85089238519
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 5075
EP - 5079
BT - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 May 2020 through 8 May 2020
ER -