TY - GEN
T1 - Specialized embedding approximation for edge intelligence
T2 - 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
AU - Srivastava, Sangeeta
AU - Roy, Dhrubojyoti
AU - Cartwright, Mark
AU - Bello, Juan P.
AU - Arora, Anish
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Embedding models that encode semantic information into low-dimensional vector representations are useful in various machine learning tasks with limited training data. However, these models are typically too large to support inference in small edge devices, which motivates training of smaller yet comparably predictive student embedding models through knowledge distillation (KD). While knowledge distillation traditionally uses the teacher's original training dataset to train the student, we hypothesize that using a dataset similar to the student's target domain allows for better compression and training efficiency for the said domain, at the cost of reduced generality across other (non-pertinent) domains. Hence, we introduce Specialized Embedding Approximation (SEA) to train a student featurizer to approximate the teacher's embedding manifold for a given target domain. We demonstrate the feasibility of SEA in the context of acoustic event classification for urban noise monitoring and show that leveraging a dataset related to this target domain not only improves the baseline performance of the original embedding model but also yields competitive students with >1 order of magnitude lesser storage and activation memory. We further investigate the impact of using random and informed sampling techniques for dimensionality reduction in SEA.
AB - Embedding models that encode semantic information into low-dimensional vector representations are useful in various machine learning tasks with limited training data. However, these models are typically too large to support inference in small edge devices, which motivates training of smaller yet comparably predictive student embedding models through knowledge distillation (KD). While knowledge distillation traditionally uses the teacher's original training dataset to train the student, we hypothesize that using a dataset similar to the student's target domain allows for better compression and training efficiency for the said domain, at the cost of reduced generality across other (non-pertinent) domains. Hence, we introduce Specialized Embedding Approximation (SEA) to train a student featurizer to approximate the teacher's embedding manifold for a given target domain. We demonstrate the feasibility of SEA in the context of acoustic event classification for urban noise monitoring and show that leveraging a dataset related to this target domain not only improves the baseline performance of the original embedding model but also yields competitive students with >1 order of magnitude lesser storage and activation memory. We further investigate the impact of using random and informed sampling techniques for dimensionality reduction in SEA.
KW - Acoustic event detection
KW - Deep audio embeddings
KW - Knowledge distillation
KW - On-device machine learning
KW - Urban noise classification
UR - https://www.scopus.com/pages/publications/85115084373
UR - https://www.scopus.com/inward/citedby.url?scp=85115084373&partnerID=8YFLogxK
U2 - 10.1109/ICASSP39728.2021.9414287
DO - 10.1109/ICASSP39728.2021.9414287
M3 - Conference contribution
AN - SCOPUS:85115084373
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 8378
EP - 8382
BT - 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 6 June 2021 through 11 June 2021
ER -