TY - GEN
T1 - Poster
T2 - 2023 International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, MobiHoc 2023
AU - Zhang, Tianfang
AU - Ye, Zhengkun
AU - Mahdad, Ahmed Tanvir
AU - Akanda, Md Mojibur Rahman Redoy
AU - Shi, Cong
AU - Saxena, Nitesh
AU - Wang, Yan
AU - Chen, Yingying
N1 - Publisher Copyright:
© 2023 Owner/Author(s).
PY - 2023/10/23
Y1 - 2023/10/23
N2 - Despite the rapid growth of augmented reality and virtual reality (AR/VR) across various applications, the understanding of information leakage through sensor-rich headsets remains in its infancy. In this poster, we investigate an unobtrusive privacy attack that exposes users' vital signs and embedded sensitive information (e.g., gender, identity, body fat ratio) by exploiting unrestricted AR/VR motion sensors. The key insight is that the headset is mounted closely on the user's face, allowing the motion sensors to detect facial vibrations produced by the user's breathing and heartbeats. Specifically, we employ deep-learning techniques to reconstruct vital signs with signal quality comparable to that of dedicated medical instruments, and to derive users' gender, identity, and body fat information. Experiments on three types of commodity AR/VR headsets show that our attack can reconstruct high-quality vital signs, detect gender with over 93.33% accuracy, re-identify users with over 97.83% accuracy, and derive body fat ratio with less than 4.43% error.
AB - Despite the rapid growth of augmented reality and virtual reality (AR/VR) across various applications, the understanding of information leakage through sensor-rich headsets remains in its infancy. In this poster, we investigate an unobtrusive privacy attack that exposes users' vital signs and embedded sensitive information (e.g., gender, identity, body fat ratio) by exploiting unrestricted AR/VR motion sensors. The key insight is that the headset is mounted closely on the user's face, allowing the motion sensors to detect facial vibrations produced by the user's breathing and heartbeats. Specifically, we employ deep-learning techniques to reconstruct vital signs with signal quality comparable to that of dedicated medical instruments, and to derive users' gender, identity, and body fat information. Experiments on three types of commodity AR/VR headsets show that our attack can reconstruct high-quality vital signs, detect gender with over 93.33% accuracy, re-identify users with over 97.83% accuracy, and derive body fat ratio with less than 4.43% error.
KW - AR/VR headsets
KW - motion sensors
KW - sensitive info
KW - vital sign
UR - http://www.scopus.com/inward/record.url?scp=85176129251&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85176129251&partnerID=8YFLogxK
U2 - 10.1145/3565287.3623624
DO - 10.1145/3565287.3623624
M3 - Conference contribution
AN - SCOPUS:85176129251
T3 - Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)
SP - 308
EP - 309
BT - MobiHoc 2023 - Proceedings of the 2023 International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
PB - Association for Computing Machinery
Y2 - 23 October 2023 through 26 October 2023
ER -