We investigate UAV-IoT data capture and networking for virtual reality (VR) immersion in remote scenes. We characterize the delivered immersion fidelity as a function of the assigned UAV-IoT capture/network rates and study the problem of maximizing it under given system/application constraints. We explore fast reinforcement learning to discover the best dynamic UAV-IoT network placement over the scene of interest, maximizing the expected remote immersion fidelity. We design scalable source-channel viewpoint coding to maximize the expected reconstruction fidelity, at the ground-based aggregation point, of the data captured at every UAV location. Finally, we explore layered directional networking and rate-distortion-power optimized embedded scheduling to transmit the encoded data effectively and overcome network transients that lead to packet buffering; together, these constitute the fourth component of our framework. Experimental results demonstrate considerable efficiency gains for each system component over the respective state-of-the-art reference methods, in delivered VR immersion fidelity, application interactivity/play-out latency, and transmission power consumption.
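The reinforcement-learning component for dynamic UAV placement can be illustrated with a minimal tabular Q-learning sketch. Everything below is a hypothetical stand-in, not the paper's actual formulation: the scene is discretized into a grid of candidate UAV positions, and the per-cell reward `fidelity()` is an illustrative proxy for expected immersion fidelity that peaks at the scene center.

```python
import random

# Illustrative sketch only: tabular Q-learning for dynamic UAV placement
# over a discretized scene grid. The reward model is a hypothetical
# stand-in for the paper's expected-immersion-fidelity objective.

GRID = 5                                          # 5x5 grid of candidate positions
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # move or hover
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1                 # learning rate, discount, exploration

def fidelity(cell):
    """Stand-in immersion-fidelity reward: 1.0 at the grid center, 0.0 at corners."""
    x, y = cell
    c = (GRID - 1) / 2
    return 1.0 - (abs(x - c) + abs(y - c)) / (2 * c)

def step(cell, action):
    """Apply a move, clamped to the grid boundary."""
    x = min(max(cell[0] + action[0], 0), GRID - 1)
    y = min(max(cell[1] + action[1], 0), GRID - 1)
    return (x, y)

Q = {((x, y), a): 0.0
     for x in range(GRID) for y in range(GRID) for a in ACTIONS}

def train(episodes=500, horizon=20, rng=random.Random(0)):
    for _ in range(episodes):
        s = (rng.randrange(GRID), rng.randrange(GRID))
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[(s, a)])
            s2 = step(s, a)
            r = fidelity(s2)
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

train()
# After training, the greedy policy from a corner should steer the UAV
# toward the high-fidelity region (the grid center under this reward).
greedy = max(ACTIONS, key=lambda a: Q[((0, 0), a)])
```

In a real deployment the state would also encode IoT sensor coverage and channel conditions, and the reward would come from the delivered-fidelity characterization rather than a closed-form surrogate; the sketch only shows the learning-loop structure.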