CCSS: Collaborative Research: Ubiquitous Sensing for VR/AR Immersive Communication: A Machine Learning Perspective

Project: Research project

Project Details

Description

Virtual and augmented reality systems employ multi-view camera sensors that capture a scene from multiple perspectives. The captured data is then used to construct an immersive representation of the scene on the user's head-mounted display. Such systems are poised to enable and enhance numerous important applications, e.g., inspection of large-scale infrastructure, archival of historical sites, search and rescue, disaster response, military reconnaissance, natural resource management, and immersive telepresence. However, because the field is still emerging, virtual/augmented reality immersive communication is presently limited to gaming or entertainment demonstrations featuring offline-captured or computer-generated content, studio-type settings, and high-end workstations that sustain its high data and computing workload. Moreover, there is little understanding of the fundamental trade-offs between the required signal acquisition density and sensor locations across space and time, the dynamics of the captured scene (motion, geometry, and textures), the available network and system resources, and the delivered immersion quality. These limitations render existing solutions impractical for deployment on bandwidth- and energy-constrained remote sensors. The project addresses these challenges via rigorous analysis and concerted algorithmic and application advances at the intersection of multi-view space-time sensing and signal representation, delay-sensitive communication, and machine learning. Education and outreach activities will immerse students in the exciting areas of visual sensing, wireless communications, and machine learning, and will engage underrepresented students from K-12 through the undergraduate level.

The objective of this project is to efficiently capture a remote environment using multiple camera sensors with the highest possible reconstruction quality under limited sampling and communication resources. This is achieved through four interrelated research tasks: (i) analysis of optimal space-time sampling policies that determine the sensors' locations and sampling rates to minimize the remote scene's reconstruction error; (ii) design of optimal signal representation methods that embed the sampled data jointly across space and time according to the allocated sampling rates; (iii) design of online learning sampling policies based on spectral graph theory that take sampling actions while exploring new sensor locations in the absence of a priori scene viewpoint signal knowledge; and (iv) design of computationally efficient, self-organizing reinforcement learning methods that allow the wireless sensors to compute optimal transmission scheduling policies that meet the low-latency requirements of the overlying virtual/augmented reality application while conserving their available energy. Integration, experimentation, and prototyping activities will be conducted to assess and validate the research advances in real-world settings. These technical advances will enable diverse applications of transformative impact.
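To make tasks (iii) and (iv) concrete, here is a minimal sketch of the kind of spectral-graph sampling idea behind task (iii): greedily choosing vertices (candidate sensor locations) so that a bandlimited graph signal can be stably reconstructed from the selected samples. The function names, the greedy E-optimal selection criterion, and the NumPy-based setup are illustrative assumptions, not the project's actual algorithms.

```python
import numpy as np

def greedy_sampling_set(laplacian, num_samples, bandwidth):
    """Greedily pick vertices so that sampling a bandlimited graph signal
    on them stays well conditioned (an E-optimal design heuristic)."""
    # Graph Fourier basis: eigenvectors of the symmetric graph Laplacian.
    _, eigvecs = np.linalg.eigh(laplacian)
    basis = eigvecs[:, :bandwidth]  # assume the signal lies in this span
    selected = []
    for _ in range(num_samples):
        best_vertex, best_score = None, -np.inf
        for v in range(basis.shape[0]):
            if v in selected:
                continue
            rows = basis[selected + [v], :]
            # The smallest singular value of the sampled rows measures how
            # stably the signal can be recovered from these vertices.
            score = np.linalg.svd(rows, compute_uv=False)[-1]
            if score > best_score:
                best_vertex, best_score = v, score
        selected.append(best_vertex)
    return selected

def reconstruct(laplacian, samples, selected, bandwidth):
    """Least-squares recovery of the full graph signal from its samples."""
    _, eigvecs = np.linalg.eigh(laplacian)
    basis = eigvecs[:, :bandwidth]
    coeffs, *_ = np.linalg.lstsq(basis[selected, :], samples, rcond=None)
    return basis @ coeffs
```

Similarly, a toy sketch of the reinforcement learning scheduling in task (iv): tabular Q-learning for an energy-constrained sensor that learns each slot whether to transmit a queued frame or stay idle, trading queueing latency against energy expenditure. The state space, arrival process, reward weights, and recharge rule below are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

MAX_QUEUE, MAX_ENERGY = 6, 6          # hypothetical toy dimensions
ACTIONS = (0, 1)                      # 0 = stay idle, 1 = transmit a frame
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def step(queue, energy, action):
    """Toy sensor dynamics: transmitting drains one energy unit and clears
    one frame; new frames arrive as a Bernoulli(0.4) process each slot."""
    if action == 1 and energy > 0 and queue > 0:
        queue -= 1
        energy -= 1
    queue = min(MAX_QUEUE - 1, queue + int(rng.random() < 0.4))
    # Negative reward for latency (queued frames) and for spending energy.
    reward = -float(queue) - 0.5 * action
    return queue, energy, reward

q_table = np.zeros((MAX_QUEUE, MAX_ENERGY, len(ACTIONS)))
queue, energy = 0, MAX_ENERGY - 1
for _ in range(50_000):
    # Epsilon-greedy exploration over the two scheduling actions.
    if rng.random() < EPSILON:
        action = int(rng.integers(len(ACTIONS)))
    else:
        action = int(np.argmax(q_table[queue, energy]))
    next_queue, next_energy, reward = step(queue, energy, action)
    # Standard one-step Q-learning temporal-difference update.
    target = reward + GAMMA * q_table[next_queue, next_energy].max()
    q_table[queue, energy, action] += ALPHA * (target - q_table[queue, energy, action])
    queue, energy = next_queue, next_energy
    if energy == 0:                   # recharge rule in the toy model
        energy = MAX_ENERGY - 1
```

After training, `np.argmax(q_table[q, e])` gives the learned transmit/idle decision for each queue/energy state; the project's actual methods would replace this toy model with the sensors' real traffic, channel, and energy dynamics.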

Status: Finished
Effective start/end date: 7/1/17 – 6/30/20

Funding

  • National Science Foundation: $220,000.00
