TY - GEN
T1 - AirDrop
T2 - 24th International Workshop on Mobile Computing Systems and Applications, HotMobile 2023
AU - Jayarajah, Kasthuri
AU - Gart, Sean
AU - Gangopadhyay, Aryya
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/2/22
Y1 - 2023/2/22
N2 - Driven by advances in deep neural network models that fuse multi-modal input such as RGB and depth representations to accurately understand the semantics of their environments (e.g., objects of different classes, obstacles, etc.), ground robots have made dramatic improvements in navigating unknown environments. Relying on their singular, limited perspective, however, can lead to suboptimal paths that are wasteful and quickly drain their batteries, especially in the case of long-horizon navigation. We consider a special class of ground robots that are air-deployed, and pose the central question: can we leverage aerial perspectives of differing resolutions and fields of view from air-to-ground robots to achieve superior terrain-aware navigation? We posit that a key enabler of this direction of research is collaboration between such robots to collectively update their route plans, leveraging advances in long-range communication and on-board computing. Whilst each robot can capture a sequence of high-resolution images during its descent, intelligent, lightweight pre-processing on board can dramatically reduce the amount of data that needs to be shared with its peers over severely bandwidth-limited long-range communication channels (e.g., over sub-gigahertz frequencies). In this paper, we discuss use cases and key technical challenges that must be resolved to realize our vision of collaborative, multi-resolution terrain-awareness for air-to-ground robots.
AB - Driven by advances in deep neural network models that fuse multi-modal input such as RGB and depth representations to accurately understand the semantics of their environments (e.g., objects of different classes, obstacles, etc.), ground robots have made dramatic improvements in navigating unknown environments. Relying on their singular, limited perspective, however, can lead to suboptimal paths that are wasteful and quickly drain their batteries, especially in the case of long-horizon navigation. We consider a special class of ground robots that are air-deployed, and pose the central question: can we leverage aerial perspectives of differing resolutions and fields of view from air-to-ground robots to achieve superior terrain-aware navigation? We posit that a key enabler of this direction of research is collaboration between such robots to collectively update their route plans, leveraging advances in long-range communication and on-board computing. Whilst each robot can capture a sequence of high-resolution images during its descent, intelligent, lightweight pre-processing on board can dramatically reduce the amount of data that needs to be shared with its peers over severely bandwidth-limited long-range communication channels (e.g., over sub-gigahertz frequencies). In this paper, we discuss use cases and key technical challenges that must be resolved to realize our vision of collaborative, multi-resolution terrain-awareness for air-to-ground robots.
KW - autonomous systems
KW - edge intelligence
UR - http://www.scopus.com/inward/record.url?scp=85149265012&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149265012&partnerID=8YFLogxK
U2 - 10.1145/3572864.3580335
DO - 10.1145/3572864.3580335
M3 - Conference contribution
AN - SCOPUS:85149265012
T3 - HotMobile 2023 - Proceedings of the 24th International Workshop on Mobile Computing Systems and Applications
SP - 55
EP - 60
BT - HotMobile 2023 - Proceedings of the 24th International Workshop on Mobile Computing Systems and Applications
PB - Association for Computing Machinery, Inc
Y2 - 22 February 2023 through 23 February 2023
ER -