TY - GEN
T1 - Cooperative surveillance in video sensor networks
AU - Fridman, Alex
AU - Primerano, Richard
AU - Weber, Steven
AU - Kam, Moshe
PY - 2008
Y1 - 2008
N2 - In the energy-constrained medium of video sensor networks, the objective of much research has been to statistically minimize the number of nodes that will achieve a sufficient degree of coverage. We consider increasing the number of nodes beyond the threshold of full coverage, and cooperatively filtering out the high level of redundant data in the video streams to minimize per-node capacity requirements. The scenario we study is that of a swarm of robots, all with wireless communication capabilities. Some of the robots are equipped with video cameras and are thus considered sensors. A few select robots have sufficient battery and computational power to perform machine vision processing of the video stream. The goal in this scenario is to deliver video from the sensors to the video-processing robots, which can then extract high-level surveillance information about the observed environment. We present an optimization framework for minimizing redundant visual data transmissions while maximizing the throughput from sensors to processing nodes. We also characterize through simulation the performance gain in the sensor network as the video coverage increases.
AB - In the energy-constrained medium of video sensor networks, the objective of much research has been to statistically minimize the number of nodes that will achieve a sufficient degree of coverage. We consider increasing the number of nodes beyond the threshold of full coverage, and cooperatively filtering out the high level of redundant data in the video streams to minimize per-node capacity requirements. The scenario we study is that of a swarm of robots, all with wireless communication capabilities. Some of the robots are equipped with video cameras and are thus considered sensors. A few select robots have sufficient battery and computational power to perform machine vision processing of the video stream. The goal in this scenario is to deliver video from the sensors to the video-processing robots, which can then extract high-level surveillance information about the observed environment. We present an optimization framework for minimizing redundant visual data transmissions while maximizing the throughput from sensors to processing nodes. We also characterize through simulation the performance gain in the sensor network as the video coverage increases.
UR - http://www.scopus.com/inward/record.url?scp=57349168524&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=57349168524&partnerID=8YFLogxK
U2 - 10.1109/ICDSC.2008.4635686
DO - 10.1109/ICDSC.2008.4635686
M3 - Conference contribution
AN - SCOPUS:57349168524
SN - 9781424426652
T3 - 2008 2nd ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2008
BT - 2008 2nd ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2008
T2 - 2008 2nd ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2008
Y2 - 7 September 2008 through 11 September 2008
ER -