Project Details
Description
The penetration of technologies such as wireless broadband and artificial intelligence (AI) is propelling the rapid adoption of network cameras across the household, industrial, and commercial sectors. These cameras, such as surveillance cameras, dash cameras, and wearable cameras, capture voluminous amounts of visual data that can be turned into valuable information for public safety, autonomous driving, service robots, augmented/mixed reality, assisted living, and other applications. To realize this potential, new methods are needed for efficiently and effectively extracting, transferring, and sharing useful information from ubiquitous cameras while preserving user privacy. This project uses techniques and perspectives from wireless networking, computer vision, and edge computing to analyze and solve problems in ubiquitous camera systems; it fosters interdisciplinary research, provides a unique training program for undergraduate and graduate students, and has high potential to introduce transformative technologies that enable new real-life products and services.
This project aims to realize ubiquitous machine vision (UbiVision) and enable efficient utilization of networked cameras for information extraction and sharing. Toward this end, three fundamental research problems are investigated: 1) how to dynamically manage highly coupled resources and functions across multiple technology domains, namely camera functions, network resources, and computation resources on edge servers; 2) how to design adaptive and efficient machine vision algorithms for resource-constrained smart cameras; and 3) how to engineer reliable machine learning frameworks for robust vision analysis on edge servers. First, a new model-free, end-to-end resource orchestration method is designed to improve the efficiency of wireless networking and computing by combining the merits of conventional optimization and emerging machine learning techniques. Second, a novel universal convolutional neural network (CNN) and corresponding CNN optimization methods are developed for efficient multi-task feature learning on smart cameras. Third, a teacher-student network learning paradigm is introduced to develop memory- and computation-efficient machine vision algorithms that achieve robust performance under adverse conditions caused by varying network quality and limited server computation budgets.
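To make the second and third thrusts concrete, the following is a minimal sketch (assuming a PyTorch setup, which the abstract does not specify) of a compact student CNN with a shared backbone and per-task heads, trained to mimic a larger teacher's soft predictions while also fitting hard labels. All module names, layer sizes, the temperature, and the loss weighting are illustrative assumptions, not details of the project's actual method.

```python
# Illustrative teacher-student (knowledge distillation) sketch for a
# multi-task student CNN. Not the project's implementation; names and
# hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskStudent(nn.Module):
    """Lightweight student: one shared feature extractor, one head per task."""

    def __init__(self, num_classes_per_task=(10, 5)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(nn.Linear(32, n) for n in num_classes_per_task)

    def forward(self, x):
        feat = self.backbone(x)          # shared multi-task features
        return [head(feat) for head in self.heads]


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL (teacher knowledge) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


if __name__ == "__main__":
    student = MultiTaskStudent()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    # Stand-ins for real camera frames, a pretrained teacher's per-task
    # outputs, and ground-truth labels.
    images = torch.randn(8, 3, 64, 64)
    teacher_outputs = [torch.randn(8, 10), torch.randn(8, 5)]
    labels = [torch.randint(0, 10, (8,)), torch.randint(0, 5, (8,))]

    student_outputs = student(images)
    loss = sum(
        distillation_loss(s, t, y)
        for s, t, y in zip(student_outputs, teacher_outputs, labels)
    )
    loss.backward()
    optimizer.step()
```

The shared backbone reflects the "universal CNN" idea of reusing one feature extractor across tasks on a resource-constrained camera, while the distillation loss reflects how a small student can approach a large teacher's robustness at a fraction of the memory and compute.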
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
| Status | Finished |
| --- | --- |
| Effective start/end date | 7/1/21 → 9/30/23 |
Funding
- National Science Foundation: $265,726.00