Abstract
Effective communication using current video conferencing systems is severely hindered by the lack of eye contact caused by the disparity between the locations of the subject and the camera. While this problem has been partially solved for expensive high-end video conferencing systems, it has not been convincingly solved for consumer-level setups. We present a gaze correction approach based on a single Kinect sensor that preserves the integrity and expressiveness of the face as well as the fidelity of the scene as a whole, producing nearly artifact-free imagery. Our method is suitable for mainstream home video conferencing: it uses inexpensive consumer hardware, achieves real-time performance, and requires only a simple and short setup. Our approach is based on the observation that, for our application, it is sufficient to synthesize only the corrected face. We therefore render a gaze-corrected 3D model of the scene and, with the aid of a face tracker, seamlessly transfer the gaze-corrected facial portion onto the original image.
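
The abstract outlines the pipeline at a high level: synthesize a gaze-corrected rendering of the captured 3D geometry, locate the face, and transfer only the corrected facial region back into the original frame. The Python sketch below illustrates that flow only and is not the authors' implementation: `render_rotated_view` and `track_face` are hypothetical placeholders for the paper's novel-view rendering and face-tracking components, and OpenCV's `seamlessClone` merely stands in for the seamless compositing step described in the abstract.

```python
# Illustrative sketch of the gaze-correction flow described in the abstract.
# render_rotated_view and track_face are hypothetical placeholders, not the
# authors' API; cv2.seamlessClone is a real OpenCV call used here only to
# represent the seamless transfer of the corrected face.
import numpy as np
import cv2


def render_rotated_view(color, depth, rotation):
    """Placeholder: re-render the color+depth frame as a 3D model rotated so
    the subject appears to look into the camera (the paper's rendering step)."""
    raise NotImplementedError


def track_face(color):
    """Placeholder: return an 8-bit binary mask covering the tracked facial
    region of the input frame (the paper's face-tracking step)."""
    raise NotImplementedError


def correct_gaze(color, depth, rotation):
    # 1. Synthesize a gaze-corrected novel view of the scene geometry.
    corrected = render_rotated_view(color, depth, rotation)
    # 2. Locate the facial region in the original frame.
    mask = track_face(color)
    # 3. Composite only the corrected facial portion back onto the original
    #    image, blending seamlessly at the mask boundary.
    ys, xs = np.nonzero(mask)
    center = (int(xs.mean()), int(ys.mean()))
    return cv2.seamlessClone(corrected, color, mask, center, cv2.NORMAL_CLONE)
```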
| Original language | English (US) |
| --- | --- |
| Article number | 174 |
| Journal | ACM Transactions on Graphics |
| Volume | 31 |
| Issue number | 6 |
| DOIs | |
| State | Published - Nov 2012 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design
Keywords
- Depth camera
- Gaze correction
- Video conferencing