Abstract
We present a new technique for passive, markerless facial performance capture based on anchor frames. Our method starts with high-resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction, and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify anchor frames as those containing facial expressions similar to a manually chosen reference expression. Anchor frames are computed automatically over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and from there to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation yield low computation times. Our technique can even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances.
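To make the anchoring idea concrete, the sketch below scores each frame's image-space similarity to the chosen reference frame and keeps frames above a threshold as anchors. This is a minimal illustrative sketch, not the paper's implementation: the function names (`find_anchor_frames`, `ncc`), the use of normalized cross-correlation as the similarity measure, and the `threshold` parameter are all assumptions made for the example.

```python
# Illustrative anchor-frame selection (assumptions noted above, not the
# authors' method). Frames are grayscale images given as NumPy arrays of
# identical shape.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def find_anchor_frames(frames, reference, threshold=0.9):
    """Return indices of frames whose appearance resembles the reference."""
    return [i for i, f in enumerate(frames) if ncc(f, reference) >= threshold]
```

Because each frame is scored against the reference independently, this selection step (like the subsequent propagation from anchor frames described in the abstract) can be run in parallel over the whole sequence.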
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1-10 |
| Number of pages | 10 |
| Journal | ACM Transactions on Graphics |
| Volume | 30 |
| Issue number | 4 |
| DOIs | |
| State | Published - Jul 1 2011 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design
Keywords
- Facial performance capture
- motion capture
- space-time geometry reconstruction