An output-sensitive visibility algorithm is one whose runtime is proportional to the number of visible graphic primitives in a scene model, not to the total number of primitives, which can be much greater. The known practical output-sensitive visibility algorithms are suitable only for static scenes, because they rely on a heavy preprocessing stage that constructs a spatial data structure based on the positions of the model's objects; any change to the scene geometry may require significant modifications to this data structure. We show how these algorithms can be adapted to dynamic scenes. Two main ideas are used. First, the spatial data structure is updated to reflect the dynamic objects' current positions, and the update is kept efficient by restricting it to a small part of the data structure. Second, temporal bounding volumes (TBVs) are used to avoid considering every dynamic object in each frame. The combination of these techniques yields efficient, output-sensitive visibility algorithms for scenes with multiple dynamic objects. The performance of our methods is shown to be significantly better than that of previous output-sensitive algorithms intended for static scenes. TBVs can be adapted to applications where no prior knowledge of the objects' trajectories is available, such as virtual reality (VR) and simulations. Furthermore, they save updates of the scene model itself, not just of the auxiliary data structure used by the visibility algorithm. They can therefore greatly reduce the communication overhead in client-server VR systems, as well as in general distributed virtual environments.
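To illustrate the TBV idea, here is a minimal sketch, not the paper's implementation: if an object's speed is bounded, a sphere inflated by that bound times the validity interval is guaranteed to contain the object until the interval expires, so the object can be ignored while its TBV remains invisible. All names (`Sphere`, `temporal_bounding_volume`, `tbv_valid`) and the choice of spherical bounds are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float

def temporal_bounding_volume(obj_bounds: Sphere, max_speed: float,
                             t_now: float, t_expire: float) -> Sphere:
    """Sphere guaranteed to contain the object until t_expire,
    assuming its speed never exceeds max_speed (an assumption of
    this sketch, not a requirement stated in the abstract)."""
    dt = t_expire - t_now
    return Sphere(obj_bounds.cx, obj_bounds.cy, obj_bounds.cz,
                  obj_bounds.r + max_speed * dt)

def tbv_valid(tbv_expire: float, t: float) -> bool:
    # While the TBV is both invisible and unexpired, the object's
    # true position need not be fetched or re-inserted into the
    # spatial data structure.
    return t < tbv_expire
```

For example, an object bounded by a unit sphere at the origin, moving at most 2 units/s, is contained for the next 3 seconds by a sphere of radius 1 + 2 × 3 = 7 around the same center. In a client-server setting, the server need not transmit the object's position to a client until the client can actually see the TBV or the interval expires.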
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design
Keywords
- Client-server design
- Distributed multi-user virtual environments
- Output-sensitive hidden surface removal
- Virtual reality
- Visibility culling