Virtual view synthesis is a key component of multi-view imaging systems that enable visually immersive environments for emerging applications such as virtual reality and 360-degree video. Using a small collection of captured reference viewpoints, this technique reconstructs any view of a remote scene navigated by a user, enhancing the perceived sense of immersion. We characterize the convexity of the virtual view reconstruction error caused by compression of the captured multi-view content, expressed as a function of the virtual viewpoint coordinate relative to the captured reference viewpoints. We derive fundamental insights about the nature of this dependency and formulate a prediction framework that accurately predicts its specific shape, convex or concave, for given reference views, multi-view content, and compression settings. We integrate our analysis into a proof-of-concept coding framework and demonstrate considerable gains over a baseline approach.