By transmitting texture and depth videos from two adjacent captured viewpoints, a client can synthesize via depth-image-based rendering (DIBR) any intermediate virtual view of the scene, determined by the dynamic movement of the client's head. In so doing, motion parallax creates depth perception of the 3D scene. Due to the stringent playback deadline of interactive free viewpoint video, burst packet losses in the texture and depth video streams caused by transmission over unreliable channels are difficult to overcome and can severely degrade the synthesized view quality at the client. We propose a multiple description coding (MDC) scheme for free viewpoint video in texture-plus-depth format, transmitted over two disjoint network paths. Specifically, we encode the even frames of the left view and the odd frames of the right view as one description and transmit it on path one; similarly, we encode the odd frames of the left view and the even frames of the right view as the second description and transmit it on path two. Appropriate quantization parameters (QP) are selected for each description, so that its data rate optimally matches the available transmission bandwidth on its path. If the receiver receives one description but not the other due to burst loss on one of the paths, it can still partially reconstruct the missing frames of the loss-corrupted description using a computationally efficient DIBR-based recovery scheme that we design. Extensive experimental results show that our MDC streaming system can outperform the traditional single-path, single-description transmission scheme by up to 7 dB in Peak Signal-to-Noise Ratio (PSNR) of the synthesized intermediate view at the receiving client.
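The even/odd interleaving of the two views into two descriptions can be sketched as follows. This is a minimal illustration of the frame-assignment rule only, not the paper's encoder: the function name, the tuple representation of frames, and the list-based packaging are all assumptions for exposition, and the QP selection and DIBR-based recovery are omitted.

```python
def split_descriptions(left, right):
    """Interleave two views into two MDC descriptions (illustrative sketch).

    Description 1: even-indexed frames of the left view plus
    odd-indexed frames of the right view (sent on path one).
    Description 2: odd-indexed frames of the left view plus
    even-indexed frames of the right view (sent on path two).
    Frames are tagged as (view, time_index, frame) tuples so the
    receiver knows where each decoded frame belongs.
    """
    desc1, desc2 = [], []
    for t, f in enumerate(left):
        (desc1 if t % 2 == 0 else desc2).append(("L", t, f))
    for t, f in enumerate(right):
        (desc2 if t % 2 == 0 else desc1).append(("R", t, f))
    return desc1, desc2
```

With this split, losing one path still leaves the receiver with every time instant covered in one of the two views, which is what makes the DIBR-based recovery of the missing frames possible.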