Today, HPC clusters commonly rely on resource management systems such as PBS and TORQUE to share physical resources. These systems share resources by assigning nodes to users exclusively, in non-overlapping time slots. With virtualization technology, multiple users can instead run their applications on the same node with low mutual interference. However, the overhead introduced by a virtual machine monitor, or hypervisor, is often too high to be acceptable, because efficiency is critical to many HPC applications. OS-level virtualization (e.g., Linux Containers) offers a lightweight virtualization layer that promises near-native performance and has been adopted by big-data resource-sharing platforms such as Mesos. Nevertheless, the overhead and isolation that OS-level virtualization provides for block devices have not been thoroughly evaluated, especially when it is applied to a shared distributed/parallel file system (D/PFS) such as HDFS or Lustre. In this paper, we thoroughly evaluate the overhead and isolation involved in sharing block I/O via OS-level virtualization, on both local disks and D/PFSs. In addition, to assign D/PFS storage resources to users, we propose and implement a middleware system that bridges the configuration gap between virtual clusters and remote D/PFSs.