Non-volatile analog memory devices such as phase-change memory (PCM) enable the design of dedicated connectivity matrices for hardware implementations of deep neural networks (DNNs). In this in-memory computing approach, the analog conductance states of the memory devices can be updated gradually to train DNNs on-chip, or software-trained connection strengths can be programmed once onto the devices to create efficient inference engines. Reliable yet computationally simple models that capture the non-ideal programming and temporal evolution of the devices are needed to evaluate the training and inference performance of deep learning hardware based on in-memory computing. In this paper, we present statistically accurate models for PCM, based on the characterization of more than 10,000 devices, that capture the state-dependent nature and variability of the conductance update, conductance drift, and read noise. Integrating these computationally simple device models with deep learning frameworks such as TensorFlow enables us to realistically evaluate the training and inference performance of PCM-array-based hardware implementations of DNNs.
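The three non-idealities named above (state-dependent programming variability, power-law conductance drift, and read noise) can be sketched as a minimal device model in NumPy. All parameter values below are illustrative assumptions for the sketch, not the fitted statistics reported in this paper:

```python
import numpy as np

def program(g_target, rng):
    # State-dependent programming noise: the achieved conductance spreads
    # around the target, with a spread that grows with the target state.
    # (Illustrative sigma; the paper fits this from device characterization.)
    sigma = 0.05 + 0.1 * g_target
    return np.clip(rng.normal(g_target, sigma), 0.0, None)

def drift(g0, t, t0=1.0, nu=0.05):
    # Power-law conductance drift: G(t) = G(t0) * (t / t0)^(-nu),
    # with a hypothetical drift exponent nu.
    return g0 * (t / t0) ** (-nu)

def read(g, rng, sigma_read=0.02):
    # Read noise modeled as additive Gaussian noise scaled by conductance.
    return g + rng.normal(0.0, sigma_read * np.maximum(g, 1e-9))

rng = np.random.default_rng(0)
g = program(np.full(1000, 1.0), rng)   # program 1000 devices to 1.0 (a.u.)
g_day = drift(g, t=86400.0)            # drifted conductance after one day (s)
g_obs = read(g_day, rng)               # noisy read-out
```

Because each step is an element-wise NumPy operation over a device array, a model of this shape can be applied to entire weight matrices inside a deep learning framework to emulate PCM-based training or inference.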