A big-data application like a Terminal Interaction Pattern Analysis System (TIPAS) commonly involves a massive number of entities interacting with one another dynamically. Such interactions can be represented by a Dynamically Weighted Directed Network (DWDN). The large number of involved entities results in a high-dimensional and incomplete (HDI) network, which can be represented by an HDI tensor with numerous missing entries. Despite its HDI nature, this tensor contains much useful knowledge regarding desired patterns such as unobserved links in a DWDN. However, owing to its extremely high dimensionality and low data density, building a learning model that precisely represents an HDI tensor is highly challenging. To address this issue, this work proposes a Neural Latent Factorization of Tensors (NeuLFoT) model with three key ideas: a) adopting the principle of density-oriented modeling and Canonical Polyadic tensor factorization to build a series of rank-one tensors from three-dimensional latent factors, thereby precisely representing an HDI tensor's known data; b) treating the obtained rank-one tensors as neurons to form a novel neural tensor network model; and c) proposing a novel Backward Propagation algorithm for Latent factorization of tensors (BPL) to ensure high training efficiency. Experimental results on two large-scale DWDNs generated by a real TIPAS demonstrate that, compared with state-of-the-art models, the proposed model achieves significant gains in prediction accuracy for a DWDN's missing links while maintaining highly competitive computational efficiency.
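To make the density-oriented principle concrete, the sketch below fits a Canonical Polyadic (CP) decomposition of a third-order tensor using only its observed entries, with each rank-one term built from three latent factor vectors. This is a minimal illustration of generic CP-style latent factorization on known data, not the paper's NeuLFoT model or its BPL algorithm; the dimensions, rank, learning rate, and stochastic-gradient update are all assumptions chosen for the example.

```python
import numpy as np

# Hedged sketch: CP reconstruction of a sparse third-order tensor from
# three latent factor matrices, trained only on observed (known) entries,
# i.e., the density-oriented principle the abstract describes.
# All sizes and hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(0)
I, J, K, R = 20, 20, 20, 4          # tensor modes and CP rank (assumed)

# Synthetic ground-truth factors and a sparse set of observed entries.
A_true, B_true, C_true = (rng.standard_normal((d, R)) for d in (I, J, K))
obs = [(int(rng.integers(I)), int(rng.integers(J)), int(rng.integers(K)))
       for _ in range(2000)]
vals = {p: float(A_true[p[0]] @ (B_true[p[1]] * C_true[p[2]])) for p in obs}

# Latent factors to learn; each rank-one term A[:, r] ∘ B[:, r] ∘ C[:, r]
# plays the role of one "neuron" in the abstract's neural-tensor view.
A, B, C = (0.1 * rng.standard_normal((d, R)) for d in (I, J, K))

def rmse_on_observed():
    # Error measured only over known entries, never over missing ones.
    return np.sqrt(np.mean([(A[i] @ (B[j] * C[k]) - y) ** 2
                            for (i, j, k), y in vals.items()]))

rmse0 = rmse_on_observed()          # error before training
lr = 0.01
for _ in range(200):
    for (i, j, k), y in vals.items():
        pred = A[i] @ (B[j] * C[k])  # CP estimate of entry (i, j, k)
        e = pred - y
        # Stochastic-gradient updates of the three latent factor rows.
        gA, gB, gC = e * B[j] * C[k], e * A[i] * C[k], e * A[i] * B[j]
        A[i] -= lr * gA
        B[j] -= lr * gB
        C[k] -= lr * gC

rmse = rmse_on_observed()           # error after training
```

Once trained, the same expression `A[i] @ (B[j] * C[k])` evaluated at an unobserved index triple yields a prediction for that missing entry, which is how a CP-style latent model supports link prediction in a DWDN.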