Deep neural networks (DNNs) have become effective computational tools because of their superior performance in practice. However, the generalization of a DNN still depends largely on the training data, in both quantity and quality. In this paper, we propose a knowledge instillation framework, named NeuKI, for feed-forward DNNs, aiming to enhance learning performance with the aid of knowledge. This task is particularly challenging due to the complicated nature of knowledge and the numerous variants of DNN architectures. To bridge the gap, we construct a separate knowledge-DNN that faithfully encodes the instilled knowledge for joint training. The core idea is to regularize the training of the target-DNN with the constructed knowledge-DNN, so that the instilled knowledge can guide model training. NeuKI is shown to be applicable to both knowledge rules and constraints: rules are encoded by the network structure, whereas constraints are handled by the loss. Experiments on several real-world datasets from different domains demonstrate the effectiveness of NeuKI in improving learning performance, as well as the associated data efficiency and model interpretability.
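The core idea of regularizing the target-DNN's training with a knowledge-DNN can be sketched as a combined objective. This is a minimal illustrative sketch, not the paper's exact formulation: the function name `combined_loss`, the squared-distance penalty, and the weight `lam` are all assumptions.

```python
# Sketch only: NeuKI trains a target-DNN jointly with a separate
# knowledge-DNN that encodes the instilled knowledge. Here we assume a
# simple squared-distance regularizer pulling the target-DNN's outputs
# toward the knowledge-DNN's outputs; the actual framework may differ.

def combined_loss(task_loss, target_out, knowledge_out, lam=0.1):
    """Task loss plus a penalty aligning the target-DNN's predictions
    with those of the knowledge-DNN (joint-training regularization)."""
    reg = sum((t - k) ** 2 for t, k in zip(target_out, knowledge_out)) / len(target_out)
    return task_loss + lam * reg

# Example: a batch where the target-DNN disagrees slightly with the knowledge-DNN
loss = combined_loss(task_loss=0.5,
                     target_out=[0.9, 0.2],
                     knowledge_out=[1.0, 0.0],
                     lam=0.1)
```

Under this sketch, a larger `lam` forces the target-DNN to follow the encoded knowledge more closely, while `lam = 0` recovers ordinary training on the data alone.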