Concept drift is the inevitable phenomenon in which the statistical properties of a data stream change over time. Detecting concept drift quickly and precisely remains challenging, and failing to detect it renders models trained on historical data ineffective. Current drift detection methods suffer from detection delay and many falsely detected drifts, and they rarely employ deep neural networks directly, even though such networks are highly competent at classifying data streams. Moreover, the model's output is usually taken as the metric for detecting drift, while changes in the model's parameters, which carry highly useful information, are often ignored. In this paper, we propose a model-centric framework for concept drift detection that uses a deep neural network and detects drift by focusing on changes in the model itself rather than in its output. In addition, transfer learning is employed to accelerate the drift detection process and reduce computational complexity by freezing parts of the network. To further reduce falsely detected drifts, we propose a long-and-short-time-window method that distinguishes real drifts from potentially detected ones. Experiments on real-world and artificial datasets demonstrate the effectiveness of the proposed framework.
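The two ideas at the core of the abstract can be illustrated with a minimal sketch: monitor the magnitude of parameter change between model updates as the drift signal, flag a potential drift when it exceeds a threshold, and confirm it by comparing a short window of recent changes against a longer window. All names, the threshold, and the window rule below are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def param_delta(prev, curr):
    # L2 norm of the change in model parameters between two updates;
    # this stands in for whatever parameter-change metric the framework uses
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(prev, curr)))

class DriftMonitor:
    """Hypothetical monitor: flags a potential drift when the parameter
    change exceeds a threshold, then confirms it only if the short-window
    mean of recent changes exceeds the long-window mean."""

    def __init__(self, threshold=0.5, short=5, long=20):
        self.threshold = threshold
        self.short, self.long = short, long
        self.history = []  # recent parameter-change magnitudes

    def update(self, prev_params, curr_params):
        self.history.append(param_delta(prev_params, curr_params))
        self.history = self.history[-self.long:]  # keep the long window only
        if self.history[-1] < self.threshold:
            return False  # no potential drift detected
        # confirmation step: recent (short-window) behaviour must differ
        # from the long-window baseline for the drift to count as real
        short_mean = sum(self.history[-self.short:]) / min(self.short, len(self.history))
        long_mean = sum(self.history) / len(self.history)
        return short_mean > long_mean
```

The point of the two-window comparison is that a single large parameter update (e.g. noise) raises the short-window mean only briefly, while a genuine drift keeps it elevated above the long-window baseline.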