Ever-evolving mobile applications demand increasing computing resources to ensure a smooth user experience and, in some cases, to meet delay requirements. As a result, mobile devices (MDs) increasingly struggle to complete all tasks in time due to their limited computing power and battery life. To address this problem, mobile edge computing (MEC) systems were introduced to offload task processing from MDs to nearby edge servers. Existing works have addressed MEC task offloading problems, including those with simple delay constraints, but most of them neglect the coexistence of deadline-constrained and delay-sensitive tasks (i.e., the diverse delay sensitivities of tasks). In this paper, we propose an actor-critic based deep reinforcement learning (ADRL) model that takes these diverse delay sensitivities into account and offloads tasks adaptively to minimize the total penalty incurred by deadline misses of deadline-constrained tasks and by the lateness of delay-sensitive tasks. We train the ADRL model on a real-world dataset that captures the diverse delay sensitivities of tasks. Our simulation results show that the proposed solution outperforms several heuristic algorithms in terms of total penalty, and that it retains its performance gains under different system settings.
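The objective described above distinguishes two penalty types: a deadline miss and a degree of lateness. The paper's system model defines the exact penalty function; the following is only an illustrative sketch under assumed definitions, where a deadline-constrained task incurs a fixed penalty if it misses its deadline, while a delay-sensitive task incurs a penalty proportional to how late it finishes. The field names (`finish`, `bound`, `kind`, `weight`) are hypothetical.

```python
def total_penalty(tasks):
    """Illustrative total-penalty computation (assumed model, not the paper's exact one).

    Each task dict has: finish (completion time), bound (deadline or delay target),
    kind ('deadline' or 'sensitive'), and weight (per-task penalty coefficient).
    """
    penalty = 0.0
    for t in tasks:
        lateness = t["finish"] - t["bound"]
        if t["kind"] == "deadline":
            # Deadline-constrained: fixed penalty only when the deadline is missed.
            penalty += t["weight"] if lateness > 0 else 0.0
        else:
            # Delay-sensitive: penalty grows linearly with lateness (zero if on time).
            penalty += t["weight"] * max(lateness, 0.0)
    return penalty

tasks = [
    {"finish": 12.0, "bound": 10.0, "kind": "deadline",  "weight": 5.0},  # missed
    {"finish": 8.0,  "bound": 10.0, "kind": "deadline",  "weight": 5.0},  # met
    {"finish": 14.0, "bound": 10.0, "kind": "sensitive", "weight": 1.0},  # 4.0 late
]
print(total_penalty(tasks))  # 5.0 + 0.0 + 4.0 = 9.0
```

A quantity of this form would serve as the (negated) reward signal that the actor-critic agent seeks to minimize when choosing offloading actions.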