Fault detection is a crucial technology for improving the performance of cloud systems. A fixed detection cycle tends to be problematic: a short cycle incurs high overhead for well-performing services, while a long cycle risks missing many faults in poorly performing ones. To address this problem, an algorithm for adaptively adjusting the detection cycle is proposed to reduce overhead and improve fault detection performance in a cloud environment. It shortens the detection cycle for cloud systems with a high fault probability, thus improving fault detection performance; otherwise, it lengthens the cycle, thus reducing overhead. The algorithm is built on the proposed detection model, which combines a decision tree with a support vector machine to improve detection performance. Experimental results show that the method is feasible and effective in comparison with several representative methods.
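The core adaptation rule described above, shorten the cycle when fault probability is high and lengthen it otherwise, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name `adjust_cycle` and all thresholds, bounds, and scaling factors are assumptions chosen for clarity.

```python
# Illustrative sketch of an adaptive detection cycle (all parameters are
# assumptions, not the paper's exact values).

def adjust_cycle(cycle, fault_prob,
                 high=0.7, low=0.3,
                 min_cycle=1.0, max_cycle=60.0, factor=2.0):
    """Shorten the cycle when fault probability is high; lengthen it when low."""
    if fault_prob >= high:
        # High fault probability: detect sooner, at the cost of more overhead.
        cycle = max(min_cycle, cycle / factor)
    elif fault_prob <= low:
        # Low fault probability: poll less often to reduce overhead.
        cycle = min(max_cycle, cycle * factor)
    # In between the thresholds, keep the current cycle unchanged.
    return cycle

print(adjust_cycle(10.0, 0.9))  # → 5.0 (high risk, shorter cycle)
print(adjust_cycle(10.0, 0.1))  # → 20.0 (low risk, longer cycle)
```

In a real deployment, `fault_prob` would come from the detection model (e.g. the decision tree/SVM classifier's estimate for the monitored service), and the bounds would be set from operational constraints.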