Abstract
As software systems become increasingly large and complex, automated parameter tuning of software systems (PTSS) has become a focus of research, and many tuning algorithms have been proposed in recent years. However, due to the lack of a unified platform for comparing and reproducing existing tuning algorithms, choosing an appropriate algorithm for a given software system remains a significant challenge for users. This challenge has multiple causes, including diverse experimental conditions, the lack of evaluations across different tasks, and the high cost of evaluating tuning algorithms. In this paper, we propose an extensible and efficient benchmark, referred to as PTSSBench, which provides a unified platform for comparative studies of different tuning algorithms via both surrogate models and actual systems. We demonstrate the usability and efficiency of PTSSBench through comparative experiments on six state-of-the-art tuning algorithms, from both a holistic perspective and a task-oriented perspective. The experimental results show the necessity and effectiveness of parameter tuning for software systems and indicate that the PTSS problem remains open. Moreover, PTSSBench allows extensive runs and in-depth analyses of parameter tuning algorithms, providing an efficient and effective way for researchers to develop new tuning algorithms and for users to choose appropriate tuning algorithms for their systems. The PTSSBench benchmark, together with the experimental results, is publicly available online as an open-source project.
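The core idea behind surrogate-model-based benchmarking, as described above, is to replace costly runs of the actual system with a cheap model fitted on a limited set of measured configurations, so that tuning algorithms can be compared over many iterations at low cost. The following is a minimal, self-contained sketch of that idea; the configuration parameters (`cache_mb`, `threads`), the synthetic response surface, and the 1-nearest-neighbour surrogate are all illustrative assumptions, not part of PTSSBench itself.

```python
import random

random.seed(0)  # make the sketch reproducible

def run_system(cache_mb, threads):
    """Hypothetical expensive measurement of the actual system.

    Assumption: a synthetic quadratic response surface with its
    optimum near cache_mb=512, threads=8 (higher is better).
    """
    return -((cache_mb - 512) / 256) ** 2 - ((threads - 8) / 4) ** 2

# Measure only a small number of configurations on the "actual system".
samples = [(random.uniform(64, 1024), random.randint(1, 16)) for _ in range(30)]
observed = {s: run_system(*s) for s in samples}

def surrogate(cache_mb, threads):
    """Cheap stand-in for the system: 1-nearest-neighbour prediction
    over the measured configurations."""
    nearest = min(
        observed,
        key=lambda s: (s[0] - cache_mb) ** 2 + (s[1] - threads) ** 2,
    )
    return observed[nearest]

# A tuning algorithm (plain random search here) is then evaluated
# against the surrogate, making thousands of trials affordable.
candidates = [(random.uniform(64, 1024), random.randint(1, 16)) for _ in range(1000)]
best = max(candidates, key=lambda c: surrogate(*c))
print("best configuration:", best, "predicted performance:", surrogate(*best))
```

In an actual benchmark the nearest-neighbour model would typically be replaced by a trained regression model, and the random search by the tuning algorithm under evaluation; the sketch only illustrates how a surrogate decouples algorithm comparison from expensive system runs.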
| Original language | English (US) |
|---|---|
| Article number | 4 |
| Journal | Automated Software Engineering |
| Volume | 31 |
| Issue number | 1 |
| DOIs | |
| State | Published - May 2024 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
Keywords
- Benchmark
- Comparability
- Parameter tuning
- Reproducibility