Project Details
Description
High-performance computing opens new avenues in science, planning, health, and defense that are not economically achievable by experimentation alone. These emerging areas require new evaluation methods, packaged as benchmarks, for selecting computational hardware and systems. EMBRACE will guide the selection of computing technologies for scientific areas with differing needs. Current approaches assume that a single kind of platform addresses all problems, but growing use in bioinformatics, urban planning, medicine, and other fields has revealed system requirements that existing benchmarks do not cover. EMBRACE opens the benchmarking process through community competitions. These competitions will both build benchmarks useful to multiple scientific areas and build a community around the science of benchmarking. The evolving measurements will help scientists match their applications with the most suitable computing platforms.
Despite the long-standing interest in and important impacts of benchmarking, there is no consensus on the most rational way to design, develop, and evolve forward-looking benchmarks, or to correctly interpret their results. Over time, the community has broadened, the platforms have changed dramatically, and the applications have evolved, yet today's benchmarks have not necessarily evolved with them. The primary objective of this proposal is to sustainably advance the science of benchmarking based on current and future needs. A second objective is to create sustainable incentives that drive research and innovation. A third objective is to produce a select set of the most useful and evolving benchmarks, benchmarking methods, or benchmarking analyses that can be accepted by the community. This vision and these objectives will be realized by establishing a new, sustainable, and community-driven competition, the Evolvable Methods for Benchmarking Realism through Application and Community Engagement (EMBRACE) Workshop, in which teams compete in categories directly relevant to performance measurement and benchmarking. Submissions to the contest will include code, technical papers, and presentations. In consultation with leaders in various application domains and with vendor representatives, the EMBRACE team will be responsible for defining the contest categories, framing the rules, recruiting both the participants and the judges, and monitoring and managing the process as it unfolds. The workshop will be an open forum for advancing the broader scientific community's understanding of fundamental questions in benchmarking.
- Status: Finished
- Effective start/end date: 9/1/15 → 8/31/17
Funding
- National Science Foundation: $125,000.00