Many tools for computer-assisted composition contain built-in music-theoretical assumptions that may constrain the output to particular styles. In contrast, this article presents a new musical representation that contains almost no built-in knowledge, but that allows even musically untrained users to generate polyphonic textures derived from their own initial compositions. This representation, called functional scaffolding for musical composition (FSMC), exploits a simple yet powerful property of multipart compositions: the patterns of notes and rhythms in different instrumental parts of the same song are functionally related. That is, in principle, one part can be expressed as a function of another. Music in FSMC is accordingly represented as a functional relationship between an existing human composition, or scaffold, and a generated set of one or more additional musical voices. A human user without any musical expertise can then explore how the generated voice (or voices) should relate to the scaffold through an interactive evolutionary process akin to animal breeding. Because the generated voices inherit the intrinsic style and texture of the piece the user provides, this approach can generate additional voices for potentially any style of music without requiring extensive musical expertise.
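The core idea, that one voice can be expressed as a function of another, can be illustrated with a minimal sketch. The note representation, function signature, and the hand-written mapping below are illustrative assumptions only; in FSMC itself the functional relationship is not written by hand but discovered through the interactive evolutionary process described above.

```python
# Minimal sketch of the FSMC premise: a generated voice is a function of
# the scaffold. All names here (Note, generate_voice, third_below) are
# hypothetical stand-ins, not the authors' implementation.
from typing import Callable, List, Tuple

Note = Tuple[int, float]  # (MIDI pitch, duration in beats)

def generate_voice(scaffold: List[Note],
                   f: Callable[[int, float, int], Note]) -> List[Note]:
    """Map each scaffold note through f to produce a companion voice."""
    return [f(pitch, dur, i) for i, (pitch, dur) in enumerate(scaffold)]

# A fixed stand-in for an evolved function: harmonize roughly a third
# below while echoing the scaffold's rhythm exactly.
def third_below(pitch: int, dur: float, i: int) -> Note:
    return (pitch - 4 if i % 2 == 0 else pitch - 3, dur)

scaffold = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 2.0)]  # C D E F
voice = generate_voice(scaffold, third_below)
```

In the full approach, the user would breed candidate functions interactively rather than writing one, so the generated voice's relationship to the scaffold is explored rather than specified.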