Functional scaffolding for composing additional musical voices

Amy K. Hoover, Paul A. Szerlip, Kenneth O. Stanley

Research output: Contribution to journal › Article › peer-review

12 Scopus citations


Many tools for computer-assisted composition contain built-in music-theoretical assumptions that may constrain the output to particular styles. In contrast, this article presents a new musical representation that contains almost no built-in knowledge, but that allows even musically untrained users to generate polyphonic textures that are derived from the user's own initial compositions. This representation, called functional scaffolding for musical composition (FSMC), exploits a simple yet powerful property of multipart compositions: The pattern of notes and rhythms in different instrumental parts of the same song are functionally related. That is, in principle, one part can be expressed as a function of another. Music in FSMC is represented accordingly as a functional relationship between an existing human composition, or scaffold, and a generated set of one or more additional musical voices. A human user without any musical expertise can then explore how the generated voice (or voices) should relate to the scaffold through an interactive evolutionary process akin to animal breeding. By inheriting from the intrinsic style and texture of the piece provided by the user, this approach can generate additional voices for potentially any style of music without the need for extensive musical expertise.
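To illustrate the core idea of the abstract, that one musical part can be expressed as a function of another, here is a hypothetical toy sketch (not the authors' FSMC implementation, which uses a richer evolved function representation): a "function" is reduced to two parameters mapping each scaffold note to a note in the generated voice, and interactive evolution is simulated by mutating a user-chosen favorite.

```python
import random

def generate_voice(scaffold, params):
    """Express a new voice as a function of the scaffold.

    scaffold: list of (midi_pitch, duration) pairs.
    params:   (offset, stretch) -- a stand-in for the evolved function.
    """
    offset, stretch = params
    return [(pitch + offset, duration * stretch) for pitch, duration in scaffold]

def mutate(params, rng):
    """Produce a slightly varied candidate function, as in breeding."""
    offset, stretch = params
    return (offset + rng.choice([-2, -1, 1, 2]),
            max(0.25, stretch * rng.choice([0.5, 1.0, 2.0])))

rng = random.Random(0)
scaffold = [(60, 1.0), (64, 1.0), (67, 2.0)]   # C major arpeggio (hypothetical input)
population = [(-12, 1.0), (7, 0.5), (4, 1.0)]  # candidate functional relationships

# In interactive evolution, a user auditions each candidate voice and
# picks a favorite; here that choice is fixed for the sake of the sketch.
favorite = population[2]
next_generation = [favorite] + [mutate(favorite, rng) for _ in range(2)]

voice = generate_voice(scaffold, favorite)
print(voice)  # → [(64, 1.0), (68, 1.0), (71, 2.0)]
```

The generated voice inherits the scaffold's rhythm and contour automatically because it is computed from the scaffold, which is the property FSMC exploits; all parameter names and values above are invented for illustration.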

Original language: English (US)
Pages (from-to): 80-99
Number of pages: 20
Journal: Computer Music Journal
Issue number: 4
State: Published - Dec 26 2014
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Media Technology
  • Music
  • Computer Science Applications

