Seeing sound: Investigating the effects of visualizations and complexity on crowdsourced audio annotations

Mark Cartwright, Ayanna Seals, Justin Salamon, Alex Williams, Stefanie Mikloska, Duncan MacConnell, Edith Law, Juan P. Bello, Oded Nov

Research output: Contribution to journal › Article › peer-review


Abstract

Audio annotation is key to developing machine-listening systems; yet, effective ways to accurately and rapidly obtain crowdsourced audio annotations are understudied. In this work, we seek to quantify the reliability/redundancy trade-off in crowdsourced soundscape annotation, investigate how visualizations affect accuracy and efficiency, and characterize how performance varies as a function of audio characteristics. Using a controlled experiment, we varied sound visualizations and the complexity of soundscapes presented to human annotators. Results show that more complex audio scenes result in lower annotator agreement, and that spectrogram visualizations are superior in producing higher-quality annotations at a lower cost in time and human labor. We also found that recall is more affected than precision by soundscape complexity, and that mistakes can often be attributed to certain sound event characteristics. These findings have implications not only for how we should design annotation tasks and interfaces for audio data, but also for how we train and evaluate machine-listening systems.
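The abstract evaluates annotation quality in terms of precision and recall against reference annotations. As a rough, hypothetical illustration (not the paper's exact evaluation protocol), the sketch below scores one annotator's sound event intervals against a reference at the frame level; the frame hop, interval format, and example values are assumptions made for this example.

    import numpy as np

    def intervals_to_frames(intervals, clip_duration, frame_hop=0.1):
        # Convert (onset, offset) intervals in seconds to a binary frame-activity vector.
        n_frames = int(np.ceil(clip_duration / frame_hop))
        frames = np.zeros(n_frames, dtype=bool)
        for onset, offset in intervals:
            start = int(np.floor(onset / frame_hop))
            end = min(int(np.ceil(offset / frame_hop)), n_frames)
            frames[start:end] = True
        return frames

    def precision_recall(annotated, reference, clip_duration, frame_hop=0.1):
        # Frame-level precision/recall of an annotator's intervals against a reference.
        est = intervals_to_frames(annotated, clip_duration, frame_hop)
        ref = intervals_to_frames(reference, clip_duration, frame_hop)
        tp = np.sum(est & ref)
        precision = tp / max(np.sum(est), 1)
        recall = tp / max(np.sum(ref), 1)
        return precision, recall

    # Hypothetical example: one annotator's "car horn" regions vs. the reference
    # for a 10-second soundscape clip.
    annotated = [(1.2, 2.0), (5.5, 6.1)]
    reference = [(1.0, 2.1), (5.4, 6.5), (8.0, 8.7)]
    print(precision_recall(annotated, reference, clip_duration=10.0))

Under this kind of scoring, missed events lower recall while spurious or over-extended regions lower precision, which is the distinction underlying the reported effect of soundscape complexity.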

Original language: English (US)
Article number: 29
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 1
Issue number: CSCW
State: Published - Nov 2017
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Social Sciences (miscellaneous)
  • Human-Computer Interaction
  • Computer Networks and Communications

Keywords

  • Annotation
  • Sound event detection
