TY - GEN
T1 - Conceptualizing Visual Analytic Interventions for Content Moderation
AU - Vaidya, Sahaj
AU - Cai, Jie
AU - Basu, Soumyadeep
AU - Naderi, Azadeh
AU - Wohn, Donghee Yvette
AU - Dasgupta, Aritra
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Modern social media platforms such as Twitch and YouTube embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity and abuse to which content creators are subjected. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are often laborious and complex, and even when semi-automated, they involve high-consequence human decisions that affect the safety and popular perception of the platforms. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we present a systematic understanding of how visual analytics can help in human-in-the-loop content moderation. We contribute a characterization of the data-driven problems and needs for proactive moderation and present a mapping between these needs and visual analytic tasks through a task abstraction framework. We discuss how the task abstraction framework can be used for transparent moderation, for designing interventions for moderators' well-being, and, ultimately, for creating futuristic human-machine interfaces for data-driven content moderation.
AB - Modern social media platforms such as Twitch and YouTube embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity and abuse to which content creators are subjected. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are often laborious and complex, and even when semi-automated, they involve high-consequence human decisions that affect the safety and popular perception of the platforms. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we present a systematic understanding of how visual analytics can help in human-in-the-loop content moderation. We contribute a characterization of the data-driven problems and needs for proactive moderation and present a mapping between these needs and visual analytic tasks through a task abstraction framework. We discuss how the task abstraction framework can be used for transparent moderation, for designing interventions for moderators' well-being, and, ultimately, for creating futuristic human-machine interfaces for data-driven content moderation.
KW - Content Moderation
KW - Real-time Decision-Making
KW - Social Media
KW - Task Abstractions
UR - http://www.scopus.com/inward/record.url?scp=85123776441&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123776441&partnerID=8YFLogxK
U2 - 10.1109/VIS49827.2021.9623288
DO - 10.1109/VIS49827.2021.9623288
M3 - Conference contribution
AN - SCOPUS:85123776441
T3 - Proceedings - 2021 IEEE Visualization Conference - Short Papers, VIS 2021
SP - 191
EP - 195
BT - Proceedings - 2021 IEEE Visualization Conference - Short Papers, VIS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Visualization Conference, VIS 2021
Y2 - 24 October 2021 through 29 October 2021
ER -