Project Details
Description
Around the world, users of social media platforms generate millions of comments, videos, and photos per day. Within this content is dangerous material such as child pornography, sex trafficking, and terrorist propaganda. Though platforms leverage algorithmic systems to help detect and remove problematic content, decisions about whether to remove a piece of content, whether as benign as an off-topic comment or as dangerous as a self-harm or abuse video, are often made by humans. Companies are hiring moderators by the thousands, and tens of thousands more work as volunteer moderators. This work carries economic, emotional, and often physical safety risks. With social media content moderation as the focus of work and content moderators as the workers, this project facilitates the human-technology partnership by designing new technologies that augment moderator performance. The project will improve moderators' quality of life, augment their capabilities, and help society understand how moderation decisions are made and how to support the workers who help keep the internet open and enjoyable. These advances will enable moderation efforts to keep pace with user-generated content and ensure that problematic content does not overwhelm internet users. The project includes outreach and engagement activities with academic, industry, policy, and public audiences to ensure that its findings and tools support the broad range of stakeholders affected by user-generated content and its moderation.
Specifically, the project involves five main research objectives that will be met through qualitative, historical, experimental, and computational research approaches. First, the project will improve understanding of human-in-the-loop decision-making practices and mental models of moderation by conducting interviews and observations with moderators across different content domains. Second, it will assess the socioeconomic impact of technology-augmented moderation through interviews with industry personnel. Third, it will test interventions to decrease the emotional toll on human moderators and optimize their performance, through a series of experiments grounded in theories of stress alleviation. Fourth, the project will design, develop, and test a suite of cognitive assistance tools for live-streaming moderators; these tools will focus on removing easy decisions and helping moderators dynamically manage their emotional and cognitive capabilities. Finally, the project will take a historical perspective, analyzing companies' content moderation policies to inform legal and platform policy.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
| Status | Finished |
| --- | --- |
| Effective start/end date | 10/1/19 → 9/30/23 |
Funding
- National Science Foundation: $849,024.00