TY - JOUR
T1 - A variational-autoencoder approach to solve the hidden profile task in hybrid human-machine teams
AU - Pescetelli, Niccolo
AU - Reichert, Patrik
AU - Rutherford, Alex
N1 - Publisher Copyright:
© 2022 Pescetelli et al.
PY - 2022/8
Y1 - 2022/8
N2 - Algorithmic agents, popularly known as bots, have been accused of spreading misinformation online and supporting fringe views. Collectives are vulnerable to hidden-profile environments, where task-relevant information is unevenly distributed across individuals. To do well in this task, information aggregation must equally weigh minority and majority views against simple but inefficient majority-based decisions. In an experimental design, human volunteers working in teams of 10 were asked to solve a hidden-profile prediction task. We trained a variational auto-encoder (VAE) to learn people’s hidden information distribution by observing how people’s judgments correlated over time. A bot was designed to sample responses from the VAE latent embedding to selectively support opinions proportionally to their under-representation in the team. We show that the presence of a single bot (representing 10% of team members) can significantly increase the polarization between minority and majority opinions by making minority opinions less prone to social influence. Although the effects on hybrid team performance were small, the bot presence significantly influenced opinion dynamics and individual accuracy. These findings show that self-supervised machine learning techniques can be used to design algorithms that can sway opinion dynamics and group outcomes.
AB - Algorithmic agents, popularly known as bots, have been accused of spreading misinformation online and supporting fringe views. Collectives are vulnerable to hidden-profile environments, where task-relevant information is unevenly distributed across individuals. To do well in this task, information aggregation must equally weigh minority and majority views against simple but inefficient majority-based decisions. In an experimental design, human volunteers working in teams of 10 were asked to solve a hidden-profile prediction task. We trained a variational auto-encoder (VAE) to learn people’s hidden information distribution by observing how people’s judgments correlated over time. A bot was designed to sample responses from the VAE latent embedding to selectively support opinions proportionally to their under-representation in the team. We show that the presence of a single bot (representing 10% of team members) can significantly increase the polarization between minority and majority opinions by making minority opinions less prone to social influence. Although the effects on hybrid team performance were small, the bot presence significantly influenced opinion dynamics and individual accuracy. These findings show that self-supervised machine learning techniques can be used to design algorithms that can sway opinion dynamics and group outcomes.
UR - http://www.scopus.com/inward/record.url?scp=85135452795&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85135452795&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0272168
DO - 10.1371/journal.pone.0272168
M3 - Article
C2 - 35917306
AN - SCOPUS:85135452795
SN - 1932-6203
VL - 17
JO - PLoS ONE
JF - PLoS ONE
IS - 8
M1 - e0272168
ER -