Algorithmic agents, popularly known as bots, have been accused of spreading misinformation online and supporting fringe views. Collectives are vulnerable in hidden-profile environments, where task-relevant information is unevenly distributed across individuals. Succeeding in such tasks requires information aggregation that weighs minority and majority views equally, rather than relying on simple but inefficient majority-based decisions. In an experimental design, human volunteers working in teams of 10 were asked to solve a hidden-profile prediction task. We trained a variational auto-encoder (VAE) to learn the hidden distribution of people's information by observing how their judgments correlated over time. A bot was designed to sample responses from the VAE's latent embedding and to selectively support opinions in proportion to their under-representation in the team. We show that the presence of a single bot (representing 10% of team members) can significantly increase the polarization between minority and majority opinions by making minority opinions less prone to social influence. Although the effects on hybrid team performance were small, the bot's presence significantly influenced opinion dynamics and individual accuracy. These findings show that self-supervised machine learning techniques can be used to design algorithms that can sway opinion dynamics and group outcomes.
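The bot's core selection rule can be illustrated with a minimal sketch. The function names and the discrete-opinion representation below are hypothetical simplifications, not the paper's implementation: in the actual study, candidate responses would be decoded samples from the VAE's latent embedding, whereas here they are plain labels.

```python
import random
from collections import Counter

def bot_support_weights(team_opinions, candidate_opinions):
    """Weight each candidate opinion by its under-representation in the team.

    Hypothetical helper: both arguments are lists of discrete opinion labels.
    An opinion held by few team members receives a larger weight, so the bot
    supports opinions in proportion to their under-representation.
    """
    counts = Counter(team_opinions)
    n = len(team_opinions)
    raw = [1.0 - counts.get(op, 0) / n for op in candidate_opinions]
    total = sum(raw)
    return [w / total for w in raw]

def bot_pick(team_opinions, candidate_opinions, rng=None):
    """Sample one opinion for the bot to voice, favouring minority views."""
    rng = rng or random.Random()
    weights = bot_support_weights(team_opinions, candidate_opinions)
    return rng.choices(candidate_opinions, weights=weights, k=1)[0]
```

For example, in a team where 8 of 10 members hold opinion "A" and 2 hold "B", the bot supports "B" with probability 0.8, counteracting the majority's numerical advantage.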