TY - GEN
T1 - RELINK
T2 - 34th ACM International Conference on Information and Knowledge Management, CIKM 2025
AU - Arya, Shivvrat
AU - Ghosh, Smita
AU - Maruyama, Bryan
AU - Srinivasan, Venkatesh
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/11/10
Y1 - 2025/11/10
N2 - Influence Maximization aims to select a subset of elements in a social network to maximize information spread under a diffusion model. While existing work primarily focuses on selecting influential nodes, these approaches assume unrestricted message propagation, an assumption that fails in closed social networks, where content visibility is constrained and node-level activations may be infeasible. Motivated by the growing adoption of privacy-focused platforms such as Signal, Discord, Instagram, and Slack, our work addresses the following fundamental question: How can we learn effective edge activation strategies for influence maximization in closed networks? To answer this question, we introduce Reinforcement Learning for Link Activation (RELINK), the first deep reinforcement learning (DRL) framework for edge-level influence maximization in privacy-constrained networks. It models edge selection as a Markov Decision Process, where the agent learns to activate edges under budget constraints. Unlike prior node-based DRL methods, RELINK uses an edge-centric Q-learning approach that accounts for structural constraints and restricted information propagation. Our framework combines a rich node embedding pipeline with an edge-aware aggregation module. The agent is trained using an n-step Double DQN objective, guided by dense reward signals that capture marginal gains in influence spread. Extensive experiments on real-world networks show that RELINK consistently outperforms existing edge-based methods, achieving up to 15% higher influence spread and improved scalability across diverse settings.
KW - closed networks
KW - deep reinforcement learning
KW - edge selection
KW - influence maximization
KW - social network analysis
UR - https://www.scopus.com/pages/publications/105023184918
DO - 10.1145/3746252.3761006
M3 - Conference contribution
AN - SCOPUS:105023184918
T3 - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
SP - 65
EP - 76
BT - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery, Inc
Y2 - 10 November 2025 through 14 November 2025
ER -