TY - GEN
T1 - Partitioning Prompts for Higher Efficacy in Network Design with Large Language Model
AU - Komanduri, Vishnu
AU - Alessio, Scott
AU - Estropia, Sebastian
AU - Yerdelen, Gokhan
AU - Ferreira, Tyler
AU - Gunti, Murali
AU - Dong, Ziqian
AU - Rojas-Cessa, Roberto
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - In this paper, we propose deliverable partitioning in prompt design to help Large Language Models (LLMs) improve response correctness for network design and configuration. While recent research has explored the use of LLMs to enhance network management efficiency, their responses often remain inconsistent, incomplete, or inaccurate. LLM-generated configurations frequently contain missing or erroneous commands, which can lead to operational failures. Our proposed partitioning methodology mitigates these issues by decomposing complex network configuration tasks into smaller, focused subtasks. To evaluate the effectiveness of this approach, we introduce a scoring policy and conduct extensive experiments across three levels of network complexity and varying degrees of design-choice ambiguity. We also compare the performance of leading LLMs, including ChatGPT, Copilot, and DeepSeek. Our findings indicate that partitioning the inquiry process yields more accurate and consistent responses than non-partitioned approaches, especially when design parameters are explicitly defined and leave only a small degree of ambiguity for the model to infer.
AB - In this paper, we propose deliverable partitioning in prompt design to help Large Language Models (LLMs) improve response correctness for network design and configuration. While recent research has explored the use of LLMs to enhance network management efficiency, their responses often remain inconsistent, incomplete, or inaccurate. LLM-generated configurations frequently contain missing or erroneous commands, which can lead to operational failures. Our proposed partitioning methodology mitigates these issues by decomposing complex network configuration tasks into smaller, focused subtasks. To evaluate the effectiveness of this approach, we introduce a scoring policy and conduct extensive experiments across three levels of network complexity and varying degrees of design-choice ambiguity. We also compare the performance of leading LLMs, including ChatGPT, Copilot, and DeepSeek. Our findings indicate that partitioning the inquiry process yields more accurate and consistent responses than non-partitioned approaches, especially when design parameters are explicitly defined and leave only a small degree of ambiguity for the model to infer.
UR - https://www.scopus.com/pages/publications/105009595054
UR - https://www.scopus.com/pages/publications/105009595054#tab=citedBy
U2 - 10.1109/HPSR64165.2025.11038889
DO - 10.1109/HPSR64165.2025.11038889
M3 - Conference contribution
AN - SCOPUS:105009595054
T3 - IEEE International Conference on High Performance Switching and Routing, HPSR
BT - 2025 IEEE 26th International Conference on High Performance Switching and Routing, HPSR 2025
PB - IEEE Computer Society
T2 - 26th IEEE International Conference on High Performance Switching and Routing, HPSR 2025
Y2 - 20 May 2025 through 22 May 2025
ER -