TY - GEN
T1 - FedChip
T2 - 1st IEEE International Conference on LLM-Aided Design, ICLAD 2025
AU - Nazzal, Mahmoud
AU - Nguyen, Khoa
AU - Vungarala, Deepak
AU - Zand, Ramtin
AU - Angizi, Shaahin
AU - Phan, Hai
AU - Khreishah, Abdallah
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - AI hardware design is advancing rapidly, driven by the promise of design automation to make chip development faster, more efficient, and more accessible to a wide range of users. Among automation tools, Large Language Models (LLMs) offer a promising solution by automating and streamlining parts of the design process. However, their potential is hindered by data privacy concerns and the lack of domain-specific training. To address this, we introduce FedChip, a federated fine-tuning approach that enables multiple chip design parties to collaboratively enhance a shared LLM dedicated to automated hardware design generation while protecting proprietary data. FedChip lets each party train the model on its proprietary local data while contributing to the shared LLM's overall performance. To exemplify FedChip's deployment, we create and release APTPU-Gen, a dataset of 30k design variations spanning a range of power, performance, and area (PPA) values. To encourage the LLM to generate designs that balance multiple quality metrics, we propose a new design evaluation metric, Chip@k, which statistically evaluates the quality of generated designs against predefined acceptance criteria. Experimental results show that FedChip improves design quality by more than 77% over high-end LLMs while maintaining data privacy.
KW - Federated learning
KW - hardware design automation
KW - large language models (LLMs)
KW - PPA optimization
UR - https://www.scopus.com/pages/publications/105015891730
DO - 10.1109/ICLAD65226.2025.00019
M3 - Conference contribution
AN - SCOPUS:105015891730
T3 - Proceedings - 2025 IEEE International Conference on LLM-Aided Design, ICLAD 2025
SP - 93
EP - 99
BT - Proceedings - 2025 IEEE International Conference on LLM-Aided Design, ICLAD 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 26 June 2025 through 27 June 2025
ER -