TY - GEN
T1 - From Pixels to Reasoning
T2 - 28th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2025
AU - Najafi, Deniz
AU - Morsali, Mehrdad
AU - Barkam, Hamza Errahmouni
AU - Reidy, Brendan
AU - Tabrizchi, Sepehr
AU - Roohi, Arman
AU - Nikdast, Mahdi
AU - Zand, Ramtin
AU - Imani, Mohsen
AU - Angizi, Shaahin
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Advancing energy-efficient, real-time visual intelligence at the sensor edge is crucial for applications such as autonomous systems, surveillance, drones, and augmented reality. In this paper, we present a unified summary of our recent photonic accelerator designs, OISA, Lightator, and Neuro-Photonix, which collectively demonstrate a new paradigm for near-sensor vision processing through silicon photonic neural networks and neuro-symbolic computing. These works adopt a cross-layer hardware-software co-design methodology to process visual data directly at the sensor, dramatically reducing the need for data transmission to the cloud. OISA and Lightator achieve power reductions of 24× and 73×, respectively, over conventional photonic and GPU-based baselines. Neuro-Photonix extends this foundation by enabling neuro-symbolic AI at the edge, combining analog photonic computation with efficient, granularity-controllable convolutions, a low-cost ADC, and native generation of HyperDimensional (HD) vectors for symbolic reasoning. Together, these designs pave the way for explainable, ultra-low-power, and high-performance edge AI, with Neuro-Photonix reaching up to 30 GOPS/W and achieving 20.8× and 4.1×
AB - Advancing energy-efficient, real-time visual intelligence at the sensor edge is crucial for applications such as autonomous systems, surveillance, drones, and augmented reality. In this paper, we present a unified summary of our recent photonic accelerator designs, OISA, Lightator, and Neuro-Photonix, which collectively demonstrate a new paradigm for near-sensor vision processing through silicon photonic neural networks and neuro-symbolic computing. These works adopt a cross-layer hardware-software co-design methodology to process visual data directly at the sensor, dramatically reducing the need for data transmission to the cloud. OISA and Lightator achieve power reductions of 24× and 73×, respectively, over conventional photonic and GPU-based baselines. Neuro-Photonix extends this foundation by enabling neuro-symbolic AI at the edge, combining analog photonic computation with efficient, granularity-controllable convolutions, a low-cost ADC, and native generation of HyperDimensional (HD) vectors for symbolic reasoning. Together, these designs pave the way for explainable, ultra-low-power, and high-performance edge AI, with Neuro-Photonix reaching up to 30 GOPS/W and achieving 20.8× and 4.1×
UR - https://www.scopus.com/pages/publications/105016136005
UR - https://www.scopus.com/pages/publications/105016136005#tab=citedBy
U2 - 10.1109/ISVLSI65124.2025.11130318
DO - 10.1109/ISVLSI65124.2025.11130318
M3 - Conference contribution
AN - SCOPUS:105016136005
T3 - Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
BT - IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2025 - Conference Proceedings
PB - IEEE Computer Society
Y2 - 6 July 2025 through 9 July 2025
ER -