A Unified Review of Advanced Paradigms in Spiking Neural Networks: From Efficiency and Latency to Novel Architecture and Robustness

Authors

  • Vartika Pandey, Department of Computer Science, UPES, Dehradun
  • Abhinav Malkoti, Department of Computer Science, UPES, Dehradun
  • Parv Saini, Department of Computer Science, UPES, Dehradun
  • Divyansh Upadhyay, Department of Computer Science, UPES, Dehradun
  • Hardik Saxena, Department of Computer Science, UPES, Dehradun

Keywords:

Spiking Neural Networks, Large Language Models, Neuromorphic Computing, Hardware-Software Co-Design, Transformer Architecture, Riemannian Manifolds, Optimization, Robustness

Abstract

The domain of Spiking Neural Networks (SNNs) has undergone a paradigm shift in the 2024-2025 research cycle, moving beyond theoretical validation to successful integration into complex domains such as Large Language Models (LLMs) and advanced robotics. This review provides an exhaustive synthesis of state-of-the-art SNN optimization, focusing on Architectural Innovation, Training Algorithm Refinement, and Hardware-Software Co-Design. Key breakthroughs include the "Transformerization" of SNNs via addition-only attention ($A^{2}OS^{2}A$), the emergence of Saliency-Based Spiking LLMs (SpikeLLM), and the theoretical re-grounding of training dynamics on Riemannian Manifolds (MSG). By detailing these advancements, this paper delineates the current performance-oriented neuromorphic frontier and identifies the critical need for unified, robust, and hardware-aware optimization frameworks.
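To make the "addition-only" principle behind spiking attention concrete, the following is a minimal Python sketch, not the $A^{2}OS^{2}A$ formulation from the reviewed work: it assumes binary 0/1 spike tensors, so query-key scoring reduces to counting co-active positions and value routing reduces to pure accumulation, with no floating-point multiplies. The function name, threshold parameter `theta`, and binarization scheme are illustrative assumptions.

import numpy as np

def addition_only_attention(Q, K, V, theta=1):
    """Illustrative multiplication-free attention over binary spike tensors.

    Scores each query/key pair by counting co-active spike positions
    (AND + accumulate), binarizes the score map with a threshold, and
    routes value spikes through the resulting mask by addition alone.
    """
    T = Q.shape[0]
    scores = np.zeros((T, T), dtype=np.int32)
    for i in range(T):
        for j in range(T):
            # with 0/1 operands, the dot product is a co-activity count
            scores[i, j] = np.count_nonzero(Q[i].astype(bool) & K[j].astype(bool))
    mask = scores >= theta  # binary attention map (hypothetical thresholding)
    out = np.zeros((T, V.shape[1]), dtype=V.dtype)
    for i in range(T):
        for j in range(T):
            if mask[i, j]:
                out[i] += V[j]  # pure accumulation of selected value spikes
    return out

# toy usage with random binary spikes (4 time steps, 8 channels)
rng = np.random.default_rng(0)
Q, K, V = (rng.integers(0, 2, size=(4, 8)) for _ in range(3))
print(addition_only_attention(Q, K, V, theta=2))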

Published

13-03-2026

How to Cite

Pandey, V., Malkoti, A., Saini, P., Upadhyay, D., & Saxena, H. (2026). A Unified Review of Advanced Paradigms in Spiking Neural Networks: From Efficiency and Latency to Novel Architecture and Robustness. DMPedia Lecture Notes in Multidisciplinary Research, IMPACT26, 539-544. https://digitalmanuscriptpedia.com/conferences/index.php/DMP-LNMR/article/view/96