ENGAGE: (E)nergy-efficient (N)ovel Al(g)orithms and (A)rchitectures for (G)raph L(e)arning
Research Team:
Oak Ridge National Laboratory
Dr. Thomas E. Potok, Lead PI
Dr. Guojing Cong, Co-PI
Dr. Robert Patton, Co-PI
Dr. Ramakrishnan Kannan, Co-PI
Dr. Chathika Gunaratne, Co-Investigator
Dr. Prasanna Date, Co-Investigator
Dr. Mark Coletti, Co-Investigator
Dr. Seung-Hwan Lim, Co-Investigator
Dr. Ashish Gautam, Postdoctoral Investigator
Dr. Avishek Bose, Postdoctoral Investigator
University of Tennessee
Assistant Prof. Catherine Schuman Co-PI
George Mason University
Assistant Prof. Maryam Parsa Co-PI
George Washington University
Assistant Prof. Gina Adam, Co-PI
To rein in the intractable energy consumption of large AI models, we propose to create
a pioneering full-stack co-design methodology for graph learning, which is ubiquitous in science
applications. We will develop novel paradigms of graph learning using Spiking Neural Networks (SNNs) and
novel hardware beyond digital systems to reduce energy usage by orders of magnitude in comparison
to graph neural networks (GNNs), without compromising predictive performance. In fact, our methodology
may solve some of the algorithmic issues related to GNNs.
Our approach innovates through five key strategies: 1) Developing science-aware SNN architectures
tailored for graph learning rather than mimicking existing GNN frameworks, 2) Implementing advanced
training algorithms, such as spike-timing-dependent plasticity (STDP), that extend beyond traditional
gradient-descent methods to enhance learning efficiency and reduce energy costs (a minimal STDP update
is sketched below), 3) Building powerful SNN simulators that bridge the gap between logical models and
hardware realities, enabling co-design that optimizes for both performance and energy usage, 4) Scaling
the simulations to millions of nodes/neurons on leadership computing facilities, leveraging deep
expertise in exascale computing, and 5) Designing new energy-efficient hardware that surpasses
traditional digital complementary metal-oxide-semiconductor (CMOS) technology and accommodates common
algorithmic patterns in graph learning.
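For reference, the STDP rule named in strategy 2 adjusts a synaptic weight from the relative timing of
pre- and post-synaptic spikes rather than from a global gradient. The following is a minimal sketch of a
standard pair-based STDP update with exponential kernels; the function name and hyperparameter values
are illustrative assumptions, not the training algorithm this project will ultimately develop.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    t_pre, t_post: spike times (ms); a_plus/a_minus and tau_plus/tau_minus
    are assumed learning rates and time constants for illustration only.
    """
    dt = t_post - t_pre
    if dt > 0:    # pre-synaptic spike precedes post-synaptic spike: potentiate
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:  # post-synaptic spike precedes pre-synaptic spike: depress
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0    # coincident spikes: no change in this simple rule
```

Because each update depends only on local, event-driven spike timing, rules of this form avoid the dense
matrix multiplications and global backpropagation passes that dominate the energy cost of GNN training.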
Our approach diverges markedly from current popular deep learning practices on GPUs and opens up a
new landscape for research into native learning on novel hardware. We aim to establish an open,
scalable ecosystem for building highly efficient graph learning algorithms and hardware tailored
for scientific applications supported by the Department of Energy (DOE), and we believe the lessons
learned can be translated to other AI domains, such as foundation models, with long-lasting impact
on energy-efficient AI systems.