About Me
I am a fourth-year PhD student at MIT EECS, where I am fortunate to be advised by Professor Nir Shavit as a member of the Shavit Lab. Prior to my graduate studies, I received my BA in computer science and neuroscience from Columbia University, where I conducted research in the Peter Sims Lab.
I am curious about how computation arises at the scale of individual neurons and small groups of neurons, in both biological and artificial neural networks. I approach these questions through connectomics and mechanistic interpretability, with the goal of informing the design of more efficient machine learning systems.
My research is supported by an IBM AI Research Grant.
News
- April 2026: My work Expand neurons, not parameters was accepted to ICML 2026; I gave a talk on my work Negative pre-activations differentiate syntax at MIT in Cambridge
- March 2026: My work The feature-space alignment hypothesis for neural network sparsity was accepted to the ICLR 2026 Workshop Sci4DL
- January 2026: My work Negative pre-activations differentiate syntax was accepted to ICLR 2026
- October 2025: I was the primary author of a proposal selected for an IBM AI Research Grant, which supports my PhD research in the Shavit Lab
- June 2025: My work Input differentiation via negative computation was accepted to the ICML 2025 Workshop HiLD
- May 2025: My work A connectomics-driven analysis reveals novel characterization of border regions in mouse visual cortex was accepted for publication in the journal Neural Networks
- March 2025: I gave a talk on my work Wasserstein distances, neuronal entanglement, and sparsity at Red Hat in Boston
- January 2025: My work Wasserstein distances, neuronal entanglement, and sparsity was accepted to ICLR 2025 as a Spotlight Presentation
- October 2024: I was selected as a Cerebras Research Fellow
- May 2024: I received my SM from MIT EECS for my thesis on Sparse Expansion and neuronal disentanglement
- April 2024: I received an Honorable Mention from the NSF GRFP
Selected Works
Expand neurons, not parameters
Published in ICML 2026
Linghao Kong*, Inimai Subramanian*, Yonadav Shavit, Micah Adler, Dan Alistarh, & Nir N. Shavit
Download Paper
Negative pre-activations differentiate syntax
Published in ICLR 2026
Linghao Kong, Angelina Ning, Micah Adler, & Nir N. Shavit
Download Paper | arXiv
The feature-space alignment hypothesis for neural network sparsity
Presented at ICLR 2026 Workshop Sci4DL
Linghao Kong, Micah Adler, & Nir N. Shavit
Download Paper
Wasserstein distances, neuronal entanglement, and sparsity
Published in ICLR 2025 as a Spotlight Presentation
Shashata Sawmya*, Linghao Kong*, Ilia Markov, Dan Alistarh, & Nir N. Shavit
Download Paper | arXiv
