About Me
I am a Research Scientist at Meta Superintelligence Labs (MSL), where I work on speech synthesis for Meta’s multimodal models, focusing on diffusion modeling, few-step generative modeling, neural codec training, and multimodal LLM post-training with an emphasis on reward modeling for RLHF.
I was previously a PhD student in the EECS department at MIT, advised by Professor Gregory Wornell. I defended my PhD thesis, titled “Score Estimation for Generative Modeling,” in May 2025. Before that, I completed my M.S. in EECS at MIT in January 2022 and my B.S. in Electrical Engineering at the University of Illinois at Urbana-Champaign in 2019.
Current Research Interests
My current research interests include:
Efficient Generative Modeling and Score Estimation: I conduct foundational research on score estimation, flow matching, and distribution matching, developing new algorithms that improve training and sampling efficiency in generative models (e.g., ICML ‘25 Spotlight) and solve inverse problems (e.g., NeurIPS ‘23).
Reward Modeling and Multimodal LLM Post-training: At MSL, I also train reward models and design judges to enhance the aesthetic and semantic quality of multimodal LLM outputs via RLHF and DPO.
Neural Data Compression: I explore how generative models can serve as density estimators and powerful priors to advance low-bitrate compression for source coding (or, in modern terms, tokenization). My previous contributions include video compression research at Google Research, variable-rate speech codecs at Meta, and a patented model-code separation architecture developed at MIT.



