About Me
I am a Research Scientist at Meta Superintelligence Labs (MSL), where I work on speech synthesis for Meta’s multimodal models. My current focus spans diffusion modeling, few-step generative modeling, neural codec training, and multimodal LLM post-training, with an emphasis on reward modeling for RLHF.
I was previously a Ph.D. student in the EECS department at MIT, advised by Professor Gregory Wornell. I defended my Ph.D. thesis, “Score Estimation for Generative Modeling,” in May 2025. Before that, I completed my M.S. in EECS at MIT in January 2022 and my B.S. in Electrical Engineering at the University of Illinois at Urbana-Champaign in 2019.
Research Interests
- Efficient Generative Modeling: Developing new algorithms for score estimation, flow matching, and distribution matching to improve training and sampling in generative models (Score of Mixture Training, ICML ’25 Spotlight) and in inverse problems (α-RGS, NeurIPS ’23).
- Reward Modeling & LLM Post-training: Training reward models and judges to enhance multimodal LLM outputs using RLHF and DPO (Generative Speech Reward Model, MSL ‘26).
- Neural Data Compression: Using generative models as priors for low-bitrate compression and tokenization (Efficient Video Compression Transformers, Google Research ’23; Model Code Separation, MIT ’24).



