I'm a Ph.D. candidate in Computer Science at Brown University focusing on Artificial Intelligence, advised by Prof. Michael Littman.
My research investigates the foundations of Artificial Intelligence and their applications to scientific and societal challenges.
I'm currently focused on understanding abstraction and its role in intelligence. I study how intelligent agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning. I typically work within the Reinforcement Learning paradigm, drawing on tools from computational learning theory, probability, and information theory.
I additionally care deeply about responsible applications of AI to problems of relevance in the world, with a current focus on the mission of computational sustainability.
We study state abstractions that trade off between compression and optimality through rate-distortion theory.
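The trade-off above can be sketched concretely. The following toy Python snippet (a minimal illustration, not the paper's actual formulation; the state values, the `beta` weight, and the scoring function are all hypothetical) scores a candidate state abstraction by a "rate" term (bits needed to name an abstract state) plus a "distortion" term (value lost by merging dissimilar states):

```python
# Hypothetical sketch of a rate-distortion trade-off for state abstraction.
# An abstraction phi maps ground states to clusters; we score it by the bits
# needed to identify a cluster (rate) plus beta times the value spread inside
# each cluster (distortion). All numbers are toy values, not from the paper.
import math

state_values = {"s0": 1.0, "s1": 0.95, "s2": 0.2, "s3": 0.15}  # toy V*(s)

def rate(phi):
    # Bits to identify an abstract state, assuming uniform use of clusters.
    return math.log2(len(set(phi.values())))

def distortion(phi):
    # Value spread within each cluster: max V* minus min V* among merged states.
    clusters = {}
    for s, c in phi.items():
        clusters.setdefault(c, []).append(state_values[s])
    return sum(max(vs) - min(vs) for vs in clusters.values())

def score(phi, beta=1.0):
    # Lower is better: compress aggressively, but only where little value is lost.
    return rate(phi) + beta * distortion(phi)

fine = {"s0": 0, "s1": 1, "s2": 2, "s3": 3}    # no compression: 2 bits, 0 loss
coarse = {"s0": 0, "s1": 0, "s2": 1, "s3": 1}  # merge similar-value states
print(score(fine))    # 2.0
print(score(coarse))  # 1.1 -- the compressed abstraction wins at beta = 1
```

Varying `beta` sweeps out the trade-off: large `beta` favors near-optimal, fine-grained abstractions, while small `beta` favors aggressive compression.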
Finding Options that Minimize Planning Time
We prove that the problem of finding options that minimize planning time is NP-Hard.
We explore different approaches to avoid planning overfitting in model-based RL.
With Dilip Arumugam, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, and Michael Littman.
For fun, I'm a big fan of basketball, hiking, fitness, cooking, snowboarding, and music (I play guitar/violin and mostly listen to folk, progressive metal, and classical).
I'm an advocate of a few specific causes: sustainability efforts, existential risk minimization, space exploration, and improving the diversity, quality, and accessibility of STEM education.
I'm always up for a chat, and happy to visit labs or companies to give talks; shoot me an email if you'd like to discuss further!