David Abel



I'm a Ph.D. candidate in Computer Science at Brown University focusing on Reinforcement Learning, advised by Prof. Michael Littman.

I am on the academic job market.



My research investigates the foundations of machine learning and applications thereof to scientific and societal challenges.

I'm currently focused on understanding abstraction and its role in agency. I study how rational agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning. I typically work within the Reinforcement Learning paradigm, drawing on tools from computational learning theory, probability, and information theory.

I also care deeply about responsible applications of ML to real-world problems, as in the mission of computational sustainability.

Featured Work

Expected-Length Option Model

We introduce and motivate the Expected-Length Model of Options, a simpler alternative for characterizing the transition and reward functions of options.

State Abstraction for Lifelong RL

We study state abstractions that trade off between compression and optimality through rate-distortion theory.

Point Options

We prove that the problem of finding options that minimize planning time is NP-hard.

Covering Options

We propose an option discovery method for exploration based on minimizing cover time.

By Yuu Jinnai, Jee Won Park, me, and George Konidaris.

Planning Regularization

We explore different approaches to avoiding planning overfitting in model-based RL.


For fun, I'm a big fan of basketball, rock climbing, snowboarding, games, and music (I play violin/guitar and mostly listen to progressive metal).

I'm an advocate for a few specific causes: sustainability efforts, existential risk minimization, space exploration, and improving the diversity, quality, and accessibility of STEM education.

Always up for a chat -- shoot me an email if you'd like to discuss anything!