David Abel



I am a Research Scientist at DeepMind in London. Before that, I completed my Ph.D. in Computer Science at Brown University, where I was fortunate to be advised by Prof. Michael Littman.



My research focuses on bringing clarity to the central philosophical questions surrounding computation and learning.

I am currently interested in characterizing the nature of computational worlds that can contain sophisticated phenomena such as intelligent agents. Previously, my dissertation investigated how rational agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning.

I typically work with the reinforcement learning problem, drawing on tools and perspectives from computational learning theory, computational complexity, and analytic philosophy. I value research that concentrates on providing new understanding, and tend to get excited by simple but foundational questions.

Featured Work

Thesis overview

My dissertation, which aims to understand abstraction and its role in effective reinforcement learning.

Advised by Michael L. Littman.

Value Preserving Abstractions

We prove which combinations of state abstractions and options are guaranteed to preserve a representation of near-optimal policies in any finite Markov Decision Process.

Lipschitz Lifelong Reinforcement Learning

We examine the Lipschitz continuity of value functions and MDPs, then exploit these properties to develop a PAC-MDP algorithm for lifelong RL called Lipschitz RMax.

Affordances in RL

We develop a theory of affordances in the context of RL and planning.

Planned Information Processing

We develop a model that characterizes the planned use of information processing as a meta-reasoning problem and study this model's capacity to predict human reaction times in simple tasks.

The Value of Abstraction
Current Opinion in Behavioral Sciences, 2019

We discuss the vital role that abstraction plays in efficient decision making.

Expected-Length Option Model

We introduce and motivate the Expected-Length Model of Options, a simpler alternative for characterizing the transition and reward functions of options.

State Abstractions for Lifelong RL

We study state abstractions that trade off between compression and optimality through rate-distortion theory.

Point Options

We prove that the problem of finding options that minimize planning time is NP-Hard.


For fun, I'm a big fan of basketball, snowboarding, baking, games, and music (I play guitar/piano/violin and mostly listen to progressive metal).

I am an advocate of a few specific causes: sustainability, existential risk minimization, space exploration, and improving the diversity, quality, and accessibility of education.

Always up for a chat -- shoot me an email if you'd like to discuss anything!