David Abel



I am a Research Scientist at DeepMind in London. Before that, I completed my Ph.D. in Computer Science and Master's in Philosophy at Brown University, where I was fortunate to be advised by Prof. Michael Littman (CS) and Prof. Joshua Schechter (Philosophy).



My research focuses on bringing clarity to the central philosophical questions surrounding computation and learning.

I value research that concentrates on providing new understanding, and tend to get excited by simple but foundational questions. I typically work with the reinforcement learning problem, drawing on tools and perspectives from computational learning theory, computational complexity, and analytic philosophy.

I am currently interested in better defining the AI problem. Previously, my dissertation studied how effective agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning.

Featured Research

Alice, Bob, and RL

We study the expressivity of Markov reward functions in finite environments by analysing what kinds of tasks such functions can express.

Thesis overview

My dissertation, which aims to understand abstraction and its role in effective reinforcement learning.

Advised by Michael L. Littman.

Value Preserving Abstractions

We prove which combinations of state abstractions and options are guaranteed to preserve representation of near-optimal policies in any finite Markov Decision Process.

Lipschitz Lifelong Reinforcement Learning

We examine the Lipschitz continuity of value functions and MDPs, then exploit these properties to develop a PAC-MDP algorithm for lifelong RL called Lipschitz RMax.

Affordances in RL

We develop a theory of affordances in the context of RL and planning.

Planned Information Processing

We develop a model that characterizes the planned use of information processing as a meta-reasoning problem and study this model's capacity to predict human reaction times in simple tasks.

The Value of Abstraction
Current Opinion in Behavioral Sciences 2019

We discuss the vital role that abstraction plays in efficient decision making.

Expected-Length Option Model

We introduce and motivate the Expected-Length Model of Options, a simpler alternative for characterizing the transition and reward functions of options.

State Abstraction for Lifelong RL

We study state abstractions that trade off between compression and optimality through rate-distortion theory.

Point Options

We prove that the problem of finding options that minimize planning time is NP-Hard.


For fun, I'm a big fan of basketball, snowboarding, games, and music (I play guitar/piano/violin and mostly listen to progressive metal). I now live in London, UK, with my wife Elizabeth and our dog Barley.

Always up for a chat -- shoot me an email if you'd like to discuss anything!