I am a Senior Research Scientist at DeepMind on the Agency team led by Will Dabney, and an Honorary Fellow at the University of Edinburgh where I work closely with the Autonomous Agents Research Group.
I will be co-supervising a small number of PhD students at the University of Edinburgh: for more info, see here.
If you are interested in working together at Google DeepMind, see open roles here. Unfortunately, I do not have any current openings for direct reports or interns.
My research focuses on bringing clarity to the central philosophical questions surrounding agency, computation, and learning.
I value research that provides new understanding, and tend to get excited by simple but foundational questions. I typically work with the reinforcement learning problem, drawing on tools and perspectives from across philosophy, math, and computer science.
I am currently interested in better defining the main concepts of AI, such as learning, agency, and goals. Previously, my dissertation studied how agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning.
Three Dogmas of Reinforcement Learning
RLC 2024
We reflect on the paradigm of RL and suggest three departures from our current thinking.
Joint with Mark Ho and Anna Harutyunyan.
A Definition of Continual Reinforcement Learning
NeurIPS 2023
We present a precise definition of the continual reinforcement learning problem.
Settling the Reward Hypothesis
ICML 2023
We identify the implicit requirements on goals and purposes under which the reward hypothesis holds.
People Construct Simplified Mental Representations to Plan
Nature 2022
We develop a new theory describing how people simplify and represent problems when planning.
Led by Mark K. Ho, joint with Carlos G. Correa, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths.
On the Expressivity of Markov Reward
NeurIPS 2021 (Outstanding Paper Award)
We study the expressivity of Markov reward functions in finite environments by analysing what kinds of tasks such functions can express.
Joint work with Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh.
A Theory of Abstraction in Reinforcement Learning
Ph.D. Thesis, 2020
My dissertation, aimed at understanding abstraction and its role in effective reinforcement learning.
Advised by Michael L. Littman.
Value Preserving State-Action Abstractions
AISTATS 2020
We prove which combinations of state abstractions and options are guaranteed to preserve representation of near-optimal policies in any finite Markov Decision Process.
The Value of Abstraction
Current Opinion in Behavioral Sciences 2019
We discuss the vital role that abstraction plays in efficient decision making.
Finding Options that Minimize Planning Time
ICML 2019
We prove that the problem of finding options that minimize planning time is NP-hard.
Before joining DeepMind, I completed my Ph.D. in Computer Science at Brown University, where I was fortunate to be advised by Prof. Michael Littman. I got my start in research working with Prof. Stefanie Tellex at Brown, and before that studied Philosophy and Computer Science at Carleton College.
I'm a big fan of basketball, lifting, baking, reading, games, snowboarding, and music (I play guitar/piano/violin and love listening to just about everything). I live in Edinburgh, Scotland with my wife Elizabeth and our dog Barley.
Q: What should I call you? A: I usually go by "Dave", but I take no offense to "David". If I'm teaching your class, "Dave" / "Professor Abel" are both okay.
I am always up for a chat -- please reach out over email if you would like to discuss anything. If you want to arrange a call, I have a recurring open slot in my calendar here.
If you have feedback of any kind, please feel free to fill out this anonymous feedback form.