I am a research scientist at DeepMind, an adjunct professor at the University of Alberta, and a Canada CIFAR AI Chair through Amii. My research interests lie broadly in machine learning, specifically in reinforcement learning, representation learning, optimization, and real-world applications of all of the above. [Google Scholar] [DBLP]
Jun. 2021: I’ve been appointed a Canada CIFAR AI Chair.
Jun. 2021: I’ve been appointed an Amii Fellow.
May 2021: We had a paper accepted at ICML’21.
Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization (w/ Chung*, Thomas*, & Le Roux).
Jan. 2021: We had a paper accepted at ICLR’21.
Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning (w/ Agarwal, Castro, & Bellemare).
Jan. 2021: I started as a Research Scientist at DeepMind.
Jan. 2021: I became an adjunct professor at the University of Alberta.
Dec. 2020: We had a paper accepted in Nature.
Autonomous Navigation of Stratospheric Balloons using Reinforcement Learning (w/ Bellemare, Candido, Castro, Gong, Moitra, Ponda, & Wang).
Sep. 2020: We had a paper accepted at NeurIPS’20.
An Operator View of Policy Gradient Methods (w/ Ghosh & Le Roux).
Feb. 2020: I gave a talk at Stanford University.
Temporal Abstraction in RL with the Successor Representation.
Feb. 2020: I attended AAAI’20 in New York City.
Dec. 2019: We had two papers accepted at ICLR’20.
Exploration in Reinforcement Learning with Deep Covering Options (w/ Jinnai, Park, & Konidaris).
On Bonus-Based Exploration Methods in the Arcade Learning Environment (w/ Taiga, Fedus, Courville, & Bellemare).
Nov. 2019: We had a paper accepted at AAAI’20.
Count-Based Exploration with the Successor Representation (w/ Bellemare & Bowling).