Marlos C. Machado

I am an adjunct professor at the University of Alberta, and a Canada CIFAR AI Chair through Amii. My research interests lie broadly in machine learning, specifically in reinforcement learning, representation learning, and real-world applications of all of the above.
[Google Scholar] [DBLP]


Prospective students, click here.


Recent News

    • Mar. 2023: Our recent paper is now available on arXiv.
      • Loss of Plasticity in Continual Deep Reinforcement Learning (w/ Abbas, Zhao, Modayil, & White).
    • Feb. 2023: We had a paper accepted at JMLR.
      • Temporal Abstraction in Reinforcement Learning with the Successor Representation (w/ Barreto, Precup, & Bowling).
    • Jan. 2023: Our recent paper is now available on arXiv.
      • Deep Laplacian-based Options for Temporally-Extended Exploration (w/ Klissarov).
    • Jan. 2023: Our recent paper is now available on arXiv.
      • Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning (w/ Daley, White, & Amato).
    • Nov. 2022: I gave an invited keynote talk at the 11th Brazilian Conference on Intelligent Systems (BRACIS).
      • Temporal Abstraction in RL with the Successor Representation.
    • Nov. 2022: Our recent paper is now available on arXiv.
      • Agent-State Construction with Auxiliary Inputs (w/ Tao & White).
    • Aug. 2022: Erfan Miahi and David Tao successfully defended their M.Sc. theses!
    • May 2022: We had a paper accepted at UAI’22.
      • Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL (w/ Erraqabi, Zhao, Sukhbaatar, Lazaric, Denoyer, & Bengio).
    • Mar. 2022: Our paper was nominated for the best paper award at AISTATS’22!
      • A General Class of Surrogate Functions for Stable and Efficient Reinforcement Learning (w/ Vaswani, Bachem, Totaro, Mueller, Garg, Geist, Castro, & Le Roux).
    • Mar. 2022: Our recent paper is now available on arXiv.
      • Investigating the Properties of Neural Network Representations in Reinforcement Learning (w/ Wang, Miahi, White, Abbas, Kumaraswamy, Liu, & White).
    • Feb. 2022: Our recent paper is now available on arXiv.
      • Reward-Respecting Subtasks for Model-Based Reinforcement Learning (w/ Sutton, Holland, Szepesvari, Timbers, Tanner, & White).
    • Jan. 2022: We had a paper accepted at AISTATS’22.
      • A General Class of Surrogate Functions for Stable and Efficient Reinforcement Learning (w/ Vaswani, Bachem, Totaro, Mueller, Garg, Geist, Castro, & Le Roux).
    • Oct. 2021: Our recent paper is now available on arXiv.
      • Temporal Abstraction in Reinforcement Learning with the Successor Representation (w/ Barreto & Precup).
    • Oct. 2021: I gave a talk and participated in a panel at the Microsoft Summit Workshop on RL, Forwards and Backwards: Insights from Neuroscience.
      • Temporal Abstraction in RL with the Successor Representation.
    • Jun. 2021: I’ve been appointed Canada CIFAR AI Chair.
    • Jun. 2021: I’ve been appointed Amii Fellow.
    • May 2021: We had a paper accepted at ICML’21.
      • Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization (w/ Chung*, Thomas*, & Le Roux).
    • Jan. 2021: We had a paper accepted at ICLR’21.
      • Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning (w/ Agarwal, Castro, & Bellemare).
    • Jan. 2021: I started as a Research Scientist at DeepMind.
    • Jan. 2021: I became an adjunct professor at the University of Alberta.
    • Dec. 2020: We had a paper accepted by Nature.
      • Autonomous Navigation of Stratospheric Balloons using Reinforcement Learning (w/ Bellemare, Candido, Castro, Gong, Moitra, Ponda, & Wang).