Publications
2022
Trevor McInroe, Lukas Schäfer, Stefano V. Albrecht
Learning Representations for Reinforcement Learning with Hierarchical Forward Models
NeurIPS Workshop on Deep Reinforcement Learning, 2022
NeurIPS, deep-rl, generalisation
Abstract:
Learning control from pixels is difficult for reinforcement learning (RL) agents because representation learning and policy learning are intertwined. Previous approaches remedy this issue with auxiliary representation learning tasks, but they either do not consider the temporal aspect of the problem or only consider single-step transitions, which may miss relevant information if important environmental changes take many steps to manifest. We propose Hierarchical k-Step Latent (HKSL), an auxiliary task that learns representations via a hierarchy of forward models that operate at varying magnitudes of step skipping while also learning to communicate between levels in the hierarchy. We evaluate HKSL in a suite of 30 robotic control tasks with and without distractors and a task of our creation. We find that HKSL either converges to higher or optimal episodic returns more quickly than several alternative representation learning approaches. Furthermore, we find that HKSL's representations capture task-relevant details accurately across timescales (even in the presence of distractors) and that communication channels between hierarchy levels organize information based on both sides of the communication process, both of which improve sample efficiency.
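Illustration: the abstract describes HKSL as a hierarchy of forward models that operate at different magnitudes of step skipping and learn to communicate between levels. The following minimal PyTorch sketch only illustrates that general idea; the module structure, layer sizes, the top-down message passing, and names such as ForwardLevel are assumptions made for illustration and are not the paper's implementation (see the arXiv version for the actual method).

# Hypothetical sketch of hierarchical k-step latent forward models.
# Assumed details (not from the paper): layer sizes, linear communication
# layers, and the top-down order of message passing.
import torch
import torch.nn as nn

class ForwardLevel(nn.Module):
    """One level of the hierarchy: predicts the latent state step_skip steps ahead."""
    def __init__(self, latent_dim, action_dim, step_skip):
        super().__init__()
        self.step_skip = step_skip  # how many environment steps one prediction spans
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim * step_skip + latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, latent, actions, message):
        # actions: the step_skip actions covered by this prediction, concatenated
        # message: communication vector from the level above (zeros at the top level)
        return self.dynamics(torch.cat([latent, actions, message], dim=-1))

class HierarchicalForwardModel(nn.Module):
    """Stack of forward models running at coarse-to-fine step skips."""
    def __init__(self, latent_dim=50, action_dim=6, step_skips=(4, 2, 1)):
        super().__init__()
        self.levels = nn.ModuleList(
            ForwardLevel(latent_dim, action_dim, k) for k in step_skips
        )
        # simple assumed communication scheme: each level emits a message
        # that conditions the prediction of the next (finer) level
        self.comm = nn.ModuleList(
            nn.Linear(latent_dim, latent_dim) for _ in step_skips
        )

    def forward(self, latent, actions_per_level):
        message = torch.zeros_like(latent)
        predictions = []
        for level, comm, actions in zip(self.levels, self.comm, actions_per_level):
            pred = level(latent, actions, message)
            predictions.append(pred)
            message = comm(pred)  # pass information down to the finer level
        return predictions

# Example shapes: a batch of 32 latent states and per-level action sequences.
# In training, each level's predictions would be regressed against target
# encodings of the corresponding future observations (auxiliary loss).
model = HierarchicalForwardModel()
latents = torch.randn(32, 50)
actions_per_level = [torch.randn(32, 6 * k) for k in (4, 2, 1)]
predictions = model(latents, actions_per_level)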
@inproceedings{mcinroe2022hksl,
  title = {Learning Representations for Reinforcement Learning with Hierarchical Forward Models},
  author = {Trevor McInroe and Lukas Schäfer and Stefano V. Albrecht},
  booktitle = {NeurIPS Workshop on Deep Reinforcement Learning},
  year = {2022}
}