Publications
All topic tags:
survey, deep-rl, multi-agent-rl, agent-modelling, ad-hoc-teamwork, autonomous-driving, goal-recognition, explainable-ai, causal, generalisation, security, emergent-communication, iterated-learning, intrinsic-reward, simulator, state-estimation, deep-learning, transfer-learning
2024
Mhairi Dunion, Stefano V. Albrecht
Multi-view Disentanglement for Reinforcement Learning with Multiple Cameras
Reinforcement Learning Conference, 2024
Abstract | BibTeX | arXiv | Code
RLC, deep-rl, generalisation
Abstract:
The performance of image-based Reinforcement Learning (RL) agents can vary depending on the position of the camera used to capture the images. Training on multiple cameras simultaneously, including a first-person egocentric camera, can leverage information from different camera perspectives to improve the performance of RL. However, hardware constraints may limit the availability of multiple cameras in real-world deployment. Additionally, cameras may become damaged in the real world, preventing access to all cameras that were used during training. To overcome these hardware constraints, we propose Multi-View Disentanglement (MVD), which uses multiple cameras to learn a policy that achieves zero-shot generalisation to any single camera from the training set. Our approach is a self-supervised auxiliary task for RL that learns a disentangled representation from multiple cameras, with a shared representation that is aligned across all cameras to allow generalisation to a single camera, and a private representation that is camera-specific. We show experimentally that an RL agent trained on a single third-person camera is unable to learn an optimal policy in many control tasks; but our approach, benefiting from multiple cameras during training, is able to solve the task using only the same single third-person camera.
@inproceedings{dunion2024mvd,
title={Multi-view Disentanglement for Reinforcement Learning with Multiple Cameras},
author={Mhairi Dunion and Stefano V. Albrecht},
booktitle={1st Reinforcement Learning Conference},
year={2024}
}
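To make the shared/private split described in the MVD abstract more concrete, here is a minimal PyTorch-style sketch of a multi-camera disentanglement auxiliary loss. The encoder layout, the MSE alignment term, and the overlap penalty are illustrative assumptions for flat observations, not the paper's actual architecture or objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    # Encodes one camera view into a shared part and a private part.
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        self.shared_head = nn.Linear(256, latent_dim)   # aligned across cameras
        self.private_head = nn.Linear(256, latent_dim)  # camera-specific

    def forward(self, obs: torch.Tensor):
        h = self.backbone(obs)
        return self.shared_head(h), self.private_head(h)

def disentanglement_loss(encoders, observations):
    # encoders/observations: one per camera; each observation is a batch.
    shared, private = zip(*[enc(o) for enc, o in zip(encoders, observations)])
    # Pull the shared representations of every camera pair together.
    align = sum(F.mse_loss(shared[i], shared[j])
                for i in range(len(shared)) for j in range(i + 1, len(shared)))
    # Discourage the private part from duplicating shared content (illustrative).
    overlap = sum(F.cosine_similarity(s, p, dim=-1).abs().mean()
                  for s, p in zip(shared, private))
    return align + 0.1 * overlap

The intended effect is that a policy conditioned on the shared representation can act from any single training camera at deployment, which is the zero-shot generalisation claimed in the abstract.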
Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey
Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
Reinforcement Learning Conference, 2024
Abstract | BibTeX | arXiv
RLC, deep-rl
Abstract:
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but we show this can unnecessarily limit policy performance if the behavior policy is far from optimal. Instead, we forgo constraints and frame OtO RL as an exploration problem that aims to maximize the benefit of online data collection. We first study the major online RL exploration methods based on intrinsic rewards and UCB in the OtO setting, showing that intrinsic rewards add training instability through reward-function modification, and that UCB methods are myopic, with no clear choice of which learned component's ensemble to use for action selection. We then introduce an algorithm for planning to go out-of-distribution (PTGOOD) that avoids these issues. PTGOOD uses a non-myopic planning procedure that targets exploration in relatively high-reward regions of the state-action space unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages data collected online to provide new information relevant to improving the final deployment policy without altering rewards. We show empirically in several continuous control tasks that PTGOOD significantly improves agent returns during online fine-tuning and avoids the suboptimal policy convergence that many of our baselines exhibit in several environments.
@inproceedings{mcinroe2024planning,
title={Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning},
author={Trevor McInroe and Adam Jelley and Stefano V. Albrecht and Amos Storkey},
booktitle={1st Reinforcement Learning Conference},
year={2024}
}
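As a rough illustration of the planning idea in the PTGOOD abstract, the sketch below scores candidate action sequences under a learned dynamics model by trading off predicted return against how unlikely the visited state-action pairs are under the behaviour policy. The random-shooting planner and the negative log-likelihood novelty term are assumptions standing in for the paper's Conditional Entropy Bottleneck-based score; dynamics_model, reward_model, and behaviour_log_prob are hypothetical callables.

import torch

def plan_out_of_distribution(dynamics_model, reward_model, behaviour_log_prob,
                             state, action_dim, horizon=10,
                             num_candidates=256, beta=1.0):
    # Random-shooting planner: returns the first action of the best sequence.
    s = state.unsqueeze(0).repeat(num_candidates, 1)
    actions = torch.randn(num_candidates, horizon, action_dim)  # candidate sequences
    returns = torch.zeros(num_candidates)
    novelty = torch.zeros(num_candidates)
    for t in range(horizon):
        a = actions[:, t]
        returns = returns + reward_model(s, a).squeeze(-1)
        # Low likelihood under the behaviour policy ~ out-of-distribution.
        novelty = novelty - behaviour_log_prob(s, a)
        s = dynamics_model(s, a)
    scores = returns + beta * novelty  # whole rollouts are scored, not single steps
    return actions[scores.argmax(), 0]

Because complete rollouts are scored rather than single steps, the exploration bonus is non-myopic in the sense described in the abstract; beta controls how aggressively the planner leaves the behaviour policy's support.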
Aditya Kapoor, Sushant Swamy, Kale-ab Tessera, Mayank Baranwal, Mingfei Sun, Harshad Khadilkar, Stefano V. Albrecht
Agent-Temporal Credit Assignment for Optimal Policy Preservation in Sparse Multi-Agent Reinforcement Learning
RLC Workshop on Coordination and Cooperation for Multi-Agent Reinforcement Learning Methods, 2024
Abstract | BibTeX | Paper
RLC, deep-rl, multi-agent-rl
Abstract:
The ability of agents to learn optimal policies is hindered in multi-agent environments where all agents receive a global reward signal sparsely or only at the end of an episode. The delayed nature of these rewards, especially in long-horizon tasks, makes it challenging for agents to evaluate their actions at intermediate time steps. In this paper, we propose Agent-Temporal Reward Redistribution (ATRR), a novel approach to tackle the agent-temporal credit assignment problem by redistributing sparse environment rewards both temporally and at the agent level. ATRR first decomposes the sparse global rewards into rewards for each time step and then calculates agent-specific rewards by determining each agent's relative contribution to these decomposed temporal rewards. We theoretically prove that there exists a redistribution method equivalent to potential-based reward shaping, ensuring that the optimal policy remains unchanged. Empirically, we demonstrate that ATRR stabilizes and expedites the learning process. We also show that ATRR, when used alongside single-agent reinforcement learning algorithms, performs as well as or better than their multi-agent counterparts.
@inproceedings{kapoor2024agenttemporal,
title={Agent-Temporal Credit Assignment for Optimal Policy Preservation in Sparse Multi-Agent Reinforcement Learning},
author={Aditya Kapoor and Sushant Swamy and Kale-ab Tessera and Mayank Baranwal and Mingfei Sun and Harshad Khadilkar and Stefano V. Albrecht},
booktitle={Coordination and Cooperation for Multi-Agent Reinforcement Learning Methods Workshop},
year={2024},
url={https://openreview.net/forum?id=dGS1e3FXUH}
}
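A minimal sketch of the agent-temporal redistribution idea: the sparse episodic return is first split over time steps and then over agents using contribution weights that sum to one along each axis. The softmax weighting and the placeholder logits are illustrative assumptions; ATRR learns its decomposition as described in the paper, and the equivalence to potential-based reward shaping is what guarantees the optimal policy is preserved.

import torch
import torch.nn.functional as F

def redistribute_reward(episode_return: float,
                        temporal_logits: torch.Tensor,   # shape (T,)
                        agent_logits: torch.Tensor):     # shape (T, N)
    # Split the episodic return across time steps.
    temporal_weights = F.softmax(temporal_logits, dim=0)       # (T,)
    per_step = episode_return * temporal_weights               # (T,)
    # Split each step's share across agents by relative contribution.
    agent_weights = F.softmax(agent_logits, dim=1)             # (T, N)
    per_agent = per_step.unsqueeze(1) * agent_weights          # (T, N)
    return per_agent  # dense per-agent rewards replacing the sparse global signal

# Example: 5-step episode, 3 agents, episodic return of 10.
dense = redistribute_reward(10.0, torch.zeros(5), torch.zeros(5, 3))
assert torch.isclose(dense.sum(), torch.tensor(10.0))  # total reward is conserved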