Publications
All topic tags:
survey, deep-rl, multi-agent-rl, agent-modelling, ad-hoc-teamwork, autonomous-driving, goal-recognition, explainable-ai, causal, generalisation, security, emergent-communication, iterated-learning, intrinsic-reward, simulator, state-estimation, deep-learning, transfer-learning
2023
Arrasy Rahman, Ignacio Carlucho, Niklas Höpner, Stefano V. Albrecht
A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning
Journal of Machine Learning Research, 2023
Abstract | BibTex | arXiv | Publisher | Code
JMLR, ad-hoc-teamwork, deep-rl, agent-modelling, multi-agent-rl
Abstract:
Open ad hoc teamwork is the problem of training a single agent to efficiently collaborate with an unknown group of teammates whose composition may change over time. A variable team composition creates challenges for the agent, such as the need to adapt to new team dynamics and to deal with changing state vector sizes. These challenges are aggravated in real-world applications where the controlled agent has no access to the full state of the environment. In this work, we develop a class of solutions for open ad hoc teamwork under full and partial observability. We start by developing a solution for the fully observable case that leverages graph neural network architectures to obtain an optimal policy based on reinforcement learning. We then extend this solution to partially observable scenarios by proposing different methodologies that maintain belief estimates over the latent environment states and team composition. These belief estimates are combined with our solution for the fully observable case to compute an agent's optimal policy under partial observability in open ad hoc teamwork. Empirical results demonstrate that our approach can learn efficient policies in open ad hoc teamwork in both fully and partially observable cases. Further analysis demonstrates that our methods' success results from effectively learning the effects of teammates' actions while also inferring the inherent state of the environment under partial observability.
@article{JRahman2022POGPL,
  author  = {Arrasy Rahman and Ignacio Carlucho and Niklas H\"opner and Stefano V. Albrecht},
  title   = {A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning},
  journal = {Journal of Machine Learning Research},
  year    = {2023},
  volume  = {24},
  number  = {298},
  pages   = {1--74},
  url     = {http://jmlr.org/papers/v24/22-099.html}
}
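To make the abstract's core idea concrete, below is a minimal sketch of a policy network that copes with a variable number of teammates. It is not the paper's actual graph-based policy learning architecture; all module names, dimensions, and the simple mean aggregation are illustrative assumptions standing in for the graph neural network aggregation the abstract describes.

```python
# Illustrative sketch only (not the paper's GPL architecture): a policy that handles
# an open team by embedding each teammate's observed features and aggregating them
# with a permutation-invariant mean before producing action logits.
import torch
import torch.nn as nn


class OpenTeamPolicy(nn.Module):
    def __init__(self, own_dim: int, teammate_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        self.own_encoder = nn.Linear(own_dim, hidden_dim)
        self.teammate_encoder = nn.Linear(teammate_dim, hidden_dim)
        self.policy_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, own_obs: torch.Tensor, teammate_obs: torch.Tensor) -> torch.Tensor:
        # own_obs: (own_dim,); teammate_obs: (num_teammates, teammate_dim),
        # where num_teammates may differ between episodes (open teamwork).
        own_h = torch.relu(self.own_encoder(own_obs))
        if teammate_obs.shape[0] > 0:
            mates_h = torch.relu(self.teammate_encoder(teammate_obs)).mean(dim=0)
        else:
            mates_h = torch.zeros_like(own_h)  # no teammates currently present
        return self.policy_head(torch.cat([own_h, mates_h], dim=-1))  # action logits


# Usage: the same weights handle 3 teammates in one episode and 5 in the next.
policy = OpenTeamPolicy(own_dim=8, teammate_dim=6, hidden_dim=32, num_actions=4)
logits_small_team = policy(torch.randn(8), torch.randn(3, 6))
logits_large_team = policy(torch.randn(8), torch.randn(5, 6))
```

Because the teammate embeddings are combined with a permutation-invariant operation, the network's input size does not depend on how many teammates are present, which is the property a changing team composition requires.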