Publications
2023
Arrasy Rahman, Ignacio Carlucho, Niklas Höpner, Stefano V. Albrecht
A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning
Journal of Machine Learning Research, 2023
Abstract | BibTeX | arXiv | Publisher | Code
JMLR, ad-hoc-teamwork, deep-rl, agent-modelling, multi-agent-rl
Abstract:
Open ad hoc teamwork is the problem of training a single agent to efficiently collaborate with an unknown group of teammates whose composition may change over time. A variable team composition creates challenges for the agent, such as the need to adapt to new team dynamics and to deal with changing state vector sizes. These challenges are aggravated in real-world applications where the controlled agent has no access to the full state of the environment. In this work, we develop a class of solutions for open ad hoc teamwork under full and partial observability. We start by developing a solution for the fully observable case that leverages graph neural network architectures to obtain an optimal policy based on reinforcement learning. We then extend this solution to partially observable scenarios by proposing different methodologies that maintain belief estimates over the latent environment states and team composition. These belief estimates are combined with our solution for the fully observable case to compute an agent's optimal policy under partial observability in open ad hoc teamwork. Empirical results demonstrate that our approach can learn efficient policies in open ad hoc teamwork in fully and partially observable cases. Further analysis demonstrates that our methods' success is a result of effectively learning the effects of teammates' actions while also inferring the inherent state of the environment under partial observability.
@article{JRahman2022POGPL,
  author = {Arrasy Rahman and Ignacio Carlucho and Niklas H\"opner and Stefano V. Albrecht},
  title = {A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning},
  journal = {Journal of Machine Learning Research},
  year = {2023},
  volume = {24},
  number = {298},
  pages = {1--74},
  url = {http://jmlr.org/papers/v24/22-099.html}
}
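The fully observable solution described in the abstract centres on a graph-neural-network-style action-value model that must cope with a variable number of teammates. Below is a minimal sketch of that idea in PyTorch, using permutation-invariant pooling over teammate features; the module names and dimensions are illustrative assumptions, not taken from the paper or its released code.

import torch
import torch.nn as nn

class TeamQNetwork(nn.Module):
    """Illustrative GNN-style action-value model that accepts a variable
    number of teammates by pooling their embeddings permutation-invariantly."""

    def __init__(self, obs_dim, agent_feat_dim, hidden_dim, num_actions):
        super().__init__()
        self.agent_encoder = nn.Sequential(
            nn.Linear(agent_feat_dim, hidden_dim), nn.ReLU())
        self.q_head = nn.Sequential(
            nn.Linear(obs_dim + hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions))

    def forward(self, obs, teammate_feats):
        # obs: (batch, obs_dim); teammate_feats: (batch, n_teammates, agent_feat_dim),
        # where n_teammates may change as agents enter or leave the team.
        embeddings = self.agent_encoder(teammate_feats)        # (batch, n, hidden)
        pooled = embeddings.sum(dim=1)                         # size-invariant aggregation
        return self.q_head(torch.cat([obs, pooled], dim=-1))   # (batch, num_actions)

Sum pooling keeps the input to the value head the same size regardless of how many teammates are present, which is what lets a single network be reused as the team composition changes.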
2022
Arrasy Rahman, Ignacio Carlucho, Niklas Höpner, Stefano V. Albrecht
A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning
arXiv:2210.05448, 2022
Abstract | BibTeX | arXiv
ad-hoc-teamwork, deep-rl, agent-modelling
Abstract:
Open ad hoc teamwork is the problem of training a single agent to efficiently collaborate with an unknown group of teammates whose composition may change over time. A variable team composition creates challenges for the agent, such as the need to adapt to new team dynamics and to deal with changing state vector sizes. These challenges are aggravated in real-world applications where the controlled agent has no access to the full state of the environment. In this work, we develop a class of solutions for open ad hoc teamwork under full and partial observability. We start by developing a solution for the fully observable case that leverages graph neural network architectures to obtain an optimal policy based on reinforcement learning. We then extend this solution to partially observable scenarios by proposing different methodologies that maintain belief estimates over the latent environment states and team composition. These belief estimates are combined with our solution for the fully observable case to compute an agent's optimal policy under partial observability in open ad hoc teamwork. Empirical results demonstrate that our approach can learn efficient policies in open ad hoc teamwork in fully and partially observable cases. Further analysis demonstrates that our methods' success is a result of effectively learning the effects of teammates' actions while also inferring the inherent state of the environment under partial observability.
@misc{Rahman2022POGPL,
  title = {A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning},
  author = {Arrasy Rahman and Ignacio Carlucho and Niklas H\"opner and Stefano V. Albrecht},
  year = {2022},
  eprint = {2210.05448},
  archivePrefix = {arXiv}
}
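For the partially observable setting, the abstract describes maintaining belief estimates over the latent environment state and the team composition and feeding them to the fully observable solution. One common way to realise such an estimate is a recurrent encoder over the observation-action history; the sketch below is a hedged illustration of that pattern, with class and argument names chosen here rather than taken from the authors' implementation.

import torch
import torch.nn as nn

class BeliefEncoder(nn.Module):
    """Illustrative recurrent belief model: summarises the agent's
    observation-action history into a fixed-size embedding that stands in
    for the unobserved state and current team composition."""

    def __init__(self, obs_dim, act_dim, belief_dim):
        super().__init__()
        self.gru = nn.GRU(obs_dim + act_dim, belief_dim, batch_first=True)

    def forward(self, obs_seq, act_seq, hidden=None):
        # obs_seq: (batch, T, obs_dim); act_seq: (batch, T, act_dim)
        inputs = torch.cat([obs_seq, act_seq], dim=-1)
        beliefs, hidden = self.gru(inputs, hidden)
        return beliefs, hidden   # beliefs: (batch, T, belief_dim)

Such a belief embedding would then replace the fully observed state input of a value model like the TeamQNetwork sketch above.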
2021
Arrasy Rahman, Niklas Höpner, Filippos Christianos, Stefano V. Albrecht
Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning
International Conference on Machine Learning, 2021
Abstract | BibTeX | arXiv | Video | Code
ICML, deep-rl, agent-modelling, ad-hoc-teamwork
Abstract:
Ad hoc teamwork is the challenging problem of designing an autonomous agent which can adapt quickly to collaborate with teammates without prior coordination mechanisms, including joint training. Prior work in this area has focused on closed teams in which the number of agents is fixed. In this work, we consider open teams by allowing agents with different fixed policies to enter and leave the environment without prior notification. Our solution builds on graph neural networks to learn agent models and joint-action value models under varying team compositions. We contribute a novel action-value computation that integrates the agent model and joint-action value model to produce action-value estimates. We empirically demonstrate that our approach successfully models the effects other agents have on the learner, leading to policies that robustly adapt to dynamic team compositions and significantly outperform several alternative methods.
@inproceedings{rahman2021open,
  title = {Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning},
  author = {Arrasy Rahman and Niklas H\"opner and Filippos Christianos and Stefano V. Albrecht},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = {2021}
}
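The paper's stated contribution is an action-value computation that integrates the learned agent model with the joint-action value model. Under the usual assumption that teammate actions are predicted independently, that integration can be written as marginalising the joint value over the agent model's predicted teammate actions; the notation below is chosen for illustration rather than copied from the paper.

% Learner i's value for its own action a_i in state s: average the joint-action
% value over teammate actions a_{-i}, weighted by the agent model's predictions.
Q_i(s, a_i) \;=\; \sum_{a_{-i}} \Big( \prod_{j \neq i} \hat{p}_j(a_j \mid s) \Big)\, Q\big(s, a_i, a_{-i}\big)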
2020
Arrasy Rahman, Niklas Höpner, Filippos Christianos, Stefano V. Albrecht
Open Ad Hoc Teamwork using Graph-based Policy Learning
arXiv:2006.10412, 2020
Abstract | BibTeX | arXiv
deep-rlagent-modellingad-hoc-teamwork
Abstract:
Ad hoc teamwork is the challenging problem of designing an autonomous agent which can adapt quickly to collaborate with previously unknown teammates. Prior work in this area has focused on closed teams in which the number of agents is fixed. In this work, we consider open teams by allowing agents of varying types to enter and leave the team without prior notification. Our proposed solution builds on graph neural networks to learn scalable agent models and value decompositions under varying team sizes, which can be jointly trained with a reinforcement learning agent using discounted returns objectives. We demonstrate empirically that our approach results in agent policies which can robustly adapt to dynamic team composition, and is able to effectively generalize to larger teams than were seen during training.
@misc{rahman2020open,
  title = {Open Ad Hoc Teamwork using Graph-based Policy Learning},
  author = {Arrasy Rahman and Niklas H\"opner and Filippos Christianos and Stefano V. Albrecht},
  year = {2020},
  eprint = {2006.10412},
  archivePrefix = {arXiv},
  primaryClass = {cs.LG}
}
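The abstract notes that the agent models and value decomposition are trained jointly with a reinforcement learning objective on discounted returns. A hedged sketch of what such a joint objective can look like, combining a TD loss with a supervised loss on observed teammate actions; the weighting and exact targets are assumptions, not the paper's specification.

import torch
import torch.nn.functional as F

def joint_training_loss(q_values, actions, rewards, next_q_values, dones,
                        teammate_logits, teammate_actions,
                        gamma=0.99, agent_model_weight=1.0):
    """Illustrative joint objective: a discounted-return TD loss for the
    value model plus a cross-entropy loss for the agent model that predicts
    teammates' observed actions."""
    # Q-learning style target built from the discounted-return objective.
    with torch.no_grad():
        targets = rewards + gamma * (1 - dones) * next_q_values.max(dim=-1).values
    chosen_q = q_values.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    td_loss = F.mse_loss(chosen_q, targets)

    # Agent-model loss: predict each teammate's observed action.
    agent_model_loss = F.cross_entropy(
        teammate_logits.reshape(-1, teammate_logits.shape[-1]),
        teammate_actions.reshape(-1))

    return td_loss + agent_model_weight * agent_model_loss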