Publications
2023
Filippos Christianos, Georgios Papoudakis, Stefano V. Albrecht
Pareto Actor-Critic for Equilibrium Selection in Multi-Agent Reinforcement Learning
AAMAS Workshop on Optimization and Learning in Multiagent Systems, 2023
Abstract | BibTeX | arXiv
AAMAS, deep-rl, multi-agent-rl
Abstract:
This work focuses on equilibrium selection in no-conflict multi-agent games, where we specifically study the problem of selecting a Pareto-optimal equilibrium among several existing equilibria. It has been shown that many state-of-the-art multi-agent reinforcement learning (MARL) algorithms are prone to converging to Pareto-dominated equilibria due to the uncertainty each agent has about the policy of the other agents during training. To address suboptimal equilibrium selection, we propose Pareto Actor-Critic (Pareto-AC), an actor-critic algorithm that utilises a simple property of no-conflict games (a superset of cooperative games with identical rewards): each agent can assume the others will choose actions that will lead to a Pareto-optimal equilibrium. We evaluate Pareto-AC in a diverse set of multi-agent games and show that it converges to higher episodic returns than alternative MARL algorithms, and that it successfully converges to a Pareto-optimal equilibrium in a range of matrix games.
@inproceedings{christianos2023pareto,
title={Pareto Actor-Critic for Equilibrium Selection in Multi-Agent Reinforcement Learning},
author={Filippos Christianos and Georgios Papoudakis and Stefano V. Albrecht},
booktitle={AAMAS Workshop on Optimization and Learning in Multiagent Systems},
year={2023}
}
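For intuition, a small NumPy example (ours, not the paper's) of the equilibrium-selection problem described in the abstract, using the classic Stag Hunt game; the payoff values and variable names are toy choices.

import numpy as np

# Toy Stag Hunt: a no-conflict matrix game with two pure Nash equilibria,
# (Stag, Stag) and (Hare, Hare); the former Pareto-dominates the latter.
STAG, HARE = 0, 1
# rewards[i][a0, a1] is agent i's payoff for the joint action (a0, a1).
rewards = [
    np.array([[4.0, 0.0],
              [3.0, 2.0]]),   # agent 0 (row player)
    np.array([[4.0, 3.0],
              [0.0, 2.0]]),   # agent 1 (column player)
]

# Against a partner it is uncertain about (here: uniform), agent 0's
# cautious best response is Hare -- the Pareto-dominated equilibrium
# that independent learners often settle on.
print(rewards[0][STAG].mean(), rewards[0][HARE].mean())   # 2.0 vs 2.5

# The no-conflict property exploited by Pareto-AC: assume the other
# agents also aim for the Pareto-optimal equilibrium and evaluate
# actions under the best achievable joint outcome.
best_joint = np.unravel_index(np.argmax(rewards[0] + rewards[1]),
                              rewards[0].shape)
print(best_joint)   # (0, 0), i.e. (Stag, Stag)

Pareto-AC bakes this assumption into an actor-critic update rather than enumerating joint actions as done in this toy snippet.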
Adam Michalski, Filippos Christianos, Stefano V. Albrecht
SMAClite: A Lightweight Environment for Multi-Agent Reinforcement Learning
AAMAS Workshop on Multiagent Sequential Decision Making Under Uncertainty, 2023
Abstract | BibTeX | arXiv | Code
AAMAS, deep-rl, multi-agent-rl
Abstract:
There is a lack of standard benchmarks for Multi-Agent Reinforcement Learning (MARL) algorithms. The StarCraft Multi-Agent Challenge (SMAC) has been widely used in MARL research, but is built on top of a heavy, closed-source computer game, StarCraft II. Thus, SMAC is computationally expensive and requires knowledge and the use of proprietary tools specific to the game for any meaningful alteration or contribution to the environment. We introduce SMAClite -- a challenge based on SMAC that is both decoupled from StarCraft II and open-source, along with a framework which makes it possible to create new content for SMAClite without any special knowledge. We conduct experiments to show that SMAClite is equivalent to SMAC, by training MARL algorithms on SMAClite and reproducing SMAC results. We then show that SMAClite outperforms SMAC in both runtime speed and memory usage.
@inproceedings{michalski2023smaclite,
title={SMAClite: A Lightweight Environment for Multi-Agent Reinforcement Learning},
author={Adam Michalski and Filippos Christianos and Stefano V. Albrecht},
booktitle={AAMAS Workshop on Multiagent Sequential Decision Making Under Uncertainty (MSDM)},
year={2023}
}
Callum Tilbury, Filippos Christianos, Stefano V. Albrecht
Revisiting the Gumbel-Softmax in MADDPG
AAMAS Workshop on Adaptive and Learning Agents, 2023
Abstract | BibTeX | arXiv | Code
AAMAS, multi-agent-rl, deep-rl
Abstract:
MADDPG is an algorithm in multi-agent reinforcement learning (MARL) that extends the popular single-agent method, DDPG, to multi-agent scenarios. Importantly, DDPG is an algorithm designed for continuous action spaces, where the gradient of the state-action value function exists. For this algorithm to work in discrete action spaces, discrete gradient estimation must be performed. For MADDPG, the Gumbel-Softmax (GS) estimator is used -- a reparameterisation which relaxes a discrete distribution into a similar continuous one. This method, however, is statistically biased, and a recent MARL benchmarking paper suggests that this bias makes MADDPG perform poorly in grid-world situations, where the action space is discrete. Fortunately, many alternatives to the GS exist, boasting a wide range of properties. This paper explores several of these alternatives and integrates them into MADDPG for discrete grid-world scenarios. The corresponding impact on various performance metrics is then measured and analysed. It is found that one of the proposed estimators performs significantly better than the original GS in several tasks, achieving up to 55% higher returns, along with faster convergence.
@inproceedings{tilbury2023revisitingmaddpg,
title={Revisiting the Gumbel-Softmax in MADDPG},
author={Callum Tilbury and Filippos Christianos and Stefano V. Albrecht},
year={2023},
booktitle={AAMAS Workshop on Adaptive and Learning Agents (ALA)},
}
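For orientation, a minimal NumPy sketch (ours) of the Gumbel-Softmax relaxation the abstract discusses; practical MADDPG implementations use an autodiff framework and typically a straight-through variant, so that gradients flow through the soft sample while the executed action stays discrete.

import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, temperature=1.0):
    # Relax a categorical sample into a continuous one: perturb the
    # logits with Gumbel noise, then apply a tempered softmax. Lower
    # temperatures give near-one-hot samples, but any finite temperature
    # introduces the statistical bias discussed in the abstract.
    gumbel_noise = -np.log(-np.log(rng.uniform(size=np.shape(logits))))
    y = (np.asarray(logits) + gumbel_noise) / temperature
    y = y - y.max()                          # numerical stability
    return np.exp(y) / np.exp(y).sum()

logits = np.array([1.0, 0.5, -1.0])          # unnormalised action preferences
soft_action = gumbel_softmax_sample(logits, temperature=0.5)
hard_action = int(np.argmax(soft_action))    # discrete action actually executed
print(soft_action, hard_action)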
2022
Lukas Schäfer, Filippos Christianos, Josiah P. Hanna, Stefano V. Albrecht
Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration
International Conference on Autonomous Agents and Multiagent Systems, 2022
Abstract | BibTeX | arXiv | Code
AAMAS, deep-rl, intrinsic-reward
Abstract:
Intrinsic rewards can improve exploration in reinforcement learning, but the exploration process may suffer from instability caused by non-stationary reward shaping and strong dependency on hyperparameters. In this work, we introduce Decoupled RL (DeRL) as a general framework which trains separate policies for intrinsically-motivated exploration and exploitation. Such decoupling allows DeRL to leverage the benefits of intrinsic rewards for exploration while demonstrating improved robustness and sample efficiency. We evaluate DeRL algorithms in two sparse-reward environments with multiple types of intrinsic rewards. Our results show that DeRL is more robust to varying scale and rate of decay of intrinsic rewards and converges to the same evaluation returns as intrinsically-motivated baselines in fewer interactions. Lastly, we discuss the challenge of distribution shift and show that divergence constraint regularisers can successfully minimise instability caused by divergence of exploration and exploitation policies.
@inproceedings{schaefer2022derl,
title={Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration},
author={Lukas Schäfer and Filippos Christianos and Josiah P. Hanna and Stefano V. Albrecht},
booktitle={International Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
year={2022}
}
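One way to write down the decoupled objectives the abstract describes, in our own simplified notation (the exact combination of rewards and the placement of the divergence regulariser follow the paper):

% Notation ours. The exploration policy \pi_E is trained on extrinsic
% plus intrinsic reward and generates the data, while the exploitation
% policy \pi is trained on extrinsic reward only, kept close to \pi_E
% by a divergence penalty D.
\begin{align}
  \pi_E &\approx \arg\max_{\pi'} \;
    \mathbb{E}_{\pi'}\!\left[\sum_t \gamma^t \left(r^{\mathrm{ext}}_t
      + \lambda\, r^{\mathrm{int}}_t\right)\right],\\
  \pi &\approx \arg\max_{\pi'} \;
    \mathbb{E}_{\pi'}\!\left[\sum_t \gamma^t\, r^{\mathrm{ext}}_t\right]
    - \beta\, D\!\left(\pi' \,\middle\|\, \pi_E\right).
\end{align}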
Filippos Christianos
Collaborative Training of Multiple Autonomous Agents
International Conference on Autonomous Agents and Multiagent Systems, Doctoral Consortium, 2022
Abstract | BibTeX | Paper
AAMAS, multi-agent-rl
Abstract:
Exploration in multi-agent reinforcement learning is a challenging problem, especially with a large number of agents. Parameter sharing between agents is often used since it significantly decreases the number of trainable parameters, shortening training times to tractable levels and improving exploration efficiency. We present two algorithms that aim to be a middle ground between not sharing parameters and fully sharing parameters. The proposed algorithms retain the advantages of the baselines at the two ends of this spectrum while minimising their drawbacks. First, Shared Experience Actor-Critic [Christianos et al., 2020] applies off-policy correction via importance weighting to combine the experiences generated by different agents into more informative and effective learning gradients. Then, Selective Parameter Sharing [Christianos et al., 2021], based on a rigorous empirical analysis of the impact of parameter sharing, proposes a novel parameter-sharing method that can be coupled with existing multi-agent reinforcement learning algorithms.
@inproceedings{christianos2022collaborative,
title={Collaborative Training of Multiple Autonomous Agents},
author={Filippos Christianos},
booktitle={Doctoral Consortium at the International Conference on Autonomous Agents and Multiagent Systems},
year={2022}
}
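A sketch of the shared-experience policy loss from the cited 2020 paper, in simplified notation of ours (\hat{A} denotes an advantage estimate and \phi_i agent i's policy parameters; see the paper for the exact critic and loss terms):

% Agent i learns from its own experience plus the experience of every
% other agent k, corrected by an importance weight that accounts for
% the data having been generated by agent k's policy.
\begin{equation}
  \mathcal{L}(\phi_i) =
    -\log \pi(a_i \mid o_i; \phi_i)\, \hat{A}(o_i, a_i)
    \;-\; \lambda \sum_{k \neq i}
      \frac{\pi(a_k \mid o_k; \phi_i)}{\pi(a_k \mid o_k; \phi_k)}\,
      \log \pi(a_k \mid o_k; \phi_i)\, \hat{A}(o_k, a_k).
\end{equation}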