Publications
2020
Stefano V. Albrecht, Peter Stone, Michael P. Wellman
Special Issue on Autonomous Agents Modelling Other Agents: Guest Editorial
Artificial Intelligence, 2020
Tags: AIJ, survey, agent-modelling
Abstract:
Much research in artificial intelligence is concerned with enabling autonomous agents to reason about various aspects of other agents (such as their beliefs, goals, plans, or decisions) and to utilise such reasoning for effective interaction. This special issue contains new technical contributions addressing open problems in autonomous agents modelling other agents, as well as research perspectives about current developments, challenges, and future directions.
@article{albrecht2020special,
title = {Special Issue on Autonomous Agents Modelling Other Agents: Guest Editorial},
author = {Stefano V. Albrecht and Peter Stone and Michael P. Wellman},
journal = {Artificial Intelligence},
volume = {285},
year = {2020},
publisher = {Elsevier},
url = {https://doi.org/10.1016/j.artint.2020.103292}
}
2018
Stefano V. Albrecht, Peter Stone
Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems
Artificial Intelligence, 2018
Tags: AIJ, survey, agent-modelling, goal-recognition
Abstract:
Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.
@article{albrecht2018modelling,
title = {Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems},
author = {Stefano V. Albrecht and Peter Stone},
journal = {Artificial Intelligence},
volume = {258},
pages = {66--95},
year = {2018},
publisher = {Elsevier},
url = {https://doi.org/10.1016/j.artint.2018.01.002}
}
2016
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
Belief and Truth in Hypothesised Behaviours
Artificial Intelligence, 2016
Tags: AIJ, agent-modelling, ad-hoc-teamwork
Abstract:
There is a long history in game theory on the topic of Bayesian or “rational” learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system composed of multiple agents with unknown behaviours. The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents. The game theory literature studies this idea primarily in the context of equilibrium attainment. In contrast, many AI applications have a focus on task completion and payoff maximisation. With this perspective in mind, we identify and address a spectrum of questions pertaining to belief and truth in hypothesised types. We formulate three basic ways to incorporate evidence into posterior beliefs and show when the resulting beliefs are correct, and when they may fail to be correct. Moreover, we demonstrate that prior beliefs can have a significant impact on our ability to maximise payoffs in the long term, and that they can be computed automatically with consistent performance effects. Furthermore, we analyse the conditions under which we are able to complete our task optimally, despite inaccuracies in the hypothesised types. Finally, we show how the correctness of hypothesised types can be ascertained during the interaction via an automated statistical analysis.
@article{albrecht2016belief,
title = {Belief and Truth in Hypothesised Behaviours},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
journal = {Artificial Intelligence},
volume = {235},
pages = {63--94},
year = {2016},
publisher = {Elsevier},
url = {https://doi.org/10.1016/j.artint.2016.02.004}
}
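The core mechanism in the abstract above lends itself to a short illustration. Below is a minimal Python sketch, not the paper's implementation, of a posterior belief update over hypothesised types: each candidate type maps the interaction history to a distribution over the other agent's next action, and beliefs are re-weighted by the likelihood of each observed action (the standard Bayesian product rule, one of the ways of incorporating evidence discussed in the paper). The type models, prior, and zero-likelihood fallback here are illustrative assumptions.

# Minimal sketch of a Bayesian belief update over hypothesised types.
# The type models and evidence rule below are illustrative assumptions,
# not the paper's implementation.

def update_beliefs(beliefs, types, observed_action, history):
    """Product rule: P(type | history) is proportional to prior * P(action | type, history)."""
    posterior = {}
    for name, type_model in types.items():
        likelihood = type_model(history).get(observed_action, 0.0)
        posterior[name] = beliefs[name] * likelihood
    total = sum(posterior.values())
    if total == 0.0:  # no hypothesised type explains the action; keep old beliefs
        return beliefs
    return {name: p / total for name, p in posterior.items()}

# Illustrative types: each maps an interaction history to a distribution
# over the other agent's next action (here, the history is ignored).
types = {
    "cooperator": lambda history: {"C": 0.9, "D": 0.1},
    "defector":   lambda history: {"C": 0.1, "D": 0.9},
}
beliefs = {"cooperator": 0.5, "defector": 0.5}  # uniform prior
beliefs = update_beliefs(beliefs, types, observed_action="C", history=[])
print(beliefs)  # belief mass shifts towards "cooperator"

After each observation the agent would plan its own action against the types it currently believes most likely; the paper's analysis concerns when such beliefs converge to the truth and when inaccurate types still permit optimal task completion.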