Publications
Selected tags: causal, Subramanian-Ramamoorthy
2018
Craig Innes, Alex Lascarides, Stefano V. Albrecht, Subramanian Ramamoorthy, Benjamin Rosman
Reasoning about Unforeseen Possibilities During Policy Learning
arXiv:1801.03331, 2018
Abstract | BibTeX | arXiv
causal
Abstract:
Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised - its possible states and actions and their causal structure - is known in advance and does not change during learning. This is an unrealistic assumption in many scenarios, because new evidence can reveal important information about what is possible, possibilities that the agent was not aware existed prior to learning. We present a model of an agent which both discovers and learns to exploit unforeseen possibilities using two sources of evidence: direct interaction with the world and communication with a domain expert. We use a combination of probabilistic and symbolic reasoning to estimate all components of the decision problem, including its set of random variables and their causal dependencies. Agent simulations show that the agent converges on optimal policies even when it starts out unaware of factors that are critical to behaving optimally.
@misc{ innes2018reasoning,
  title = {Reasoning about Unforeseen Possibilities During Policy Learning},
  author = {Craig Innes and Alex Lascarides and Stefano V. Albrecht and Subramanian Ramamoorthy and Benjamin Rosman},
  year = {2018},
  eprint = {1801.03331},
  archivePrefix = {arXiv},
  primaryClass = {cs.AI}
}
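The abstract above describes an agent whose conceptualisation of the domain can grow during learning. As a loose illustration only - not the paper's model, which combines probabilistic and symbolic reasoning with communication from a domain expert - the sketch below shows a tabular Q-learner whose state representation expands when a previously unforeseen variable is revealed. The class, the toy domain, and all names are illustrative assumptions.

import random
from collections import defaultdict

class ExpandableQLearner:
    """Illustrative sketch only: a Q-learner that can add newly revealed state variables."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.known_vars = []              # state variables the agent is currently aware of
        self.q = defaultdict(float)       # Q-values keyed by (projected state, action)

    def _project(self, full_state):
        # The agent only conditions on the variables it knows about.
        return tuple(full_state.get(v) for v in self.known_vars)

    def add_variable(self, name):
        # An unforeseen possibility is revealed (e.g. by a domain expert):
        # extend the state representation and restart value estimation.
        if name not in self.known_vars:
            self.known_vars.append(name)
            self.q.clear()

    def act(self, full_state):
        s = self._project(full_state)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, full_state, action, reward, next_full_state):
        s, s2 = self._project(full_state), self._project(next_full_state)
        best_next = max(self.q[(s2, a)] for a in self.actions)
        self.q[(s, action)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, action)])

# Usage: the agent starts out unaware of "door_open"; once revealed, its policy can depend on it.
agent = ExpandableQLearner(actions=["left", "right"])
agent.add_variable("position")
agent.update({"position": 0, "door_open": True}, "right", 1.0, {"position": 1, "door_open": True})
agent.add_variable("door_open")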
2017
Stefano V. Albrecht, Subramanian Ramamoorthy
Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks (Extended Abstract)
International Joint Conference on Artificial Intelligence, 2017
Abstract | BibTeX | arXiv
IJCAI, state-estimation, causal
Abstract:
Dynamic Bayesian networks (DBNs) are a general model for stochastic processes with partially observed states. Belief filtering in DBNs is the task of inferring the belief state (i.e. the probability distribution over process states) based on incomplete and uncertain observations. In this article, we explore the idea of accelerating the filtering task by automatically exploiting causality in the process. We consider a specific type of causal relation, called passivity, which pertains to how state variables cause changes in other variables. We present the Passivity-based Selective Belief Filtering (PSBF) method, which maintains a factored belief representation and exploits passivity to perform selective updates over the belief factors. PSBF is evaluated in both synthetic processes and a simulated multi-robot warehouse, where it outperformed alternative filtering methods by exploiting passivity.
@inproceedings{ albrecht2017causality,
title = {Exploiting Causality for Selective Belief Filtering in Dynamic {B}ayesian Networks (Extended Abstract)},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence},
address = {Melbourne, Australia},
month = {August},
year = {2017}
}
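For readers unfamiliar with the filtering task referenced in the abstract above, the fragment below is a minimal sketch of one generic (non-selective) discrete predict-and-update step, i.e. the operation that PSBF aims to accelerate; it is not the PSBF algorithm itself, and the toy numbers are assumptions.

import numpy as np

def filter_step(belief, transition, obs_likelihood):
    """One generic belief-filtering step over a discrete state space.

    belief          -- shape (S,), current distribution over states
    transition      -- shape (S, S), transition[i, j] = P(next=j | current=i)
    obs_likelihood  -- shape (S,), P(received observation | next state = j)
    """
    predicted = belief @ transition           # prediction with the transition model
    updated = predicted * obs_likelihood      # weight by the observation likelihood
    return updated / updated.sum()            # renormalise to a probability distribution

# Usage with a toy two-state process and a noisy observation favouring state 0.
b = np.array([0.5, 0.5])
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
obs = np.array([0.7, 0.2])
print(filter_step(b, T, obs))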
2016
Stefano V. Albrecht, Subramanian Ramamoorthy
Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks
Journal of Artificial Intelligence Research, 2016
Abstract | BibTeX | arXiv | Publisher
JAIR, state-estimation, causal
Abstract:
Dynamic Bayesian networks (DBNs) are a general model for stochastic processes with partially observed states. Belief filtering in DBNs is the task of inferring the belief state (i.e. the probability distribution over process states) based on incomplete and noisy observations. This can be a hard problem in complex processes with large state spaces. In this article, we explore the idea of accelerating the filtering task by automatically exploiting causality in the process. We consider a specific type of causal relation, called passivity, which pertains to how state variables cause changes in other variables. We present the Passivity-based Selective Belief Filtering (PSBF) method, which maintains a factored belief representation and exploits passivity to perform selective updates over the belief factors. PSBF produces exact belief states under certain assumptions and approximate belief states otherwise, where the approximation error is bounded by the degree of uncertainty in the process. We show empirically, in synthetic processes with varying sizes and degrees of passivity, that PSBF is faster than several alternative methods while achieving competitive accuracy. Furthermore, we demonstrate how passivity occurs naturally in a complex system such as a multi-robot warehouse, and how PSBF can exploit this to accelerate the filtering task.
@article{ albrecht2016causality,
title = {Exploiting Causality for Selective Belief Filtering in Dynamic {B}ayesian Networks},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
journal = {Journal of Artificial Intelligence Research},
volume = {55},
pages = {1135--1178},
year = {2016},
publisher = {AI Access Foundation},
note = {DOI: 10.1613/jair.5044}
}
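As a rough illustration of the idea of selective updates over belief factors described in the abstract above, the sketch below skips the update for variables treated as passive in a given step. It assumes, unlike the general setting in the paper, that each variable evolves and is observed independently; it is not the PSBF algorithm, and all names and numbers are assumptions.

import numpy as np

def selective_factored_update(factors, active, transitions, obs_likelihoods):
    """Update only the belief factors of variables that may have changed this step.

    factors         -- dict: variable -> marginal belief over its values
    active          -- set of variables that are not passive this step
    transitions     -- dict: variable -> per-variable transition matrix
    obs_likelihoods -- dict: variable -> likelihood of this step's observation per value
    """
    new_factors = {}
    for var, belief in factors.items():
        if var not in active:                 # passive variable: keep the previous marginal
            new_factors[var] = belief
            continue
        predicted = belief @ transitions[var]
        updated = predicted * obs_likelihoods[var]
        new_factors[var] = updated / updated.sum()
    return new_factors

# Usage: in a warehouse-like toy example, "robot_pos" is active while "shelf_contents" is passive.
factors = {"robot_pos": np.array([0.5, 0.5]), "shelf_contents": np.array([0.9, 0.1])}
T = {"robot_pos": np.array([[0.8, 0.2], [0.3, 0.7]]), "shelf_contents": np.eye(2)}
L = {"robot_pos": np.array([0.6, 0.1]), "shelf_contents": np.array([1.0, 1.0])}
print(selective_factored_update(factors, {"robot_pos"}, T, L))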