Publications
Selected tag: Balint-Gyevnar
2024
Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
IEEE Transactions on Intelligent Transportation Systems, 2024
Abstract | BibTex | arXiv
T-ITS, autonomous-driving, explainable-ai, survey
Abstract:
Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, inscrutable AI systems exacerbate the existing challenge of safety assurance of AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
@article{kuznietsov2024avreview,
title={Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review},
author={Anton Kuznietsov and Balint Gyevnar and Cheng Wang and Steven Peters and Stefano V. Albrecht},
journal={IEEE Transactions on Intelligent Transportation Systems (T-ITS)},
year={2024}
}
Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
International Conference on Autonomous Agents and Multi-Agent Systems, 2024
Abstract | BibTex | arXiv | Code | Dataset
AAMAS, explainable-ai, autonomous-driving, causal
Abstract:
We present CEMA (Causal Explanations in Multi-Agent systems), a framework for creating causal natural language explanations of an agent's decisions in dynamic sequential multi-agent systems, with the aim of building more trustworthy autonomous agents. Unlike prior work that assumes a fixed causal structure, CEMA only requires a probabilistic model for forward-simulating the state of the system. Using such a model, CEMA simulates counterfactual worlds that identify the salient causes behind the agent's decisions. We evaluate CEMA on the task of motion planning for autonomous driving and test it in diverse simulated scenarios. We show that CEMA correctly and robustly identifies the causes behind the agent's decisions, even when a large number of other agents are present. A user study further shows that CEMA's explanations have a positive effect on participants' trust in autonomous vehicles and are rated as highly as high-quality baseline explanations elicited from other participants.
@inproceedings{gyevnar2024cema,
title={Causal Explanations for Sequential Decision-Making in Multi-Agent Systems},
author={Balint Gyevnar and Cheng Wang and Christopher G. Lucas and Shay B. Cohen and Stefano V. Albrecht},
booktitle = {Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems},
year={2024}
}
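The CEMA abstract above describes identifying salient causes by forward-simulating counterfactual worlds with a probabilistic model of the system. The following Python sketch illustrates that general idea under stated assumptions; the function names, the intervention interface, and the flip-counting heuristic are hypothetical and are not the paper's actual API.

def rank_salient_causes(state, forward_simulate, decision_of, interventions, n_samples=100):
    """Rank candidate causes of an agent's decision by how often altering them
    changes the decision in sampled counterfactual rollouts (illustrative only).

    forward_simulate(state) -> one sampled future of the multi-agent system
    decision_of(rollout)    -> the ego agent's decision in that future
    interventions           -> {cause name: function returning an altered copy of state}
    """
    factual = decision_of(forward_simulate(state))
    scores = {}
    for cause, intervene in interventions.items():
        flips = 0
        for _ in range(n_samples):
            counterfactual = forward_simulate(intervene(state))  # counterfactual world
            if decision_of(counterfactual) != factual:
                flips += 1
        scores[cause] = flips / n_samples  # higher = more salient cause
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

Causes whose alteration most often flips the simulated decision would then be the ones surfaced in a natural-language explanation.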
2023
Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
Causal Social Explanations for Stochastic Sequential Multi-Agent Decision-Making
AAMAS Workshop on Explainable and Transparent AI and Multi-Agent Systems, 2023
Abstract | BibTex | arXiv | Code
AAMAS, autonomous-driving, explainable-ai, causal
Abstract:
We present a novel framework to generate causal explanations for the decisions of agents in stochastic sequential multi-agent environments. Explanations are given via natural language conversations answering a wide range of user queries and requiring associative, interventionist, or counterfactual causal reasoning. Instead of assuming any specific causal graph, our method relies on a generative model of interactions to simulate counterfactual worlds which are used to identify the salient causes behind decisions. We implement our method for motion planning for autonomous driving and test it in simulated scenarios with coupled interactions. Our method correctly identifies and ranks the relevant causes and delivers concise explanations to the users' queries.
@inproceedings{gyevnar2023causal,
title={Causal Social Explanations for Stochastic Sequential Multi-Agent Decision-Making},
author={Balint Gyevnar and Cheng Wang and Christopher G. Lucas and Shay B. Cohen and Stefano V. Albrecht},
booktitle={5th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems},
year={2023}
}
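The abstract above distinguishes associative, interventionist, and counterfactual queries answered with a generative model of interactions. The minimal dispatch sketch below only illustrates that distinction in the usual three-mode sense; the model interface (posterior, infer_latents, simulate) and the query taxonomy mapping are assumptions made for illustration.

from enum import Enum

class QueryType(Enum):
    ASSOCIATIVE = "what is likely, given what was observed?"
    INTERVENTIONIST = "what would happen if we forced X?"
    COUNTERFACTUAL = "would the outcome differ had X been different?"

def answer_query(query_type, model, observations, intervention=None):
    """Route a user query to the matching mode of causal reasoning (illustrative)."""
    if query_type is QueryType.ASSOCIATIVE:
        return model.posterior(observations)                      # condition on evidence
    if query_type is QueryType.INTERVENTIONIST:
        return model.simulate(do=intervention)                    # act, then simulate forward
    if query_type is QueryType.COUNTERFACTUAL:
        latents = model.infer_latents(observations)               # abduction from the observed episode
        return model.simulate(latents=latents, do=intervention)   # act and predict
    raise ValueError(query_type)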
2022
Ibrahim H. Ahmed, Cillian Brewitt, Ignacio Carlucho, Filippos Christianos, Mhairi Dunion, Elliot Fosong, Samuel Garcin, Shangmin Guo, Balint Gyevnar, Trevor McInroe, Georgios Papoudakis, Arrasy Rahman, Lukas Schäfer, Massimiliano Tamborski, Giuseppe Vecchio, Cheng Wang, Stefano V. Albrecht
Deep Reinforcement Learning for Multi-Agent Interaction
AI Communications, 2022
Abstract | BibTex | arXiv | Publisher
AIC, survey, deep-rl, multi-agent-rl, ad-hoc-teamwork, agent-modelling, goal-recognition, security, explainable-ai, autonomous-driving
Abstract:
The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
@article{albrecht2022aic,
author = {Ahmed, Ibrahim H. and Brewitt, Cillian and Carlucho, Ignacio and Christianos, Filippos and Dunion, Mhairi and Fosong, Elliot and Garcin, Samuel and Guo, Shangmin and Gyevnar, Balint and McInroe, Trevor and Papoudakis, Georgios and Rahman, Arrasy and Schäfer, Lukas and Tamborski, Massimiliano and Vecchio, Giuseppe and Wang, Cheng and Albrecht, Stefano V.},
title = {Deep Reinforcement Learning for Multi-Agent Interaction},
journal = {AI Communications, Special Issue on Multi-Agent Systems Research in the UK},
year = {2022}
}
Balint Gyevnar, Massimiliano Tamborski, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning
IJCAI Workshop on Artificial Intelligence for Autonomous Driving, 2022
Abstract | BibTex | arXiv | Code
IJCAI, autonomous-driving, explainable-ai, causal
Abstract:
Inscrutable AI systems are difficult to trust, especially if they operate in safety-critical settings like autonomous driving. Therefore, there is a need to build transparent and queryable systems to increase trust levels. We propose a transparent, human-centric explanation generation method for autonomous vehicle motion planning and prediction based on an existing white-box system called IGP2. Our method integrates Bayesian networks with context-free generative rules and can give causal natural language explanations for the high-level driving behaviour of autonomous vehicles. Preliminary testing on simulated scenarios shows that our method captures the causes behind the actions of autonomous vehicles and generates intelligible explanations with varying complexity.
@inproceedings{gyevnar2022humancentric,
title={A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning},
author={Balint Gyevnar and Massimiliano Tamborski and Cheng Wang and Christopher G. Lucas and Shay B. Cohen and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Artificial Intelligence for Autonomous Driving},
year={2022}
}
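The abstract above combines probabilistic inference with context-free generative rules to produce natural-language causal explanations. A toy sketch of the template step follows; the actions, cause phrases, templates, and probability are invented for illustration and are not taken from the paper.

# Hypothetical: turn a (cause, probability) pair from probabilistic goal/plan
# inference into a natural-language causal explanation via template rules.
TEMPLATES = {
    "change_lane_left": "The vehicle changed lane to the left because {cause} (confidence {p:.0%}).",
    "slow_down": "The vehicle slowed down because {cause} (confidence {p:.0%}).",
}

CAUSE_PHRASES = {
    "goal_exit_left": "its most likely goal is the exit on the left",
    "vehicle_ahead_slow": "the vehicle ahead is moving slowly",
}

def explain(action, cause, probability):
    """Fill a context-free template with the inferred cause of a manoeuvre."""
    return TEMPLATES[action].format(cause=CAUSE_PHRASES[cause], p=probability)

print(explain("change_lane_left", "goal_exit_left", 0.87))
# -> "The vehicle changed lane to the left because its most likely goal is the exit on the left (confidence 87%)."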
2021
Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, Subramanian Ramamoorthy
Interpretable Goal-based Prediction and Planning for Autonomous Driving
IEEE International Conference on Robotics and Automation, 2021
Abstract | BibTex | arXiv | Video | Code
ICRA, autonomous-driving, goal-recognition, explainable-ai
Abstract:
We propose an integrated prediction and planning system for autonomous driving which uses rational inverse planning to recognise the goals of other vehicles. Goal recognition informs a Monte Carlo Tree Search (MCTS) algorithm to plan optimal maneuvers for the ego vehicle. Inverse planning and MCTS utilise a shared set of defined maneuvers and macro actions to construct plans which are explainable by means of rationality principles. Evaluation in simulations of urban driving scenarios demonstrates the system's ability to robustly recognise the goals of other vehicles, enabling our vehicle to exploit non-trivial opportunities to significantly reduce driving times. In each scenario, we extract intuitive explanations for the predictions which justify the system's decisions.
@inproceedings{albrecht2020igp2,
title={Interpretable Goal-based Prediction and Planning for Autonomous Driving},
author={Stefano V. Albrecht and Cillian Brewitt and John Wilhelm and Balint Gyevnar and Francisco Eiras and Mihai Dobre and Subramanian Ramamoorthy},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
year={2021}
}
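The IGP2 abstract above recognises goals via rational inverse planning. The sketch below shows the standard form such a goal posterior can take, where an observed trajectory close to the optimal plan for a goal makes that goal more probable; the cost inputs, goal set, and rationality coefficient beta are illustrative assumptions rather than the paper's exact formulation.

import math

def goal_posterior(observed_cost, optimal_cost, goals, prior=None, beta=1.0):
    """P(goal | trajectory) proportional to P(trajectory | goal) * P(goal), where a rational
    driver makes the observed partial trajectory more likely when its extra cost over the
    optimal plan for that goal is small (illustrative only).

    observed_cost[g]: cost of completing goal g given the trajectory observed so far
    optimal_cost[g]:  cost of the optimal plan to g from the initial state
    """
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    likelihood = {g: math.exp(-beta * (observed_cost[g] - optimal_cost[g])) for g in goals}
    z = sum(likelihood[g] * prior[g] for g in goals)
    return {g: likelihood[g] * prior[g] / z for g in goals}

# Example with two hypothetical candidate goals for another vehicle.
goals = ["exit_right", "go_straight"]
print(goal_posterior({"exit_right": 12.0, "go_straight": 15.0},
                     {"exit_right": 11.5, "go_straight": 10.0}, goals))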
Cillian Brewitt, Balint Gyevnar, Samuel Garcin, Stefano V. Albrecht
GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021
Abstract | BibTex | arXiv | Video | Code
IROS, autonomous-driving, goal-recognition, explainable-ai
Abstract:
It is important for autonomous vehicles to have the ability to infer the goals of other vehicles (goal recognition), in order to safely interact with other vehicles and predict their future trajectories. This is a difficult problem, especially in urban environments with interactions between many vehicles. Goal recognition methods must be fast to run in real time and make accurate inferences. As autonomous driving is safety-critical, it is important to have methods which are human interpretable and for which safety can be formally verified. Existing goal recognition methods for autonomous vehicles fail to satisfy all four objectives of being fast, accurate, interpretable and verifiable. We propose Goal Recognition with Interpretable Trees (GRIT), a goal recognition system which achieves these objectives. GRIT makes use of decision trees trained on vehicle trajectory data. We evaluate GRIT on two datasets, showing that GRIT achieved fast inference speed and comparable accuracy to two deep learning baselines, a planning-based goal recognition method, and an ablation of GRIT. We show that the learned trees are human interpretable and demonstrate how properties of GRIT can be formally verified using a satisfiability modulo theories (SMT) solver.
@inproceedings{brewitt2021grit,
title={{GRIT:} Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving},
author={Cillian Brewitt and Balint Gyevnar and Samuel Garcin and Stefano V. Albrecht},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2021}
}
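The GRIT abstract above trains decision trees on vehicle trajectory data for fast, interpretable goal recognition. Below is a small scikit-learn sketch of that idea with invented trajectory features and labels; the real feature set, training data, and tree configuration in GRIT differ.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features extracted from a partial vehicle trajectory:
# [speed (m/s), acceleration (m/s^2), heading angle to exit (rad), in_correct_lane (0/1)]
X = np.array([
    [12.0,  0.3, 0.05, 1],
    [ 6.0, -1.2, 0.40, 1],
    [13.5,  0.1, 0.02, 0],
    [ 5.5, -0.9, 0.35, 0],
])
y = np.array(["go_straight", "exit_right", "go_straight", "exit_right"])

# Shallow trees keep the learned goal-recognition rules human-readable and
# small enough for formal (e.g. SMT-based) verification.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["speed", "accel", "angle_to_exit", "in_lane"]))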
2020
Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, Subramanian Ramamoorthy
Interpretable Goal-based Prediction and Planning for Autonomous Driving
arXiv:2002.02277, 2020
Abstract | BibTex | arXiv
autonomous-driving, goal-recognition, explainable-ai
Abstract:
We propose an integrated prediction and planning system for autonomous driving which uses rational inverse planning to recognise the goals of other vehicles. Goal recognition informs a Monte Carlo Tree Search (MCTS) algorithm to plan optimal maneuvers for the ego vehicle. Inverse planning and MCTS utilise a shared set of defined maneuvers and macro actions to construct plans which are explainable by means of rationality principles. Evaluation in simulations of urban driving scenarios demonstrates the system's ability to robustly recognise the goals of other vehicles, enabling our vehicle to exploit non-trivial opportunities to significantly reduce driving times. In each scenario, we extract intuitive explanations for the predictions which justify the system's decisions.
@misc{albrecht2020integrating,
title={Interpretable Goal-based Prediction and Planning for Autonomous Driving},
author={Stefano V. Albrecht and Cillian Brewitt and John Wilhelm and Balint Gyevnar and Francisco Eiras and Mihai Dobre and Subramanian Ramamoorthy},
year={2020},
eprint={2002.02277},
archivePrefix={arXiv},
primaryClass={cs.RO}
}