Publications
Selected tag: autonomous-driving
2024
Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
IEEE Transactions on Intelligent Transportation Systems, 2024
Abstract | BibTeX | arXiv
T-ITS | autonomous-driving | explainable-ai | survey
Abstract:
Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, inscrutable AI systems exacerbate the existing challenge of safety assurance of AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
@article{kuznietsov2024avreview,
title={Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review},
author={Anton Kuznietsov and Balint Gyevnar and Cheng Wang and Steven Peters and Stefano V. Albrecht},
journal={IEEE Transactions on Intelligent Transportation Systems (T-ITS)},
year={2024}
}
Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
International Conference on Autonomous Agents and Multi-Agent Systems, 2024
Abstract | BibTeX | arXiv | Code | Dataset
AAMAS | explainable-ai | autonomous-driving | causal
Abstract:
We present CEMA: Causal Explanations in Multi-Agent systems, a framework for creating causal natural language explanations of an agent's decisions in dynamic sequential multi-agent systems to build more trustworthy autonomous agents. Unlike prior work that assumes a fixed causal structure, CEMA only requires a probabilistic model for forward-simulating the state of the system. Using such a model, CEMA simulates counterfactual worlds that identify the salient causes behind the agent's decisions. We evaluate CEMA on the task of motion planning for autonomous driving and test it in diverse simulated scenarios. We show that CEMA correctly and robustly identifies the causes behind the agent's decisions, even when a large number of other agents are present, and show via a user study that CEMA's explanations have a positive effect on participants' trust in autonomous vehicles and are rated as highly as high-quality baseline explanations elicited from other participants.
@inproceedings{gyevnar2024cema,
title={Causal Explanations for Sequential Decision-Making in Multi-Agent Systems},
author={Balint Gyevnar and Cheng Wang and Christopher G. Lucas and Shay B. Cohen and Stefano V. Albrecht},
booktitle = {Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems},
year={2024}
}
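To illustrate the counterfactual-simulation idea at the heart of CEMA, here is a minimal sketch in Python. It is not the authors' implementation: forward_simulate, the scenario encoding, and intervene are hypothetical placeholders for the probabilistic forward model the paper assumes.

# Minimal sketch of counterfactual cause ranking, assuming a stochastic
# forward model of the multi-agent scene (hypothetical callables).
import random
from collections import defaultdict

def sample_decision(forward_simulate, scenario, n_samples=100):
    """Estimate the distribution over the ego agent's decisions by
    repeatedly forward-simulating the stochastic scenario model."""
    counts = defaultdict(int)
    for _ in range(n_samples):
        counts[forward_simulate(scenario)] += 1
    return {d: c / n_samples for d, c in counts.items()}

def rank_causes(forward_simulate, scenario, candidate_factors, intervene):
    """Rank candidate factors by how much intervening on them shifts
    the probability of the factual (observed) decision."""
    factual = sample_decision(forward_simulate, scenario)
    observed = max(factual, key=factual.get)  # most likely decision
    scores = {}
    for factor in candidate_factors:
        counterfactual = intervene(scenario, factor)
        cf = sample_decision(forward_simulate, counterfactual)
        # Salience: drop in probability of the observed decision when
        # the factor is counterfactually altered.
        scores[factor] = factual[observed] - cf.get(observed, 0.0)
    return sorted(scores.items(), key=lambda kv: -kv[1])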
Anthony Knittel, Majd Hawasly, Stefano V. Albrecht, John Redford, Subramanian Ramamoorthy
DiPA: Probabilistic Multi-Modal Interactive Prediction for Autonomous Driving
IEEE International Conference on Robotics and Automation, 2024
Abstract | BibTeX | arXiv | Publisher
ICRA | autonomous-driving | state-estimation
Abstract:
Accurate prediction is important for operating an autonomous vehicle in interactive scenarios. Prediction must be fast, to support multiple requests from a planner exploring a range of possible futures. The generated predictions must accurately represent the probabilities of predicted trajectories, while also capturing different modes of behaviour (such as turning left vs continuing straight at a junction). To this end, we present DiPA, an interactive predictor that addresses these challenging requirements. Previous interactive prediction methods use an encoding of k-mode-samples, which under-represents the full distribution. Other methods optimise closest-mode evaluations, which test whether one of the predictions is similar to the ground-truth, but allow additional unlikely predictions to occur, over-representing unlikely predictions. DiPA addresses these limitations by using a Gaussian-Mixture-Model to encode the full distribution, and optimising predictions using both probabilistic and closest-mode measures. These objectives respectively optimise probabilistic accuracy and the ability to capture distinct behaviours, and there is a challenging trade-off between them. We are able to solve both together using a novel training regime. DiPA achieves new state-of-the-art performance on the INTERACTION and NGSIM datasets, and improves over the baseline (MFP) when both closest-mode and probabilistic evaluations are used. This demonstrates effective prediction for supporting a planner in interactive scenarios.
@article{Knittel2023dipa,
title={{DiPA:} Probabilistic Multi-Modal Interactive Prediction for Autonomous Driving},
author={Anthony Knittel and Majd Hawasly and Stefano V. Albrecht and John Redford and Subramanian Ramamoorthy},
journal={IEEE Robotics and Automation Letters},
volume={8},
number={8},
pages={4887--4894},
year={2023}
}
2023
Anthony Knittel, Majd Hawasly, Stefano V. Albrecht, John Redford, Subramanian Ramamoorthy
DiPA: Probabilistic Multi-Modal Interactive Prediction for Autonomous Driving
IEEE Robotics and Automation Letters, 2023
Abstract | BibTeX | arXiv | Publisher
RA-L | autonomous-driving | state-estimation
Abstract:
Accurate prediction is important for operating an autonomous vehicle in interactive scenarios. Prediction must be fast, to support multiple requests from a planner exploring a range of possible futures. The generated predictions must accurately represent the probabilities of predicted trajectories, while also capturing different modes of behaviour (such as turning left vs continuing straight at a junction). To this end, we present DiPA, an interactive predictor that addresses these challenging requirements. Previous interactive prediction methods use an encoding of k-mode-samples, which under-represents the full distribution. Other methods optimise closest-mode evaluations, which test whether one of the predictions is similar to the ground-truth, but allow additional unlikely predictions to occur, over-representing unlikely predictions. DiPA addresses these limitations by using a Gaussian-Mixture-Model to encode the full distribution, and optimising predictions using both probabilistic and closest-mode measures. These objectives respectively optimise probabilistic accuracy and the ability to capture distinct behaviours, and there is a challenging trade-off between them. We are able to solve both together using a novel training regime. DiPA achieves new state-of-the-art performance on the INTERACTION and NGSIM datasets, and improves over the baseline (MFP) when both closest-mode and probabilistic evaluations are used. This demonstrates effective prediction for supporting a planner in interactive scenarios.
@article{Knittel2023dipa,
title={{DiPA:} Probabilistic Multi-Modal Interactive Prediction for Autonomous Driving},
author={Anthony Knittel and Majd Hawasly and Stefano V. Albrecht and John Redford and Subramanian Ramamoorthy},
journal={IEEE Robotics and Automation Letters},
volume={8},
number={8},
pages={4887--4894},
year={2023}
}
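The trade-off DiPA targets can be made concrete with a small combined objective for a Gaussian-mixture trajectory head: a negative log-likelihood term for probabilistic accuracy plus a winner-takes-all (closest-mode) term that encourages diverse modes. This is a generic illustration in PyTorch, not the paper's exact loss, architecture, or training regime; the tensor shapes and the weight alpha are assumptions.

import torch

def gmm_nll(pi, mu, sigma, y):
    """Negative log-likelihood of ground truth y under a K-mode
    isotropic Gaussian mixture (additive constants omitted).
    pi: (B, K) mode probabilities; mu: (B, K, T, 2) mode trajectories;
    sigma: (B, K) per-mode std; y: (B, T, 2) observed trajectory."""
    sq = ((mu - y.unsqueeze(1)) ** 2).sum(dim=(-1, -2))        # (B, K)
    dim = mu.shape[-1] * mu.shape[-2]
    log_comp = torch.log(pi) - 0.5 * sq / sigma ** 2 - dim * torch.log(sigma)
    return -torch.logsumexp(log_comp, dim=1).mean()

def closest_mode_loss(mu, y):
    """Winner-takes-all regression: only the mode closest to the
    ground truth receives gradient, pushing modes to cover distinct
    behaviours (e.g. turn left vs continue straight)."""
    sq = ((mu - y.unsqueeze(1)) ** 2).sum(dim=(-1, -2))        # (B, K)
    return sq.min(dim=1).values.mean()

def combined_loss(pi, mu, sigma, y, alpha=0.5):
    # alpha trades probabilistic accuracy against mode diversity.
    return alpha * gmm_nll(pi, mu, sigma, y) + (1 - alpha) * closest_mode_loss(mu, y)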
Cillian Brewitt, Massimiliano Tamborski, Cheng Wang, Stefano V. Albrecht
Verifiable Goal Recognition for Autonomous Driving with Occlusions
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2023
Abstract | BibTeX | arXiv
IROS | autonomous-driving | goal-recognition | explainable-ai
Abstract:
Goal recognition (GR) allows the future behaviour of vehicles to be more accurately predicted. GR involves inferring the goals of other vehicles, such as a certain junction exit. In autonomous driving, vehicles can encounter many different scenarios and the environment is partially observable due to occlusions. We present a novel GR method named Goal Recognition with Interpretable Trees under Occlusion (OGRIT). We demonstrate that OGRIT can handle missing data due to occlusions and make inferences across multiple scenarios using the same learned decision trees, while still being fast, accurate, interpretable and verifiable. We also present the inDO and rounDO datasets of occluded regions used to evaluate OGRIT.
@inproceedings{brewitt2023ogrit,
title={Verifiable Goal Recognition for Autonomous Driving with Occlusions},
author={Cillian Brewitt and Massimiliano Tamborski and Cheng Wang and Stefano V. Albrecht},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems},
year={2023}
}
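The general recipe behind OGRIT, fast and interpretable goal recognition via decision trees over trajectory features with explicit indicators for occluded values, can be sketched as follows. The features, labels, and data here are invented toy stand-ins, not OGRIT's actual features or the inDO/rounDO datasets.

# Toy sketch: a small decision tree over (possibly occluded) features,
# assuming hypothetical feature names and synthetic labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
speed = rng.uniform(0, 15, n)            # m/s
in_turn_lane = rng.integers(0, 2, n)     # lane semantics from the map
gap_visible = rng.integers(0, 2, n)      # 0 if the gap is occluded
gap_size = np.where(gap_visible, rng.uniform(0, 50, n), -1.0)  # -1 = missing

X = np.column_stack([speed, in_turn_lane, gap_visible, gap_size])
# Toy labels: goal 1 = "take the junction exit", else "go straight".
y = ((in_turn_lane == 1) & (speed < 8)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The learned tree is directly human-readable, and small enough to be
# handed to an SMT solver for verification, as OGRIT/GRIT describe:
print(export_text(tree, feature_names=[
    "speed", "in_turn_lane", "gap_visible", "gap_size"]))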
Filippos Christianos, Peter Karkus, Boris Ivanovic, Stefano V. Albrecht, Marco Pavone
Planning with Occluded Traffic Agents using Bi-Level Variational Occlusion Models
IEEE International Conference on Robotics and Automation, 2023
Abstract | BibTeX | arXiv
ICRA | deep-rl | autonomous-driving
Abstract:
Reasoning with occluded traffic agents is a significant open challenge for planning for autonomous vehicles. Recent deep learning models have shown impressive results for predicting occluded agents based on the behaviour of nearby visible agents; however, as we show in experiments, these models are difficult to integrate into downstream planning. To this end, we propose Bi-level Variational Occlusion Models (BiVO), a two-step generative model that first predicts likely locations of occluded agents, and then generates likely trajectories for the occluded agents. In contrast to existing methods, BiVO outputs a trajectory distribution which can then be sampled from and integrated into standard downstream planning. We evaluate the method in closed-loop replay simulation using the real-world nuScenes dataset. Our results suggest that BiVO can successfully learn to predict occluded agent trajectories, and these predictions lead to better subsequent motion plans in critical scenarios.
@inproceedings{christianos2023planning,
title={Planning with Occluded Traffic Agents using Bi-Level Variational Occlusion Models},
author={Filippos Christianos and Peter Karkus and Boris Ivanovic and Stefano V. Albrecht and Marco Pavone},
booktitle={International Conference on Robotics and Automation (ICRA)},
year={2023}
}
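BiVO's two-step factorisation can be summarised structurally: sample likely occluded-agent locations first, then sample trajectories conditioned on those locations, yielding trajectory samples a standard planner can consume. The sketch below assumes two hypothetical learned samplers; it shows the shape of the pipeline, not the paper's variational models.

def sample_occluded_trajectories(scene, location_sampler,
                                 trajectory_sampler, n_samples=10):
    """Draw trajectory samples for possibly-occluded agents that a
    downstream planner can treat like any other predicted agent."""
    samples = []
    for _ in range(n_samples):
        loc = location_sampler(scene)          # step 1: where might an
                                               # occluded agent be?
        traj = trajectory_sampler(scene, loc)  # step 2: how might it move?
        samples.append(traj)
    return samples  # a sampled trajectory distribution for planning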
Cillian Brewitt, Massimiliano Tamborski, Cheng Wang, Stefano V. Albrecht
Verifiable Goal Recognition for Autonomous Driving with Occlusions
ICRA Workshop on Scalable Autonomous Driving, 2023
Abstract | BibTeX | arXiv
ICRA | autonomous-driving | goal-recognition | explainable-ai
Abstract:
Goal recognition (GR) allows the future behaviour of vehicles to be more accurately predicted. GR involves inferring the goals of other vehicles, such as a certain junction exit. In autonomous driving, vehicles can encounter many different scenarios and the environment is partially observable due to occlusions. We present a novel GR method named Goal Recognition with Interpretable Trees under Occlusion (OGRIT). We demonstrate that OGRIT can handle missing data due to occlusions and make inferences across multiple scenarios using the same learned decision trees, while still being fast, accurate, interpretable and verifiable. We also present the inDO and rounDO datasets of occluded regions used to evaluate OGRIT.
@inproceedings{brewitt2023verifiable,
title={Verifiable Goal Recognition for Autonomous Driving with Occlusions},
author={Cillian Brewitt and Massimiliano Tamborski and Cheng Wang and Stefano V. Albrecht},
booktitle={ICRA 2023 Workshop on Scalable Autonomous Driving},
year={2023}
}
Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
Causal Social Explanations for Stochastic Sequential Multi-Agent Decision-Making
AAMAS Workshop on Explainable and Transparent AI and Multi-Agent Systems, 2023
Abstract | BibTeX | arXiv | Code
AAMAS | autonomous-driving | explainable-ai | causal
Abstract:
We present a novel framework to generate causal explanations for the decisions of agents in stochastic sequential multi-agent environments. Explanations are given via natural language conversations answering a wide range of user queries and requiring associative, interventionist, or counterfactual causal reasoning. Instead of assuming any specific causal graph, our method relies on a generative model of interactions to simulate counterfactual worlds which are used to identify the salient causes behind decisions. We implement our method for motion planning for autonomous driving and test it in simulated scenarios with coupled interactions. Our method correctly identifies and ranks the relevant causes and delivers concise explanations to the users' queries.
@inproceedings{gyevnar2023causal,
title={Causal Social Explanations for Stochastic Sequential Multi-Agent Decision-Making},
author={Balint Gyevnar and Cheng Wang and Christopher G. Lucas and Shay B. Cohen and Stefano V. Albrecht},
booktitle={5th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems},
year={2023}
}
2022
Ibrahim H. Ahmed, Cillian Brewitt, Ignacio Carlucho, Filippos Christianos, Mhairi Dunion, Elliot Fosong, Samuel Garcin, Shangmin Guo, Balint Gyevnar, Trevor McInroe, Georgios Papoudakis, Arrasy Rahman, Lukas Schäfer, Massimiliano Tamborski, Giuseppe Vecchio, Cheng Wang, Stefano V. Albrecht
Deep Reinforcement Learning for Multi-Agent Interaction
AI Communications, 2022
Abstract | BibTeX | arXiv | Publisher
AIC | survey | deep-rl | multi-agent-rl | ad-hoc-teamwork | agent-modelling | goal-recognition | security | explainable-ai | autonomous-driving
Abstract:
The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
@article{albrecht2022aic,
author = {Ahmed, Ibrahim H. and Brewitt, Cillian and Carlucho, Ignacio and Christianos, Filippos and Dunion, Mhairi and Fosong, Elliot and Garcin, Samuel and Guo, Shangmin and Gyevnar, Balint and McInroe, Trevor and Papoudakis, Georgios and Rahman, Arrasy and Schäfer, Lukas and Tamborski, Massimiliano and Vecchio, Giuseppe and Wang, Cheng and Albrecht, Stefano V.},
title = {Deep Reinforcement Learning for Multi-Agent Interaction},
journal = {AI Communications, Special Issue on Multi-Agent Systems Research in the UK},
year = {2022}
}
Majd Hawasly, Jonathan Sadeghi, Morris Antonello, Stefano V. Albrecht, John Redford, Subramanian Ramamoorthy
Perspectives on the System-level Design of a Safe Autonomous Driving Stack
AI Communications, 2022
Abstract | BibTeX | arXiv | Publisher
AIC | survey | autonomous-driving | goal-recognition | explainable-ai
Abstract:
Achieving safe and robust autonomy is the key bottleneck on the path towards broader adoption of autonomous vehicle technology. This motivates going beyond extrinsic metrics such as miles between disengagements, and calls for approaches that embody safety by design. In this paper, we address some aspects of this challenge, with emphasis on issues of motion planning and prediction. We do this through description of novel approaches taken to solving selected sub-problems within an autonomous driving stack, in the process introducing the design philosophy being adopted within Five. This includes safe-by-design planning, interpretable as well as verifiable prediction, and modelling of perception errors to enable effective sim-to-real and real-to-sim transfer within the testing pipeline of a realistic autonomous system.
@article{hawasly2022aic,
author = {Majd Hawasly and Jonathan Sadeghi and Morris Antonello and Stefano V. Albrecht and John Redford and Subramanian Ramamoorthy},
title = {Perspectives on the System-level Design of a Safe Autonomous Driving Stack},
journal = {AI Communications, Special Issue on Multi-Agent Systems Research in the UK},
year = {2022}
}
Cillian Brewitt, Massimiliano Tamborski, Stefano V. Albrecht
Verifiable Goal Recognition for Autonomous Driving with Occlusions
NeurIPS Workshop on Machine Learning for Autonomous Driving, 2022
Abstract | BibTeX | arXiv | Code
NeurIPS | autonomous-driving | goal-recognition | explainable-ai
Abstract:
Goal recognition (GR) allows the future behaviour of vehicles to be more accurately predicted. GR involves inferring the goals of other vehicles, such as a certain junction exit. In autonomous driving, vehicles can encounter many different scenarios and the environment is partially observable due to occlusions. We present a novel GR method named Goal Recognition with Interpretable Trees under Occlusion (OGRIT). We demonstrate that OGRIT can handle missing data due to occlusions and make inferences across multiple scenarios using the same learned decision trees, while still being fast, accurate, interpretable and verifiable. We also present the inDO and rounDO datasets of occluded regions used to evaluate OGRIT.
@inproceedings{brewitt2022,
title={Verifiable Goal Recognition for Autonomous Driving with Occlusions},
author={Cillian Brewitt and Massimiliano Tamborski and Stefano V. Albrecht},
booktitle={NeurIPS Workshop on Machine Learning for Autonomous Driving},
year={2022}
}
Francisco Eiras, Majd Hawasly, Stefano V. Albrecht, Subramanian Ramamoorthy
A Two-Stage Optimization-based Motion Planner for Safe Urban Driving
IEEE Transactions on Robotics, 2022
Abstract | BibTeX | arXiv | Publisher | Video
T-RO | autonomous-driving
Abstract:
Recent road trials have shown that guaranteeing the safety of driving decisions is essential for the wider adoption of autonomous vehicle technology. One promising direction is to pose safety requirements as planning constraints in nonlinear, non-convex optimization problems of motion synthesis. However, many implementations of this approach are limited by uncertain convergence and local optimality of the solutions achieved, affecting overall robustness. To improve upon these issues, we propose a novel two-stage optimization framework: in the first stage, we find a solution to a Mixed-Integer Linear Programming (MILP) formulation of the motion synthesis problem, the output of which initializes a second Nonlinear Programming (NLP) stage. The MILP stage enforces hard constraints of safety and road rule compliance generating a solution in the right subspace, while the NLP stage refines the solution within the safety bounds for feasibility and smoothness. We demonstrate the effectiveness of our framework via simulated experiments of complex urban driving scenarios, outperforming a state-of-the-art baseline in metrics of convergence, comfort and progress.
@article{eiras2021twostage,
title = {A Two-Stage Optimization-based Motion Planner for Safe Urban Driving},
author = {Francisco Eiras and Majd Hawasly and Stefano V. Albrecht and Subramanian Ramamoorthy},
journal = {IEEE Transactions on Robotics},
volume = {38},
number = {2},
pages = {822--834},
year = {2022},
doi = {10.1109/TRO.2021.3088009}
}
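A toy 1-D longitudinal example can illustrate the two-stage structure: a linear program produces a coarse, constraint-satisfying plan that warm-starts a nonlinear refinement with a comfort objective. This mirrors the MILP-then-NLP pipeline only in shape; the paper's actual formulations encode safety and road-rule constraints that this sketch does not attempt to reproduce, and all parameters here are invented.

import numpy as np
from scipy.optimize import linprog, minimize

T, dt, v_max = 20, 0.5, 10.0  # horizon steps, step size (s), speed limit (m/s)

# Stage 1: linear program over positions x_1..x_T maximising progress
# subject to per-step speed limits and no reversing.
A, b = [], []
for t in range(T):
    row = np.zeros(T); row[t] = 1.0
    if t > 0:
        row[t - 1] = -1.0
    A.append(row);  b.append(v_max * dt)   # x_t - x_{t-1} <= v_max * dt
    A.append(-row); b.append(0.0)          # x_t >= x_{t-1}
res = linprog(c=-np.ones(T) / T, A_ub=np.array(A), b_ub=np.array(b))
x0 = res.x                                 # coarse plan (warm start)

# Stage 2: nonlinear refinement initialised at the stage-1 solution,
# trading progress against acceleration (comfort) within the same bounds.
def cost(x):
    acc = np.diff(np.diff(np.concatenate([[0.0], x])))
    return -x[-1] + 5.0 * np.sum(acc ** 2)

ref = minimize(cost, x0, constraints=[
    {"type": "ineq", "fun": lambda x: v_max * dt - np.diff(np.concatenate([[0.0], x]))},
    {"type": "ineq", "fun": lambda x: np.diff(np.concatenate([[0.0], x]))},
])
print("progress:", ref.x[-1])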
Morris Antonello, Mihai Dobre, Stefano V. Albrecht, John Redford, Subramanian Ramamoorthy
Flash: Fast and Light Motion Prediction for Autonomous Driving with Bayesian Inverse Planning and Learned Motion Profiles
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022
Abstract | BibTeX | arXiv
IROS | autonomous-driving | state-estimation
Abstract:
Motion prediction of road users in traffic scenes is critical for autonomous driving systems that must take safe and robust decisions in complex dynamic environments. We present a novel motion prediction system for autonomous driving. Our system is based on the Bayesian inverse planning framework, which efficiently orchestrates map-based goal extraction, a classical control-based trajectory generator and an ensemble of light-weight neural networks specialised in motion profile prediction. In contrast to many alternative methods, this modularity helps isolate performance factors and better interpret results, without compromising performance. This system addresses multiple aspects of interest, namely multi-modality, motion profile uncertainty and trajectory physical feasibility. We report on several experiments with the popular highway dataset NGSIM, demonstrating state-of-the-art performance in terms of trajectory error. We also perform a detailed analysis of our system's components, along with experiments that stratify the data based on behaviours, such as change lane versus follow lane, to provide insights into the challenges in this domain. Finally, we present a qualitative analysis to show other benefits of our approach, such as the ability to interpret the outputs.
@inproceedings{antonello2022flash,
title={Flash: Fast and Light Motion Prediction for Autonomous Driving with {Bayesian} Inverse Planning and Learned Motion Profiles},
author={Morris Antonello and Mihai Dobre and Stefano V. Albrecht and John Redford and Subramanian Ramamoorthy},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2022}
}
Balint Gyevnar, Massimiliano Tamborski, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning
IJCAI Workshop on Artificial Intelligence for Autonomous Driving, 2022
Abstract | BibTeX | arXiv | Code
IJCAI | autonomous-driving | explainable-ai | causal
Abstract:
Inscrutable AI systems are difficult to trust, especially if they operate in safety-critical settings like autonomous driving. Therefore, there is a need to build transparent and queryable systems to increase trust levels. We propose a transparent, human-centric explanation generation method for autonomous vehicle motion planning and prediction based on an existing white-box system called IGP2. Our method integrates Bayesian networks with context-free generative rules and can give causal natural language explanations for the high-level driving behaviour of autonomous vehicles. Preliminary testing on simulated scenarios shows that our method captures the causes behind the actions of autonomous vehicles and generates intelligible explanations with varying complexity.
@inproceedings{gyevnar2022humancentric,
title={A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning},
author={Balint Gyevnar and Massimiliano Tamborski and Cheng Wang and Christopher G. Lucas and Shay B. Cohen and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Artificial Intelligence for Autonomous Driving},
year={2022}
}
Filippos Christianos, Peter Karkus, Boris Ivanovic, Stefano V. Albrecht, Marco Pavone
Planning with Occluded Traffic Agents using Bi-Level Variational Occlusion Models
arXiv:2210.14584, 2022
Abstract | BibTeX | arXiv
autonomous-driving
Abstract:
Reasoning with occluded traffic agents is a significant open challenge for planning for autonomous vehicles. Recent deep learning models have shown impressive results for predicting occluded agents based on the behaviour of nearby visible agents; however, as we show in experiments, these models are difficult to integrate into downstream planning. To this end, we propose Bi-level Variational Occlusion Models (BiVO), a two-step generative model that first predicts likely locations of occluded agents, and then generates likely trajectories for the occluded agents. In contrast to existing methods, BiVO outputs a trajectory distribution which can then be sampled from and integrated into standard downstream planning. We evaluate the method in closed-loop replay simulation using the real-world nuScenes dataset. Our results suggest that BiVO can successfully learn to predict occluded agent trajectories, and these predictions lead to better subsequent motion plans in critical scenarios.
@misc{christianos2022bivo,
title={Planning with Occluded Traffic Agents using Bi-Level Variational Occlusion Models},
author={Filippos Christianos and Peter Karkus and Boris Ivanovic and Stefano V. Albrecht and Marco Pavone},
year={2022},
eprint={2210.14584},
archivePrefix={arXiv}
}
Anthony Knittel, Majd Hawasly, Stefano V. Albrecht, John Redford, Subramanian Ramamoorthy
DiPA: Diverse and Probabilistically Accurate Interactive Prediction
arXiv:2210.06106, 2022
Abstract | BibTeX | arXiv
autonomous-driving | state-estimation
Abstract:
Accurate prediction is important for operating an autonomous vehicle in interactive scenarios. Previous interactive predictors have used closest-mode evaluations, which test if one of a set of predictions covers the ground-truth, but not if additional unlikely predictions are made. The presence of unlikely predictions can interfere with planning, by indicating conflict with the ego plan when it is not likely to occur. Closest-mode evaluations are not sufficient for showing a predictor is useful; an effective predictor also needs to accurately estimate mode probabilities, and to be evaluated using probabilistic measures. These two evaluation approaches, e.g. predicted-mode RMS and minADE/FDE, are analogous to precision and recall in binary classification, and there is a challenging trade-off between prediction strategies for each. We present DiPA, a method for producing diverse predictions while also capturing accurate probabilistic estimates. DiPA uses a flexible representation that captures interactions in widely varying road topologies, and uses a novel training regime for a Gaussian Mixture Model that supports diversity of predicted modes, along with accurate spatial distribution and mode probability estimates. DiPA achieves state-of-the-art performance on INTERACTION and NGSIM, and improves over a baseline (MFP) when both closest-mode and probabilistic evaluations are used at the same time.
@misc{knittel2022dipa,
title={{DiPA:} Diverse and Probabilistically Accurate Interactive Prediction},
author={Anthony Knittel and Majd Hawasly and Stefano V. Albrecht and John Redford and Subramanian Ramamoorthy},
year={2022},
eprint={2210.06106},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
2021
Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, Subramanian Ramamoorthy
Interpretable Goal-based Prediction and Planning for Autonomous Driving
IEEE International Conference on Robotics and Automation, 2021
Abstract | BibTeX | arXiv | Video | Code
ICRA | autonomous-driving | goal-recognition | explainable-ai
Abstract:
We propose an integrated prediction and planning system for autonomous driving which uses rational inverse planning to recognise the goals of other vehicles. Goal recognition informs a Monte Carlo Tree Search (MCTS) algorithm to plan optimal maneuvers for the ego vehicle. Inverse planning and MCTS utilise a shared set of defined maneuvers and macro actions to construct plans which are explainable by means of rationality principles. Evaluation in simulations of urban driving scenarios demonstrates the system's ability to robustly recognise the goals of other vehicles, enabling our vehicle to exploit non-trivial opportunities to significantly reduce driving times. In each scenario, we extract intuitive explanations for the predictions which justify the system's decisions.
@inproceedings{albrecht2020igp2,
title={Interpretable Goal-based Prediction and Planning for Autonomous Driving},
author={Stefano V. Albrecht and Cillian Brewitt and John Wilhelm and Balint Gyevnar and Francisco Eiras and Mihai Dobre and Subramanian Ramamoorthy},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
year={2021}
}
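Rational inverse planning, as used for goal recognition in IGP2, can be summarised with a Boltzmann-rational goal posterior: goals whose optimal plans better explain the observed trajectory receive higher probability. The sketch below is a minimal illustration under that assumption; the cost values and the rationality coefficient beta are hypothetical, not the paper's implementation.

import numpy as np

def goal_posterior(observed_cost, optimal_cost, prior, beta=1.0):
    """P(goal | trajectory) ∝ P(trajectory | goal) P(goal), with a
    Boltzmann-rational likelihood: trajectories close in cost to a
    goal's optimal plan are more likely under that goal.

    observed_cost[g]: cost to goal g of (observed prefix + optimal
                      completion); optimal_cost[g]: cost of the plan
                      that was optimal for g from the start."""
    likelihood = np.exp(-beta * (observed_cost - optimal_cost))
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Example: goal 0's optimal plan matches the observation much better.
print(goal_posterior(np.array([10.2, 14.0]),   # observed + completion
                     np.array([10.0, 10.5]),   # optimal from start
                     prior=np.array([0.5, 0.5])))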
Cillian Brewitt, Balint Gyevnar, Samuel Garcin, Stefano V. Albrecht
GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021
Abstract | BibTeX | arXiv | Video | Code
IROS | autonomous-driving | goal-recognition | explainable-ai
Abstract:
It is important for autonomous vehicles to have the ability to infer the goals of other vehicles (goal recognition), in order to safely interact with other vehicles and predict their future trajectories. This is a difficult problem, especially in urban environments with interactions between many vehicles. Goal recognition methods must be fast to run in real time and make accurate inferences. As autonomous driving is safety-critical, it is important to have methods which are human interpretable and for which safety can be formally verified. Existing goal recognition methods for autonomous vehicles fail to satisfy all four objectives of being fast, accurate, interpretable and verifiable. We propose Goal Recognition with Interpretable Trees (GRIT), a goal recognition system which achieves these objectives. GRIT makes use of decision trees trained on vehicle trajectory data. We evaluate GRIT on two datasets, showing that GRIT achieved fast inference speed and comparable accuracy to two deep learning baselines, a planning-based goal recognition method, and an ablation of GRIT. We show that the learned trees are human interpretable and demonstrate how properties of GRIT can be formally verified using a satisfiability modulo theories (SMT) solver.
@inproceedings{brewitt2021grit,
title={{GRIT:} Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving},
author={Cillian Brewitt and Balint Gyevnar and Samuel Garcin and Stefano V. Albrecht},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2021}
}
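The "verifiable" aspect of GRIT can be illustrated with an SMT check of the kind the abstract describes: encode a decision tree as solver constraints and search for a counterexample to a desired property. The tiny hand-written tree and the property below are invented for illustration; the paper verifies properties of the actual learned trees.

# Sketch of SMT-based property checking with z3 (pip install z3-solver),
# using a toy stand-in for a learned tree.
from z3 import Real, Bool, If, Solver, And, Not, sat

speed = Real("speed")
in_turn_lane = Bool("in_turn_lane")

# Toy tree: P(goal = turn) = 0.9 if in turn lane at low speed, else 0.1.
p_turn = If(And(in_turn_lane, speed < 8), 0.9, 0.1)

# Property: whenever the vehicle is in the turn lane at low speed,
# the tree assigns probability > 0.5 to the turning goal.
prop = Not(And(in_turn_lane, speed < 8, p_turn <= 0.5))

s = Solver()
s.add(Not(prop))  # search for a counterexample to the property
print("property holds" if s.check() != sat else s.model())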
Josiah P. Hanna, Arrasy Rahman, Elliot Fosong, Francisco Eiras, Mihai Dobre, John Redford, Subramanian Ramamoorthy, Stefano V. Albrecht
Interpretable Goal Recognition in the Presence of Occluded Factors for Autonomous Vehicles
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021
Abstract | BibTeX | arXiv
IROS | autonomous-driving | goal-recognition | explainable-ai
Abstract:
Recognising the goals or intentions of observed vehicles is a key step towards predicting the long-term future behaviour of other agents in an autonomous driving scenario. When there are unseen obstacles or occluded vehicles in a scenario, goal recognition may be confounded by the effects of these unseen entities on the behaviour of observed vehicles. Existing prediction algorithms that assume rational behaviour with respect to inferred goals may fail to make accurate long-horizon predictions because they ignore the possibility that the behaviour is influenced by such unseen entities. We introduce the Goal and Occluded Factor Inference (GOFI) algorithm which bases inference on inverse-planning to jointly infer a probabilistic belief over goals and potential occluded factors. We then show how these beliefs can be integrated into Monte Carlo Tree Search (MCTS). We demonstrate that jointly inferring goals and occluded factors leads to more accurate beliefs with respect to the true world state and allows an agent to safely navigate several scenarios where other baselines take unsafe actions leading to collisions.
@inproceedings{hanna2021interpretable,
title={Interpretable Goal Recognition in the Presence of Occluded Factors for Autonomous Vehicles},
author={Josiah P. Hanna and Arrasy Rahman and Elliot Fosong and Francisco Eiras and Mihai Dobre and John Redford and Subramanian Ramamoorthy and Stefano V. Albrecht},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2021}
}
Henry Pulver, Francisco Eiras, Ludovico Carozza, Majd Hawasly, Stefano V. Albrecht, Subramanian Ramamoorthy
PILOT: Efficient Planning by Imitation Learning and Optimisation for Safe Autonomous Driving
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021
Abstract | BibTeX | arXiv | Video
IROS | autonomous-driving
Abstract:
Achieving a proper balance between planning quality, safety and efficiency is a major challenge for autonomous driving. Optimisation-based motion planners are capable of producing safe, smooth and comfortable plans, but often at the cost of runtime efficiency. On the other hand, naively deploying trajectories produced by efficient-to-run deep imitation learning approaches might risk compromising safety. In this paper, we present PILOT -- a planning framework that comprises an imitation neural network followed by an efficient optimiser that actively rectifies the network's plan, guaranteeing fulfilment of safety and comfort requirements. The objective of the efficient optimiser is the same as the objective of an expensive-to-run optimisation-based planning system that the neural network is trained offline to imitate. This efficient optimiser provides a key layer of online protection from learning failures or deficiency in out-of-distribution situations that might compromise safety or comfort. Using a state-of-the-art, runtime-intensive optimisation-based method as the expert, we demonstrate in simulated autonomous driving experiments in CARLA that PILOT achieves a seven-fold reduction in runtime when compared to the expert it imitates without sacrificing planning quality.
@inproceedings{pulver2020pilot,
title={{PILOT:} Efficient Planning by Imitation Learning and Optimisation for Safe Autonomous Driving},
author={Henry Pulver and Francisco Eiras and Ludovico Carozza and Majd Hawasly and Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2021}
}
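PILOT's runtime structure reduces to a short loop: a cheap imitation network proposes a plan, and an efficient optimiser rectifies it within safety and comfort bounds. A structural sketch with hypothetical components, not the paper's implementation:

def pilot_plan(state, imitation_net, efficient_optimiser):
    proposal = imitation_net(state)  # cheap forward pass
    # The optimiser shares its objective with the expensive expert the
    # network imitates, so it only needs to polish a near-optimal warm
    # start rather than solve from scratch; it also protects against
    # network failures in out-of-distribution situations.
    plan = efficient_optimiser(state, init=proposal)
    return plan  # satisfies the hard constraints by construction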
2020
Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, Subramanian Ramamoorthy
Interpretable Goal-based Prediction and Planning for Autonomous Driving
arXiv:2002.02277, 2020
Abstract | BibTeX | arXiv
autonomous-driving | goal-recognition | explainable-ai
Abstract:
We propose an integrated prediction and planning system for autonomous driving which uses rational inverse planning to recognise the goals of other vehicles. Goal recognition informs a Monte Carlo Tree Search (MCTS) algorithm to plan optimal maneuvers for the ego vehicle. Inverse planning and MCTS utilise a shared set of defined maneuvers and macro actions to construct plans which are explainable by means of rationality principles. Evaluation in simulations of urban driving scenarios demonstrates the system's ability to robustly recognise the goals of other vehicles, enabling our vehicle to exploit non-trivial opportunities to significantly reduce driving times. In each scenario, we extract intuitive explanations for the predictions which justify the system's decisions.
@misc{albrecht2020integrating,
title={Interpretable Goal-based Prediction and Planning for Autonomous Driving},
author={Stefano V. Albrecht and Cillian Brewitt and John Wilhelm and Balint Gyevnar and Francisco Eiras and Mihai Dobre and Subramanian Ramamoorthy},
year={2020},
eprint={2002.02277},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
Henry Pulver, Francisco Eiras, Ludovico Carozza, Majd Hawasly, Stefano V. Albrecht, Subramanian Ramamoorthy
PILOT: Efficient Planning by Imitation Learning and Optimisation for Safe Autonomous Driving
arXiv:2011.00509, 2020
Abstract | BibTeX | arXiv
autonomous-driving
Abstract:
Achieving the right balance between planning quality, safety and runtime efficiency is a major challenge for autonomous driving research. Optimisation-based planners are typically capable of producing high-quality, safe plans, but at the cost of efficiency. We present PILOT, a two-stage planning framework comprising an imitation neural network and an efficient optimisation component that guarantees the satisfaction of requirements of safety and comfort. The neural network is trained to imitate an expensive-to-run optimisation-based planning system with the same objective as the efficient optimisation component of PILOT. We demonstrate in simulated autonomous driving experiments that the proposed framework achieves a significant reduction in runtime when compared to the optimisation-based expert it imitates, without sacrificing the planning quality.
@misc{pulver2020pilot,
title={{PILOT:} Efficient Planning by Imitation Learning and Optimisation for Safe Autonomous Driving},
author={Henry Pulver and Francisco Eiras and Ludovico Carozza and Majd Hawasly and Stefano V. Albrecht and Subramanian Ramamoorthy},
year={2020},
eprint={2011.00509},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
Francisco Eiras, Majd Hawasly, Stefano V. Albrecht, Subramanian Ramamoorthy
Two-Stage Optimization-based Motion Planner for Safe Urban Driving
arXiv:2002.02215, 2020
Abstract | BibTeX | arXiv
autonomous-driving
Abstract:
Recent road trials have shown that guaranteeing the safety of driving decisions is essential for the wider adoption of autonomous vehicle technology. One promising direction is to pose safety requirements as planning constraints in nonlinear, nonconvex optimization problems of motion synthesis. However, many implementations of this approach are limited by uncertain convergence and local optimality of the solutions achieved, affecting overall robustness. To improve upon these issues, we propose a novel two-stage optimization framework: in the first stage, we find a solution to a Mixed-Integer Linear Programming (MILP) formulation of the motion synthesis problem, the output of which initializes a second Nonlinear Programming (NLP) stage. The MILP stage enforces hard constraints of safety and road rule compliance generating a solution in the right subspace, while the NLP stage refines the solution within the safety bounds for feasibility and smoothness. We demonstrate the effectiveness of our framework via simulated experiments of complex urban driving scenarios, outperforming a state-of-the-art baseline in metrics of convergence, comfort and progress.
@misc{eiras2020twostage,
title={Two-Stage Optimization-based Motion Planner for Safe Urban Driving},
author={Francisco Eiras and Majd Hawasly and Stefano V. Albrecht and Subramanian Ramamoorthy},
year={2020},
eprint={2002.02215},
archivePrefix={arXiv},
primaryClass={cs.RO}
}