Publications
2024
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
AAAI Conference on Artificial Intelligence, 2024
Abstract | BibTeX | arXiv | Code | Video
AAAI | deep-rl | causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Empirical results show that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2024contextual,
title={Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Azran, Guy and Danesh, Mohamad H. and Albrecht, Stefano V. and Keren, Sarah},
booktitle={Proceedings of the 38th AAAI Conference on Artificial Intelligence},
year={2024}
}
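
The abstract describes reward machines (RMs) as state machine abstractions whose transitions induce subtasks. The following is a minimal sketch of such an RM in Python, assuming a hypothetical key-then-door task; the states, propositions, and rewards below are illustrative only, not the paper's implementation:

class RewardMachine:
    """State machine over abstract states; transitions fire on propositions."""

    def __init__(self, initial_state, transitions, terminal_states):
        # transitions: (abstract_state, proposition) -> (next_state, reward)
        self.state = initial_state
        self.transitions = transitions
        self.terminal_states = terminal_states

    def step(self, true_propositions):
        # Advance on whichever proposition the environment just made true;
        # achieving an abstract transition yields its subtask reward.
        for prop in true_propositions:
            if (self.state, prop) in self.transitions:
                self.state, reward = self.transitions[(self.state, prop)]
                return reward
        return 0.0  # no abstract transition achieved this step

    def done(self):
        return self.state in self.terminal_states

# Hypothetical RM: u0 --key--> u1 --door--> u2 (terminal)
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "key"): ("u1", 0.1),   # subtask: pick up the key
        ("u1", "door"): ("u2", 1.0),  # subtask: open the door
    },
    terminal_states={"u2"},
)
print(rm.step({"key"}))   # 0.1, RM advances to u1
print(rm.step({"door"}))  # 1.0, RM advances to u2
print(rm.done())          # True

Because the abstract states and propositions are symbolic and shared across tasks, an agent can be conditioned on the optimal outgoing transition from its current RM state, which is the kind of transfer signal the abstract describes.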
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
ICAPS Workshop on Planning and Reinforcement Learning, 2024
Abstract | BibTeX | arXiv | Code | Video
ICAPS | deep-rl | causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Empirical results show that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2024contextualprl,
title={Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Azran, Guy and Danesh, Mohamad H. and Albrecht, Stefano V. and Keren, Sarah},
booktitle={ICAPS Workshop on Planning and Reinforcement Learning (https://prl-theworkshop.github.io/prl2024-icaps/)},
year={2024}
}
2023
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
IJCAI Workshop on Planning and Reinforcement Learning, 2023
Abstract | BibTeX | arXiv
IJCAI | deep-rl | causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Our empirical evaluation shows that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2023contextual,
title={Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Azran, Guy and Danesh, Mohamad H. and Albrecht, Stefano V. and Keren, Sarah},
booktitle={IJCAI Workshop on Planning and Reinforcement Learning (https://prl-theworkshop.github.io/)},
year={2023}
}