Assured Reinforcement Learning with Formally Verified Abstract Policies

George Mason, Radu Calinescu, Daniel Kudenko, Alec Banks

2017

Abstract

We present a new reinforcement learning (RL) approach that enables an autonomous agent to solve decision-making problems under constraints. Our assured reinforcement learning approach models the uncertain environment as a high-level, abstract Markov decision process (AMDP), and uses probabilistic model checking to establish AMDP policies that satisfy a set of constraints defined in probabilistic temporal logic. These formally verified abstract policies are then used to restrict the RL agent's exploration of the solution space so as to avoid constraint violations. We validate our RL approach by using it to develop autonomous agents for a flag-collection navigation task and an assisted-living planning problem.
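To illustrate the core idea of restricting exploration with a verified abstract policy, here is a minimal sketch. All names and the toy environment are hypothetical (they do not come from the paper): a 1-D corridor with a hazard at one end, a hand-written state abstraction, and an assumed set of per-abstract-state actions permitted by a verified abstract policy (in the paper such policies would be obtained via probabilistic model checking, e.g. with a tool like PRISM). The Q-learning agent may only explore actions the abstract policy allows.

```python
import random

# Hypothetical toy environment: states 0..4 on a corridor.
# The goal is state 4; state 0 is a "hazard" the constraints forbid entering.
ACTIONS = [-1, +1]  # move left, move right

def abstract(state):
    """Hypothetical abstraction mapping concrete states to abstract states."""
    return "near_hazard" if state <= 1 else "safe"

# Assumed output of offline verification: for each abstract state, the set of
# actions whose use keeps the constraint-violation probability within bounds.
VERIFIED_ALLOWED = {
    "near_hazard": {+1},   # never move further toward the hazard
    "safe": {-1, +1},
}

def allowed_actions(state):
    return [a for a in ACTIONS if a in VERIFIED_ALLOWED[abstract(state)]]

def step(state, action):
    nxt = max(0, min(4, state + action))
    reward = 1.0 if nxt == 4 else 0.0
    return nxt, reward, nxt == 4

def q_learn(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 2, False
        while not done:
            acts = allowed_actions(s)  # exploration restricted to verified-safe actions
            if rng.random() < epsilon:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            # Bootstrap only over actions the abstract policy permits in s2.
            best_next = max(Q[(s2, x)] for x in allowed_actions(s2))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = q_learn()
```

Because the action filter is applied during both exploration and bootstrapping, the agent can never take a forbidden action from a near-hazard state, so the constraint holds throughout learning rather than only at convergence.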

Paper Citation


in Harvard Style

Mason G., Calinescu R., Kudenko D. and Banks A. (2017). Assured Reinforcement Learning with Formally Verified Abstract Policies. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-220-2, pages 105-117. DOI: 10.5220/0006156001050117

in Bibtex Style

@conference{icaart17,
author={George Mason and Radu Calinescu and Daniel Kudenko and Alec Banks},
title={Assured Reinforcement Learning with Formally Verified Abstract Policies},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2017},
pages={105-117},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006156001050117},
isbn={978-989-758-220-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Assured Reinforcement Learning with Formally Verified Abstract Policies
SN - 978-989-758-220-2
AU - Mason G.
AU - Calinescu R.
AU - Kudenko D.
AU - Banks A.
PY - 2017
SP - 105
EP - 117
DO - 10.5220/0006156001050117