2023 (English) In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 35, no. 23, p. 16693-16704. Article in journal (Refereed) Published
Abstract [en]
One-step reinforcement learning explanation methods account for individual actions but fail to consider the agent's future behavior, which can make their interpretation ambiguous. We propose to address this limitation by providing hierarchical goals as context for one-step explanations. By considering the current hierarchical goal as a context, one-step explanations can be interpreted with higher certainty, as the agent's future behavior is more predictable. We combine reward decomposition with hierarchical reinforcement learning into a novel explainable reinforcement learning framework, which yields more interpretable, goal-contextualized one-step explanations. With a qualitative analysis of one-step reward decomposition explanations, we first show that their interpretability is indeed limited in scenarios with multiple, different optimal policies, a characteristic shared by other one-step explanation methods. Then, we show that our framework retains high interpretability in such cases, as the hierarchical goal can be considered as context for the explanation. To the best of our knowledge, our work is the first to investigate hierarchical goals not as an explanation directly but as additional context for one-step reinforcement learning explanations.
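The idea described in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): the reward is split into named components, each with its own Q-values conditioned on the currently active hierarchical goal, and a one-step explanation reports each component's contribution to the chosen action alongside the goal that serves as its context. All names, goals, and Q-values below are made up for illustration.

```python
def decomposed_q(goal, state=None):
    """Hypothetical per-component Q-values, conditioned on the active
    hierarchical goal. Values are invented for this sketch."""
    tables = {
        "fetch_key": {
            "score":  {"left": 0.1, "right": 0.2},
            "safety": {"left": 0.6, "right": 0.3},
        },
        "open_door": {
            "score":  {"left": 0.3, "right": 0.8},
            "safety": {"left": 0.2, "right": 0.1},
        },
    }
    return tables[goal]

def explain(goal, state=None):
    """Pick the action maximizing the summed component Q-values and
    return a goal-contextualized reward-decomposition explanation."""
    q = decomposed_q(goal, state)
    actions = next(iter(q.values())).keys()
    totals = {a: sum(comp[a] for comp in q.values()) for a in actions}
    best = max(totals, key=totals.get)
    # The explanation: each reward component's contribution to the
    # chosen action, reported together with the goal as context.
    contributions = {name: comp[best] for name, comp in q.items()}
    return {"goal": goal, "action": best, "contributions": contributions}

print(explain("fetch_key"))
print(explain("open_door"))
```

In this toy setup the same state yields different best actions under different goals, which is the ambiguity the abstract describes: without the goal as context, a per-component breakdown alone does not tell the reader which trade-off the agent is currently pursuing.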
Place, publisher, year, edition, pages
Springer, 2023
Keywords
Reinforcement learning, Explainable AI, Reward decomposition, Hierarchical goals, Local explanations
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-99115 (URN), 10.1007/s00521-022-07280-8 (DOI), 000794083400001, 2-s2.0-85129803505 (Scopus ID)
Note Funding agencies:
Örebro University
Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation
Federal Ministry for Economic Affairs and Climate Action, grant FKZ 20X1905A-D
Available from: 2022-05-23 Created: 2022-05-23 Last updated: 2023-11-28 Bibliographically approved