Exploring Contextual Importance and Utility in Explaining Affect Detection
2021 (English) In: AIxIA 2020 – Advances in Artificial Intelligence / [ed] Matteo Baldoni; Stefania Bandini, Springer, 2021, Vol. 12414, p. 3-18. Conference paper, Published paper (Refereed)
Abstract [en]
With the ubiquitous use of machine learning models and their inherent black-box nature, explaining the decisions made by these models has become crucial. Although outcome explanation has recently been adopted as a solution to the transparency issue in many areas, affective computing remains one of the domains with the least dedicated effort in the practice of explainable AI, particularly across different machine learning models. The aim of this work is to evaluate the outcome explanations of two black-box models, namely a neural network (NN) and linear discriminant analysis (LDA), used to recognize individuals' affective states measured by wearable sensors. Emphasizing context-aware decision explanations of these models, the two concepts of Contextual Importance (CI) and Contextual Utility (CU) are employed as a model-agnostic outcome explanation approach. We conduct our experiments on two multimodal affective computing datasets, namely WESAD and MAHNOB-HCI. The results of applying a neural-based model on the first dataset reveal that the electrodermal activity, respiration, and accelerometer sensors contribute significantly to the detection of the "meditation" state for a particular participant, whereas the respiration sensor does not contribute to the LDA decision for the same state. On the second dataset, the importance and utility of the electrocardiogram and respiration sensors emerge as dominant for the neural network's detection of an individual's "surprised" state, while the LDA model does not rely on the respiration sensor to detect this mental state.
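The CI/CU approach summarized in the abstract can be illustrated with a minimal perturbation-based sketch. The function name, the sweep-based estimation, and the parameters below are illustrative assumptions, not the paper's exact implementation: CI relates the output variation attainable by varying one feature (others held fixed) to the model's overall output range, and CU locates the current output within that feature-conditional range.

```python
import numpy as np

def contextual_importance_utility(predict, instance, feat_idx,
                                  feat_range, out_min, out_max,
                                  n_samples=100):
    """Estimate CI and CU of one feature for one prediction.

    Sweeps the chosen feature over its value range while keeping the
    other features fixed, then compares the induced output range to
    the model's absolute output range (a simplifying assumption of
    this sketch).
    """
    grid = np.linspace(feat_range[0], feat_range[1], n_samples)
    perturbed = np.tile(np.asarray(instance, dtype=float), (n_samples, 1))
    perturbed[:, feat_idx] = grid
    outputs = np.array([predict(x) for x in perturbed])
    cmin, cmax = outputs.min(), outputs.max()
    y = predict(np.asarray(instance, dtype=float))
    ci = (cmax - cmin) / (out_max - out_min)                  # importance
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.0   # utility
    return ci, cu
```

For a simple linear "model" f(x) = 0.5·x0 + 0.25·x1 with features in [0, 1], feature 0 alone can move the output by 0.5 out of an overall range of 0.75, giving CI ≈ 0.67; at x = (1.0, 0.0) the output sits at the top of that conditional range, giving CU = 1.0.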
Place, publisher, year, edition, pages Springer, 2021. Vol. 12414, p. 3-18
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 12414
Keywords [en]
Explainable AI, Affect detection, Black-Box decision, Contextual importance and utility
National Category
Computer Sciences
Identifiers URN: urn:nbn:se:oru:diva-102684 DOI: 10.1007/978-3-030-77091-4_1 ISI: 000886994000001 Scopus ID: 2-s2.0-85111382491 ISBN: 9783030770914 (electronic) ISBN: 9783030770907 (print) OAI: oai:DiVA.org:oru-102684 DiVA, id: diva2:1718560
Conference 19th International Conference of the Italian-Association-for-Artificial-Intelligence (AIxIA 2020), (Virtual conference), November 25-27, 2020
2022-12-13 Bibliographically approved