Decision Explanation: Applying Contextual Importance and Contextual Utility in Affect Detection
Department of Computing Science, Umeå University, Sweden.
Örebro University, School of Science and Technology, Center for Applied Autonomous Sensor Systems (AASS). ORCID iD: 0000-0002-4001-2087
Department of Computing Science, Umeå University, Sweden; Aalto University, School of Science and Technology, Finland.
2020 (English) In: Proceedings of the Italian Workshop on Explainable Artificial Intelligence co-located with the 19th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2020) / [ed] Cataldo Musto, Daniele Magazzeni, Salvatore Ruggieri, Giovanni Semeraro, Technical University of Aachen, 2020, p. 1-13. Conference paper, Published paper (Refereed)
Abstract [en]

Explainable AI has recently paved the way to justify decisions made by black-box models in various areas. However, a mature body of work in the field of affect detection is still lacking. In this work, we evaluate a black-box outcome explanation for understanding humans' affective states. We employ the two concepts of Contextual Importance (CI) and Contextual Utility (CU), emphasizing a context-aware decision explanation of a non-linear model, namely a neural network. The neural model is designed to detect individual mental states measured by wearable sensors in order to monitor the human user's well-being. We conduct our experiments and outcome explanation on WESAD and MAHNOB-HCI, two multimodal affective computing datasets. The results reveal that, for a specific participant, the electrodermal activity, respiration, and accelerometer signals contribute significantly to the classification of mental states in the first experiment, and the electrocardiogram and respiration signals do so in the second. To the best of our knowledge, this is the first study leveraging the CI and CU concepts in the outcome explanation of an affect detection model.
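For illustration only, the following is a minimal sketch of how Contextual Importance and Contextual Utility are commonly estimated for a single feature of a probabilistic classifier, following the standard CI/CU definitions. It is not the authors' implementation: the function name, the Monte Carlo sampling strategy, and the assumption that the model outputs class probabilities in [0, 1] are assumptions made here for the example.

import numpy as np

def contextual_importance_utility(model, x, feature_idx, class_idx,
                                  feature_min, feature_max, n_samples=1000):
    # Vary only the chosen feature over its plausible range while the
    # remaining features (the "context") stay fixed at their observed values.
    perturbed = np.tile(x, (n_samples, 1))
    perturbed[:, feature_idx] = np.random.uniform(feature_min, feature_max, n_samples)
    outputs = np.asarray(model(perturbed))[:, class_idx]

    cmin, cmax = outputs.min(), outputs.max()
    y = np.asarray(model(x[np.newaxis, :]))[0, class_idx]

    # CI: share of the output's absolute range (here [0, 1], class probabilities)
    # that this feature can span in the given context.
    ci = (cmax - cmin) / 1.0
    # CU: position of the current output within that feature-induced range.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

Under this reading, a CI near 1 means the feature can swing the prediction across its full range in this context, while CU indicates whether the feature's current value pushes the output toward (high CU) or away from (low CU) the predicted class.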

Place, publisher, year, edition, pages
Technical University of Aachen, 2020. p. 1-13
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073
Keywords [en]
Explainable AI, Affect detection, Black-Box decision, Contextual Importance and Utility
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:oru:diva-87504
OAI: oai:DiVA.org:oru-87504
DiVA, id: diva2:1502854
Conference
XAI.it 2020, Italian Workshop on Explainable Artificial Intelligence, co-located with AI*IA, November 25-27, 2020
Available from: 2020-11-22 Created: 2020-11-22 Last updated: 2023-05-29 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Free full text

Authority records

Alirezaie, Marjan
