Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing
Department of Computing Science, Umeå University, Umeå, Sweden.
Örebro University, School of Science and Technology. (Centre for Applied Autonomous Sensor Systems (AASS)). ORCID iD: 0000-0002-4001-2087
Department of Computing Science, Umeå University, Umeå, Sweden; School of Science and Technology, Aalto University, Espoo, Finland.
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, pp. 23995-24009. Article in journal (Refereed). Published.
Abstract [en]

Explainable artificial intelligence (XAI) has shed light on numerous applications by clarifying why neural models make specific decisions. However, it remains challenging to measure how sensitive XAI solutions are when explaining neural models. Although different evaluation metrics have been proposed to measure sensitivity, the main focus has been on visual and textual data; insufficient attention has been devoted to sensitivity metrics tailored for time series data. In this paper, we formulate several metrics, including max short-term sensitivity (MSS), max long-term sensitivity (MLS), average short-term sensitivity (ASS), and average long-term sensitivity (ALS), which target the sensitivity of XAI models with respect to generated and real time series. Our hypothesis is that for close series with the same labels, we obtain similar explanations. We evaluate three XAI models, LIME, integrated gradient (IG), and SmoothGrad (SG), on CN-Waterfall, a deep convolutional network. This network is a highly accurate time series classifier in affect computing. Our experiments rely on data-, metric-, and XAI-hyperparameter-related settings on the WESAD and MAHNOB-HCI datasets. The results reveal that (i) IG and LIME provide a lower sensitivity scale than SG in all the metrics and settings, potentially due to the lower scale of importance scores generated by IG and LIME, (ii) the XAI models show higher sensitivities for a smaller window of data, (iii) the sensitivities of the XAI models fluctuate when the network parameters and data properties change, and (iv) the XAI models provide unstable sensitivities under different settings of hyperparameters.
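The core idea — that an explainer is sensitive if small perturbations of a time series change its importance scores — can be sketched as follows. This is an illustrative sketch only: the function names, the Gaussian perturbation scheme, and the L2 distance are assumptions for demonstration, not the paper's exact definitions of MSS/MLS/ASS/ALS.

```python
import numpy as np

def explanation_sensitivity(explain, series, n_perturb=10, eps=0.01, seed=0):
    """Illustrative max/average sensitivity of an explanation method.

    `explain` maps a 1-D time series to one importance score per time
    step. A stable explainer should yield similar scores for nearby
    inputs with the same label, so we perturb the series slightly and
    measure how far the explanations drift.
    """
    rng = np.random.default_rng(seed)
    base = explain(series)                      # reference explanation
    dists = []
    for _ in range(n_perturb):
        noisy = series + rng.normal(0.0, eps, size=series.shape)
        dists.append(float(np.linalg.norm(explain(noisy) - base)))
    # Max distance ~ a "max sensitivity", mean ~ an "average sensitivity".
    return max(dists), float(np.mean(dists))

# Toy check: a linear model's gradient explanation is constant in the
# input, so its sensitivity is exactly zero under any perturbation.
w = np.array([0.5, -1.0, 2.0, 0.0])
explain_linear = lambda x: w                    # gradient of w @ x w.r.t. x
mx, avg = explanation_sensitivity(explain_linear, np.zeros(4))
```

In practice `explain` would wrap a real attribution method (e.g. IG or SmoothGrad applied to a trained classifier), and the perturbation window could be restricted to short or long segments of the series to mirror the short-term/long-term distinction.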

Place, publisher, year, edition, pages
IEEE, 2022. Vol. 10, pp. 23995-24009
Keywords [en]
Measurement, Sensitivity, Data models, Time series analysis, Predictive models, Perturbation methods, Computational modeling, Explainable AI, metrics, time series data, deep convolutional neural network
HSV category
Identifiers
URN: urn:nbn:se:oru:diva-98271
DOI: 10.1109/ACCESS.2022.3155115
ISI: 000766548000001
Scopus ID: 2-s2.0-85125751693
OAI: oai:DiVA.org:oru-98271
DiVA, id: diva2:1647719
Note

Funding agency:

Umeå University

Available from: 2022-03-28. Created: 2022-03-28. Last updated: 2022-03-28. Bibliographically approved.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
Scopus

Person

Alirezaie, Marjan
