
oru.se — Örebro University Publications
Readers' affect: predicting and understanding readers' emotions with deep learning
Department of Computer Science, University of Calicut, Malappuram, Kerala, India.
School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, Belfast, Northern Ireland, UK.
Örebro University, School of Science and Technology. ORCID iD: 0000-0003-3902-2867
Department of Computer Science, University of Calicut, Malappuram, Kerala, India.
2022 (English). In: Journal of Big Data, E-ISSN 2196-1115, Vol. 9, no. 1, article id 82. Article in journal (Refereed). Published.
Abstract [en]

Emotions are highly useful for modeling human behavior, being at the core of what makes us human. Today, people abundantly express and share emotions through social media. Technological advancements in such platforms enable sharing opinions or expressing specific emotions towards what others have shared, mainly in the form of textual data. This opens an interesting arena for analysis: whether there is a disconnect between the writer's intended emotion and the reader's perception of the textual content. In this paper, we present experiments on readers' emotion detection in a multi-target regression setting, exploring a Bi-LSTM-based attention model; our main aim is to analyze the interpretability and effectiveness of the deep learning model for the task. For the experiments, we procure two extensive datasets, REN-10k and RENh-4k, in addition to a popular benchmark dataset from SemEval-2007. We perform a two-phase experimental evaluation. First, we carry out coarse-grained and fine-grained evaluations of our model's performance against several baselines from different categories of emotion detection, viz. deep learning, lexicon-based, and classical machine learning. Second, we evaluate model behavior on readers' emotion detection by assessing the attention maps generated by the model through a novel set of qualitative and quantitative metrics. The first phase of experiments shows that our Bi-LSTM + Attention model significantly outperforms all baselines. The second analysis reveals that emotions may be correlated with specific words as well as named entities.
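The architecture the abstract describes — a Bi-LSTM encoder whose hidden states are pooled by an attention layer, feeding a multi-target head that predicts a distribution over reader emotions — can be sketched roughly as below. This is an illustrative reconstruction in PyTorch, not the authors' code: the vocabulary size, embedding and hidden dimensions, the number of emotion targets, and the softmax output over emotion categories are all assumptions made for the example. Returning the attention weights alongside the prediction mirrors the paper's use of attention maps for interpretability.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Sketch of a Bi-LSTM + Attention model for readers' emotion prediction.

    All sizes are illustrative assumptions, not values from the paper.
    """

    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128, num_emotions=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One scalar attention score per token position.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # Multi-target head: one output per emotion category.
        self.out = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embedding(token_ids))          # (B, T, 2H)
        weights = torch.softmax(self.attn(h).squeeze(-1), -1)  # (B, T) attention map
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # attention-weighted sum
        # Softmax over emotions yields a reader-emotion distribution,
        # one common formulation of the multi-target regression setting.
        return torch.softmax(self.out(context), dim=-1), weights

model = BiLSTMAttention()
token_ids = torch.randint(0, 10000, (2, 12))   # batch of 2 texts, 12 tokens each
probs, attn_map = model(token_ids)             # (2, 8) distributions, (2, 12) attention
```

The per-token attention weights can then be inspected (e.g. overlaid on the input text) to see which words or named entities the model associates with each predicted emotion.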

Place, publisher, year, edition, pages
Springer, 2022. Vol. 9, no. 1, article id 82
Keywords [en]
Readers' emotion detection, Affective computing, Textual emotion detection, Deep learning, Attention, Interpretability
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-100459
DOI: 10.1186/s40537-022-00614-2
ISI: 000813759000001
Scopus ID: 2-s2.0-85132561452
OAI: oai:DiVA.org:oru-100459
DiVA id: diva2:1689270
Note

Funding agency:

Department of Science & Technology (India) SR/WOS-A/PM-62/2018

Available from: 2022-08-22. Created: 2022-08-22. Last updated: 2023-08-14. Bibliographically approved.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text · Scopus

Person

Sam Abraham, Savitha

