FairLOF: Fairness in Outlier Detection
Queen’s University Belfast, Belfast, UK; Indian Institute of Technology Madras, Chennai, India.
Örebro University, School of Science and Technology. ORCID iD: 0000-0003-3902-2867
2021 (English). In: Data Science and Engineering, ISSN 2364-1185, E-ISSN 2364-1541, Vol. 6, no 4, p. 485-499. Article in journal (Refereed). Published.
Abstract [en]

An outlier detection method may be considered fair over specified sensitive attributes if its results are not skewed toward particular groups defined on those sensitive attributes. In this paper, we consider the task of fair outlier detection over multiple multi-valued sensitive attributes (e.g., gender, race, religion, nationality and marital status), a setting with broad applications across modern data scenarios. We propose a fair outlier detection method, FairLOF, inspired by the popular LOF formulation for neighborhood-based outlier detection. We outline ways in which unfairness could be induced within LOF and develop three heuristic principles to enhance fairness, which form the basis of the FairLOF method. Since fair outlier detection is a novel task, we also develop an evaluation framework for it and use that framework to benchmark FairLOF on both quality and fairness of results. Through an extensive empirical evaluation over real-world datasets, we illustrate that FairLOF achieves significant improvements in fairness, with result-quality degradations that are sometimes only marginal, as measured against the fairness-agnostic LOF method. We also show that a generalization of our method, named FairLOF-Flex, opens up possibilities of deepening fairness in outlier detection beyond what FairLOF offers.
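
The abstract contrasts FairLOF against the fairness-agnostic LOF baseline and frames fairness as flagged outliers not being skewed toward particular sensitive-attribute groups. The sketch below is not the paper's FairLOF implementation; it only illustrates that baseline and the group-skew notion, using scikit-learn's LocalOutlierFactor on synthetic data with a hypothetical binary sensitive attribute.

```python
# Minimal sketch (not the paper's FairLOF): run the standard, fairness-agnostic
# LOF baseline and check whether the flagged outliers are skewed across the
# groups of one sensitive attribute. Data and attribute are synthetic placeholders.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)

# Hypothetical binary sensitive attribute: 0 = majority group, 1 = minority group.
# The minority group is drawn from a shifted distribution, so plain LOF tends to
# flag its points more often.
group = np.concatenate([np.zeros(800, dtype=int), np.ones(200, dtype=int)])
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(800, 2)),
    rng.normal(loc=2.5, scale=1.0, size=(200, 2)),
])

# Fairness-agnostic baseline: standard LOF, flagging roughly 5% of points.
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(X)      # -1 = outlier, 1 = inlier
is_outlier = labels == -1

# Group-wise outlier rates: a large gap is the kind of skew that fair outlier
# detection methods such as FairLOF aim to reduce.
for g in np.unique(group):
    rate = is_outlier[group == g].mean()
    print(f"group {g}: outlier rate = {rate:.3f}")
```

In this toy setup the smaller group occupies a shifted region of the feature space, so plain LOF flags it at a disproportionately high rate; narrowing that gap while keeping detection quality close to the LOF baseline is the trade-off the paper's evaluation framework measures.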

Place, publisher, year, edition, pages
Springer, 2021. Vol. 6, no 4, p. 485-499
Keywords [en]
Outlier detection, Fairness, Unsupervised learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-94113
DOI: 10.1007/s41019-021-00169-x
ISI: 000690810500001
Scopus ID: 2-s2.0-85113768429
OAI: oai:DiVA.org:oru-94113
DiVA, id: diva2:1591583
Available from: 2021-09-07. Created: 2021-09-07. Last updated: 2022-01-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · Scopus

Authority records

Sam Abraham, Savitha
