Different ranking approaches defining association and agreement measures of paired ordinal data
Örebro University, Örebro University School of Business. ORCID iD: 0000-0001-7210-1925
2012 (English). In: Statistics in Medicine, ISSN 0277-6715, E-ISSN 1097-0258, Vol. 31, no. 26, p. 3104-3117. Article in journal (Refereed), Published
Abstract [en]

Rating scales are common for self-assessments of qualitative variables and also for expert rating of the severity of disability, outcomes, etc. Scale assessments and other ordered classifications generate ordinal data having rank-invariant properties only. Hence, statistical methods are often based on ranks. The aim is to focus on the differences in ranking approaches between measures of association and of disagreement in paired ordinal data. The Spearman correlation coefficient is a measure of association between two variables, where each data set is transformed to ranks. The augmented ranking approach to evaluating disagreement takes account of the information given by the pairs of data, and provides identification and measures of systematic disagreement, when present, separately from measures of additional individual variability in assessments. The two approaches were applied to empirical data regarding the relationship between perceived pain and physical health, and the reliability of pain assessments made by patients. The type of disagreement between the patients' perceived levels of outcome after treatment and the doctor's criterion-based scoring was also evaluated. The comprehensive evaluation of observed disagreement in terms of systematic and individual disagreement provides valuable, interpretable information about its sources. The presence of systematic disagreement can be adjusted for and/or understood. Large individual variability could be a sign of poor quality of a scale or of heterogeneity among raters. It was also demonstrated that a measure of association must not be used as a measure of agreement, even though such misuse of correlation coefficients is common.
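The abstract's closing point can be illustrated with a small sketch (not the paper's own method; plain Python with invented example data): when one rater systematically scores one category higher than the other, the Spearman rank correlation is perfect even though the raters never agree on a single pair.

```python
# Illustrative sketch: a perfect monotone relation (association) with
# zero exact agreement. Data and helper functions are hypothetical,
# not taken from the article.

def mid_ranks(xs):
    """Assign 1-based mid-ranks, averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the block of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the mid-ranks."""
    rx, ry = mid_ranks(x), mid_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Paired ordinal ratings; rater B is systematically one category higher.
a = [1, 2, 2, 3, 4, 4, 5, 3, 1, 2]
b = [x + 1 for x in a]

rho = spearman(a, b)                                   # 1.0: perfect association
exact_agreement = sum(x == y for x, y in zip(a, b)) / len(a)  # 0.0: no agreement
```

Here the systematic shift leaves the ordering of the ranks untouched, so association is maximal while agreement is zero — the kind of systematic disagreement that the augmented ranking approach is designed to identify and measure separately from individual variability.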

Place, publisher, year, edition, pages
2012. Vol. 31, no. 26, p. 3104-3117
Keywords [en]
ranks, association, agreement, disagreement, ordinal data
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
URN: urn:nbn:se:oru:diva-26552
DOI: 10.1002/sim.5382
ISI: 000309745400003
OAI: oai:DiVA.org:oru-26552
DiVA, id: diva2:573264
Available from: 2012-11-30. Created: 2012-11-30. Last updated: 2017-12-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Svensson, Elisabeth
