The credibility of research on information system security is challenged by inconsistent results, and there is an ongoing discussion about research methodology and its effect on results within the literature on employee non-/compliance with information security policies. We add to this discussion by investigating discrepancies between what we claim to measure (theoretical properties of variables) and what we actually measure (respondents’ interpretations of our operationalized variables). The study asks: (1) How well do respondents’ interpretations of variables correspond to their theoretical definitions? (2) What are the characteristics and causes of any discrepancies between variable definitions and respondent interpretations? We report a pilot study including interviews with seven respondents to understand their interpretations of the variable Perceived severity from Protection Motivation Theory (PMT).
We found that respondents’ interpretations differ substantially from the theoretical definitions, which introduces measurement error. There were not only individual differences in interpretation but also, and more importantly, systematic ones: when questions are not well specified, or do not cover respondents’ practice, respondents make interpretations based on their practice. Our results indicate three types of ambiguity: (i) vagueness in parts of the measurement item, causing inconsistencies in interpretation between respondents; (ii) respondents envisioning or interpreting ‘new’ properties not related to the theory; and (iii) measurements that ‘miss the mark’, whereby respondents misinterpret the fundamentals of the item. The qualitative method used proved conducive to understanding respondents’ thinking, which is key to improving research instruments.