Örebro University Publications

1 - 13 of 13
  • 1.
    Almeida, Tiago
    et al.
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Schreiter, Tim
    Örebro University, School of Science and Technology.
    Zhu, Yufei
    Örebro University, School of Science and Technology.
    Gutiérrez Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Kucner, Tomasz P.
    Mobile Robotics Group, Department of Electrical Engineering and Automation, Aalto University, Finland; FCAI, Finnish Center for Artificial Intelligence, Finland.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Palmieri, Luigi
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction. 2023. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, p. 2200-2209. Conference paper (Refereed)
    Abstract [en]

    Autonomous systems that need to operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors which influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue about their future motion, actions, and intentions. In this work, we adapt several popular deep learning models for trajectory prediction with labels corresponding to the roles of the people. To this end, we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments, we show that the role-conditioned LSTM, Transformer, GAN, and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution, and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
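The Top-K ADE/FDE metrics named in this abstract are standard displacement errors over predicted trajectories; the following is a minimal sketch, not the authors' implementation, and the array shapes and function names are assumptions:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average/Final Displacement Error for one predicted trajectory.

    pred, gt: arrays of shape (T, 2) holding x/y positions per time step.
    """
    d = np.linalg.norm(pred - gt, axis=-1)   # per-step Euclidean error
    return d.mean(), d[-1]                   # ADE = mean, FDE = last step

def top_k_ade_fde(preds, gt):
    """Best-of-K variant: score only the sample closest to the ground truth.

    preds: array of shape (K, T, 2) -- K sampled futures from a
    generative model (e.g., a GAN or VAE trajectory predictor).
    """
    ades, fdes = zip(*(ade_fde(p, gt) for p in preds))
    k = int(np.argmin(ades))                 # pick the best sample by ADE
    return ades[k], fdes[k]
```

The Top-K variant rewards a generative predictor for covering the true mode with at least one of its K samples, rather than penalizing the spread of all samples.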

  • 2.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    SOCIAL ROBOTS / SOCIAL COGNITION: Robots' Gaze Effects in Older and Younger Adults. 2023. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This dissertation presents advances in social human-robot interaction (HRI) and human social cognition through a series of experiments in which humans face a robot. A predominant approach to studying the human factor in HRI consists of placing the human in the role of a user to explore potential factors affecting the acceptance or usability of a robot. This work takes a broader perspective and investigates whether social robots are perceived as social agents, irrespective of their final role or usefulness in a particular interaction. To do so, it adopts methodologies and theories from cognitive and experimental psychology, such as the use of behavioral paradigms involving gaze following and a framework of more than twenty years of research employing gaze to explore social cognition. The communicative role of gaze in robots is used to explore their essential effectiveness and as a tool to learn how humans perceive them. Studying how certain social robots are perceived through the lens of research in social cognition is the central contribution of this dissertation.

    This thesis presents empirical research and the multidisciplinary literature on (robotic) gaze following, aging, and their relation to social cognition. Papers I and II investigate the decline in gaze following associated with aging, linked with a broader decline in social cognition, in scenarios with robots as gazing agents. In addition to the participants' self-reported perception of the robots, their reaction times were also measured to reflect their internal cognitive processes. Overall, this decline seems to persist when the gazing agent is a robot, highlighting our depiction of robots as social agents. Paper IV explores the theories behind this decline using a robot, emphasizing how these theories extend to non-human agents. This work also investigates motion as a competing cue to gaze in social robots (Paper III), and mentalizing in robotic gaze following (Paper V).

    Through experiments with participants and within the scope of HRI and social cognition studies, this thesis presents a joint framework highlighting that robots are depicted as social agents. This finding emphasizes the importance of fundamental insights from social cognition when designing robot behaviors. Additionally, it promotes and supports the use of robots as valuable tools to explore the robustness of current theories in cognitive psychology to expand the field in parallel.

    List of papers
    1. Age-Related Differences in the Perception of Eye-Gaze from a Social Robot
    2021 (English). In: Social Robotics: 13th International Conference, ICSR 2021, Singapore, Singapore, November 10–13, 2021, Proceedings / [ed] Haizhou Li; Shuzhi Sam Ge; Yan Wu; Agnieszka Wykowska; Hongsheng He; Xiaorui Liu; Dongyu Li; Jairo Perez-Osorio, Springer, 2021, Vol. 13086, p. 350-361. Conference paper, Published paper (Refereed)
    Abstract [en]

    The sensitivity to deictic gaze declines naturally with age and often results in reduced social perception. Thus, the increasing efforts in developing social robots that assist older adults during daily life tasks need to consider the effects of aging. In this context, as non-verbal cues such as deictic gaze are important in natural communication in human-robot interaction, this paper investigates the performance of older adults, as compared to younger adults, during a controlled, online (visual search) task inspired by daily life activities, while assisted by a social robot. This paper also examines age-related differences in social perception. Our results showed a significant facilitation effect of head movement representing deictic gaze from a Pepper robot on task performance. This facilitation effect was not significantly different between the age groups. However, social perception of the robot was less influenced by its deictic gaze behavior in older adults, as compared to younger adults. This line of research may ultimately help inform the design of adaptive non-verbal cues from social robots for a wide range of end users.

    Place, publisher, year, edition, pages
    Springer, 2021
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13086 LNCS
    Keywords
    Human-robot interaction, Older adults, Non-verbal cues
    National Category
    Robotics; Computer Sciences
    Identifiers
    urn:nbn:se:oru:diva-96658 (URN); 10.1007/978-3-030-90525-5_30 (DOI); 000776504300030 (); 2-s2.0-85119849431 (Scopus ID); 9783030905255 (ISBN); 9783030905248 (ISBN)
    Conference
    13th International Conference (ICSR 2021), Singapore, Singapore, November 10–13, 2021
    Funder
    European Commission, 754285; Knut and Alice Wallenberg Foundation
    Note

    Funding agency:

    Spanish Government RTI2018-095599-A-C22

    Available from: 2022-01-24. Created: 2022-01-24. Last updated: 2023-09-20. Bibliographically approved
    2. Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction
    2022 (English). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, p. 1-13. Article in journal (Refereed). Epub ahead of print
    Abstract [en]

    There is an increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, the referential gaze from robots might not be an effective social cue to indicate spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, this research suggests that robotic social cues, usually validated with young participants, might be less optimal signals for older adults.

    Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

    Place, publisher, year, edition, pages
    Springer, 2022
    Keywords
    Aging, Gaze following, Human-robot interaction, Non-verbal cues, Referential gaze, Social cues
    National Category
    Gerontology, specialising in Medical and Health Sciences; Robotics
    Identifiers
    urn:nbn:se:oru:diva-101615 (URN); 10.1007/s12369-022-00926-6 (DOI); 000857896500001 (); 36185773 (PubMedID); 2-s2.0-85138680591 (Scopus ID)
    Funder
    European Commission, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Note

    Funding agency:

    RobWell project, Spanish Ministerio de Ciencia, Innovación y Universidades, RTI2018-095599-A-C22

    Available from: 2022-10-04. Created: 2022-10-04. Last updated: 2023-12-08. Bibliographically approved
    3. Robotic Gaze Drives Attention, Even with No Visible Eyes
    2023 (English). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM / Association for Computing Machinery, 2023, p. 172-177. Conference paper, Published paper (Refereed)
    Abstract [en]

    Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from its gaze, NAO was presented frontally and backwards across blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposite to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that the robotic gaze is perceived as a social signal, similar to human gaze.

    Place, publisher, year, edition, pages
    ACM / Association for Computing Machinery, 2023
    Keywords
    Motion cue, Reflexive attention, Gaze following, Gaze cueing, Social robots
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:oru:diva-108211 (URN); 10.1145/3568294.3580066 (DOI); 001054975700029 (); 2-s2.0-85150446663 (Scopus ID); 9781450399708 (ISBN)
    Conference
    ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023
    Funder
    EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Note

    Funding agency:

    Spanish Ministerio de Ciencia, Innovación y Universidades, RobWell project (No RTI2018-095599-A-C22)

    Available from: 2023-09-11 Created: 2023-09-11 Last updated: 2023-10-10
    4. Gaze cueing in older and younger adults is elicited by a social robot seen from the back
    2023 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 82, article id 101149. Article in journal (Refereed). Published
    Abstract [en]

    The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which the time it takes to respond to targets on a screen is faster when they are preceded by a facial cue looking in the direction of the target (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how these vary as a function of the cue-target timing. Based on the perceived usefulness of social robots to assist older adults, we asked older and younger adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, and so its eye gaze was not visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster responses on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no differences between incongruent trials and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed regarding differences in processing time.

    Place, publisher, year, edition, pages
    Elsevier, 2023
    Keywords
    Gaze following, Gaze cueing effect, Human-robot interaction, Aging
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:oru:diva-108208 (URN); 10.1016/j.cogsys.2023.101149 (DOI); 001054852800001 (); 2-s2.0-85165450249 (Scopus ID)
    Funder
    EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Note

    Funding agency:

    Spanish Ministerio de Ciencia, RobWell project, RTI2018-095599-A-C22

    Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2023-09-20. Bibliographically approved
    5. Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution
    2023 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed). Published
    Abstract [en]

    Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head towards a screen on its left or right. Their task was to respond to targets that appeared either on the screen the robot gazed at or on the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets on the screen the robot gazed at than to targets on the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

    Place, publisher, year, edition, pages
    Frontiers Media S.A., 2023
    Keywords
    attention, cueing effect, gaze following, intentional stance, mentalizing, social robots
    National Category
    Robotics
    Identifiers
    urn:nbn:se:oru:diva-107503 (URN); 10.3389/fpsyg.2023.1215771 (DOI); 001037081700001 (); 37519379 (PubMedID); 2-s2.0-85166030431 (Scopus ID)
    Funder
    EU, European Research Council, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP); RTI2018-095599-A-C22
    Note

    Funding Agency:

    RobWell project - Spanish Ministerio de Ciencia, Innovacion y Universidades

    Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2023-09-20. Bibliographically approved
  • 3.
    Morillo-Mendez, Lucas
    et al.
    CTAG, Spain.
    Garcia, Eva
    CTAG, Spain.
    Augmented Reality as an Advanced Driver-Assistance System: A Cognitive Approach. 2018. In: Proceedings of The 6th HUMANIST Conference / [ed] Nicole Van Nes; Charlotte Voegelé, 2018. Conference paper (Refereed)
    Abstract [en]

    AR is progressively being implemented in the automotive domain as an ADAS. This increasingly popular technology has the potential to reduce road fatalities that involve human factors (HF); however, the cognitive components of AR are still being studied. This review provides a quick overview of studies to date on the cognitive mechanisms involved in AR while driving. Related research is varied, so a taxonomy of the outcomes is provided. AR systems should follow certain criteria to avoid undesirable outcomes such as cognitive capture. Only information related to the main driving task should be shown to the driver, in order to avoid occlusion of the real road by non-driving-related tasks and high mental workload. However, information should not be shown at all times, so that it does not affect the driving skills of the users and they do not develop overreliance on the system, which may lead to risky behaviours. Some popular uses of AR in the car are navigation and safety systems (e.g., BSD or FCWS). AR cognitive outcomes should be studied in these particular contexts in the future. This article is intended as a mini-guide for manufacturers and designers to improve the quality and efficiency of the systems currently being developed.

  • 4.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Towards human-based models of behaviour in social robots: Exploring age-related differences in the processing of gaze cues in human-robot interaction. 2020. In: Proceedings of the 9th European Starting AI Researchers' Symposium 2020 co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020) / [ed] Sebastian Rudolph, Goreti Marreiros, Technical University of Aachen, 2020, Vol. 2655. Conference paper (Refereed)
    Abstract [en]

    The emergence of robotic systems offers many opportunities for older adults (OA) to support their daily life activities. Therefore, there is a need to study social interactions between OA and robots better. One important aspect of social communication is the use of non-verbal cues, of which eye gaze has proven to be of special interest both in the fields of social cognition and HRI. In this paper, we review previous work on HRI with OA and propose an experiment to compare the influence of gaze behaviour of robots on older and younger users. These findings will allow a better design and adaptation of social robots to age-related changes in aspects of social cognition.

  • 5.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Hallström, Felix T.
    Örebro University, Örebro, Sweden.
    Schrooten, Martien G. S.
    Örebro University, School of Behavioural, Social and Legal Sciences.
    Robotic Gaze Drives Attention, Even with No Visible Eyes. 2023. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM / Association for Computing Machinery, 2023, p. 172-177. Conference paper (Refereed)
    Abstract [en]

    Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from its gaze, NAO was presented frontally and backwards across blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposite to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that the robotic gaze is perceived as a social signal, similar to human gaze.

  • 6.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Schrooten, Martien G. S.
    Örebro University, School of Behavioural, Social and Legal Sciences.
    Gaze cueing in older and younger adults is elicited by a social robot seen from the back. 2023. In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 82, article id 101149. Article in journal (Refereed)
    Abstract [en]

    The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which the time it takes to respond to targets on a screen is faster when they are preceded by a facial cue looking in the direction of the target (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how these vary as a function of the cue-target timing. Based on the perceived usefulness of social robots to assist older adults, we asked older and younger adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, and so its eye gaze was not visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster responses on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no differences between incongruent trials and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed regarding differences in processing time.
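The cueing, facilitation, and interference effects described in this abstract are plain reaction-time contrasts between trial types; the sketch below uses hypothetical numbers, not data from the study:

```python
from statistics import mean

def cueing_effects(rts):
    """Derive gaze-cueing contrasts from per-trial reaction times (ms).

    rts: dict mapping 'congruent', 'incongruent', and 'neutral'
    to lists of reaction times for one participant and one SOA.
    """
    m = {cond: mean(vals) for cond, vals in rts.items()}
    return {
        # gaze cueing effect: incongruent minus congruent RT
        "cueing": m["incongruent"] - m["congruent"],
        # facilitation: how much the congruent cue speeds responses vs. neutral
        "facilitation": m["neutral"] - m["congruent"],
        # interference: how much the incongruent cue slows responses vs. neutral
        "interference": m["incongruent"] - m["neutral"],
    }

# hypothetical per-trial RTs, illustrating the three trial types
example = {
    "congruent": [480, 500, 490],
    "neutral": [510, 520, 515],
    "incongruent": [530, 540, 535],
}
```

Splitting the cueing effect into facilitation and interference relative to a neutral baseline is what lets the study report a reduced facilitation effect at the short SOA specifically.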

  • 7.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Schrooten, Martien G. S.
    Örebro University, School of Law, Psychology and Social Work.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Age-Related Differences in the Perception of Eye-Gaze from a Social Robot. 2021. In: Social Robotics: 13th International Conference, ICSR 2021, Singapore, Singapore, November 10–13, 2021, Proceedings / [ed] Haizhou Li; Shuzhi Sam Ge; Yan Wu; Agnieszka Wykowska; Hongsheng He; Xiaorui Liu; Dongyu Li; Jairo Perez-Osorio, Springer, 2021, Vol. 13086, p. 350-361. Conference paper (Refereed)
    Abstract [en]

    The sensitivity to deictic gaze declines naturally with age and often results in reduced social perception. Thus, the increasing efforts in developing social robots that assist older adults during daily life tasks need to consider the effects of aging. In this context, as non-verbal cues such as deictic gaze are important in natural communication in human-robot interaction, this paper investigates the performance of older adults, as compared to younger adults, during a controlled, online (visual search) task inspired by daily life activities, while assisted by a social robot. This paper also examines age-related differences in social perception. Our results showed a significant facilitation effect of head movement representing deictic gaze from a Pepper robot on task performance. This facilitation effect was not significantly different between the age groups. However, social perception of the robot was less influenced by its deictic gaze behavior in older adults, as compared to younger adults. This line of research may ultimately help inform the design of adaptive non-verbal cues from social robots for a wide range of end users.

  • 8.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Schrooten, Martien G. S.
    Örebro University, School of Law, Psychology and Social Work.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. 2022. In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, p. 1-13. Article in journal (Refereed)
    Abstract [en]

    There is an increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, the referential gaze from robots might not be an effective social cue to indicate spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, this research suggests that robotic social cues, usually validated with young participants, might be less optimal signals for older adults.

    Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

  • 9.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Stower, Rebecca
    Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden.
    Sleat, Alex
    Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden.
    Schreiter, Tim
    Örebro University, School of Science and Technology.
    Leite, Iolanda
    Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology. Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Schrooten, Martien G. S.
    Örebro University, School of Behavioural, Social and Legal Sciences.
    Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution. 2023. In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed)
    Abstract [en]

    Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head towards a screen on its left or right. Their task was to respond to targets that appeared either on the screen the robot gazed at or on the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets on the screen the robot gazed at than to targets on the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

  • 10.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Tanqueray, Laetitia
    Lund University, Lund, Sweden.
    Stedtler, Samantha
    Lund University, Lund, Sweden.
    Seaborn, Katie
    Tokyo Institute of Technology, Tokyo, Japan.
    Interdisciplinary Approaches in Human-Agent Interaction (2023). In: HAI '23: Proceedings of the 11th International Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 504-506. Conference paper (Refereed)
    Abstract [en]

    As the field of human-agent interaction (HAI) matures, it becomes essential to acknowledge and address the differing multidisciplinary approaches that support it, in order to foster a common interdisciplinary understanding and thus adopt a broader methodological and epistemological perspective. The field of HAI recognizes that agents are not merely isolated technologies but are embedded within society. Consequently, advancing HAI is not only an engineering problem but one that must be informed by diverse disciplines such as law, philosophy, psychology, design, medicine, and sociology, among others. The goal of this workshop is to provide a space where participants can explore the diverse methodologies that compose HAI, reflect on diverse research practices (with their strengths and limitations), and, in a safe environment, disseminate their research in clear and engaging, rather than technical, terms, to foster interdisciplinary collaborations.

  • 11.
    Schreiter, Tim
    et al.
    Örebro University, School of Science and Technology.
    Almeida, Tiago Rodrigues de
    Örebro University, School of Science and Technology.
    Zhu, Yufei
    Örebro University, School of Science and Technology.
    Gutiérrez Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Kucner, Tomasz P.
    Mobile Robotics Group, Department of Electrical Engineering and Automation, Aalto University, Finland.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Palmieri, Luigi
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized (2022). Conference paper (Refereed)
    Abstract [en]

    Rapid development of social robots stimulates active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction and co-habitation in shared spaces. Modern approaches to this end require high-quality datasets for training and evaluation. However, the majority of available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers and on-board robot sensors in a semantically rich environment. To induce natural behavior in the recorded participants, we use loosely scripted task assignments, which prompt the participants to navigate through the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data is enhanced with semantic information, enabling the development of new algorithms that rely not only on the tracking information but also on contextual cues of the moving agents and the static and dynamic environment.
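    Combining the streams the abstract lists (motion capture, eye gaze, on-board sensors) requires aligning samples recorded at different rates by timestamp. A minimal sketch of nearest-timestamp alignment; the sampling rates and field layout below are assumptions for illustration, not the dataset's actual schema:

    ```python
    # Align two time-stamped streams (e.g. motion capture vs. eye gaze)
    # by pairing each sample with the temporally closest sample in the
    # other stream. Timestamps must be sorted ascending.
    import bisect

    def nearest_sample(timestamps, t):
        """Return the index of the sample whose timestamp is closest to t."""
        i = bisect.bisect_left(timestamps, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
        return min(candidates, key=lambda j: abs(timestamps[j] - t))

    mocap_ts = [0.00, 0.01, 0.02, 0.03]   # assumed 100 Hz motion capture
    gaze_ts = [0.000, 0.016, 0.033]       # assumed ~60 Hz eye tracker

    # Each gaze sample paired with its closest motion-capture timestamp.
    pairs = [(t, mocap_ts[nearest_sample(mocap_ts, t)]) for t in gaze_ts]
    ```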

  • 12.
    Schreiter, Tim
    et al.
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Chadalavada, Ravi T.
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Billing, Erik
    Interaction Lab, University of Skövde, Skövde, Sweden.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology. TU Munich, Germany.
    Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver (2023). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings, IEEE, 2023, p. 293-300. Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

  • 13.
    Schreiter, Tim
    et al.
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Chadalavada, Ravi Teja
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Billing, Erik Alexander
    Interaction Lab, University of Skövde, Sweden.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction (2022). In: SCRITA Workshop Proceedings (arXiv:2208.11090), 2022. Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, human-robot interaction (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are not usually anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic robotic mock driver (ARMoD) simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision. Reported trust scores increased significantly in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one we present here.
