Örebro University Publications

Morillo-Mendez, Lucas (ORCID: orcid.org/0000-0001-7339-8118)
Publications (10 of 14)
Morillo-Mendez, L., Schrooten, M. G. S., Loutfi, A. & Martinez Mozos, O. (2024). Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. International Journal of Social Robotics, 16(6), 1069-1081
Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction
2024 (English)In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 16, no 6, p. 1069-1081Article in journal (Refereed) Published
Abstract [en]

There is an increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions under the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with age. Therefore, the referential gaze of robots might not be an effective social cue for indicating spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that the referential gaze of the robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, the perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. It also suggests that robotic social cues, usually validated with young participants, might be less effective signals for older adults.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Aging, Gaze following, Human-robot interaction, Non-verbal cues, Referential gaze, Social cues
National Category
Gerontology, specialising in Medical and Health Sciences; Robotics
Identifiers
urn:nbn:se:oru:diva-101615 (URN); 10.1007/s12369-022-00926-6 (DOI); 000857896500001; 36185773 (PubMedID); 2-s2.0-85138680591 (Scopus ID)
Funder
European Commission, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency: RobWell project - Spanish Ministerio de Ciencia, Innovación y Universidades, RTI2018-095599-A-C22

Schreiter, T., Morillo-Mendez, L., Chadalavada, R. T., Rudenko, A., Billing, E., Magnusson, M., . . . Lilienthal, A. J. (2023). Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023 (pp. 293-300). IEEE
Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver
2023 (English)In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings, IEEE, 2023, p. 293-300Conference paper, Published paper (Refereed)
Abstract [en]

Robots are increasingly used in environments shared with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution in which a humanoid robot, acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD), communicates the intent of its host robot. We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention when required or gives instructions for collaborating on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech, and a multimodal mode that additionally includes robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, with head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior: participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE RO-MAN, ISSN 1944-9445, E-ISSN 1944-9437
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-110873 (URN); 10.1109/RO-MAN57019.2023.10309629 (DOI); 001108678600042; 9798350336702 (ISBN); 9798350336719 (ISBN)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023
Funder
EU, Horizon 2020, 101017274 (DARKO)
Morillo-Mendez, L., Stower, R., Sleat, A., Schreiter, T., Leite, I., Martinez Mozos, O. & Schrooten, M. G. S. (2023). Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution. Frontiers in Psychology, 14, Article ID 1215771.
Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution
2023 (English)In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771Article in journal (Refereed) Published
Abstract [en]

Mentalizing, whereby humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze of a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task in which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head toward a screen at its left or right. Their task was to respond to targets that appeared either on the screen the robot gazed at or on the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets on the gazed-at screen than to targets on the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.
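As context for the effect reported above: the gaze cueing effect is typically computed as the mean response-time difference between incongruent and congruent trials, separately per condition. A minimal sketch of that computation, assuming hypothetical trial-level data and column names (this is not the authors' analysis code):

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with columns
# 'condition' ('baseline' or 'occluded'), 'congruency'
# ('congruent' or 'incongruent'), and 'rt' (response time in ms).
trials = pd.read_csv("gaze_cueing_trials.csv")

# Mean response time per condition and congruency.
mean_rt = trials.groupby(["condition", "congruency"])["rt"].mean().unstack()

# Gaze cueing effect: incongruent minus congruent mean RT, per condition.
# A positive value means responses were faster on the gazed-at screen.
gce = mean_rt["incongruent"] - mean_rt["congruent"]
print(gce)
```

Under this scheme, a smaller value of `gce` in the occluded condition than at baseline would correspond to the reduced cueing effect the abstract describes.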

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
attention, cueing effect, gaze following, intentional stance, mentalizing, social robots
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-107503 (URN); 10.3389/fpsyg.2023.1215771 (DOI); 001037081700001; 37519379 (PubMedID); 2-s2.0-85166030431 (Scopus ID)
Funder
EU, European Research Council, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP), RTI2018-095599-A-C22
Note

Funding agency: RobWell project - Spanish Ministerio de Ciencia, Innovación y Universidades

Morillo-Mendez, L., Martinez Mozos, O. & Schrooten, M. G. S. (2023). Gaze cueing in older and younger adults is elicited by a social robot seen from the back. Cognitive Systems Research, 82, Article ID 101149.
Gaze cueing in older and younger adults is elicited by a social robot seen from the back
2023 (English)In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 82, article id 101149Article in journal (Refereed) Published
Abstract [en]

The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which responses to targets on a screen are faster when the targets are preceded by a facial cue looking in their direction (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how they vary as a function of cue-target timing. Based on the perceived usefulness of social robots for assisting older adults, we asked older and young adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, so its eye gaze was not visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster responses on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no difference between incongruent and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed in terms of differences in processing time.
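The facilitation and inhibition components discussed above are defined against the neutral (static head) trials: facilitation is neutral minus congruent RT, and inhibition is incongruent minus neutral RT, computed per SOA and age group. A hedged sketch, again with hypothetical column names rather than the study's actual analysis code:

```python
import pandas as pd

# Hypothetical columns: 'age_group' ('older'/'young'), 'soa' (340 or 1000),
# 'trial_type' ('congruent', 'incongruent', or 'neutral'), 'rt' (ms).
trials = pd.read_csv("nao_cueing_trials.csv")

rt = trials.groupby(["age_group", "soa", "trial_type"])["rt"].mean().unstack()

# Facilitation: how much faster congruent trials are than neutral ones.
facilitation = rt["neutral"] - rt["congruent"]
# Inhibition: how much slower incongruent trials are than neutral ones.
inhibition = rt["incongruent"] - rt["neutral"]

print(pd.DataFrame({"facilitation": facilitation, "inhibition": inhibition}))
```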

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Gaze following, Gaze cueing effect, Human-robot interaction, Aging
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-108208 (URN); 10.1016/j.cogsys.2023.101149 (DOI); 001054852800001; 2-s2.0-85165450249 (Scopus ID)
Funder
EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency: Spanish Ministerio de Ciencia, RobWell project RTI2018-095599-A-C22

Morillo-Mendez, L., Tanqueray, L., Stedtler, S. & Seaborn, K. (2023). Interdisciplinary Approaches in Human-Agent Interaction. In: HAI '23: Proceedings of the 11th International Conference on Human-Agent Interaction: . Paper presented at 11th International Conference on Human-Agent Interaction (HAI 2023), Gothenburg, Sweden, December 4-7, 2023 (pp. 504-506). Association for Computing Machinery (ACM)
Interdisciplinary Approaches in Human-Agent Interaction
2023 (English)In: HAI '23: Proceedings of the 11th International Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 504-506Conference paper, Published paper (Refereed)
Abstract [en]

As the field of human-agent interaction (HAI) matures, it becomes essential to acknowledge and address the differing multidisciplinary approaches that support it, in order to build a common interdisciplinary understanding and thereby adopt a broader methodological and epistemological perspective. The field of HAI recognizes that agents are not merely isolated technologies but are embedded within society. Consequently, advancing HAI is not only an engineering problem but one that must be informed by diverse disciplines such as law, philosophy, psychology, design, medicine, and sociology, among others. The goal of this workshop is to provide a space where participants can explore the diverse methodologies that compose HAI, reflect on diverse research practices, with their strengths and limitations, and disseminate their research in clear and engaging terms rather than technical ones, in a safe environment that fosters interdisciplinary collaborations.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Interdisciplinarity, Human-agent interaction, Human-robot interaction, Human-computer interaction
National Category
Computer Vision and Robotics (Autonomous Systems); Human Computer Interaction
Identifiers
urn:nbn:se:oru:diva-111721 (URN); 10.1145/3623809.3623982 (DOI); 001148034200096; 2-s2.0-85180127700 (Scopus ID); 9798400708244 (ISBN)
Conference
11th International Conference on Human-Agent Interaction (HAI 2023), Gothenburg, Sweden, December 4-7, 2023
Morillo-Mendez, L., Martinez Mozos, O., Hallström, F. T. & Schrooten, M. G. S. (2023). Robotic Gaze Drives Attention, Even with No Visible Eyes. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023 (pp. 172-177). ACM / Association for Computing Machinery
Robotic Gaze Drives Attention, Even with No Visible Eyes
2023 (English)In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM / Association for Computing Machinery , 2023, p. 172-177Conference paper, Published paper (Refereed)
Abstract [en]

Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned toward the left or right. To isolate the direction of rotation from the gaze, NAO was presented frontally and from the back across blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposite to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that robotic gaze is perceived as a social signal, similar to human gaze.
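The design above hinges on one geometric fact stated in the abstract: for the same gazed-at side, the perceived direction of head rotation reverses between the frontal and back views, which is what decouples low-level motion from gaze. A small illustrative sketch of how trial congruency could be coded under this design (hypothetical field names, not the study's code):

```python
def gazed_at_side(rotation: str, view: str) -> str:
    """Side of the scene the robot's gaze indicates.
    rotation: perceived direction of the head rotation ('left' or 'right');
    view: 'frontal' (eyes visible) or 'back' (eyes not visible).
    Seen from the back, the same perceived rotation direction implies
    the opposite gaze direction."""
    if view == "frontal":
        return rotation
    return "right" if rotation == "left" else "left"

def is_congruent(rotation: str, view: str, target_side: str) -> bool:
    # A trial is congruent when the target appears on the gazed-at side.
    return gazed_at_side(rotation, view) == target_side

print(is_congruent("left", "back", "right"))  # True: gaze points right
```

With this coding, faster responses on congruent trials in both views would indicate that gaze direction, not the raw rotation direction, is what orients attention, matching the reported result.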

Place, publisher, year, edition, pages
ACM / Association for Computing Machinery, 2023
Keywords
Motion cue, Reflexive attention, Gaze following, Gaze cueing, Social robots
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-108211 (URN); 10.1145/3568294.3580066 (DOI); 001054975700029; 2-s2.0-85150446663 (Scopus ID); 9781450399708 (ISBN)
Conference
ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023
Funder
EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency: Spanish Ministerio de Ciencia, Innovación y Universidades, RobWell project (No RTI2018-095599-A-C22)

Morillo-Mendez, L. (2023). SOCIAL ROBOTS / SOCIAL COGNITION: Robots' Gaze Effects in Older and Younger Adults. (Doctoral dissertation). Örebro: Örebro University
SOCIAL ROBOTS / SOCIAL COGNITION: Robots' Gaze Effects in Older and Younger Adults
2023 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This dissertation presents advances in social human-robot interaction (HRI) and human social cognition through a series of experiments in which humans face a robot. A predominant approach to studying the human factor in HRI consists of placing the human in the role of a user to explore potential factors affecting the acceptance or usability of a robot. This work takes a broader perspective and investigates if social robots are perceived as social agents, irrespective of their final role or usefulness in a particular interaction. To do so, it adopts methodologies and theories from cognitive and experimental psychology, such as the use of behavioral paradigms involving gaze following and a framework of more than twenty years of research employing gaze to explore social cognition. The communicative role of gaze in robots is used to explore their essential effectiveness and as a tool to learn how humans perceive them. Studying how certain social robots are perceived through the lens of research in social cognition is the central contribution of this dissertation.

This thesis presents empirical research and reviews the multidisciplinary literature on (robotic) gaze following, aging, and their relation to social cognition. Papers I and II investigate the decline in gaze following associated with aging, which is linked to a broader decline in social cognition, in scenarios with robots as gazing agents. In addition to the participants' self-reported perception of the robots, their reaction times were measured to reflect their internal cognitive processes. Overall, this decline seems to persist when the gazing agent is a robot, highlighting our depiction of robots as social agents. Paper IV explores the theories behind this decline using a robot, emphasizing how these theories extend to non-human agents. This work also investigates motion as a competing cue to gaze in social robots (Paper III), and mentalizing in robotic gaze following (Paper V).

Through experiments with participants and within the scope of HRI and social cognition studies, this thesis presents a joint framework highlighting that robots are depicted as social agents. This finding emphasizes the importance of fundamental insights from social cognition when designing robot behaviors. Additionally, it promotes and supports the use of robots as valuable tools to explore the robustness of current theories in cognitive psychology to expand the field in parallel.

Place, publisher, year, edition, pages
Örebro: Örebro University, 2023. p. 87
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 98
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-108225 (URN); 9789175295213 (ISBN)
Public defence
2023-10-13, Örebro universitet, Forumhuset, Hörsal F, Fakultetsgatan 1, Örebro, 09:00 (English)
Almeida, T., Rudenko, A., Schreiter, T., Zhu, Y., Gutiérrez Maestro, E., Morillo-Mendez, L., . . . Lilienthal, A. (2023). THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction. In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW): . Paper presented at IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023 (pp. 2192-2201). IEEE
THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction
2023 (English)In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), IEEE, 2023, p. 2192-2201Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous systems that operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors that influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue to their future motion, actions, and intentions. In this work we adapt several popular deep learning models for trajectory prediction to use labels corresponding to people's roles. To this end we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments we show that role-conditioned LSTM, Transformer, GAN, and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution, and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
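For reference, the Top-K ADE/FDE metrics mentioned above are standard trajectory-prediction measures: Average Displacement Error (ADE) is the mean Euclidean distance between a predicted and the ground-truth trajectory across all time steps, Final Displacement Error (FDE) is that distance at the last step, and the Top-K variants keep the best of K sampled predictions. A minimal NumPy sketch (illustrative, not the paper's evaluation code):

```python
import numpy as np

def topk_ade_fde(pred: np.ndarray, gt: np.ndarray):
    """pred: (K, T, 2) array of K sampled trajectories over T steps;
    gt: (T, 2) ground-truth trajectory. Returns Top-K ADE and FDE."""
    # Euclidean error at every time step, for each of the K samples.
    err = np.linalg.norm(pred - gt[None], axis=-1)  # shape (K, T)
    ade = err.mean(axis=1)   # average displacement per sample
    fde = err[:, -1]         # final-step displacement per sample
    # Top-K variant: report the best (minimum-error) sample.
    return ade.min(), fde.min()

# Example: 5 noisy guesses around a straight-line ground truth.
gt = np.stack([np.linspace(0.0, 4.0, 8), np.zeros(8)], axis=-1)
pred = gt[None] + np.random.normal(0.0, 0.2, size=(5, 8, 2))
print(topk_ade_fde(pred, gt))
```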

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Computer Vision Workshop (ICCVW), ISSN 2473-9936, E-ISSN 2473-9944
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-109508 (URN); 10.1109/ICCVW60793.2023.00234 (DOI); 001156680302028; 2-s2.0-85182932549 (Scopus ID); 9798350307450 (ISBN); 9798350307443 (ISBN)
Conference
IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), NT4220; EU, Horizon 2020, 101017274 (DARKO)
Schreiter, T., Morillo-Mendez, L., Chadalavada, R. T., Rudenko, A., Billing, E. A. & Lilienthal, A. J. (2022). The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction. In: SCRITA Workshop Proceedings (arXiv:2208.11090): . Paper presented at 31st IEEE International Conference on Robot & Human Interactive Communication, Naples, Italy, August 29 - September 2, 2022.
The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction
2022 (English)In: SCRITA Workshop Proceedings (arXiv:2208.11090), 2022Conference paper, Published paper (Refereed)
Abstract [en]

Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, human-robot interaction (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are not usually anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic robotic mock driver (ARMoD) simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision. Reported trust scores increased significantly in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one presented here.

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-102773 (URN); 10.48550/arXiv.2208.14637 (DOI)
Conference
31st IEEE International Conference on Robot & Human Interactive Communication, Naples, Italy, August 29 - September 2, 2022
Projects
DARKO
Funder
EU, Horizon 2020, 101017274, 754285
Schreiter, T., Almeida, T. R., Zhu, Y., Gutiérrez Maestro, E., Morillo-Mendez, L., Rudenko, A., . . . Lilienthal, A. (2022). The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized. In: : . Paper presented at 31st IEEE International Conference on Robot & Human Interactive Communication, Naples, Italy, August 29 - September 2, 2022.
The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized
2022 (English)Conference paper, Published paper (Refereed)
Abstract [en]

Rapid development of social robots stimulates active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction, and co-habitation in shared spaces. Modern approaches to this end require high-quality datasets for training and evaluation. However, the majority of available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers, and on-board robot sensors in a semantically rich environment. To induce natural behavior in the recorded participants, we use loosely scripted task assignments, which lead the participants to navigate through the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard: the realistic and accurate data are enhanced with semantic information, enabling the development of new algorithms that rely not only on tracking information but also on contextual cues about the moving agents and the static and dynamic environment.

Keywords
Dataset, Human Motion Prediction, Eye Tracking
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-102772 (URN); 10.48550/arXiv.2208.14925 (DOI)
Conference
31st IEEE International Conference on Robot & Human Interactive Communication, Naples, Italy, August 29 - September 2, 2022
Projects
DARKO
Funder
EU, Horizon 2020, 101017274; Knut and Alice Wallenberg Foundation