Martinez Mozos, Oscar (ORCID iD: orcid.org/0000-0002-3908-4921)
Publications (10 of 49)
Morillo-Mendez, L., Stower, R., Sleat, A., Schreiter, T., Leite, I., Martinez Mozos, O. & Schrooten, M. G. S. (2023). Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution. Frontiers in Psychology, 14, Article ID 1215771.
2023 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed), Published.
Abstract [en]

Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. At baseline, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than to targets at the non-gazed screen (i.e., the gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.
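For readers unfamiliar with the measure, the following is a minimal sketch (not the study's analysis code or data) of how a gaze cueing effect like the one described above is typically quantified: the mean response-time difference between incongruent and congruent trials, computed separately per condition. Column names and example numbers are hypothetical.

```python
# Minimal sketch: gaze cueing effect = mean RT(incongruent) - mean RT(congruent),
# computed per condition. All column names and numbers are hypothetical.
import pandas as pd

def cueing_effect(trials: pd.DataFrame) -> pd.Series:
    """Mean RT difference (incongruent minus congruent) per condition."""
    mean_rt = trials.groupby(["condition", "congruency"])["rt_ms"].mean().unstack()
    return mean_rt["incongruent"] - mean_rt["congruent"]

# Made-up example: a positive value indicates gaze following.
trials = pd.DataFrame({
    "condition":  ["baseline"] * 4 + ["occluded"] * 4,
    "congruency": ["congruent", "congruent", "incongruent", "incongruent"] * 2,
    "rt_ms":      [412, 398, 455, 447, 420, 430, 441, 452],
})
print(cueing_effect(trials))  # here: baseline > occluded > 0
```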

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
attention, cueing effect, gaze following, intentional stance, mentalizing, social robots
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-107503 (URN), 10.3389/fpsyg.2023.1215771 (DOI), 001037081700001 (), 37519379 (PubMedID), 2-s2.0-85166030431 (Scopus ID)
Funder
EU, European Research Council, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP); RTI2018-095599-A-C22
Note

Funding Agency:

RobWell project - Spanish Ministerio de Ciencia, Innovación y Universidades

Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2023-09-20. Bibliographically approved.
Hernandez, A. C., Gomez, C., Barber, R. & Martinez Mozos, O. (2023). Exploiting the confusions of semantic places to improve service robotic tasks in indoor environments. Robotics and Autonomous Systems, 159, Article ID 104290.
2023 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 159, article id 104290. Article in journal (Refereed), Published.
Abstract [en]

A significant challenge in service robots is the semantic understanding of their surrounding areas. Traditional approaches addressed this problem by segmenting the environment into regions corresponding to full rooms that are assigned labels consistent with human perception, e.g. office or kitchen. However, different areas inside the same room can be used in different ways: could the table and the chair in my kitchen become my office? What is the category of that area now: office or kitchen? To adapt to these circumstances, we propose a new paradigm in which we intentionally relax the resulting labeling of place classifiers by allowing confusions and by avoiding further filtering that leads to clean full-room classifications. Our hypothesis is that confusions can be beneficial to a service robot and, therefore, they can be kept and better exploited. Our approach creates a subdivision of the environment into different regions by maintaining the confusions that are due to the scene appearance or to the distribution of objects. In this paper, we present a proof of concept, implemented in simulated and real scenarios, that improves efficiency in the robotic task of searching for objects by exploiting the confusions in place classifications.
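The following is a minimal sketch, under our own assumptions rather than the authors' implementation, of the core idea: keeping each region's full label distribution (its "confusions") instead of a single filtered room label, and ranking regions for an object-search task by the probability of the place category associated with the target object. All regions, probabilities, and object-to-place mappings are made up for illustration.

```python
# Sketch of the confusion-keeping idea (hypothetical data, not the paper's code):
# keep the full place-label distribution per region and rank regions for object
# search by the probability of the place category linked to the target object.
import numpy as np

PLACES = ["kitchen", "office", "bathroom"]
OBJECT_TO_PLACE = {"mug": "kitchen", "laptop": "office"}  # assumed prior knowledge

# Per-region classifier outputs, deliberately left unfiltered (each row sums to 1).
region_probs = {
    "region_A": np.array([0.55, 0.40, 0.05]),   # a kitchen table also used as a desk
    "region_B": np.array([0.10, 0.85, 0.05]),
    "region_C": np.array([0.70, 0.10, 0.20]),
}

def search_order(target_object: str) -> list[str]:
    """Regions sorted by how likely they are to match the target object's place type."""
    place_idx = PLACES.index(OBJECT_TO_PLACE[target_object])
    return sorted(region_probs, key=lambda r: region_probs[r][place_idx], reverse=True)

print(search_order("laptop"))  # region_A is searched before region_C despite its "kitchen" label
```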

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Semantic understanding, Service robots
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-106501 (URN), 10.1016/j.robot.2022.104290 (DOI), 001002166500002 (), 2-s2.0-85140915247 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation
Note

Funding agencies:

Spanish Government RTI2018-095599-B-C21, RTI2018-095599-A-C22

RoboCity2030-Madrid Robotics Digital Innovation Hub project, Spain S2018/NMT-4331

Available from: 2023-06-22. Created: 2023-06-22. Last updated: 2023-06-22. Bibliographically approved.
Morillo-Mendez, L., Martinez Mozos, O. & Schrooten, M. G. S. (2023). Gaze cueing in older and younger adults is elicited by a social robot seen from the back. Cognitive Systems Research, 82, Article ID 101149.
2023 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 82, article id 101149. Article in journal (Refereed), Published.
Abstract [en]

The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which the time it takes to respond to targets on a screen is faster when they are preceded by a facial cue looking in the direction of the target (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how these vary as a function of the cue-target timing. Based on the perceived usefulness of social robots to assist older adults, we asked older and young adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, so that its eyes were not visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster responses on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no differences between incongruent and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed with regard to differences in processing time.

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Gaze following, Gaze cueing effect, Human-robot interaction, Aging
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-108208 (URN), 10.1016/j.cogsys.2023.101149 (DOI), 001054852800001 (), 2-s2.0-85165450249 (Scopus ID)
Funder
EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

Spanish Ministerio de Ciencia, RobWell project RTI2018-095599-A-C22

Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2023-09-20. Bibliographically approved.
Calatrava Nicolás, F. M. & Martinez Mozos, O. (2023). Light Residual Network for Human Activity Recognition using Wearable Sensor Data. IEEE Sensors Letters, 7(10), Article ID 7005304.
2023 (English). In: IEEE Sensors Letters, E-ISSN 2475-1472, Vol. 7, no 10, article id 7005304. Article in journal (Refereed), Published.
Abstract [en]

This letter addresses the problem of human activity recognition (HAR) of people wearing inertial sensors using data from the UCI-HAR dataset. We propose a light residual network that obtains an F1-score of 97.6%, outperforming previous works while drastically reducing the number of parameters by a factor of 15, and thus the training complexity. In addition, we propose a new benchmark based on leave-one-(person)-out cross-validation to standardize and unify future classifications on the same dataset, and to increase reliability and fairness in comparisons.
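As a hedged illustration, the sketch below shows a generic lightweight 1-D residual block for windows of inertial data and the leave-one-person-out split idea behind the proposed benchmark; it is not the authors' architecture, and all layer sizes and shapes are assumptions.

```python
# Hedged sketch, not the authors' network: a generic light 1-D residual block for
# inertial windows (channels x time), plus leave-one-person-out splits for benchmarking.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels: int, kernel: int = 5):
        super().__init__()
        pad = kernel // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):                  # x: (batch, channels, time)
        return self.act(x + self.body(x))  # identity shortcut keeps the block light

def lopo_splits(subject_ids):
    """Leave-one-person-out: yield (held-out subject, train indices, test indices)."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```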

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Sensor signal processing, deep learning, human activity recognition (HAR), inertial sensors, residual network
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-109393 (URN), 10.1109/LSENS.2023.3311623 (DOI), 001071738100001 (), 2-s2.0-85171554747 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-10-25. Created: 2023-10-25. Last updated: 2023-10-25. Bibliographically approved.
Rodrigues de Almeida, T. & Martinez Mozos, O. (2023). Likely, Light, and Accurate Context-Free Clusters-based Trajectory Prediction. Paper presented at 26th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2023), Bilbao, Bizkaia, Spain, September 24-28, 2023.
2023 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous systems in the road transportation network require intelligent mechanisms that cope with uncertainty to foresee the future. In this paper, we propose a multi-stage probabilistic approach for trajectory forecasting: trajectory transformation to displacement space, clustering of displacement time series, trajectory proposals, and ranking of proposals. We introduce a new deep feature clustering method, based on a self-conditioned GAN, which copes better with distribution shifts than traditional methods. Additionally, we propose novel distance-based ranking proposals to assign probabilities to the generated trajectories, which are more efficient than an auxiliary neural network while remaining accurate. The overall system surpasses context-free deep generative models on human and road-agent trajectory data while performing similarly to point estimators when comparing the most probable trajectory.
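The sketch below illustrates, under our own simplifying assumptions, the flavor of such a pipeline: converting an observed trajectory to displacement space and assigning probabilities to cluster-based proposals with a simple distance-based ranking instead of an auxiliary network. It is not the paper's method; cluster centroids are assumed to be precomputed and to share the observed horizon length.

```python
# Illustrative sketch only (not the paper's method): displacement-space transform
# plus a simple distance-based ranking that turns cluster proposals into probabilities.
import numpy as np

def to_displacements(traj: np.ndarray) -> np.ndarray:
    """(T, 2) absolute positions -> (T-1, 2) step displacements."""
    return np.diff(traj, axis=0)

def rank_proposals(observed: np.ndarray, centroids: np.ndarray, temperature: float = 1.0):
    """Score each precomputed displacement-space centroid (K, T-1, 2) by its distance
    to the observed displacement history and softmax the negative distances, so that
    closer centroids receive higher proposal probabilities."""
    obs = to_displacements(observed).ravel()
    dists = np.linalg.norm(centroids.reshape(len(centroids), -1) - obs, axis=1)
    logits = -dists / temperature
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()
```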

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-108198 (URN)
Conference
26th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2023), Bilbao, Bizkaia, Spain, September 24-28, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2023-09-13. Bibliographically approved.
Morillo-Mendez, L., Martinez Mozos, O., Hallström, F. T. & Schrooten, M. G. S. (2023). Robotic Gaze Drives Attention, Even with No Visible Eyes. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023 (pp. 172-177). ACM / Association for Computing Machinery
2023 (English). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM / Association for Computing Machinery, 2023, p. 172-177. Conference paper, Published paper (Refereed).
Abstract [en]

Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from its gaze, NAO was presented frontally and from the back across blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposite to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that robotic gaze is perceived as a social signal, similar to human gaze.

Place, publisher, year, edition, pages
ACM / Association for Computing Machinery, 2023
Keywords
Motion cue, Reflexive attention, Gaze following, Gaze cueing, Social robots
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-108211 (URN), 10.1145/3568294.3580066 (DOI), 001054975700029 (), 2-s2.0-85150446663 (Scopus ID), 9781450399708 (ISBN)
Conference
ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023
Funder
EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

Spanish Ministerio de Ciencia, Innovación y Universidades, RobWell project (No RTI2018-095599-A-C22)

Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2024-02-27. Bibliographically approved.
Almeida, T., Rudenko, A., Schreiter, T., Zhu, Y., Gutiérrez Maestro, E., Morillo-Mendez, L., . . . Lilienthal, A. (2023). THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. Paper presented at IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023 (pp. 2200-2209).
2023 (English). In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, p. 2200-2209. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous systems that need to operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors that influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue about their future motion, actions, and intentions. In this work we adapt several popular deep learning models for trajectory prediction with labels corresponding to the roles of the people. To this end we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments we show that the role-conditioned LSTM, Transformer, GAN, and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution, and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
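As a hedged sketch of what "role-conditioned" can mean in practice (not the paper's exact models), the snippet below conditions a trajectory-prediction LSTM on a semantic role label by embedding the role and concatenating it to every observed step before encoding; all dimensions are illustrative.

```python
# Hedged sketch, not the paper's model: a trajectory LSTM conditioned on a role label
# by concatenating a learned role embedding to each observed (x, y) step.
import torch
import torch.nn as nn

class RoleConditionedLSTM(nn.Module):
    def __init__(self, num_roles: int, role_dim: int = 8, hidden: int = 64, horizon: int = 12):
        super().__init__()
        self.role_emb = nn.Embedding(num_roles, role_dim)
        self.encoder = nn.LSTM(input_size=2 + role_dim, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, horizon * 2)  # predict future (x, y) offsets
        self.horizon = horizon

    def forward(self, obs_xy: torch.Tensor, role: torch.Tensor) -> torch.Tensor:
        # obs_xy: (batch, T_obs, 2); role: (batch,) integer role labels
        role_vec = self.role_emb(role).unsqueeze(1).expand(-1, obs_xy.size(1), -1)
        _, (h, _) = self.encoder(torch.cat([obs_xy, role_vec], dim=-1))
        return self.decoder(h[-1]).view(-1, self.horizon, 2)
```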

National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-109508 (URN)
Conference
IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), NT4220; EU, Horizon 2020, 101017274 (DARKO)
Available from: 2023-10-31. Created: 2023-10-31. Last updated: 2023-11-01. Bibliographically approved.
Gutiérrez Maestro, E., Almeida, T. R., Schaffernicht, E. & Martinez Mozos, O. (2023). Wearable-Based Intelligent Emotion Monitoring in Older Adults during Daily Life Activities. Applied Sciences, 13(9), Article ID 5637.
2023 (English). In: Applied Sciences, E-ISSN 2076-3417, Vol. 13, no 9, article id 5637. Article in journal (Refereed), Published.
Abstract [en]

We present a system designed to monitor the well-being of older adults during their daily activities. To automatically detect and classify their emotional state, we collect physiological data through a wearable medical sensor. Ground truth data are obtained using a simple smartphone app that provides ecological momentary assessment (EMA), a method for repeatedly sampling people's current experiences in real time in their natural environments. We make the resulting dataset publicly available as a benchmark for future comparisons and methods. We evaluate two feature selection methods to improve classification performance and propose a feature set that augments and contrasts domain expert knowledge based on time-analysis features. The results demonstrate an improvement in classification accuracy when using the proposed feature selection methods. Furthermore, the feature set we present is better suited for predicting emotional states in a leave-one-day-out experimental setup, as it identifies more patterns.
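A minimal sketch of the leave-one-day-out setup mentioned above, built from generic scikit-learn components; the actual features, feature selection methods, and classifier used in the paper may differ, and the arrays here are placeholders.

```python
# Minimal sketch of a leave-one-day-out evaluation with feature selection, using
# generic scikit-learn components; features, selector, and classifier are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: physiological features per EMA-labelled window, y: emotional state labels,
# days: recording day of each window (all arrays are synthetic stand-ins).
X, y = np.random.rand(200, 40), np.random.randint(0, 3, 200)
days = np.repeat(np.arange(10), 20)

model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=15), SVC())
scores = cross_val_score(model, X, y, groups=days, cv=LeaveOneGroupOut())
print(scores.mean())  # average accuracy across held-out days
```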

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
activities for daily life (ADL), artificial intelligence, affective computing, machine learning, medical wearable, mental well-being, older adults, smart health
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:oru:diva-106058 (URN), 10.3390/app13095637 (DOI), 000986950700001 (), 2-s2.0-85159278222 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Available from: 2023-05-26. Created: 2023-05-26. Last updated: 2024-01-03. Bibliographically approved.
Barber, R., Ortiz, F. J., Garrido, S., Calatrava Nicolás, F., Mora, A., Prados, A., . . . Martinez Mozos, O. (2022). A Multirobot System in an Assisted Home Environment to Support the Elderly in Their Daily Lives. Sensors, 22(20), Article ID 7983.
2022 (English). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 20, article id 7983. Article in journal (Refereed), Published.
Abstract [en]

The increasing isolation of the elderly, both in their own homes and in care homes, has made caring for elderly people who live alone an urgent priority. This article presents a proposed design for a heterogeneous multirobot system consisting of (i) a small mobile robot that monitors the well-being of elderly people who live alone and suggests activities to keep them positive and active, and (ii) a domestic mobile manipulating robot that helps to perform household tasks. The entire system is integrated in an ambient assisted living (AAL) environment, which also includes a set of low-cost automation sensors, a medical monitoring bracelet, and an Android application that proposes emotional coaching activities to the person who lives alone. The heterogeneous system uses ROS, IoT technologies such as Node-RED, and the Home Assistant platform. Both platforms, together with the home automation system, have been tested over a long period of time and integrated in a real test environment, with good results. Semantic segmentation of the environment used by the mobile manipulator for navigation and for movement in the manipulation area facilitated the tasks of the subsequent planners. Results about the interactions of users with the applications are presented, and the use of artificial intelligence to predict mood is discussed. The experiments support the conclusion that the assistance robot correctly proposes activities, such as calling a relative or exercising, during the day according to the user's detected emotional state, making this an innovative proposal aimed at empowering the elderly so that they can be autonomous in their homes and have a good quality of life.
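As a loose illustration of one integration path implied above (an assumption on our part, not the paper's implementation), the snippet below publishes a wearable reading over MQTT so that a Node-RED flow or a Home Assistant MQTT sensor could consume it; the broker hostname, topic, and payload fields are hypothetical.

```python
# Assumption-laden sketch: publishing a wearable reading over MQTT for consumption by
# Node-RED or Home Assistant. Broker hostname, topic, and payload fields are hypothetical.
import json
import paho.mqtt.publish as publish

reading = {"heart_rate_bpm": 72, "skin_temp_c": 33.1, "timestamp": "2022-10-01T10:15:00"}

# One-shot publish; a subscribed flow could then feed mood prediction and activity suggestions.
publish.single("aal/bracelet/vitals", json.dumps(reading), qos=1, hostname="homeassistant.local")
```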

Place, publisher, year, edition, pages
MDPI, 2022
Keywords
Ambient Assisted Living (AAL), IoT, Node-RED, ROS, aging, assistive robotics, heterogeneous systems, interoperability, robotic manipulation, smart home, social robots, well-being
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-101971 (URN), 10.3390/s22207983 (DOI), 000873536500001 (), 36298332 (PubMedID), 2-s2.0-85140825541 (Scopus ID)
Note

Funding agency:

Fundación Séneca 19895/GERM/15

Available from: 2022-10-28. Created: 2022-10-28. Last updated: 2022-11-09. Bibliographically approved.
Morillo-Mendez, L., Schrooten, M. G. S., Loutfi, A. & Martinez Mozos, O. (2022). Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. International Journal of Social Robotics, 1-13
2022 (English). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, p. 1-13. Article in journal (Refereed), Epub ahead of print.
Abstract [en]

There is an increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions under the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, the referential gaze from robots might not be an effective social cue to indicate spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the idea that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, it suggests that robotic social cues, usually validated with young participants, might be less optimal signals for older adults.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Aging, Gaze following, Human-robot interaction, Non-verbal cues, Referential gaze, Social cues
National Category
Gerontology, specialising in Medical and Health Sciences; Robotics
Identifiers
urn:nbn:se:oru:diva-101615 (URN), 10.1007/s12369-022-00926-6 (DOI), 000857896500001 (), 36185773 (PubMedID), 2-s2.0-85138680591 (Scopus ID)
Funder
European Commission, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

RobWell project - Spanish Ministerio de Ciencia, Innovación y Universidades RTI2018-095599-A-C22

Available from: 2022-10-04. Created: 2022-10-04. Last updated: 2023-12-08. Bibliographically approved.