
Örebro University Publications
Martinez Mozos, Oscar (ORCID iD: orcid.org/0000-0002-3908-4921)
Publications (10 of 50)
Morillo-Mendez, L., Schrooten, M. G. S., Loutfi, A. & Martinez Mozos, O. (2024). Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. International Journal of Social Robotics, 16(6), 1069-1081
Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction
2024 (English). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 16, no. 6, p. 1069-1081. Article in journal (Refereed). Published.
Abstract [en]

There is increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with age. Therefore, referential gaze from robots might not be an effective social cue for indicating spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, it suggests that robotic social cues, usually validated with young participants, might be less effective signals for older adults.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Aging, Gaze following, Human-robot interaction, Non-verbal cues, Referential gaze, Social cues
National Category
Gerontology, specialising in Medical and Health Sciences; Robotics and automation
Identifiers
urn:nbn:se:oru:diva-101615 (URN); 10.1007/s12369-022-00926-6 (DOI); 000857896500001 (); 36185773 (PubMedID); 2-s2.0-85138680591 (Scopus ID)
Funder
European Commission, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

RobWell project - Spanish Ministerio de Ciencia, Innovación y Universidades, RTI2018-095599-A-C22

Available from: 2022-10-04. Created: 2022-10-04. Last updated: 2025-02-05. Bibliographically approved.
Morillo-Mendez, L., Stower, R., Sleat, A., Schreiter, T., Leite, I., Martinez Mozos, O. & Schrooten, M. G. S. (2023). Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution. Frontiers in Psychology, 14, Article ID 1215771.
Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution
2023 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed). Published.
Abstract [en]

Mentalizing, the process by which humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze of a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head toward a screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than to targets at the non-gazed screen (i.e., the gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
attention, cueing effect, gaze following, intentional stance, mentalizing, social robots
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-107503 (URN); 10.3389/fpsyg.2023.1215771 (DOI); 001037081700001 (); 37519379 (PubMedID); 2-s2.0-85166030431 (Scopus ID)
Funder
EU, European Research Council, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP); RTI2018-095599-A-C22
Note

Funding Agency:

RobWell project - Spanish Ministerio de Ciencia, Innovación y Universidades

Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2025-02-09. Bibliographically approved.
Hernandez, A. C., Gomez, C., Barber, R. & Martinez Mozos, O. (2023). Exploiting the confusions of semantic places to improve service robotic tasks in indoor environments. Robotics and Autonomous Systems, 159, Article ID 104290.
Exploiting the confusions of semantic places to improve service robotic tasks in indoor environments
2023 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 159, article id 104290. Article in journal (Refereed). Published.
Abstract [en]

A significant challenge for service robots is the semantic understanding of their surrounding areas. Traditional approaches address this problem by segmenting the environment into regions corresponding to full rooms that are assigned labels consistent with human perception, e.g. office or kitchen. However, different areas inside the same room can be used in different ways: could the table and the chair in my kitchen become my office? What is the category of that area now: office or kitchen? To adapt to these circumstances, we propose a new paradigm in which we intentionally relax the labeling produced by place classifiers by allowing confusions and by avoiding the further filtering that leads to clean full-room classifications. Our hypothesis is that confusions can be beneficial to a service robot and can therefore be kept and better exploited. Our approach subdivides the environment into different regions while maintaining the confusions that are due to the scene appearance or to the distribution of objects. In this paper, we present a proof of concept, implemented in simulated and real scenarios, that improves the efficiency of the robotic task of searching for objects by exploiting the confusions in place classification.
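For illustration only, a minimal Python sketch of the kind of object-search ranking the abstract describes: unfiltered place-class probabilities per region are combined with an assumed object-to-place prior, so that confusions still influence the search order. The place list, probabilities, prior, and the search_order helper are hypothetical and not taken from the paper.

# Hypothetical sketch (not the paper's implementation): ranking regions for an
# object-search task using the *unfiltered* place-class probabilities of each
# region, so that confusions (e.g. a kitchen corner that also looks like an
# office) still contribute to the search order.
import numpy as np

# Assumed place categories and per-region classifier outputs (rows sum to 1).
PLACES = ["kitchen", "office", "bathroom"]
region_probs = {
    "region_A": np.array([0.55, 0.40, 0.05]),  # kitchen/office confusion kept
    "region_B": np.array([0.10, 0.85, 0.05]),
    "region_C": np.array([0.80, 0.05, 0.15]),
}

# Assumed prior: how likely each object class is to be found in each place category.
object_place_prior = {"laptop": np.array([0.2, 0.75, 0.05])}

def search_order(obj: str) -> list[tuple[str, float]]:
    """Rank regions by the probability that they contain the object."""
    prior = object_place_prior[obj]
    scores = {r: float(p @ prior) for r, p in region_probs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search_order("laptop"))
# A region with a kept kitchen/office confusion can outrank a "pure" kitchen.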

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Semantic understanding, Service robots
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-106501 (URN); 10.1016/j.robot.2022.104290 (DOI); 001002166500002 (); 2-s2.0-85140915247 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation
Note

Funding agencies:

Spanish Government, RTI2018-095599-B-C21 and RTI2018-095599-A-C22

RoboCity2030-Madrid Robotics Digital Innovation Hub project, Spain, S2018/NMT-4331

Available from: 2023-06-22. Created: 2023-06-22. Last updated: 2025-02-07. Bibliographically approved.
Chang, C., Miyauchi, S., Morooka, K., Kurazume, R. & Martinez Mozos, O. (2023). FusionNet: A Frame Interpolation Network for 4D Heart Models. In: Jonghye Woo, Alessa Hering, Wilson Silva, ... (Ed.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 Workshops: MTSAIL 2023, LEAF 2023, AI4Treat 2023, MMMI 2023, REMIA 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8–12, 2023, Proceedings. Paper presented at 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Vancouver, Canada, October 8-12, 2023 (pp. 35-44). Springer, 14394
FusionNet: A Frame Interpolation Network for 4D Heart Models
2023 (English). In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 Workshops: MTSAIL 2023, LEAF 2023, AI4Treat 2023, MMMI 2023, REMIA 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8–12, 2023, Proceedings / [ed] Jonghye Woo, Alessa Hering, Wilson Silva, ..., Springer, 2023, Vol. 14394, p. 35-44. Conference paper, Published paper (Refereed).
Abstract [en]

Cardiac magnetic resonance (CMR) imaging is widely used to visualise cardiac motion and diagnose heart disease. However, standard CMR imaging requires patients to lie still in a confined space inside a loud machine for 40-60 min, which increases patient discomfort. In addition, shorter scan times decrease the temporal and/or spatial resolution of the captured cardiac motion, and thus the diagnostic accuracy of the procedure. Of these, we focus on reduced temporal resolution and propose a neural network called FusionNet to obtain four-dimensional (4D) cardiac motion with high temporal resolution from CMR images captured in a short period of time. The model estimates intermediate 3D heart shapes from adjacent shapes. An experimental evaluation of the proposed FusionNet model showed that it achieved a Dice coefficient of over 0.897, confirming that it recovers shapes more precisely than existing methods. The code is available at: https://github.com/smiyauchi199/FusionNet.git.
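For reference, a minimal sketch of the Dice coefficient used as the evaluation metric above; the thresholds and array names are assumptions for illustration and are not taken from the FusionNet code.

# Minimal sketch of the Dice coefficient for binary 3D volumes.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 3D masks (values near 1 mean close agreement).
rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))
pred, target = volume > 0.5, volume > 0.45
print(f"Dice: {dice_coefficient(pred, target):.3f}")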

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14394
Keywords
Frame interpolation, 4D heart model, Generative model
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-114140 (URN); 10.1007/978-3-031-47425-5_4 (DOI); 001211854500004 (); 2-s2.0-85185719639 (Scopus ID); 9783031474248 (ISBN); 9783031474255 (ISBN)
Conference
26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Vancouver, Canada, October 8-12, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

This work was supported by JSPS KAKENHI Grant Number 20K19924, the Wallenberg AI, Autonomous Systems and Software Program (WASP), Sweden funded by the Knut and Alice Wallenberg Foundation, Sweden, and used the UK Biobank Resource under application no. 42239.

Available from: 2024-06-12. Created: 2024-06-12. Last updated: 2024-06-12. Bibliographically approved.
Morillo-Mendez, L., Martinez Mozos, O. & Schrooten, M. G. S. (2023). Gaze cueing in older and younger adults is elicited by a social robot seen from the back. Cognitive Systems Research, 82, Article ID 101149.
Gaze cueing in older and younger adults is elicited by a social robot seen from the back
2023 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 82, article id 101149. Article in journal (Refereed). Published.
Abstract [en]

The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which responses to targets on a screen are faster when the targets are preceded by a facial cue looking in the direction of the target (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how they vary as a function of the cue-target timing. Based on the perceived usefulness of social robots for assisting older adults, we asked older and young adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, so its gaze direction was conveyed without the eyes being visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no differences between incongruent trials and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed in terms of differences in processing time.
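As an illustration of the reported measures (not the study's analysis code), a short sketch of how gaze cueing and facilitation effects are typically computed from reaction times, per SOA; the column names and simulated data are assumptions.

# Illustrative sketch: mean-RT differences between congruent, incongruent and
# neutral trials, computed separately for each SOA.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
trials = pd.DataFrame({
    "soa_ms": rng.choice([340, 1000], size=600),
    "condition": rng.choice(["congruent", "incongruent", "neutral"], size=600),
    "rt_ms": rng.normal(480, 60, size=600),
})

mean_rt = trials.groupby(["soa_ms", "condition"])["rt_ms"].mean().unstack()
mean_rt["cueing_effect"] = mean_rt["incongruent"] - mean_rt["congruent"]
mean_rt["facilitation"] = mean_rt["neutral"] - mean_rt["congruent"]  # positive = faster on congruent trials
print(mean_rt.round(1))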

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Gaze following, Gaze cueing effect, Human-robot interaction, Aging
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-108208 (URN); 10.1016/j.cogsys.2023.101149 (DOI); 001054852800001 (); 2-s2.0-85165450249 (Scopus ID)
Funder
EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

Spanish Ministerio de Ciencia, RobWell project, RTI2018-095599-A-C22

Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2025-02-07. Bibliographically approved.
Calatrava Nicolás, F. M. & Martinez Mozos, O. (2023). Light Residual Network for Human Activity Recognition using Wearable Sensor Data. IEEE Sensors Letters, 7(10), Article ID 7005304.
Light Residual Network for Human Activity Recognition using Wearable Sensor Data
2023 (English). In: IEEE Sensors Letters, E-ISSN 2475-1472, Vol. 7, no. 10, article id 7005304. Article in journal (Refereed). Published.
Abstract [en]

This letter addresses the problem of human activity recognition (HAR) for people wearing inertial sensors, using data from the UCI-HAR dataset. We propose a light residual network that obtains an F1-score of 97.6%, outperforming previous works while drastically reducing the number of parameters by a factor of 15, and thus the training complexity. In addition, we propose a new benchmark based on leave-one-person-out cross-validation to standardize and unify future classifications on the same dataset and to increase the reliability and fairness of comparisons.
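A minimal sketch of a leave-one-person-out evaluation loop like the proposed benchmark, assuming scikit-learn; the classifier and the synthetic feature arrays are placeholders, not the light residual network itself.

# Each subject's windows are held out in turn; the model never sees data from
# the person it is tested on.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))            # windowed inertial features (placeholder)
y = rng.integers(0, 6, size=300)          # 6 activity classes, as in UCI-HAR
subjects = rng.integers(0, 10, size=300)  # subject ID of each window

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"LOSO macro F1: {np.mean(scores):.3f} ± {np.std(scores):.3f}")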

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Sensor signal processing, deep learning, human activity recognition (HAR), inertial sensors, residual network
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-109393 (URN); 10.1109/LSENS.2023.3311623 (DOI); 001071738100001 (); 2-s2.0-85171554747 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-10-25. Created: 2023-10-25. Last updated: 2023-10-25. Bibliographically approved.
Rodrigues de Almeida, T. & Martinez Mozos, O. (2023). Likely, Light, and Accurate Context-Free Clusters-based Trajectory Prediction. In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 24-28 Sept. 2023: Proceedings. Paper presented at 26th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2023), Bilbao, Bizkaia, Spain, September 24-28, 2023 (pp. 1269-1276). IEEE
Likely, Light, and Accurate Context-Free Clusters-based Trajectory Prediction
2023 (English). In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 24-28 Sept. 2023: Proceedings, IEEE, 2023, p. 1269-1276. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous systems in the road transportation network require intelligent mechanisms that cope with uncertainty to foresee the future. In this paper, we propose a multi-stage probabilistic approach for trajectory forecasting: transformation of trajectories to displacement space, clustering of displacement time series, generation of trajectory proposals, and ranking of proposals. We introduce a new deep feature clustering method, based on a self-conditioned GAN, which copes better with distribution shifts than traditional methods. Additionally, we propose a novel distance-based ranking of proposals to assign probabilities to the generated trajectories, which is more efficient than an auxiliary neural network while remaining accurate. The overall system surpasses context-free deep generative models on human and road-agent trajectory data while performing similarly to point estimators when comparing the most probable trajectory.
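A hedged sketch of a distance-based ranking step in the spirit of the abstract: distances between trajectory proposals and cluster representatives are turned into probabilities via a softmax over negative distances. The distance measure, temperature, and array shapes are assumptions, not the paper's exact formulation.

# Turn proposal-to-representative distances into a probability per proposal.
import numpy as np

def rank_proposals(proposals: np.ndarray, representatives: np.ndarray,
                   temperature: float = 1.0) -> np.ndarray:
    """proposals, representatives: (K, T, 2) arrays of K trajectories of T steps."""
    # Average Euclidean distance per time step between each proposal and its representative.
    dists = np.linalg.norm(proposals - representatives, axis=-1).mean(axis=-1)
    logits = -dists / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    return probs / probs.sum()

K, T = 5, 12
rng = np.random.default_rng(0)
reps = rng.normal(size=(K, T, 2))
props = reps + rng.normal(scale=0.3, size=(K, T, 2))
print(rank_proposals(props, reps).round(3))  # probabilities summing to 1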

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Intelligent Transportation Systems, ISSN 2153-0009, E-ISSN 2153-0017
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-108198 (URN); 10.1109/ITSC57777.2023.10422479 (DOI); 001178996701042 (); 2-s2.0-85186527058 (Scopus ID); 9798350399479 (ISBN); 9798350399462 (ISBN)
Conference
26th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2023), Bilbao, Bizkaia, Spain, September 24-28, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2024-06-14. Bibliographically approved.
Morillo-Mendez, L., Martinez Mozos, O., Hallström, F. T. & Schrooten, M. G. S. (2023). Robotic Gaze Drives Attention, Even with No Visible Eyes. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023 (pp. 172-177). ACM / Association for Computing Machinery
Robotic Gaze Drives Attention, Even with No Visible Eyes
2023 (English). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM / Association for Computing Machinery, 2023, p. 172-177. Conference paper, Published paper (Refereed).
Abstract [en]

Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from its gaze, NAO was presented frontally in some blocks and from the back in others. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposite to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that robotic gaze is perceived as a social signal, similar to human gaze.

Place, publisher, year, edition, pages
ACM / Association for Computing Machinery, 2023
Keywords
Motion cue, Reflexive attention, Gaze following, Gaze cueing, Social robots
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-108211 (URN); 10.1145/3568294.3580066 (DOI); 001054975700029 (); 2-s2.0-85150446663 (Scopus ID); 9781450399708 (ISBN)
Conference
ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), Stockholm, Sweden, March 13-16, 2023
Funder
EU, Horizon 2020, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

Spanish Ministerio de Ciencia, Innovación y Universidades, RobWell project (No RTI2018-095599-A-C22)

Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2025-02-07. Bibliographically approved.
Almeida, T., Rudenko, A., Schreiter, T., Zhu, Y., Gutiérrez Maestro, E., Morillo-Mendez, L., . . . Lilienthal, A. (2023). THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction. In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW): . Paper presented at IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023 (pp. 2192-2201). IEEE
THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), IEEE, 2023, p. 2192-2201. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous systems that need to operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors that influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue about their future motion, actions, and intentions. In this work, we adapt several popular deep learning models for trajectory prediction to use labels corresponding to the roles of the people. To this end, we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments, we show that the role-conditioned LSTM, Transformer, GAN and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution, and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
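For context, a short sketch of the Top-K ADE/FDE metrics mentioned above: for K sampled trajectories per agent, the error of the best sample is kept. Array shapes and the toy data are assumptions for illustration.

# Top-K average and final displacement errors for one agent.
import numpy as np

def top_k_ade_fde(preds: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth."""
    errors = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) per-step errors
    ade = errors.mean(axis=1).min()                     # best average displacement error
    fde = errors[:, -1].min()                           # best final displacement error
    return float(ade), float(fde)

rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(8, 2)), axis=0)
preds = gt[None] + rng.normal(scale=0.5, size=(20, 8, 2))  # K = 20 samples
print("Top-K ADE/FDE:", top_k_ade_fde(preds, gt))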

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Computer Vision Workshop (ICCVW), ISSN 2473-9936, E-ISSN 2473-9944
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-109508 (URN); 10.1109/ICCVW60793.2023.00234 (DOI); 001156680302028 (); 2-s2.0-85182932549 (Scopus ID); 9798350307450 (ISBN); 9798350307443 (ISBN)
Conference
IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), NT4220; EU, Horizon 2020, 101017274 (DARKO)
Available from: 2023-10-31. Created: 2023-10-31. Last updated: 2025-02-07. Bibliographically approved.
Gutiérrez Maestro, E., Almeida, T. R., Schaffernicht, E. & Martinez Mozos, O. (2023). Wearable-Based Intelligent Emotion Monitoring in Older Adults during Daily Life Activities. Applied Sciences, 13(9), Article ID 5637.
Wearable-Based Intelligent Emotion Monitoring in Older Adults during Daily Life Activities
2023 (English). In: Applied Sciences, E-ISSN 2076-3417, Vol. 13, no. 9, article id 5637. Article in journal (Refereed). Published.
Abstract [en]

We present a system designed to monitor the well-being of older adults during their daily activities. To automatically detect and classify their emotional state, we collect physiological data through a wearable medical sensor. Ground truth data are obtained using a simple smartphone app that provides ecological momentary assessment (EMA), a method for repeatedly sampling people's current experiences in real time in their natural environments. We make the resulting dataset publicly available as a benchmark for future comparisons and methods. We evaluate two feature selection methods to improve classification performance and propose a feature set, based on time-analysis features, that augments and contrasts with domain expert knowledge. The results demonstrate an improvement in classification accuracy when the proposed feature selection methods are used. Furthermore, the feature set we present is better suited for predicting emotional states in a leave-one-day-out experimental setup, as it identifies more patterns.
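A minimal sketch of a leave-one-day-out experimental setup as described above, assuming scikit-learn and pandas; the column names, classifier, and synthetic data are placeholders, not the study's pipeline.

# Each day of recordings is held out in turn, so a model never sees data from
# the day it is tested on.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(400, 8)), columns=[f"feat_{i}" for i in range(8)])
data["day"] = rng.integers(0, 5, size=400)        # recording day of each sample
data["emotion"] = rng.integers(0, 3, size=400)    # e.g. negative / neutral / positive

features = [c for c in data.columns if c.startswith("feat_")]
accs = []
for day in sorted(data["day"].unique()):
    train, test = data[data["day"] != day], data[data["day"] == day]
    clf = RandomForestClassifier(random_state=0).fit(train[features], train["emotion"])
    accs.append(accuracy_score(test["emotion"], clf.predict(test[features])))

print(f"Leave-one-day-out accuracy: {np.mean(accs):.3f}")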

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
activities for daily life (ADL), artificial intelligence, affective computing, machine learning, medical wearable, mental well-being, older adults, smart health
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:oru:diva-106058 (URN); 10.3390/app13095637 (DOI); 000986950700001 (); 2-s2.0-85159278222 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Available from: 2023-05-26. Created: 2023-05-26. Last updated: 2024-01-03. Bibliographically approved.