Örebro University Publications (oru.se)
1 - 49 of 49
  • 1.
    Almeida, Tiago Rodrigues de
    et al.
    Örebro University, School of Science and Technology.
    Gutiérrez Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Context-free Self-Conditioned GAN for Trajectory Forecasting (2022). In: 21st IEEE International Conference on Machine Learning and Applications, ICMLA 2022: Proceedings / [ed] Wani, MA; Kantardzic, M; Palade, V; Neagu, D; Yang, L; Chan, KY, IEEE, 2022, p. 1218-1223. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a context-free unsupervised approach based on a self-conditioned GAN to learn different modes from 2D trajectories. Our intuition is that each mode indicates a different behavioral movement pattern in the discriminator's feature space. We apply this approach to the problem of trajectory forecasting and present three different training settings based on the self-conditioned GAN, which produce better forecasters. We test our method on two datasets: human motion and road agents. Experimental results show that our approach outperforms previous context-free methods on the least representative supervised labels while performing well on the remaining labels. In addition, our approach outperforms them globally on human motion, while performing well on road agents.

  • 2.
    Almeida, Tiago
    et al.
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Schreiter, Tim
    Örebro University, School of Science and Technology.
    Zhu, Yufei
    Örebro University, School of Science and Technology.
    Gutiérrez Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Kucner, Tomasz P.
    Mobile Robotics Group, Department of Electrical Engineering and Automation, Aalto University, Finland; FCAI, Finnish Center for Artificial Intelligence, Finland.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Palmieri, Luigi
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction (2023). In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, p. 2200-2209. Conference paper (Refereed)
    Abstract [en]

    Autonomous systems that need to operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors that influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue about their future motion, actions, and intentions. In this work we adapt several popular deep learning models for trajectory prediction with labels corresponding to the roles of the people. To this end we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments we show that the role-conditioned LSTM, Transformer, GAN and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution, and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
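
The Top-K ADE/FDE metrics used in this evaluation are standard in trajectory prediction. As a rough illustrative sketch (not the paper's code): ADE averages the Euclidean displacement over all predicted timesteps, FDE takes only the final timestep, and Top-K keeps the best of K sampled trajectories:

```python
import math

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean distance over all timesteps."""
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(gt)

def fde(pred, gt):
    """Final Displacement Error: Euclidean distance at the last timestep."""
    return math.dist(pred[-1], gt[-1])

def top_k(metric, preds, gt):
    """Best (lowest) error among K sampled trajectories."""
    return min(metric(p, gt) for p in preds)

# Toy example: one ground-truth and two sampled 3-step 2D trajectories.
gt = [(0, 0), (1, 0), (2, 0)]
samples = [[(0, 0), (1, 1), (2, 2)], [(0, 0), (1, 0), (2, 1)]]
print(top_k(ade, samples, gt))  # best-of-2 ADE
print(top_k(fde, samples, gt))  # best-of-2 FDE
```

With multimodal predictors such as GANs and VAEs, Top-K rewards having at least one sample close to the ground truth, which is why it is paired with log-likelihood to judge the whole predicted distribution.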

  • 3.
    Almeida, Tiago
    et al.
    Örebro University, School of Science and Technology. IEETA, DEM, University of Aveiro, Aveiro, Portugal.
    Santos, Vitor
    IEETA, DEM, University of Aveiro, Aveiro, Portugal.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Lourenco, Bernardo
    IEETA, DEM, University of Aveiro, Aveiro, Portugal.
    Comparative Analysis of Deep Neural Networks for the Detection and Decoding of Data Matrix Landmarks in Cluttered Indoor Environments (2021). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 103, no 1, article id 13. Article in journal (Refereed)
    Abstract [en]

    Data Matrix patterns imprinted as passive visual landmarks have been shown to be a valid solution for the self-localization of Automated Guided Vehicles (AGVs) on shop floors. However, existing Data Matrix decoding applications take a long time to detect and segment the markers in the input image. Therefore, this paper proposes a pipeline where the detector is based on a real-time Deep Learning network and the decoder is a conventional method, i.e. the implementation in libdmtx. To do so, several types of Deep Neural Networks (DNNs) for object detection were studied, trained, compared and assessed. The architectures range from region proposals (Faster R-CNN) to single-shot methods (SSD and YOLO). This study focused on performance and processing time to select the best Deep Learning (DL) model to carry out the detection of the visual markers. Additionally, a specific dataset was created to evaluate those networks. This test set includes demanding situations, such as high illumination gradients in the same scene and Data Matrix markers positioned in skewed planes. The proposed approach outperformed the best-known and most used Data Matrix decoder available in libraries like libdmtx.

  • 4.
    Barber, Ramón
    et al.
    Robotics Lab, Universidad Carlos III de Madrid, Avda. de la Universidad, Leganés, Spain.
    Ortiz, Francisco J.
    Department of Automation, Electrical Engineering and Electronics Technology, Universidad Politécnica de Cartagena, Cartagena, Spain.
    Garrido, Santiago
    Robotics Lab, Universidad Carlos III de Madrid, Avda. de la Universidad, Leganés, Spain.
    Calatrava Nicolás, Francisco
    Örebro University, School of Science and Technology.
    Mora, Alicia
    Robotics Lab, Universidad Carlos III de Madrid, Avda. de la Universidad, Leganés, Spain.
    Prados, Adrián
    Robotics Lab, Universidad Carlos III de Madrid, Avda. de la Universidad, Leganés, Spain.
    Vera-Repullo, José Alfonso
    Department of Automation, Electrical Engineering and Electronics Technology, Universidad Politécnica de Cartagena, Cartagena, Spain.
    Roca-González, Joaquín
    Department of Automation, Electrical Engineering and Electronics Technology, Universidad Politécnica de Cartagena, Cartagena, Spain.
    Méndez, Inmaculada
    Department of Evolutionary and Educational Psychology, Faculty of Psychology, University of Murcia, Murcia, Spain.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    A Multirobot System in an Assisted Home Environment to Support the Elderly in Their Daily Lives (2022). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 20, article id 7983. Article in journal (Refereed)
    Abstract [en]

    The increasing isolation of the elderly, both in their own homes and in care homes, has made caring for elderly people who live alone an urgent priority. This article presents a proposed design for a heterogeneous multirobot system consisting of (i) a small mobile robot to monitor the well-being of elderly people who live alone and suggest activities to keep them positive and active, and (ii) a domestic mobile manipulating robot that helps to perform household tasks. The entire system is integrated in an ambient assisted living (AAL) environment, which also includes a set of low-cost home automation sensors, a medical monitoring bracelet and an Android application that proposes emotional coaching activities to the person who lives alone. The heterogeneous system uses ROS, IoT technologies such as Node-RED, and the Home Assistant platform. Both platforms, together with the home automation system, have been tested over a long period of time and integrated in a real test environment, with good results. The semantic segmentation of the navigation and planning environment in the mobile manipulator, for navigation and movement in the manipulation area, facilitated the tasks of the later planners. Results about the interactions of users with the applications are presented, and the use of artificial intelligence to predict mood is discussed. The experiments support the conclusion that the assistance robot correctly proposes activities, such as calling a relative or exercising, during the day, according to the user's detected emotional state, making this an innovative proposal aimed at empowering the elderly so that they can be autonomous in their homes and have a good quality of life.

  • 5.
    Bautista-Salinas, Daniel
    et al.
    Technical University of Cartagena, Cartagena, Spain.
    Roca Gonzalez, Joaquin
    Technical University of Cartagena, Cartagena, Spain.
    Mendez, Inmaculada
    University of Murcia, Murcia, Spain.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Monitoring and Prediction of Mood in Elderly People During Daily Life Activities (2019). In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2019, p. 6930-6934. Conference paper (Refereed)
    Abstract [en]

    We present an intelligent wearable system to monitor and predict mood states of elderly people during their daily life activities. Our system is composed of a wristband to record different physiological activities together with a mobile app for ecological momentary assessment (EMA). Machine learning is used to train a classifier to automatically predict different mood states based on the smart band only. Our approach shows promising results on mood accuracy and provides results comparable with the state of the art in the specific detection of happiness and activeness.

  • 6.
    Bautista-Salinas, Daniel
    et al.
    Technical University of Cartagena, Cartagena, Spain.
    Szentagotai-Tatar, Aurora
    Babes-Bolyai University, Cluj-Napoca, Romania.
    Matu, Silviu A.
    Babes-Bolyai University, Cluj-Napoca, Romania.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Mental health monitoring of elderly people during daily life activities (2018). In: First sheld-on conference meeting: Proceedings Book / [ed] Jake Kaner; Petre Lameski; Signe Tomsone; Michael Burnard; Francisco Melero, 2018, p. 89-91. Conference paper (Refereed)
  • 7.
    Calatrava Nicolás, Francisco M.
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Light Residual Network for Human Activity Recognition using Wearable Sensor Data (2023). In: IEEE Sensors Letters, E-ISSN 2475-1472, Vol. 7, no 10, article id 7005304. Article in journal (Refereed)
    Abstract [en]

    This letter addresses the problem of human activity recognition (HAR) of people wearing inertial sensors using data from the UCI-HAR dataset. We propose a light residual network, which obtains an F1-Score of 97.6% that outperforms previous works, while drastically reducing the number of parameters by a factor of 15, and thus the training complexity. In addition, we propose a new benchmark based on leave-one (person)-out cross-validation to standardize and unify future classifications on the same dataset, and to increase reliability and fairness in the comparisons.
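
The leave-one-(person)-out cross-validation proposed in this benchmark holds out all data from one subject per fold, so a model is never tested on a person it has seen during training. A minimal sketch of generating such splits (the tuple layout is illustrative, not the UCI-HAR format):

```python
def leave_one_person_out(samples):
    """Yield (held_out_subject, train, test) splits; each subject is
    held out exactly once.

    `samples` is a list of (subject_id, features, label) tuples; the
    field layout is an illustrative assumption, not the dataset schema.
    """
    subjects = sorted({s[0] for s in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Toy dataset with three subjects.
data = [(1, [0.1], "walk"), (1, [0.2], "sit"),
        (2, [0.3], "walk"), (3, [0.4], "stand")]
for subject, train, test in leave_one_person_out(data):
    print(subject, len(train), len(test))
```

Averaging a classifier's score over these folds estimates how well it generalizes to unseen people, which is the property such a benchmark standardizes.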

  • 8.
    Calatrava-Nicolás, Francisco M.
    et al.
    ETSII (Escuela Técnica Superior de Ingeniería Industrial), Technical University of Cartagena, Cartagena, Spain.
    Gutiérrez-Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Bautista-Salinas, Daniel
    The Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK.
    Ortiz, Francisco J.
    ETSII (Escuela Técnica Superior de Ingeniería Industrial), Technical University of Cartagena, Cartagena, Spain.
    González, Joaquín Roca
    ETSII (Escuela Técnica Superior de Ingeniería Industrial), Technical University of Cartagena, Cartagena, Spain.
    Vera-Repullo, José Alfonso
    ETSII (Escuela Técnica Superior de Ingeniería Industrial), Technical University of Cartagena, Cartagena, Spain.
    Jiménez-Buendía, Manuel
    ETSII (Escuela Técnica Superior de Ingeniería Industrial), Technical University of Cartagena, Cartagena, Spain.
    Méndez, Inmaculada
    Department of Evolutionary and Educational Psychology, Faculty of Psychology, Campus Regional Excellence Mare Nostrum, University of Murcia, Murcia, Spain.
    Ruiz-Esteban, Cecilia
    Department of Evolutionary and Educational Psychology, Faculty of Psychology, Campus Regional Excellence Mare Nostrum, University of Murcia, Murcia, Spain.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Robotic-Based Well-Being Monitoring and Coaching System for the Elderly in Their Daily Activities (2021). In: Sensors, E-ISSN 1424-8220, Vol. 21, no 20, article id 6865. Article in journal (Refereed)
    Abstract [en]

    The increasingly ageing population and the tendency to live alone have led science and engineering researchers to search for health care solutions. During the COVID-19 pandemic, the elderly have been seriously affected, in addition to suffering from isolation and its associated psychological consequences. This paper provides an overview of the RobWell (Robotic-based Well-Being Monitoring and Coaching System for the Elderly in their Daily Activities) system, a system focused on the field of artificial intelligence for mood prediction and coaching. It presents a general overview of the initially proposed system as well as preliminary results related to the home automation subsystem, autonomous robot navigation and mood estimation through machine learning, prior to the final system integration, which will be discussed in future works. The main goal is to improve the users' mental well-being during their daily household activities. The system is composed of ambient intelligence with intelligent sensors, actuators and a robotic platform that interacts with the user. A test smart home was set up in which the sensors, actuators and robotic platform were integrated and tested. For artificial intelligence applied to mood prediction, we used machine learning to classify several physiological signals into different moods. In robotics, it was concluded that the ROS autonomous navigation stack and its autodocking algorithm were not reliable enough for this task, while the robot's autonomy was sufficient. Semantic navigation, artificial intelligence and computer vision alternatives are being sought.

  • 9. Coppola, Claudio
    et al.
    Martinez Mozos, Oscar
    Bellotto, Nicola
    Applying a 3D qualitative trajectory calculus to human action recognition using depth cameras (2015). Conference paper (Refereed)
    Abstract [en]

    The life span of ordinary people is increasing steadily and many developed countries are facing the big challenge of dealing with an ageing population at greater risk of impairments and cognitive disorders, which hinder their quality of life. Monitoring human activities of daily living (ADLs) is important in order to identify potential health problems and apply corrective strategies as soon as possible. Towards this long term goal, the research here presented is a first step to monitor ADLs using 3D sensors in an Ambient Assisted Living (AAL) environment. In particular, the work here presented adopts a new 3D Qualitative Trajectory Calculus (QTC3D) to represent human actions that belong to such activities, designing and implementing a set of computational tools (i.e. Hidden Markov Models) to learn and classify them from standard datasets. Preliminary results show the good performance of our system and its potential application to a large number of scenarios, including mobile robots for AAL.

  • 10.
    Crespo Herrero, Jonathan
    et al.
    System Engineering and Automation Department, University Carlos III of Madrid, Madrid, Spain.
    Barber Castaño, Ramón I.
    System Engineering and Automation Department, University Carlos III of Madrid, Madrid, Spain.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    An Inferring Semantic System Based on Relational Models for Mobile Robotics (2015). In: 2015 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) / [ed] Valente, A.; Marques, L.; Morais, R.; Almeida, L., IEEE, 2015, p. 83-88. Conference paper (Refereed)
    Abstract [en]

    Nowadays, many robots have the ability to move around their environment and need a navigation system. In the semantic navigation paradigm, the ability to reason and to infer new knowledge is required. In this work a relational database is presented and proposed. This database makes it possible to manage and query semantic information from any environment in which a mobile robot performs semantic navigation, providing advantages such as conceptual simplicity and a fast method for switching between semantic environments without the need to redefine rules. In this paper, a target at the topological level is obtained from a semantic-level destination.

  • 11.
    Crespo, Jonathan
    et al.
    Universidad Carlos III de Madrid, Madrid, Spain.
    Barber, Ramón
    Universidad Carlos III de Madrid, Madrid, Spain.
    Martinez Mozos, Oscar
    Universidad Politécnica de Cartagena, Cartagena, Spain.
    Relational Model for Robotic Semantic Navigation in Indoor Environments (2017). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 86, no 3-4, p. 617-639. Article in journal (Refereed)
    Abstract [en]

    The emergence of service robots in our environment raises the need for systems that help robots manage information from human environments. A semantic model of the environment provides the robot with a representation closer to human perception, and it improves its human-robot communication system. In addition, a semantic model improves the capabilities of the robot to carry out high-level navigation tasks. This paper presents a semantic relational model that includes conceptual and physical representations of objects and places, utilities of the objects, and semantic relations among objects and places. This model allows the robot to manage the environment and to make queries about it in order to plan navigation tasks. In addition, the model has several advantages, such as conceptual simplicity and flexibility of adaptation to different environments. To test the performance of the proposed semantic model, the output of the semantic inference system is associated with the geometric and topological information of objects and places in order to perform the navigation tasks.
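
A semantic relational model of this kind can be sketched with a small relational database; the schema, table names and example rows below are illustrative assumptions, not the paper's actual design:

```python
import sqlite3

# Minimal illustrative schema: objects belong to places, and each object
# category carries a utility. All names here are assumptions for the sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE place  (id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE object (id INTEGER PRIMARY KEY, category TEXT,
                     utility TEXT, place_id INTEGER REFERENCES place(id));
""")
con.executemany("INSERT INTO place VALUES (?, ?)",
                [(1, "kitchen"), (2, "office")])
con.executemany("INSERT INTO object VALUES (?, ?, ?, ?)",
                [(1, "fridge", "store food", 1),
                 (2, "desk", "work surface", 2),
                 (3, "mug", "drink", 2)])

# Semantic query: where should the robot go to find something to drink?
rows = con.execute("""
    SELECT p.category FROM object o JOIN place p ON o.place_id = p.id
    WHERE o.utility = 'drink'
""").fetchall()
print(rows)  # [('office',)]
```

A query over object utilities then yields a navigation goal at the place level, which the model associates with the topological and geometric information the planner needs.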

  • 12.
    Crespo, Jonathan
    et al.
    University of Carlos III of Madrid, Madrid, Spain.
    Barber, Ramón
    University of Carlos III of Madrid, Madrid, Spain.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Murcia, Spain.
    Bessler, Daniel
    Institute of Artificial Intelligent, University of Bremen, Bremen, Germany.
    Beetz, Michael
    Institute of Artificial Intelligent, University of Bremen, Bremen, Germany.
    Reasoning systems for semantic navigation in mobile robots (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, AA; Okamura, A; Bicchi, A; Stachniss, C; Song, DZ; Lee, DH; Chaumette, F; Ding, H; Li, JS; Wen, J; Roberts, J; Masamune, K; Chong, NY; Amato, N; Tsagwarakis, N; Rocco, P; Asfour, T; Chung, WK; Yasuyoshi, Y; Sun, Y; Maciekeski, T; Althoefer, K; AndradeCetto, J; Chung, WK; Demircan, E; Dias, J; Fraisse, P; Gross, R; Harada, H; Hasegawa, Y; Hayashibe, M; Kiguchi, K; Kim, K; Kroeger, T; Li, Y; Ma, S; Mochiyama, H; Monje, CA; Rekleitis, I; Roberts, R; Stulp, F; Tsai, CHD; Zollo, L, IEEE, 2018, p. 5654-5659. Conference paper (Refereed)
    Abstract [en]

    Semantic navigation is the navigation paradigm in which environmental semantic concepts and their relationships are taken into account to plan the route of a mobile robot. This paradigm facilitates the interaction with humans and the understanding of human environments in terms of navigation goals and tasks. At the high level, a semantic navigation system requires two main components: a semantic representation of the environment and a reasoning system. This paper focuses on developing a model of the environment using semantic concepts, and presents two solutions for the semantic navigation paradigm. Both systems implement an ontological model: whilst the first one uses a relational database, the second one is based on KnowRob. Both systems have been integrated in a semantic navigator. We compare the two systems at the qualitative and quantitative levels, and present an implementation on a mobile robot as a proof of concept.

  • 13.
    Crespo, Jonathan
    et al.
    Higher Technical School of Computer Engineering, University Rey Juan Carlos, Móstoles, Spain.
    Carlos Castillo, Jose
    Department of Systems Engineering and Automation, University Carlos III of Madrid, Leganés, Spain.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Barber, Ramon
    Department of Systems Engineering and Automation, University Carlos III of Madrid, Leganés, Spain.
    Semantic Information for Robot Navigation: A Survey (2020). In: Applied Sciences: APPS, E-ISSN 1454-5101, Vol. 10, no 2, article id 497. Article in journal (Refereed)
    Abstract [en]

    There is a growing trend in robotics towards implementing behavioural mechanisms based on human psychology, such as the processes associated with thinking. Semantic knowledge has opened new paths in robot navigation, allowing a higher level of abstraction in the representation of information. In contrast with the early years, when navigation relied on geometric navigators that interpreted the environment as a series of accessible areas, or later developments that led to the use of graph theory, semantic information has moved robot navigation one step further. This work presents a survey of the concepts, methodologies and techniques that allow including semantic information in robot navigation systems. The techniques involved have to deal with a range of tasks, from modelling the environment and building a semantic map to including methods to learn new concepts and representations of the knowledge acquired, in many cases through interaction with users. As understanding the environment is essential to achieve high-level navigation, this paper reviews techniques for the acquisition of semantic information, paying attention to the two main groups: human-assisted and autonomous techniques. Some state-of-the-art semantic knowledge representations are also studied, including ontologies, cognitive maps and semantic maps. All of this leads to a recent concept, semantic navigation, which integrates the previous topics to generate high-level navigation systems able to deal with complex real-world situations.

  • 14.
    Gomez, Clara
    et al.
    System Engineering and Automation Department, Carlos III University of Madrid, Madrid, Spain.
    Hernandez, Alejandra
    System Engineering and Automation Department, Carlos III University of Madrid, Madrid, Spain.
    Barber, Ramón
    System Engineering and Automation Department, Carlos III University of Madrid, Madrid, Spain.
    Moreno, Luis
    System Engineering and Automation Department, Carlos III University of Madrid, Madrid, Spain.
    Martinez Mozos, Oscar
    Department of Electronics, Computer Technology and Projects, Technical University of Cartagena, Cartagena, Spain.
    Localization of Mobile Robots Incorporating Scene Information in a Hierarchical Model (2019). In: 2019 Third IEEE International Conference on Robotic Computing (IRC), IEEE, 2019, p. 429-430. Conference paper (Refereed)
    Abstract [en]

    The success of mobile robots, especially those operating in human environments, relies on the ability to understand human structures. The aim of this work is to develop a localization framework that considers different scenes and a hierarchical model of the environment. A probabilistic model for recognizing scenes, including the objects in the environment, is implemented. An efficient hierarchical model formed by different topological representations is used to localize the robot. The localization result is improved by semantic scene information. The experiments carried out give preliminary results of our framework working in real environments. They uphold the usefulness of integrating understanding of the environment into the localization process.

  • 15.
    Gutiérrez Maestro, Eduardo
    et al.
    Örebro University, School of Science and Technology.
    Almeida, Tiago Rodrigues de
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Wearable-Based Intelligent Emotion Monitoring in Older Adults during Daily Life Activities (2023). In: Applied Sciences, E-ISSN 2076-3417, Vol. 13, no 9, article id 5637. Article in journal (Refereed)
    Abstract [en]

    We present a system designed to monitor the well-being of older adults during their daily activities. To automatically detect and classify their emotional state, we collect physiological data through a wearable medical sensor. Ground truth data are obtained using a simple smartphone app that provides ecological momentary assessment (EMA), a method for repeatedly sampling people's current experiences in real time in their natural environments. We make the resulting dataset publicly available as a benchmark for future comparisons and methods. We evaluate two feature selection methods to improve classification performance and propose a feature set that augments and contrasts domain expert knowledge based on time-analysis features. The results demonstrate an improvement in classification accuracy when using the proposed feature selection methods. Furthermore, the feature set we present is better suited for predicting emotional states in a leave-one-day-out experimental setup, as it identifies more patterns.

  • 16.
    Halodová, Lucie
    et al.
    Artificial Intelligence Center, Czech Technical University, Prague, Czech Republic.
    Dvořáková, Eliška
    Artificial Intelligence Center, Czech Technical University, Prague, Czech Republic.
    Majer, Filip
    Artificial Intelligence Center, Czech Technical University, Prague, Czech Republic.
    Vintr, Tomáš
    Artificial Intelligence Center, Czech Technical University, Prague, Czech Republic.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Dayoub, Feras
    Australian Centre for Robotic Vision, QUT, Australia.
    Krajník, Tomáš
    Artificial Intelligence Center, Czech Technical University, Prague, Czech Republic.
    Predictive and adaptive maps for long-term visual navigation in changing environments (2019). In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2019, p. 7033-7039. Conference paper (Refereed)
    Abstract [en]

    In this paper, we compare different map management techniques for long-term visual navigation in changing environments. In this scenario, the navigation system needs to continuously update and refine its feature map in order to adapt to the environment appearance change. To achieve reliable long-term navigation, the map management techniques have to (i) select features useful for the current navigation task, (ii) remove features that are obsolete, (iii) and add new features from the current camera view to the map. We propose several map management strategies and evaluate their performance with regard to the robot localisation accuracy in long-term teach-and-repeat navigation. Our experiments, performed over three months, indicate that strategies which model cyclic changes of the environment appearance and predict which features are going to be visible at a particular time and location, outperform strategies which do not explicitly model the temporal evolution of the changes.

  • 17.
    Hernandez, Alejandra C.
    et al.
    Robotics Lab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Spain.
    Gomez, Clara
    Robotics Lab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Spain.
    Barber, Ramon
    Robotics Lab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Spain.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Exploiting the confusions of semantic places to improve service robotic tasks in indoor environments (2023). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 159, article id 104290. Article in journal (Refereed)
    Abstract [en]

    A significant challenge in service robots is the semantic understanding of their surrounding areas. Traditional approaches addressed this problem by segmenting the environment into regions corresponding to full rooms that are assigned labels consistent with human perception, e.g. office or kitchen. However, different areas inside the same room can be used in different ways: could the table and the chair in my kitchen become my office? What is the category of that area now: office or kitchen? To adapt to these circumstances we propose a new paradigm in which we intentionally relax the resulting labelling of place classifiers by allowing confusions, and by avoiding the further filtering that leads to clean full-room classifications. Our hypothesis is that confusions can be beneficial to a service robot and, therefore, they can be kept and better exploited. Our approach creates a subdivision of the environment into different regions by maintaining the confusions which are due to the scene appearance or to the distribution of objects. In this paper, we present a proof of concept, implemented in simulated and real scenarios, that improves efficiency in the robotic task of searching for objects by exploiting the confusions in place classifications.

  • 18.
    Hernández, Alejandra C.
    et al.
    System Engineering and Automation Dept., Carlos III University of Madrid, Madrid, Spain.
    Gomez, Clara
    System Engineering and Automation Dept., Carlos III University of Madrid, Madrid, Spain.
    Barber, Ramón
    System Engineering and Automation Dept., Carlos III University of Madrid, Madrid, Spain.
    Martinez Mozos, Oscar
    Dept. of Electronics, Computer Tech. and Projects, Technical University of Cartagena, Cartagena, Spain.
    Object-based probabilistic place recognition for indoor human environments2018In: 2018 INTERNATIONAL CONFERENCE ON CONTROL, ARTIFICIAL INTELLIGENCE, ROBOTICS & OPTIMIZATION (ICCAIRO), IEEE, 2018, p. 177-182Conference paper (Refereed)
    Abstract [en]

    Giving a robot autonomy and independence in a human environment implies not only moving safely but also being able to understand the environment where it is located. Scene understanding is one of the most challenging tasks in robotics because the design of a scene, its objects, and their arrangement vary considerably. In this paper we present a Probabilistic Place Recognition Model applied to mobile robots and able to work in indoor human environments. A model of uncertainties is proposed based on information about the objects in the scene and the relationships between them. This information can influence the final decision about the probability of the presence of a robot in a place. The experimental results, obtained in common indoor human environments, demonstrate the ability of the model to predict place categories considering the information about the objects and the relations between them. Using more information in the prediction process makes the model more descriptive, scalable, and better adapted for human-robot and robot-environment interaction.
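    The object-conditioned inference described above can be sketched as a naive-Bayes-style update (illustrative Python only; the place categories, object likelihoods, and independence assumption are hypothetical, not the model proposed in the paper):

    ```python
    LIKELIHOOD = {  # P(object visible | place category), assumed values
        "kitchen": {"fridge": 0.8, "microwave": 0.7, "monitor": 0.05},
        "office":  {"fridge": 0.05, "microwave": 0.1, "monitor": 0.9},
    }

    def place_posterior(observed_objects, prior=None):
        """Update P(place) after observing objects, assuming independence."""
        places = list(LIKELIHOOD)
        post = {p: (prior or {}).get(p, 1.0 / len(places)) for p in places}
        for obj in observed_objects:
            for p in places:
                # objects absent from the table get a small default likelihood
                post[p] *= LIKELIHOOD[p].get(obj, 0.01)
        z = sum(post.values())
        return {p: v / z for p, v in post.items()}
    ```

    Observing a fridge and a microwave shifts the belief strongly towards kitchen; the relational information between objects used in the paper would enter as additional factors in such an update.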

  • 19.
    Hernández, Luis G.
    et al.
    Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Zapopan, Mexico.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Ferrández, José M.
    Technical University of Cartagena, Cartagena, Spain.
    Antelis, Javier M.
    Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Zapopan, Mexico.
    EEG-Based Detection of Braking Intention Under Different Car Driving Conditions2018In: Frontiers in Neuroinformatics, E-ISSN 1662-5196, Vol. 12, article id 29Article in journal (Refereed)
    Abstract [en]

    The anticipatory recognition of braking is essential to prevent traffic accidents. For instance, driving assistance systems can be useful to respond properly to emergency braking situations. Moreover, the response time to emergency braking situations can be affected and even increased by different cognitive states of the driver caused by stress, fatigue, and extra workload. This work investigates the detection of emergency braking from the driver's electroencephalographic (EEG) signals that precede the brake pedal actuation. Bioelectrical signals were recorded while participants were driving in a car simulator and avoiding potential collisions by performing emergency braking. In addition, participants were subjected to stress, workload, and fatigue. EEG signals were classified using support vector machines (SVM) and convolutional neural networks (CNN) in order to discriminate between braking intention and normal driving. Results showed significant recognition of emergency braking intention, which was on average 71.1% for SVM and 71.8% for CNN. In addition, the classification accuracy for the best participant was 80.1% and 88.1% for SVM and CNN, respectively. These results show the feasibility of incorporating recognizable bioelectrical responses of the driver into advanced driver-assistance systems to carry out early detection of emergency braking situations, which could be useful to reduce car accidents.

  • 20.
    Jung, Hojung
    et al.
    Graduate Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Graduate Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Iwashita, Yumi
    Graduate Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Two-dimensional local ternary patterns using synchronized images for outdoor place categorization2014In: 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014, p. 5726-5730Conference paper (Refereed)
    Abstract [en]

    We present a novel approach for outdoor place categorization using synchronized texture and depth images obtained using a laser scanner. Categorizing outdoor places according to type is useful for autonomous driving or service robots, which work adaptively according to the surrounding conditions. However, place categorization is not straightforward due to the wide variety of environments and sensor performance limitations. In the present paper, we introduce a two-dimensional local ternary pattern (2D-LTP) descriptor using a pair of synchronized texture and depth images. The proposed 2D-LTP describes the local co-occurrence of a synchronized and complementary image pair with ternary patterns. In the present study, we construct histograms of 2D-LTPs as the feature of an outdoor place and apply singular value decomposition (SVD) to deal with their high dimensionality. The novel descriptor, i.e., the 2D-LTP, exhibits a higher categorization performance than conventional image descriptors in outdoor place experiments.
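    The core of a ternary descriptor over a synchronized image pair can be sketched as follows (illustrative Python over a 4-neighbourhood; the neighbourhood size, tolerance, and interleaving scheme are simplifying assumptions, not the exact 2D-LTP definition):

    ```python
    def ternary_code(center, neighbor, tol):
        """Three-valued comparison used in local ternary patterns:
        0 = darker, 1 = similar (within tol), 2 = brighter."""
        if neighbor > center + tol:
            return 2
        if neighbor < center - tol:
            return 0
        return 1

    def two_modal_ltp(texture, depth, r, c, tol=5):
        """Toy 2D-LTP-style code for pixel (r, c): interleave ternary codes
        from the synchronized texture and depth images over a 4-neighbourhood."""
        code = 0
        for dr, dc in [(-1, 0), (0, 1), (1, 0), (0, -1)]:
            for img in (texture, depth):
                code = code * 3 + ternary_code(img[r][c], img[r + dr][c + dc], tol)
        return code  # one bin index of the per-place histogram
    ```

    Histogramming these codes over an image yields a high-dimensional feature, which motivates the dimensionality reduction (SVD) mentioned in the abstract.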

  • 21.
    Jung, Hojung
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Iwashita, Yumi
    Graduate Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Graduate Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Indoor Place Categorization Using Co-occurrences of LBPs in Gray and Depth Images from RGB-D Sensors2014In: 2014 Fifth International Conference on Emerging Security Technologies, IEEE, 2014, p. 40-45Conference paper (Refereed)
    Abstract [en]

    Indoor place categorization is an important capability for service robots working and interacting in human environments. This paper presents a new place categorization method which uses information about the spatial correlation between the different image modalities provided by RGB-D sensors. Our approach applies co-occurrence histograms of local binary patterns (LBPs) from gray and depth images that correspond to the same indoor scene. The resulting histograms are used as feature vectors in a supervised classifier. Our experimental results show the effectiveness of our method to categorize indoor places using RGB-D cameras.
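    The idea of spatially aligned co-occurrence between modalities can be sketched as follows (illustrative Python; the exact LBP variant and binning used in the paper are not reproduced):

    ```python
    def lbp8(img, r, c):
        """Standard 8-neighbour local binary pattern code (0..255)."""
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        return sum((img[r + dr][c + dc] >= img[r][c]) << i
                   for i, (dr, dc) in enumerate(nbrs))

    def cooccurrence_histogram(gray, depth):
        """Joint 256x256 histogram of (gray LBP, depth LBP) codes computed
        at the same pixel, capturing the correlation between modalities."""
        h = [[0] * 256 for _ in range(256)]
        for r in range(1, len(gray) - 1):
            for c in range(1, len(gray[0]) - 1):
                h[lbp8(gray, r, c)][lbp8(depth, r, c)] += 1
        return h  # flattened, this becomes the supervised classifier's input
    ```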

  • 22.
    Jung, Hojung
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Iwashita, Yumi
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Local N-ary Patterns: a local multi-modal descriptor for place categorization2016In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 30, no 6, p. 402-415Article in journal (Refereed)
    Abstract [en]

    This paper presents an effective method for integrating multiple modalities such as depth, color, and reflectance for place categorization. To achieve better performance with integrated multi-modalities, we introduce a novel descriptor, local N-ary patterns (LNP), which performs robust discrimination for place categorization. In this paper, the LNP descriptor is applied to a combination of two modalities, i.e. depth and reflectance, provided by a laser range finder. However, the LNP descriptor can easily be extended to a larger number of modalities. The proposed LNP describes relationships between the multi-modal values of pixels and their neighboring pixels. Since we consider the multi-modal relationship, our proposed method clearly demonstrates more effective classification results than using individual modalities. We carried out experiments with the Kyushu University Indoor Semantic Place Dataset, which is publicly available. This dataset is composed of five indoor categories: corridors, kitchens, laboratories, study rooms, and offices. We confirmed that our proposed method outperforms previous uni-modal descriptors.

  • 23. Jung, Hojung
    et al.
    Martinez Mozos, Oscar
    Iwashita, Yumi
    Kurazume, Ryo
    The Outdoor LiDAR Dataset for Semantic Place Labeling2015In: The Abstracts of the international conference on advanced mechatronics: toward evolutionary fusion of IT and mechatronics: ICAM, Tokyo, 2015, p. 154-155Conference paper (Refereed)
    Abstract [en]

    We present two outdoor LiDAR datasets for semantic place labeling, captured using two different LiDAR sensors. Recognizing outdoor places according to semantic categories is useful for a mobile service robot, which works adaptively according to the surrounding conditions. However, place recognition is not straightforward due to the wide variety of environments and sensor performance limitations. In this paper, we present two outdoor LiDAR datasets captured by two different sensors: a SICK and a FARO LiDAR. The datasets consist of four semantic place categories: forest, residential area, parking lot, and urban area. The datasets are useful for benchmarking vision-based semantic place labeling in outdoor environments.

  • 24.
    Jung, Hojung
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Oto, Yuki
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Technical University of Cartagena (UPCT), Cartagena, Spain.
    Iwashita, Yumi
    Jet Propulsion Laboratory, Pasadena, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Multi-modal panoramic 3D outdoor datasets for place categorization2016In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2016, p. 4545-4550Conference paper (Refereed)
    Abstract [en]

    We present two multi-modal panoramic 3D outdoor (MPO) datasets for semantic place categorization with six categories: forest, coast, residential area, urban area, indoor parking lot, and outdoor parking lot. The first dataset consists of 650 static panoramic scans of dense (9,000,000 points) 3D color and reflectance point clouds obtained using a FARO laser scanner with synchronized color images. The second dataset consists of 34,200 real-time panoramic scans of sparse (70,000 points) 3D reflectance point clouds obtained using a Velodyne laser scanner while driving a car. The datasets were obtained in the city of Fukuoka, Japan, and are publicly available in [1], [2]. In addition, we compare several approaches for semantic place categorization, with best results of 96.42% (dense) and 89.67% (sparse).

  • 25.
    Krajník, Tomáš
    et al.
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Fentanes, Jaime P.
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Martinez Mozos, Oscar
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Duckett, Tom
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Ekekrantz, Johan
    Computer Vision and Active Perception Lab, The Royal Institute of Technology (KTH), Stockholm, Sweden.
    Hanheide, Marc
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Long-term topological localisation for service robots in dynamic environments using spectral maps2014In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Press, 2014, p. 4537-4542Conference paper (Refereed)
    Abstract [en]

    This paper presents a new approach for topological localisation of service robots in dynamic indoor environments. In contrast to typical localisation approaches that rely mainly on static parts of the environment, our approach makes explicit use of information about changes by learning and modelling the spatio-temporal dynamics of the environment where the robot is acting. The proposed spatio-temporal world model is able to predict environmental changes in time, allowing the robot to improve its localisation capabilities during long-term operations in populated environments. To investigate the proposed approach, we have enabled a mobile robot to autonomously patrol a populated environment over a period of one week while building the proposed model representation. We demonstrate that the experience learned during one week is applicable for topological localisation even after a hiatus of three months by showing that the localisation error rate is significantly lower compared to static environment representations.
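    In the spirit of such a spectral map, the probability of a binary environment state can be sketched as a static mean plus a few periodic components (illustrative Python; the amplitudes, periods, and the door example are assumptions, not the paper's model):

    ```python
    import math

    def spectral_state_probability(t, mean, components):
        """P(state holds at time t): a static mean plus periodic components,
        each given as an (amplitude, period_seconds, phase) tuple."""
        p = mean + sum(a * math.cos(2 * math.pi * t / period - phase)
                       for a, period, phase in components)
        return min(1.0, max(0.0, p))  # clamp to a valid probability

    # e.g. a door that tends to be open around the start of each day
    # (a single 24-hour periodicity, hypothetical amplitude):
    DAY = 24 * 3600
    def door_open(t):
        return spectral_state_probability(t, 0.5, [(0.4, DAY, 0.0)])
    ```

    A robot localising against such a model can weight observed features by how likely they are to be present at the current time, instead of treating the map as static.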

  • 26.
    Krajník, Tomáš
    et al.
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Fentanes, Jaime P.
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Martinez Mozos, Oscar
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Duckett, Tom
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Ekekrantz, Johan
    Computer Vision and Active Perception Lab, The Royal Institute of Technology (KTH), Stockholm, Sweden.
    Hanheide, Marc
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Long-term topological localisation for service robots in dynamic environments using spectral maps2014Conference paper (Refereed)
  • 27.
    Krajník, Tomáš
    et al.
    Faculty of Electrical Engineering, Czech Technical University, Praha, Czechia.
    Vintr, Tomáš
    Faculty of Electrical Engineering, Czech Technical University, Praha, Czechia.
    Molina, Sergi
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Fentanes, Jaime P.
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Cielniak, Grzegorz
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Broughton, George
    Faculty of Electrical Engineering, Czech Technical University, Praha, Czechia.
    Duckett, Tom
    Lincoln Centre for Autonomous Systems, University of Lincoln, Lincoln, England.
    Warped Hypertime Representations for Long-Term Autonomy of Mobile Robots2019In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 4, no 4, p. 3310-3317Article in journal (Refereed)
    Abstract [en]

    This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.
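    The key idea, wrapping time onto circles attached to the spatial coordinates, can be sketched as follows (illustrative Python; the chosen periods and the downstream clustering step are assumptions):

    ```python
    import math

    def hypertime_embed(x, y, t, periods):
        """Embed a spatio-temporal sample (x, y, t) into warped hypertime:
        the spatial coordinates plus one (cos, sin) pair per modelled period,
        so that times exactly one period apart map to the same point."""
        point = [x, y]
        for period in periods:
            angle = 2 * math.pi * t / period
            point += [math.cos(angle), math.sin(angle)]
        return point

    # Clustering (e.g. k-means or EM) is then performed over these embedded
    # points to obtain a predictive spatio-temporal model.
    WEEK = 7 * 24 * 3600
    ```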

  • 28.
    Kwak, Sung Jo
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan; Jeju Global Research Center, Korea Institute of Energy Research, Jeju, South Korea.
    Hasegawa, Tsutomu
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Youb Chung, Seong
    Department of Mechanical Engineering, Chungju National University, Chungju, South Korea.
    Elimination of unnecessary contact states in contact state graphs for robotic assembly tasks2014In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 70, no 9-12, p. 1683-1697Article in journal (Refereed)
    Abstract [en]

    Developing a contact state graph and finding an assembly sequence requires substantial computation because polyhedral objects consist of many vertices, edges, and faces. In this paper, we propose a new method to eliminate unnecessary contact states in the contact state graph corresponding to a robotic assembly task. In our method, the faces of the polyhedral objects are triangulated, and the adjacency of each vertex, edge, and triangle between an initial contact state and a target contact state is defined. This adjacency is then used to create contact state graphs at different priorities. When a contact state graph is finished at a higher priority, many unnecessary contact states can be eliminated because the graph already includes at least one realizable assembly sequence. Our priority-based method is compared with a face-based method through a statistical analysis of the contact state graphs obtained from different assembly tasks. Finally, our method results in a significant improvement in the final performance.

  • 29.
    Martinez Mozos, Oscar
    et al.
    School of Computer Science, University of Lincoln, Lincoln, England.
    Galindo, Cipriano
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Tapus, Adriana
    Robotics and Computer Vision Lab, ENSTA-ParisTech, Palaiseau, France.
    Guest-editorial: Computer-based intelligent technologies for improving the quality of life2015In: IEEE journal of biomedical and health informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 19, no 1, p. 4-5Article in journal (Refereed)
  • 30.
    Martinez Mozos, Oscar
    et al.
    School of Computer Science, University of Lincoln, Lincoln, England.
    Mizutani, Hitoshi
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Hasegawa, Tsutomu
    Kumamoto National College of Technology, Kumamoto, Japan.
    Categorization of Indoor Places by Combining Local Binary Pattern Histograms of Range and Reflectance Data from Laser Range Finders2013In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 27, no 18, p. 1455-1464Article in journal (Refereed)
    Abstract [en]

    This paper presents an approach to categorize typical places in indoor environments using 3D scans provided by a laser range finder. Examples of such places are offices, laboratories, or kitchens. In our method, we combine the range and reflectance data from the laser scan for the final categorization of places. Range and reflectance images are transformed into histograms of local binary patterns and combined into a single feature vector. This vector is later classified using support vector machines. The results of the presented experiments demonstrate the capability of our technique to categorize indoor places with high accuracy. We also show that the combination of range and reflectance information improves the final categorization results in comparison with a single modality.

  • 31.
    Martinez Mozos, Oscar
    et al.
    Technical University of Cartagena, Cartagena, Spain.
    Nakashima, Kazuto
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Fukuoka datasets for place categorization2019In: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 38, no 5, p. 507-517Article in journal (Refereed)
    Abstract [en]

    This paper presents several multi-modal 3D datasets for the problem of categorization of places. In this problem, a robotic agent should decide on the type of place/environment where it is located (residential area, forest, etc.) using information gathered by its sensors. In addition to the 3D depth information, the datasets include additional modalities such as RGB or reflectance images. The observations were taken in different indoor and outdoor environments in Fukuoka city, Japan. Outdoor place categories include forests, urban areas, indoor parking, outdoor parking, coastal areas, and residential areas. Indoor place categories include corridors, offices, study rooms, kitchens, laboratories, and toilets. The datasets are available to download at http://robotics.ait.kyushu-u.ac.jp/kyushu_datasets.

  • 32.
    Martinez Mozos, Oscar
    et al.
    Technical University of Cartagena, Cartagena, Spain.
    Sandulescu, Virginia
    Department of Automatic Control and Computer Science, Politehnica University of Bucharest, Bucharest, Romania.
    Andrews, Sally
    Division of Psychology, Nottingham Trent University, Nottingham, England.
    Ellis, David
    Department of Psychology, Lancaster University, Lancaster, England.
    Bellotto, Nicola
    School of Computer Science, University of Lincoln, Lincoln, England.
    Dobrescu, Radu
    Department of Automatic Control and Computer Science, Politehnica University of Bucharest, Bucharest, Romania.
    Manuel Ferrandez, Jose
    Technical University of Cartagena, Cartagena, Spain.
    Stress Detection Using Wearable Physiological and Sociometric Sensors2017In: International Journal of Neural Systems, ISSN 0129-0657, E-ISSN 1793-6462, Vol. 27, no 2, article id 1650041Article in journal (Refereed)
    Abstract [en]

    Stress remains a significant social problem for individuals in modern societies. This paper presents a machine learning approach for the automatic detection of stress in people in a social situation by combining two sensor systems that capture physiological and social responses. We compare the performance of different classifiers including support vector machine, AdaBoost, and k-nearest neighbor. Our experimental results show that by combining the measurements from both sensor systems, we could accurately discriminate between stressful and neutral situations during a controlled Trier social stress test (TSST). Moreover, this paper assesses the discriminative ability of each sensor modality individually and considers their suitability for real-time stress detection. Finally, we present a study of the most discriminative features for stress detection.
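    Fusing the two sensor systems by concatenating their feature vectors before classification can be sketched as follows (illustrative Python k-NN; the feature names and values are hypothetical, and the paper also evaluates SVM and AdaBoost):

    ```python
    def knn_predict(train, query, k=3):
        """Classify a fused feature vector by majority vote among the k
        nearest training samples (squared Euclidean distance)."""
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        nearest = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
        votes = [label for _, label in nearest]
        return max(set(votes), key=votes.count)

    # Each sample concatenates physiological and sociometric measurements
    # (here: heart rate, skin conductance, speech activity - assumed features):
    train = [([60, 0.2, 0.1], "neutral"), ([95, 0.9, 0.8], "stress"),
             ([62, 0.3, 0.2], "neutral"), ([90, 0.8, 0.7], "stress")]
    ```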

  • 33.
    Martinez Mozos, Oscar
    et al.
    School of Computer Science, University of Lincoln, Lincoln, England.
    Tsuji, Tokuo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Chae, Hyunuk
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kuwahata, Shunya
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Seok Pyo, Yoon
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Hasegawa, Tsutomu
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Morooka, Ken'ichi
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    The intelligent room for elderly care2013In: Natural and Artificial Models in Computation and Biology: 5th International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2013, Mallorca, Spain, June 10-14, 2013. Proceedings, Part I / [ed] José Manuel Ferrández Vicente; José Ramón Álvarez Sánchez; Félix de la Paz López; Fco. Javier Toledo Moreo, Springer, 2013, Vol. 7930, p. 103-112Conference paper (Refereed)
    Abstract [en]

    Daily life assistance for the elderly is one of the most promising and interesting scenarios for advanced technologies in the near future. Improving the quality of life of the elderly is also one of the first priorities in modern countries and societies, where the percentage of older people is rapidly increasing, mainly due to great improvements in medicine during the last decades. In this paper, we present an overview of our informationally structured room, which supports the daily life activities of the elderly with the aim of improving their quality of life. Our environment contains different distributed sensors, including a floor sensing system and several intelligent cabinets. Sensor information is sent to a centralized management system, which processes the data and makes it available to a service robot that assists the people in the room. One important restriction in our intelligent environment is to maintain a small number of sensors, to avoid interfering with the daily activities of people and to reduce as much as possible the invasion of their privacy. In addition, we discuss some experiments using our real environment and robot.

  • 34.
    Marton, Zoltan-Csaba
    et al.
    Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Germany.
    Balint-Benczedi, Ferenc
    Institute of Artificial Intelligence, University of Bremen, Germany.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Pangercic, Dejan
    Autonomous Technologies Group, Robert Bosch LLC, USA.
    Beetz, Michael
    Institute of Artificial Intelligence, University of Bremen, Germany.
    Cumulative Object Categorization in Clutter2013In: Robotics: Science and Systems, 2013Conference paper (Refereed)
    Abstract [en]

    In this paper we present an approach based on scene- or part-graphs for geometrically categorizing touching and occluded objects, using additive RGBD feature descriptors and hashing of graph configuration parameters for describing the spatial arrangement of constituent parts. The presented experiments quantify that this method outperforms our earlier part-voting and sliding window classification. We evaluated our approach on cluttered scenes, and by using a 3D dataset containing over 15000 Kinect scans of over 100 objects which were grouped into general geometric categories. Additionally, color, geometric, and combined features were compared for categorization tasks.

  • 35.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Towards human-based models of behaviour in social robots: Exploring age-related differences in the processing of gaze cues in human-robot interaction2020In: Proceedings of the 9th European Starting AI Researchers' Symposium 2020 co-located with 24th European Conference on Artificial Intelligence (ECAI 2020) / [ed] Sebastian Rudolph, Goreti Marreiros, Technical University of Aachen , 2020, Vol. 2655Conference paper (Refereed)
    Abstract [en]

    The emergence of robotic systems offers many opportunities for older adults (OA) to support their daily life activities. Therefore, there is a need to study social interactions between OA and robots better. One important aspect of social communication is the use of non-verbal cues, of which eye gaze has proven to be of special interest both in the fields of social cognition and HRI. In this paper, we review previous work on HRI with OA and propose an experiment to compare the influence of gaze behaviour of robots on older and younger users. These findings will allow a better design and adaptation of social robots to age-related changes in aspects of social cognition.

  • 36.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Hallström, Felix T.
    Örebro University, Örebro, Sweden.
    Schrooten, Martien G. S.
    Örebro University, School of Behavioural, Social and Legal Sciences.
    Robotic Gaze Drives Attention, Even with No Visible Eyes2023In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM / Association for Computing Machinery , 2023, p. 172-177Conference paper (Refereed)
    Abstract [en]

    Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from its gaze, NAO was presented frontally and backward across blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposite to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that the robotic gaze is perceived as a social signal, similar to the human gaze.
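    The cueing effect reported in such tasks is typically the mean reaction-time difference between incongruent and congruent trials; a minimal sketch with made-up data:

    ```python
    def gaze_cueing_effect(trials):
        """Mean RT(incongruent) - mean RT(congruent), in ms; a positive
        value indicates attention was drawn towards the gazed-at side."""
        def mean_rt(condition):
            rts = [rt for cond, rt in trials if cond == condition]
            return sum(rts) / len(rts)
        return mean_rt("incongruent") - mean_rt("congruent")

    # Hypothetical trials: (condition, reaction time in ms)
    trials = [("congruent", 310), ("congruent", 300),
              ("incongruent", 340), ("incongruent", 330)]
    ```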

  • 37.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Schrooten, Martien G. S.
    Örebro University, School of Behavioural, Social and Legal Sciences.
    Gaze cueing in older and younger adults is elicited by a social robot seen from the back2023In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 82, article id 101149Article in journal (Refereed)
    Abstract [en]

    The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which the time it takes to respond to targets on a screen is shorter when they are preceded by a facial cue looking in the direction of the target (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how these vary as a function of the cue-target timing. Based on the perceived usefulness of social robots to assist older adults, we asked older and young adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, and so its eyes were not visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster responses on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no differences between incongruent and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed with regard to differences in processing time.

  • 38.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Schrooten, Martien G. S.
    Örebro University, School of Law, Psychology and Social Work.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Age-Related Differences in the Perception of Eye-Gaze from a Social Robot2021In: Social Robotics: 13th International Conference, ICSR 2021, Singapore, Singapore, November 10–13, 2021, Proceedings / [ed] Haizhou Li; Shuzhi Sam Ge; Yan Wu; Agnieszka Wykowska; Hongsheng He; Xiaorui Liu; Dongyu Li; Jairo Perez-Osorio, Springer , 2021, Vol. 13086, p. 350-361Conference paper (Refereed)
    Abstract [en]

    Sensitivity to deictic gaze declines naturally with age and often results in reduced social perception. The increasing efforts to develop social robots that assist older adults during daily life tasks therefore need to consider the effects of aging. As non-verbal cues such as deictic gaze are important for natural communication in human-robot interaction, this paper investigates the performance of older adults, as compared to younger adults, during a controlled, online visual search task inspired by daily life activities, while assisted by a social robot. This paper also examines age-related differences in social perception. Our results showed a significant facilitation effect on task performance of head movement representing deictic gaze from a Pepper robot. This facilitation effect was not significantly different between the age groups. However, social perception of the robot was less influenced by its deictic gaze behavior in older adults than in younger adults. This line of research may ultimately help inform the design of adaptive non-verbal cues from social robots for a wide range of end users.

  • 39.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Schrooten, Martien G. S.
    Örebro University, School of Law, Psychology and Social Work.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction2022In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, p. 1-13Article in journal (Refereed)
    Abstract [en]

    There is increased interest in using social robots to assist older adults in their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions under the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, the referential gaze from robots might not be an effective social cue to indicate spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, it suggests that robotic social cues, usually validated with young participants, might be less effective for older adults.

    Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

  • 40.
    Morillo-Mendez, Lucas
    et al.
    Örebro University, School of Science and Technology.
    Stower, Rebecca
    Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden.
    Sleat, Alex
    Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden.
    Schreiter, Tim
    Örebro University, School of Science and Technology.
    Leite, Iolanda
    Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology. Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Schrooten, Martien G. S.
    Örebro University, School of Behavioural, Social and Legal Sciences.
    Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution2023In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771Article in journal (Refereed)
    Abstract [en]

    Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than to targets at the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

  • 41.
    Nakashima, Kazuto
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Oto, Yuki
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization2018In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no 14, p. 750-765Article in journal (Refereed)
    Abstract [en]

    Semantic place categorization, one of the essential tasks for autonomous robots and vehicles, allows them to make decisions and navigate autonomously in unfamiliar environments. Outdoor places in particular are more difficult targets than indoor ones due to perceptual variations, such as dynamic illuminance over 24 hours and occlusions by cars and pedestrians. This paper presents a novel method of categorizing outdoor places using convolutional neural networks (CNNs), which take omnidirectional depth/reflectance images obtained by 3D LiDARs as input. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. They are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results on the MPO dataset outperform traditional approaches and show the effectiveness of using both depth and reflectance modalities. To analyze our trained deep networks, we visualize the learned features.
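    The omnidirectional depth images used as CNN input above come from projecting a 3D point cloud onto an azimuth-elevation grid. A rough sketch of such a spherical projection (the grid resolution and vertical field of view below are assumptions for illustration, not values from the paper):

    ```python
    import math

    def panoramic_range_image(points, width=360, height=64,
                              elev_min=-0.4, elev_max=0.4):
        """Project 3D points (x, y, z) onto an azimuth-elevation grid,
        keeping the range (distance) of the nearest point per cell.
        Cells with no point stay at 0.0."""
        image = [[0.0] * width for _ in range(height)]
        for x, y, z in points:
            r = math.sqrt(x * x + y * y + z * z)
            if r == 0.0:
                continue
            azimuth = math.atan2(y, x)        # horizontal angle, -pi..pi
            elevation = math.asin(z / r)      # vertical angle
            if not (elev_min <= elevation < elev_max):
                continue
            col = int((azimuth + math.pi) / (2 * math.pi) * width) % width
            row = min(int((elevation - elev_min) / (elev_max - elev_min) * height),
                      height - 1)
            if image[row][col] == 0.0 or r < image[row][col]:
                image[row][col] = r
        return image

    # One point straight ahead at 5 m on the horizontal plane (elevation 0):
    img = panoramic_range_image([(5.0, 0.0, 0.0)], width=8, height=4)
    ```

    A reflectance channel could be built on the same grid, giving the two-channel images the paper feeds to its CNNs.
    
    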

  • 42.
    Nakashima, Kazuto
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Nham, Seungwoo
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Technical University of Cartagena (UPCT), Cartagena, Spain.
    Recognizing outdoor scenes by convolutional features of omni-directional LiDAR scans2017In: 2017 IEEE/SICE International Symposium on System Integration (SII), IEEE, 2017, p. 387-392Conference paper (Refereed)
    Abstract [en]

    We present a novel method for outdoor scene categorization using 2D convolutional neural networks (CNNs) which take panoramic depth images obtained by a 3D laser scanner as input. We evaluate our approach on two outdoor scene datasets comprising six categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Our results on both datasets (over 94%) outperform previous approaches and show the effectiveness of this approach for outdoor scene categorization using depth images. To analyze our trained networks, we visualize the learned features using two visualization methods.

  • 43.
    Rodrigues de Almeida, Tiago
    et al.
    Örebro University, School of Science and Technology.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Likely, Light, and Accurate Context-Free Clusters-based Trajectory Prediction2023Conference paper (Refereed)
    Abstract [en]

    Autonomous systems in the road transportation network require intelligent mechanisms that cope with uncertainty to foresee the future. In this paper, we propose a multi-stage probabilistic approach for trajectory forecasting: trajectory transformation to displacement space, clustering of displacement time series, trajectory proposals, and ranking of proposals. We introduce a new deep feature clustering method, based on a self-conditioned GAN, which copes better with distribution shifts than traditional methods. Additionally, we propose a novel distance-based method for ranking proposals, which assigns probabilities to the generated trajectories and is more efficient than an auxiliary neural network while remaining accurate. The overall system surpasses context-free deep generative models on human and road-agent trajectory data while performing similarly to point estimators when comparing the most probable trajectory.
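    The first and last stages of the pipeline above (transforming trajectories to displacement space, and distance-based ranking of proposals) can be illustrated with a minimal sketch; the average-Euclidean-distance score and the toy data are placeholders, not the paper's exact formulation:

    ```python
    import math

    def to_displacements(trajectory):
        """Convert an absolute 2D trajectory into per-step displacement vectors."""
        return [(x1 - x0, y1 - y0)
                for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])]

    def rank_proposals(proposals, observed):
        """Rank proposal trajectories by average point-wise Euclidean distance
        to an observed reference -- a stand-in for the paper's ranking stage.
        Returns (distance, proposal) pairs, best first."""
        def avg_dist(proposal):
            return sum(math.dist(a, b)
                       for a, b in zip(proposal, observed)) / len(observed)
        scored = [(avg_dist(p), p) for p in proposals]
        scored.sort(key=lambda pair: pair[0])
        return scored

    observed = [(0, 0), (1, 0), (2, 0)]
    proposals = [
        [(0, 0), (1, 0), (2, 0)],   # matches the reference exactly
        [(0, 0), (0, 1), (0, 2)],   # turns away
    ]
    ranked = rank_proposals(proposals, observed)
    ```

    The resulting distances could then be converted into probabilities, e.g. by normalizing inverse distances over the proposal set.
    
    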

  • 44.
    Sandulescu, Virginia
    et al.
    Department of Automatic Control and Industrial Informatics, Politehnica University of Bucharest, Bucharest, Romania.
    Andrews, Sally
    School of Social Sciences, Nottingham Trent University, Nottingham, England.
    Ellis, David A.
    Department of Psychology, Lancaster University, Lancaster, England.
    Dobrescu, Radu
    Department of Automatic Control and Industrial Informatics, Politehnica University of Bucharest, Bucharest, Romania.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Mobile app for stress monitoring using voice features2015In: 2015 E-Health and Bioengineering Conference (EHB), IEEE, 2015Conference paper (Refereed)
    Abstract [en]

    This paper describes the steps involved in designing and implementing a mobile app for real-time monitoring of mental stress using voice features and machine learning techniques. The app, called StressID, is easy to use, completely non-invasive, and available in the Google Play store. Through a companion server application with a web interface, interested parties may remotely monitor the stress states detected by the mobile app, broadening the range of use-case scenarios.

  • 45.
    Sandulescu, Virginia
    et al.
    Politehnica University of Bucharest, Bucharest, Romania.
    Andrews, Sally
    School of Psychology, University of Lincoln, Lincoln, England.
    Ellis, David
    School of Psychology, University of Lincoln, Lincoln, England.
    Bellotto, Nicola
    School of Computer Science, University of Lincoln, Lincoln, England.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Stress Detection Using Wearable Physiological Sensors2015In: Artificial Computation in Biology and Medicine: International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2015, Elche, Spain, June 1-5, 2015, Proceedings, Part I / [ed] José Manuel Ferrández Vicente; José Ramón Álvarez-Sánchez; Félix de la Paz López; Fco. Javier Toledo-Moreo; Hojjat Adeli, Springer, 2015, Vol. 9107, p. 526-532Conference paper (Refereed)
    Abstract [en]

    As the world's population increases, the ratio of health carers to people in need of care is rapidly decreasing. There is therefore an urgent need for new technologies to monitor the physical and mental health of people during their daily life. In particular, negative mental states like depression and anxiety are big problems in modern societies, usually due to stressful situations during everyday activities, including work. This paper presents a machine learning approach for detecting stress in people using wearable physiological sensors, with the final aim of improving their quality of life. The presented technique can monitor the state of the subject continuously and classify it into "stressful" or "non-stressful" situations. Our classification results show that this method is a good starting point towards real-time stress detection.
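    The continuous classification step described above can be sketched as window-level feature extraction followed by a simple classifier; the features (mean and standard deviation of a heart-rate window) and the nearest-centroid rule are illustrative assumptions, not the paper's actual model:

    ```python
    from statistics import mean, pstdev

    def features(window):
        """Summarise one window of a physiological signal (e.g. heart rate)
        as a (mean, standard deviation) feature vector."""
        return (mean(window), pstdev(window))

    def nearest_centroid(feat, centroids):
        """Assign the label of the closest class centroid in feature space."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(feat, centroids[label]))

    # Hypothetical class centroids learned from labelled training windows.
    centroids = {"non-stressful": (70.0, 3.0), "stressful": (95.0, 8.0)}

    # Classify a new window of elevated, variable heart-rate samples.
    label = nearest_centroid(features([92, 97, 94, 96]), centroids)
    ```

    A real system would stream sensor windows through this pipeline continuously, which is what makes non-invasive daily-life monitoring feasible.
    
    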

  • 46.
    Schreiter, Tim
    et al.
    Örebro University, School of Science and Technology.
    Almeida, Tiago Rodrigues de
    Örebro University, School of Science and Technology.
    Zhu, Yufei
    Örebro University, School of Science and Technology.
    Gutiérrez Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Kucner, Tomasz P.
    Mobile Robotics Group, Department of Electrical Engineering and Automation, Aalto University, Finland.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Palmieri, Luigi
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized2022Conference paper (Refereed)
    Abstract [en]

    Rapid development of social robots stimulates active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction, and co-habitation in shared spaces. Modern approaches to this end require high-quality datasets for training and evaluation. However, the majority of available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers, and on-board robot sensors in a semantically rich environment. To induce natural behavior of the recorded participants, we utilise loosely scripted task assignments, which lead the participants to navigate through the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data is enhanced with semantic information, enabling the development of new algorithms which rely not only on the tracking information but also on contextual cues of the moving agents and the static and dynamic environment.

  • 47.
    Tsuji, Tokuo
    et al.
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Chae, Hyunuk
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Pyo, Yoonseok
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kusaka, Kazuya
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Hasegawa, Tsutomu
    Kumamoto National College of Technology, Kumamoto, Japan.
    Morooka, Ken'ichi
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    An Informationally Structured Room for Robotic Assistance2015In: Sensors, E-ISSN 1424-8220, Vol. 15, no 4, p. 9438-9465Article in journal (Refereed)
    Abstract [en]

    The application of assistive technologies for elderly people is one of the most promising and interesting scenarios for intelligent technologies in the present and near future. Moreover, improving the quality of life of the elderly is among the first priorities in modern countries and societies. In this work, we present an informationally structured room aimed at supporting the daily life activities of elderly people. This room integrates different sensor modalities in a natural and non-invasive way inside the environment. The information gathered by the sensors is processed and sent to a centralized management system, which makes it available to a service robot assisting the people. One important restriction on our intelligent room is that it should interfere as little as possible with daily activities. Finally, this paper presents several experiments and situations using our intelligent environment in cooperation with our service robot.

  • 48.
    Yamada, Hiroyuki
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan; Research & Development Group, Hitachi, Ltd., Ibaraki, Japan.
    Ahn, Jeongho
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Gait-based person identification using 3D LiDAR and long short-term memory deep networks2020In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 34, no 18, p. 1201-1211Article in journal (Refereed)
    Abstract [en]

    Gait recognition is a biometric measure, alongside facial, fingerprint, and retina recognition. Although most biometric methods require direct contact between a device and a subject, gait recognition is unique in that it requires no interaction with the subject and can be performed from a distance. Cameras are commonly used for gait recognition, and a number of researchers have used depth information obtained with an RGB-D camera such as the Microsoft Kinect. Although depth-based gait recognition has advantages, such as robustness against light conditions or appearance variations, it also has limitations. For instance, an RGB-D camera cannot be used outdoors, and the measurement distance is limited to approximately 10 meters. This paper describes a long short-term memory-based method for gait recognition using a real-time multi-line LiDAR. Very few studies have dealt with LiDAR-based gait recognition; the present study is the first attempt to combine LiDAR data and long short-term memory for gait recognition, focusing on dealing with different appearances. We collect the first gait recognition dataset consisting of time-series range data for 30 people with clothing variations and show the effectiveness of the proposed approach.

  • 49.
    Zoltan-Csaba, Marton
    et al.
    Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Oberpfaffenhofen, Germany.
    Balint-Benczedi, Ferenc
    Institute of Artificial Intelligence, Universität Bremen, Center for Computing Technologies (TZI), Bremen, Germany.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Blodow, Nico
    Intelligent Autonomous Systems, Technische Universität München, München, Germany.
    Kanezaki, Asako
    Graduate School of Information Science & Technology, The University of Tokyo, Tokyo, Japan.
    Goron, Lucian C.
    Intelligent Autonomous Systems, Technische Universität München, München, Germany.
    Pangercic, Dejan
    Autonomous Technologies Group Robert Bosch LLC, Palo Alto, USA.
    Beetz, Michael
    Institute of Artificial Intelligence, Universität Bremen, Center for Computing Technologies (TZI), Bremen, Germany.
    Part-Based Geometric Categorization and Object Reconstruction in Cluttered Table-Top Scenes2014In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 76, no 1, p. 35-56Article in journal (Refereed)
    Abstract [en]

    This paper presents an approach for 3D geometry-based object categorization in cluttered table-top scenes. In our method, objects are decomposed into different geometric parts whose spatial arrangement is represented by a graph. The matching and searching of graphs representing the objects is sped up by using a hash table which contains possible spatial configurations of the different parts that constitute the objects. Additive feature descriptors are used to label partially or completely visible object parts. In this work we categorize objects into five geometric shapes: sphere, box, flat, cylindrical, and disk/plate, as these shapes represent the majority of objects found on tables in typical households. Moreover, we reconstruct complete 3D models that include the invisible back sides of objects as well, in order to facilitate manipulation by domestic service robots. Finally, we present an extensive set of experiments on point clouds of objects captured with an RGB-D camera, and our results highlight the improvements over previous methods.
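    The hash-table speedup described above, indexing possible configurations of object parts to retrieve candidate categories, can be sketched in a few lines; the part vocabulary and category table below are simplified placeholders, not the paper's actual data:

    ```python
    # Map a canonical configuration of part types to candidate object categories.
    # A frozenset of (part_type, count) pairs makes the key order-independent,
    # so the same parts observed in any order hash to the same table entry.
    def config_key(parts):
        counts = {}
        for part in parts:
            counts[part] = counts.get(part, 0) + 1
        return frozenset(counts.items())

    # Hypothetical lookup table built offline from training objects.
    config_table = {
        config_key(["cylinder", "disk"]): ["cylindrical"],
        config_key(["box"]): ["box"],
        config_key(["disk"]): ["disk/plate", "flat"],
    }

    def categorize(observed_parts):
        """Look up candidate categories for an observed part configuration;
        an unseen configuration yields no candidates."""
        return config_table.get(config_key(observed_parts), [])

    candidates = categorize(["disk", "cylinder"])  # observation order irrelevant
    ```

    Constant-time lookup over precomputed configurations is what avoids an exhaustive graph-matching search at recognition time.
    
    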
