Publications (10 of 244)
Chadalavada, R. T., Andreasson, H., Schindler, M., Palm, R. & Lilienthal, A. J. (2020). Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction. Robotics and Computer-Integrated Manufacturing, 61, Article ID 101830.
2020 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830. Article in journal (Refereed). Published.
Abstract [en]

Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular focus on industrial logistics applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed the trajectories and eye gaze patterns of humans while they interacted with the autonomous forklift, and carried out stimulated recall interviews (SRI) to identify desirable features for the projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed the trajectories and eye gaze patterns of humans interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed at the side of the robot on which they ultimately decided to pass. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Human-robot interaction (HRI), Mobile robots, Intention communication, Eye-tracking, Intention recognition, Spatial augmented reality, Stimulated recall interview, Obstacle avoidance, Safety, Logistics
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78358 (URN)
10.1016/j.rcim.2019.101830 (DOI)
000496834800002 (ISI)
2-s2.0-85070732550 (Scopus ID)
Note

Funding Agencies:
KKS SIDUS project AIR: "Action and Intention Recognition in Human Interaction with Autonomous Systems" (20140220)
H2020 project ILIAD: "Intra-Logistics with Integrated Automatic Deployment: Safe and Scalable Fleets in Shared Spaces" (732737)

Available from: 2019-12-03. Created: 2019-12-03. Last updated: 2020-02-06. Bibliographically approved.
Burgues, J., Hernandez Bennetts, V., Lilienthal, A. J. & Marco, S. (2020). Gas Distribution Mapping and Source Localization Using a 3D Grid of Metal Oxide Semiconductor Sensors. Sensors and actuators. B, Chemical, 304, Article ID 127309.
2020 (English). In: Sensors and actuators. B, Chemical, ISSN 0925-4005, E-ISSN 1873-3077, Vol. 304, article id 127309. Article in journal (Refereed). Published.
Abstract [en]

The difficulty of obtaining ground truth (i.e. empirical evidence) about how a gas disperses in an environment is one of the major hurdles in the field of mobile robotic olfaction (MRO), impairing our ability to develop efficient gas source localization strategies and to validate gas distribution maps produced by autonomous mobile robots. Previous ground truth measurements of gas dispersion have mostly been based on expensive optical tracer methods or on 2D chemical sensor grids deployed only at ground level. With the ever-increasing trend towards gas-sensitive aerial robots, 3D measurements of gas dispersion are becoming necessary to characterize the environments these platforms can explore. This paper presents ten experiments performed with a 3D grid of 27 metal oxide semiconductor (MOX) sensors to visualize the temporal evolution of the gas distribution produced by an evaporating ethanol source placed at different locations in an office room, including variations in height, release rate and air flow. We also studied which features of the MOX sensor signals are optimal for predicting the source location, considering different lengths of the measurement window. We found strongly time-varying and counter-intuitive gas distribution patterns that disprove some assumptions commonly held in the MRO field, such as that heavy gases disperse along ground level. Correspondingly, ground-level gas distributions were rarely useful for localizing the gas source, and elevated measurements were much more informative. We make the dataset and the code publicly available to enable the community to develop, validate, and compare new approaches related to gas sensing in complex environments.
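As a concrete illustration of the kind of analysis such a sensor grid enables, the sketch below guesses the source position as the location of the strongest time-averaged response. This is a deliberately naive baseline of our own, not the feature-based prediction the paper evaluates.

```python
import numpy as np

def naive_source_guess(responses, positions):
    """responses: (n_sensors, n_samples) MOX signals over the measurement
    window; positions: (n_sensors, 3) coordinates of the grid sensors.
    Returns the position of the sensor with the strongest mean response.
    Illustrative assumption only; the paper studies which signal features
    actually predict the source location best."""
    mean_response = responses.mean(axis=1)
    return positions[np.argmax(mean_response)]
```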

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Mobile robotic olfaction, Metal oxide gas sensors, Signal processing, Sensor networks, Gas source localization, Gas distribution mapping
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78709 (URN)
10.1016/j.snb.2019.127309 (DOI)
000500702500075 (ISI)
2-s2.0-85075330402 (Scopus ID)
Note

Funding Agencies:
Spanish MINECO program (BES-2015-071698, TEC2014-59229-R)
H2020-ICT by the European Commission (645101)

Available from: 2019-12-19. Created: 2019-12-19. Last updated: 2020-02-05. Bibliographically approved.
Rudenko, A., Kucner, T. P., Swaminathan, C. S., Chadalavada, R. T., Arras, K. O. & Lilienthal, A. J. (2020). THÖR: Human-Robot Navigation Data Collection and Accurate Motion Trajectories Dataset. IEEE Robotics and Automation Letters, 5(2), 676-682
2020 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no 2, p. 676-682. Article in journal (Refereed). Published.
Abstract [en]

Understanding human behavior is key for robots and intelligent systems that share a space with people. Accordingly, research that enables such systems to perceive, track, learn and predict human behavior, as well as to plan and interact with humans, has received increasing attention in recent years. The availability of large human motion datasets that contain relevant levels of difficulty is fundamental to this research. Existing datasets are often limited in terms of information content, annotation quality or variability of human behavior. In this paper, we present THÖR, a new dataset with human motion trajectory and eye gaze data collected in an indoor environment, with accurate ground truth for position, head orientation, gaze direction, social grouping, obstacle map and goal coordinates. THÖR also contains sensor data collected by a 3D lidar and involves a mobile robot navigating the space. We propose a set of metrics for quantitatively analyzing motion trajectory datasets, such as the average tracking duration, ground truth noise, and the curvature and speed variation of the trajectories. In comparison to prior art, our dataset has a larger variety in human motion behavior, is less noisy, and contains annotations at higher frequencies.
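The abstract does not define these metrics formally; the sketch below shows one plausible per-trajectory implementation of speed variation and average curvature, under our own assumptions about the definitions.

```python
import numpy as np

def trajectory_metrics(xy, t):
    """xy: (n, 2) positions; t: (n,) timestamps of one tracked trajectory.
    Returns (speed standard deviation, mean discrete curvature).
    These definitions are illustrative, not necessarily the paper's."""
    dt = np.diff(t)
    vel = np.diff(xy, axis=0) / dt[:, None]         # per-step velocities
    speed = np.linalg.norm(vel, axis=1)
    heading = np.unwrap(np.arctan2(vel[:, 1], vel[:, 0]))
    turn = np.abs(np.diff(heading))                 # turning angle per step
    arc = 0.5 * (speed[:-1] + speed[1:]) * dt[1:]   # arc length per step
    curvature = turn / np.maximum(arc, 1e-9)
    return speed.std(), curvature.mean()
```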

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Social Human-Robot Interaction, Motion and Path Planning, Human Detection and Tracking
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79266 (URN)
10.1109/LRA.2020.2965416 (DOI)
Available from: 2020-01-20. Created: 2020-01-20. Last updated: 2020-02-06.
Mielle, M., Magnusson, M. & Lilienthal, A. J. (2019). A comparative analysis of radar and lidar sensing for localization and mapping. In: 2019 European Conference on Mobile Robots (ECMR). Paper presented at the 9th European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019. IEEE
2019 (English). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

Lidars and cameras are the sensors most commonly used for Simultaneous Localization And Mapping (SLAM). However, they are not effective in certain scenarios, e.g. when fire and smoke are present in the environment. While radars are much less affected by such conditions, radar and lidar have rarely been compared in terms of the achievable SLAM accuracy. We present a principled comparison of the accuracy of a novel radar sensor against that of a Velodyne lidar, for localization and mapping.

We evaluate the performance of both sensors by calculating the displacement in position and orientation relative to a ground-truth reference positioning system, over three experiments in an indoor lab environment. We used two different SLAM algorithms and found that the mean displacement in position when using the radar sensor was less than 0.037 m, compared to 0.011 m for the lidar. We show that while producing slightly less accurate maps than a lidar, the radar can accurately perform SLAM and build a map of the environment, even including details such as corners and small walls.
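A minimal sketch of this accuracy measure, assuming time-aligned estimated and ground-truth poses (the paper's exact alignment and averaging procedure is not specified in the abstract):

```python
import numpy as np

def mean_displacement(est, gt):
    """est, gt: (n, 3) arrays of time-aligned 2D poses (x, y, yaw).
    Returns (mean position error in m, mean absolute yaw error in rad)."""
    pos_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1).mean()
    yaw_diff = est[:, 2] - gt[:, 2]
    # wrap angle differences to [-pi, pi] before averaging
    yaw_err = np.abs(np.arctan2(np.sin(yaw_diff), np.cos(yaw_diff))).mean()
    return pos_err, yaw_err
```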

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-76976 (URN)
Conference
9th European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019
Available from: 2019-10-02. Created: 2019-10-02. Last updated: 2020-02-07. Bibliographically approved.
Hüllmann, D., Neumann, P. P., Monroy, J. & Lilienthal, A. J. (2019). A Realistic Remote Gas Sensor Model for Three-Dimensional Olfaction Simulations. In: ISOCS/IEEE International Symposium on Olfaction and Electronic Nose (ISOEN). Paper presented at the 2019 IEEE International Symposium on Olfaction and Electronic Nose (ISOEN), Fukuoka, Japan, May 26-29, 2019. IEEE, Article ID 8823330.
2019 (English). In: ISOCS/IEEE International Symposium on Olfaction and Electronic Nose (ISOEN), IEEE, 2019, article id 8823330. Conference paper, Published paper (Refereed).
Abstract [en]

Remote gas sensors such as those based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) enable mobile robots to scan large areas for gas concentrations in a reasonable time and are therefore well suited for tasks such as gas emission surveillance and environmental monitoring. A further advantage of remote sensors is that the gas distribution is not disturbed by the sensing platform itself if the measurements are carried out from a sufficient distance, which is particularly interesting when a rotary-wing platform is used. Since it is not possible to obtain ground-truth measurements of gas distributions, simulations are used to develop and evaluate suitable olfaction algorithms. For this purpose, several models of in-situ gas sensors have been developed, but models of remote gas sensors have been missing. In this paper we present two novel 3D ray-tracer-based TDLAS sensor models. While the first model simplifies the laser beam as a line, the second model takes the conical shape of the beam into account. Using a simulated gas plume, we compare the line model with the cone model in terms of accuracy and computational cost and show that the results generated by the cone model can differ significantly from those of the line model.
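To make the distinction concrete, the sketch below implements the simpler of the two models, treating the beam as a line and integrating the concentration along uniformly spaced samples. The sampling scheme is our own minimal assumption, not the authors' ray tracer; the cone model would additionally average over rays distributed across the beam's conical cross-section.

```python
import numpy as np

def tdlas_line_measurement(gas_field, origin, direction, max_range, step=0.05):
    """Integral gas concentration along one beam, simplified to a line.
    gas_field: callable mapping a 3D point (shape (3,)) to the local
    concentration. Returns a Riemann-sum approximation of the
    path-integrated concentration."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    s = np.arange(0.0, max_range, step)              # distances along the ray
    pts = np.asarray(origin, dtype=float) + s[:, None] * direction
    return sum(gas_field(p) for p in pts) * step
```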

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
gas detector, remote gas sensor, sensor modelling, TDLAS, gas dispersion simulation
National Category
Remote Sensing; Robotics
Identifiers
urn:nbn:se:oru:diva-76220 (URN)
10.1109/ISOEN.2019.8823330 (DOI)
2-s2.0-85072976677 (Scopus ID)
978-1-5386-8327-9 (ISBN)
978-1-5386-8328-6 (ISBN)
Conference
2019 IEEE International Symposium on Olfaction and Electronic Nose (ISOEN), Fukuoka, Japan, May 26-29, 2019
Available from: 2019-09-11. Created: 2019-09-11. Last updated: 2020-02-06. Bibliographically approved.
Adolfsson, D., Lowry, S., Magnusson, M., Lilienthal, A. J. & Andreasson, H. (2019). A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality. In: 2019 European Conference on Mobile Robots (ECMR). Paper presented at the European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019. IEEE
2019 (English). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

This paper targets high-precision robot localization. We address a general problem of voxel-based map representations: the expressiveness of the map is fundamentally limited by its resolution, since integrating measurements taken from different perspectives introduces imprecision and thus reduces localization accuracy. We propose SuPer maps, which contain one Submap per Perspective, each representing a particular view of the environment. For localization, a robot then selects the submap that best explains the environment from its perspective. We propose SuPer mapping as an offline refinement step between initial SLAM and deploying autonomous robots for navigation. We evaluate the proposed method on simulated and real-world data representing an important industrial use case with high accuracy requirements in a repetitive environment. Our results demonstrate significantly improved localization accuracy: up to 46% better than localization in global maps, and up to 25% better than alternative submapping approaches.
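In this scheme, localization reduces to a selection step over candidate submaps. A minimal sketch, assuming some registration-fitness scoring hook is available (the `fitness` function is hypothetical; the paper's selection criterion may differ):

```python
def select_submap(scan, submaps, fitness):
    """Pick the submap that best explains the current scan.
    fitness(scan, submap) -> float is a hypothetical scoring hook,
    e.g. an NDT or ICP match score; higher means a better explanation."""
    return max(submaps, key=lambda m: fitness(scan, m))
```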

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79739 (URN)
10.1109/ECMR.2019.8870941 (DOI)
2-s2.0-85074443858 (Scopus ID)
978-1-7281-3605-9 (ISBN)
Conference
European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019
Funder
EU, Horizon 2020, 732737
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2020-02-14. Bibliographically approved.
Neumann, P. P., Hüllmann, D., Krentel, D., Kluge, M., Dzierliński, M., Lilienthal, A. J. & Bartholmai, M. (2019). Aerial-based gas tomography: from single beams to complex gas distributions. European Journal of Remote Sensing, 52(Sup. 3), 2-16
2019 (English). In: European Journal of Remote Sensing, ISSN 2279-7254, Vol. 52, no Sup. 3, p. 2-16. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we present and validate the concept of an autonomous aerial robot to reconstruct tomographic 2D slices of gas plumes in outdoor environments. Our platform, the so-called Unmanned Aerial Vehicle for Remote Gas Sensing (UAV-REGAS), combines a lightweight Tunable Diode Laser Absorption Spectroscopy (TDLAS) gas sensor with a 3-axis aerial stabilization gimbal for aiming on a versatile octocopter. While the TDLAS sensor provides integral gas concentration measurements, it measures neither the distance traveled by the laser diode's beam nor the distribution of gas along the optical path. Thus, we complement the set-up with a laser rangefinder and apply principles of Computed Tomography (CT) to create a model of the spatial gas distribution from a set of integral concentration measurements. To allow for a fundamental ground truth evaluation of the applied gas tomography algorithm, we set up a unique outdoor test environment based on two 3D ultrasonic anemometers and a distributed array of 10 infrared gas transmitters. We present results showing the system's performance characteristics and 2D plume reconstruction capabilities under realistic conditions. The proposed system can be deployed in scenarios that cannot be addressed by currently available robots and thus constitutes a significant step forward for the field of Mobile Robot Olfaction (MRO).
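The CT step amounts to inverting a linear beam model: each measurement is (approximately) a weighted sum of per-cell concentrations along the beam. Below is a textbook-style sketch using regularized least squares, offered as a generic baseline rather than the authors' reconstruction algorithm.

```python
import numpy as np

def reconstruct_slice(A, y, lam=1e-2):
    """A: (n_beams, n_cells) matrix of per-cell path lengths for each beam;
    y: (n_beams,) path-integrated concentration measurements.
    Returns per-cell concentrations via Tikhonov-regularized least squares."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```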

Place, publisher, year, edition, pages
London: Taylor & Francis, 2019
Keywords
Aerial robot olfaction, mobile robot olfaction, gas tomography, TDLAS, plume
National Category
Remote Sensing; Occupational Health and Environmental Health; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-76009 (URN)
10.1080/22797254.2019.1640078 (DOI)
000490523700001 (ISI)
Note

Funding Agencies:
German Federal Ministry for Economic Affairs and Energy (BMWi) within the ZIM program (KF2201091HM4)
BAM

Available from: 2019-09-02. Created: 2019-09-02. Last updated: 2020-02-05. Bibliographically approved.
Wiedemann, T., Lilienthal, A. J. & Shutin, D. (2019). Analysis of Model Mismatch Effects for a Model-based Gas Source Localization Strategy Incorporating Advection Knowledge. Sensors, 19(3), Article ID 520.
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 3, article id 520. Article in journal (Refereed). Published.
Abstract [en]

In disaster scenarios where toxic material is leaking, gas source localization is a common but also dangerous task. To reduce threats to human operators, we propose an intelligent sampling strategy that enables a multi-robot system to autonomously localize unknown gas sources based on gas concentration measurements. This paper discusses a probabilistic, model-based approach for incorporating physical process knowledge into the sampling strategy. We model the spatial and temporal dynamics of the gas dispersion with a partial differential equation that accounts for diffusion and advection effects. We consider the exact number of sources as unknown, but assume that gas sources are sparsely distributed. To incorporate the sparsity assumption we make use of sparse Bayesian learning techniques. Probabilistic modeling can account for possible model mismatch effects that would otherwise undermine the performance of deterministic methods. In the paper we evaluate the proposed gas source localization strategy in simulations using synthetic data. Compared to real-world experiments, a simulated environment provides us with ground truth data and the reproducibility necessary to gain deeper insight into the proposed strategy. The investigation shows that (i) the probabilistic model can compensate for imperfect modeling; (ii) the sparsity assumption significantly accelerates the source localization; and (iii) a priori advection knowledge is advantageous for source localization, although it only needs to have a certain level of accuracy. These findings will help to parameterize the proposed algorithm in future real-world applications.
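A standard form of such an advection-diffusion model is (the paper's exact parameterization may differ):

\[
\frac{\partial c(\mathbf{x},t)}{\partial t} = \kappa\,\nabla^{2} c(\mathbf{x},t) - \mathbf{v}(\mathbf{x}) \cdot \nabla c(\mathbf{x},t) + s(\mathbf{x}),
\]

where \(c\) is the gas concentration, \(\kappa\) the diffusion coefficient, \(\mathbf{v}\) the wind (advection) field, and \(s\) the source term, which the sparsity assumption constrains to be nonzero at only a few locations.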

Place, publisher, year, edition, pages
Basel, Switzerland: MDPI, 2019
Keywords
Robotic exploration, gas source localization, mobile robot olfaction, sparse Bayesian learning, multi-agent system, advection-diffusion model
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71964 (URN)
10.3390/s19030520 (DOI)
000459941200083 (ISI)
30691174 (PubMedID)
2-s2.0-85060572534 (Scopus ID)
Projects
SmokeBot (EC H2020, 645101)
Note

Funding Agencies:
European Commission (645101)
Valles Marineris Explorer initiative of DLR (German Aerospace Center) Space Administration

Available from: 2019-01-31. Created: 2019-01-31. Last updated: 2020-02-07. Bibliographically approved.
Lindner, H. Y., Hill, W., Hermansson, L. & Lilienthal, A. J. (2019). Cognitive load and compensatory movement in learning to use a multi-function hand. In: ISPO 17th World Congress: Basics to Bionics: Abstract Book. Paper presented at the ISPO 17th World Congress, Kobe, Japan, October 5-8, 2019 (pp. 52-52). ISPO
2019 (English). In: ISPO 17th World Congress: Basics to Bionics: Abstract Book, ISPO, 2019, p. 52-52. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

BACKGROUND: Recent technology provides increased dexterity in multi-function hands, with the potential to reduce compensatory body movements. However, it is challenging to learn how to operate a hand that has up to 36 grips. While the cognitive load required to use these hands is unknown, it is clear that if the cognitive load is too high, the user may stop using the multi-function hand or may not take full advantage of its advanced features.

AIM: The aim of this project was to compare cognitive load and compensatory movement in using a multi-function hand versus a conventional myo hand.

METHOD: An experienced prosthesis user was assessed using his conventional myo hand and an unfamiliar i-Limb Ultra hand, with two-site control and the same wrist for both prostheses. He was trained to use the power, lateral and pinch grips and then completed the SHAP test while wearing Tobii Pro 2 eye-tracking glasses. Pupil diameter (normal range: 2-4 mm in normal light) was used to indicate the amount of cognitive load [1]. The number of eye fixations on the prosthesis indicates the need for visual feedback during operation. Dartfish motion capture was used to track the maximum angles of shoulder abduction and elbow flexion.

RESULTS: Larger pupils were found with the i-Limb Ultra (2.6-5.6 mm) than with the conventional myo hand (2.4-3.5 mm) during the SHAP abstract light tests. The pupils dilated most often while changing grips, e.g. switching to the pinch grip for the tripod task (from 2.7 to 5.6 mm). After repeated training with the power and pinch grips, the maximum pupil diameter decreased from 5.6 to 3.3 mm. The number of eye fixations on the i-Limb Ultra (295 fixations) was also higher than on the conventional myo hand (139 fixations). Smaller shoulder abduction and elbow flexion were observed with the i-Limb Ultra (16.6°, 36.1°) than with the conventional myo hand (57°, 52.7°).

DISCUSSION AND CONCLUSION: Although it is cognitively demanding to learn to use a multi-function hand, this demand can be decreased with adequate prosthetic training. Our results suggest that using a multi-function hand enables a reduction of compensatory body movement, albeit at the cost of a higher cognitive load. Further research with more prosthesis users and other multi-function hands is needed to confirm the study findings.

REFERENCES [1] van der Wel P, van Steenbergen H. Psychon Bull Rev 2018; 25(6):2005-15.

ACKNOWLEDGEMENTS: This project was supported financially by Norrbacka-Eugenia Foundation, Promobilia Foundation and Örebro University.

Place, publisher, year, edition, pages
ISPO, 2019
Keywords
Eye tracking, upper limb prosthetics, cognitive load, compensatory movement
National Category
Occupational Therapy; Medical Ergonomics
Research subject
Rehabilitation Medicine; Occupational therapy
Identifiers
urn:nbn:se:oru:diva-78855 (URN)
Conference
ISPO 17th World Congress, Kobe, Japan, October 5-8, 2019
Available from: 2020-01-02. Created: 2020-01-02. Last updated: 2020-02-14. Bibliographically approved.
Lilienthal, A. J. & Schindler, M. (2019). Current Trends in Eye Tracking Research in Mathematics Education: A PME Survey. In: 43rd Annual Meeting of the International Group for the Psychology of Mathematics Education. Paper presented at the Annual Meeting of the International Group for the Psychology of Mathematics Education (PME-43), Pretoria, South Africa, July 7-12, 2019 (pp. 62-62). Vol. 4
2019 (English). In: 43rd Annual Meeting of the International Group for the Psychology of Mathematics Education, 2019, Vol. 4, p. 62-62. Conference paper, Oral presentation with published abstract (Other academic).
Abstract [en]

Eye tracking (ET) is a research method that is receiving growing interest in mathematics education research (MER). This paper aims to give a literature overview, focusing specifically on the evolution of interest in this technology, the ET equipment, and the analysis methods used in mathematics education. To capture the current state, we focus on papers published over the last ten years in the proceedings of PME, one of the primary conferences dedicated to MER. We identify trends in the interest, methodology, and methods of analysis used in the community, and discuss possible future developments.

Keywords
Eye Tracking, Mathematics Education Research, Survey, PME
National Category
Educational Sciences; Pedagogy
Research subject
Education
Identifiers
urn:nbn:se:oru:diva-79738 (URN)
Conference
Annual Meeting of the International Group for the Psychology of Mathematics Education (PME-43), Pretoria, South Africa, July 7 - 12, 2019
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2020-02-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326
