Publications (10 of 252)
Chadalavada, R. T., Andreasson, H., Schindler, M., Palm, R. & Lilienthal, A. J. (2020). Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction. Robotics and Computer-Integrated Manufacturing, 61, Article ID 101830.
Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction
2020 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830. Article in journal (Refereed). Published.
Abstract [en]

Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. 
We discuss implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Human-robot interaction (HRI), Mobile robots, Intention communication, Eye-tracking, Intention recognition, Spatial augmented reality, Stimulated recall interview, Obstacle avoidance, Safety, Logistics
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78358 (URN), 10.1016/j.rcim.2019.101830 (DOI), 000496834800002, 2-s2.0-85070732550 (Scopus ID)
Note

Funding Agencies:

KKS SIDUS project AIR: "Action and Intention Recognition in Human Interaction with Autonomous Systems"  20140220

H2020 project ILIAD: "Intra-Logistics with Integrated Automatic Deployment: Safe and Scalable Fleets in Shared Spaces"  732737

Available from: 2019-12-03. Created: 2019-12-03. Last updated: 2020-02-06. Bibliographically approved.
Kucner, T. P., Lilienthal, A., Magnusson, M., Palmieri, L. & Swaminathan, C. S. (2020). Closing Remarks. In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots: (pp. 143-151). Springer
Closing Remarks
2020 (English). In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots, Springer, 2020, p. 143-151. Chapter in book (Refereed).
Abstract [en]

Dynamics is an inherent feature of reality. In spite of that, the domain of maps of dynamics has not yet received much attention. In this book, we present solutions for building maps of dynamics and outline how to make use of them for motion planning. In this chapter, we discuss related research questions that as of yet remain to be answered, and derive possible future research directions.

Place, publisher, year, edition, pages
Springer, 2020
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 40
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-81667 (URN), 10.1007/978-3-030-41808-3_6 (DOI), 2-s2.0-85083964746 (Scopus ID), 978-3-030-41807-6 (ISBN), 978-3-030-41808-3 (ISBN)
Available from: 2020-05-13. Created: 2020-05-13. Last updated: 2020-05-13. Bibliographically approved.
Burgues, J., Hernandez Bennetts, V., Lilienthal, A. J. & Marco, S. (2020). Gas Distribution Mapping and Source Localization Using a 3D Grid of Metal Oxide Semiconductor Sensors. Sensors and actuators. B, Chemical, 304, Article ID 127309.
Gas Distribution Mapping and Source Localization Using a 3D Grid of Metal Oxide Semiconductor Sensors
2020 (English). In: Sensors and actuators. B, Chemical, ISSN 0925-4005, E-ISSN 1873-3077, Vol. 304, article id 127309. Article in journal (Refereed). Published.
Abstract [en]

The difficulty to obtain ground truth (i.e. empirical evidence) about how a gas disperses in an environment is one of the major hurdles in the field of mobile robotic olfaction (MRO), impairing our ability to develop efficient gas source localization strategies and to validate gas distribution maps produced by autonomous mobile robots. Previous ground truth measurements of gas dispersion have been mostly based on expensive tracer optical methods or 2D chemical sensor grids deployed only at ground level. With the ever-increasing trend towards gas-sensitive aerial robots, 3D measurements of gas dispersion become necessary to characterize the environment these platforms can explore. This paper presents ten different experiments performed with a 3D grid of 27 metal oxide semiconductor (MOX) sensors to visualize the temporal evolution of gas distribution produced by an evaporating ethanol source placed at different locations in an office room, including variations in height, release rate and air flow. We also studied which features of the MOX sensor signals are optimal for predicting the source location, considering different lengths of the measurement window. We found strongly time-varying and counter-intuitive gas distribution patterns that disprove some assumptions commonly held in the MRO field, such as that heavy gases disperse along ground level. Correspondingly, ground-level gas distributions were rarely useful for localizing the gas source and elevated measurements were much more informative. We make the dataset and the code publicly available to enable the community to develop, validate, and compare new approaches related to gas sensing in complex environments.
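The grid-based source localization described in the abstract can be illustrated with a toy estimator (all data, the feature choice and the function name here are invented for illustration; the paper evaluates several features and measurement-window lengths, which this sketch does not reproduce):

```python
import numpy as np

def localize_source(positions, responses):
    """Estimate a gas source location from a grid of MOX sensor readings.

    positions: (N, 3) array of sensor coordinates (metres).
    responses: (N, T) array of sensor responses over a measurement window.

    A deliberately simple feature -- the mean response per sensor over the
    window -- is used; the source is declared at the strongest sensor.
    """
    mean_response = responses.mean(axis=1)          # one feature per sensor
    return positions[int(np.argmax(mean_response))]

# Toy 3x3x3 grid of 27 sensors, matching the grid size used in the paper.
grid = np.array([(x, y, z) for x in range(3) for y in range(3) for z in range(3)],
                dtype=float)
rng = np.random.default_rng(0)
# Simulated responses: strongest at the sensor nearest a source at (2, 2, 2).
dists = np.linalg.norm(grid - np.array([2.0, 2.0, 2.0]), axis=1)
resp = np.exp(-dists)[:, None] + 0.01 * rng.standard_normal((27, 50))
print(localize_source(grid, resp))   # sensor nearest the simulated source
```

The paper's finding that elevated measurements are more informative than ground-level ones would show up here as the argmax landing on an elevated grid node.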

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Mobile robotic olfaction, Metal oxide gas sensors, Signal processing, Sensor networks, Gas source localization, Gas distribution mapping
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78709 (URN), 10.1016/j.snb.2019.127309 (DOI), 000500702500075, 2-s2.0-85075330402 (Scopus ID)
Note

Funding Agencies:

Spanish MINECO program  BES-2015-071698 TEC2014-59229-R

H2020-ICT by the European Commission  645101

Available from: 2019-12-19. Created: 2019-12-19. Last updated: 2020-02-05. Bibliographically approved.
Hou, H.-R., Lilienthal, A. J. & Meng, Q.-H. (2020). Gas Source Declaration with Tetrahedral Sensing Geometries and Median Value Filtering Extreme Learning Machine. IEEE Access, 8, 7227-7235, Article ID 8945323.
Gas Source Declaration with Tetrahedral Sensing Geometries and Median Value Filtering Extreme Learning Machine
2020 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 7227-7235, article id 8945323. Article in journal (Refereed). Published.
Abstract [en]

Gas source localization (including gas source declaration) is critical for environmental monitoring, pollution control and chemical safety. In this paper we approach the gas source declaration problem by constructing a tetrahedron, each vertex of which consists of a gas sensor and a three-dimensional (3D) anemometer. With this setup, the space sampled around a gas source can be divided into two categories, i.e. inside (“source in”) and outside (“source out”) the tetrahedron, posing gas source declaration as a classification problem. For the declaration of the “source in” or “source out” cases, we propose to directly take raw gas concentration and wind measurement data as features, and apply a median value filtering based extreme learning machine (M-ELM) method. Our experimental results show the efficacy of the proposed method, yielding accuracies of 93.2% and 100% for gas source declaration in the regular and irregular tetrahedron experiments, respectively. These results are better than those of the ELM-MFC (mass flux criterion) and other variants of ELM algorithms.
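The M-ELM pipeline in the abstract, median filtering of raw sensor signals followed by an extreme learning machine classifier, can be sketched roughly as follows (a minimal sketch on synthetic data; the network size, filter window, class structure and all names are invented, not the authors' configuration):

```python
import numpy as np

def median_filter(x, k=5):
    """Sliding-window median along the last axis (k odd), edge-padded."""
    pad = k // 2
    xp = np.pad(x, [(0, 0)] * (x.ndim - 1) + [(pad, pad)], mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(xp, k, axis=-1)
    return np.median(win, axis=-1)

class ELM:
    """Minimal extreme learning machine: a random hidden layer followed by
    a least-squares fit of the output weights (no iterative training)."""
    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # random feature map
        self.beta = np.linalg.pinv(H) @ np.eye(2)[y]  # one-hot least squares
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Synthetic "source out" (0) vs "source in" (1) windows of 20 raw samples,
# corrupted by occasional spikes that the median filter suppresses.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)
raw = labels[:, None] * 2.0 + 0.3 * rng.standard_normal((200, 20))
raw[rng.random((200, 20)) < 0.05] += 10.0   # spike noise
X = median_filter(raw, k=5)
acc = (ELM().fit(X, labels).predict(X) == labels).mean()
```

The median filter plays the role of the "median value filtering" preprocessing: it removes transient sensor spikes before the classifier sees the features.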

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Gas source declaration, tetrahedron, gas concentration measurement, wind information, extreme learning machine, median value filtering
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79745 (URN), 10.1109/ACCESS.2019.2963059 (DOI), 000525422700039, 2-s2.0-85078246836 (Scopus ID)
Note

Funding Agencies:

National Natural Science Foundation of China  61573253

National Key Research and Development Program of China  2017YFC0306200

Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2020-04-30. Bibliographically approved.
Kucner, T. P., Lilienthal, A., Magnusson, M., Palmieri, L. & Swaminathan, C. S. (2020). Introduction. In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots: (pp. 1-13). Springer
Introduction
2020 (English). In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots, Springer, 2020, p. 1-13. Chapter in book (Refereed).
Abstract [en]

Change and motion are inherent features of reality. The ability to recognise patterns governing changes has allowed humans to thrive in a dynamic reality. Similarly, dynamics awareness can also improve the performance of robots. Dynamics awareness is an umbrella term covering a broad spectrum of concepts. In this chapter, we present the key aspects of dynamics awareness. We introduce two motivating examples presenting the challenges for robots operating in a dynamic environment. We discuss the benefits of using spatial models of dynamics and analyse the challenges of building such models.

Place, publisher, year, edition, pages
Springer, 2020
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 40
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-81665 (URN), 10.1007/978-3-030-41808-3_1 (DOI), 2-s2.0-85083992773 (Scopus ID), 978-3-030-41807-6 (ISBN), 978-3-030-41808-3 (ISBN)
Available from: 2020-05-13. Created: 2020-05-13. Last updated: 2020-05-13. Bibliographically approved.
Kucner, T. P., Lilienthal, A., Magnusson, M., Palmieri, L. & Swaminathan, C. S. (2020). Maps of Dynamics. In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots: (pp. 15-32). Springer
Maps of Dynamics
2020 (English). In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots, Springer, 2020, p. 15-32. Chapter in book (Refereed).
Abstract [en]

The key focus of this book is building maps of dynamics and using them for motion planning. In this chapter, we present a categorisation and overview of different types of maps of dynamics. Furthermore, we give an overview of approaches to motion planning in dynamic environments, with a focus on motion planning over maps of dynamics.

Place, publisher, year, edition, pages
Springer, 2020
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 40
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-81670 (URN), 10.1007/978-3-030-41808-3_2 (DOI), 2-s2.0-85083956964 (Scopus ID), 978-3-030-41807-6 (ISBN), 978-3-030-41808-3 (ISBN)
Available from: 2020-05-13. Created: 2020-05-13. Last updated: 2020-05-13. Bibliographically approved.
Kucner, T. P., Lilienthal, A., Magnusson, M., Palmieri, L. & Swaminathan, C. S. (2020). Modelling Motion Patterns with Circular-Linear Flow Field Maps. In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots: (pp. 65-113). Springer
Modelling Motion Patterns with Circular-Linear Flow Field Maps
2020 (English). In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots, Springer, 2020, p. 65-113. Chapter in book (Refereed).
Abstract [en]

The shared feature of the flow of discrete objects and continuous media is that they both can be represented as velocity vectors encapsulating direction and speed of motion. In this chapter, we present a method for modelling the flow of discrete objects and continuous media as continuous Gaussian mixture fields. The proposed model associates to each part of the environment a Gaussian mixture model describing the local motion patterns. We also present a learning method, designed to build the model from a set of sparse, noisy and incomplete observations. 
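A heavily simplified sketch of per-location circular-linear flow statistics follows. The chapter's model is a full Gaussian mixture per location; this sketch keeps only a single component, the circular mean direction and the mean speed, and all names and data are illustrative:

```python
import numpy as np

def flow_field(cells, observations):
    """Per-cell summary of velocity observations (direction theta, speed s).

    observations maps a cell to a list of (theta, speed) pairs; theta is an
    angle in radians, speed a non-negative scalar. The direction component
    is circular, so its mean must be taken via the vector average rather
    than the arithmetic mean (0 and 2*pi are the same direction).
    """
    field = {}
    for c in cells:
        obs = observations.get(c, [])
        if not obs:
            continue   # unobserved cells carry no flow estimate
        th = np.array([o[0] for o in obs])
        s = np.array([o[1] for o in obs])
        mean_dir = np.arctan2(np.sin(th).mean(), np.cos(th).mean())
        field[c] = (mean_dir, s.mean())
    return field

obs = {(0, 0): [(0.1, 1.0), (-0.1, 1.2), (0.0, 0.8)],       # roughly eastward
       (1, 0): [(np.pi / 2, 0.5), (np.pi / 2 + 0.2, 0.7)]}  # roughly northward
ff = flow_field([(0, 0), (1, 0)], obs)
```

Replacing the single mean with a mixture of several (direction, speed) components per cell, fitted from sparse and noisy data, is what the chapter's learning method adds on top of this reduction.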

Place, publisher, year, edition, pages
Springer, 2020
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 40
National Category
Fluid Mechanics and Acoustics
Identifiers
urn:nbn:se:oru:diva-81664 (URN), 10.1007/978-3-030-41808-3_4 (DOI), 2-s2.0-85084011370 (Scopus ID), 978-3-030-41807-6 (ISBN), 978-3-030-41808-3 (ISBN)
Available from: 2020-05-12. Created: 2020-05-12. Last updated: 2020-05-12. Bibliographically approved.
Kucner, T. P., Lilienthal, A., Magnusson, M., Palmieri, L. & Swaminathan, C. S. (2020). Modelling Motion Patterns with Conditional Transition Map. In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots: (pp. 33-64). Springer
Modelling Motion Patterns with Conditional Transition Map
2020 (English). In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots, Springer, 2020, p. 33-64. Chapter in book (Refereed).
Abstract [en]

The key idea of modelling the flow of discrete objects is to capture the way they move through the environment. One method to capture the flow is to observe changes in occupancy caused by the motion of discrete objects. In this chapter, we present a method to model and learn occupancy shifts caused by an object moving through the environment. The key idea is to observe temporal changes in the occupancy of adjacent cells and, based on the temporal offset, infer the direction of the occupancy flow.
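The occupancy-shift idea can be illustrated with a toy grid. This is a simplified sketch: it counts occupancy leaving a cell and appearing in a neighbour between consecutive snapshots, whereas the chapter's method also has to cope with noise and partial observability; names and data are invented:

```python
import numpy as np
from collections import defaultdict

NEIGHBOURS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def learn_transitions(grids):
    """Estimate per-cell exit-direction probabilities from a sequence of
    binary occupancy grids, by matching occupancy that disappears from a
    cell with occupancy that simultaneously appears in an adjacent cell."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, curr in zip(grids, grids[1:]):
        rows, cols = prev.shape
        for r in range(rows):
            for c in range(cols):
                if prev[r, c] and not curr[r, c]:          # occupancy left (r, c)
                    for d, (dr, dc) in NEIGHBOURS.items():
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and curr[nr, nc] and not prev[nr, nc]):
                            counts[(r, c)][d] += 1         # ...and appeared next door
    return {cell: {d: n / sum(ds.values()) for d, n in ds.items()}
            for cell, ds in counts.items()}

# An object moving east along row 1 of a 3x4 grid, one cell per frame.
frames = [np.zeros((3, 4), dtype=bool) for _ in range(4)]
for t in range(4):
    frames[t][1, t] = True
ctm = learn_transitions(frames)
```

Each learned entry is a conditional distribution over exit directions for a cell, which is the shape of information a conditional transition map stores.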

Place, publisher, year, edition, pages
Springer, 2020
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 40
National Category
Oceanography, Hydrology and Water Resources
Identifiers
urn:nbn:se:oru:diva-81669 (URN), 10.1007/978-3-030-41808-3_3 (DOI), 2-s2.0-85083960053 (Scopus ID), 978-3-030-41807-6 (ISBN), 978-3-030-41808-3 (ISBN)
Available from: 2020-05-13. Created: 2020-05-13. Last updated: 2020-05-13. Bibliographically approved.
Kucner, T. P., Lilienthal, A., Magnusson, M., Palmieri, L. & Swaminathan, C. S. (2020). Motion Planning Using MoDs. In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots: (pp. 115-141). Springer
Motion Planning Using MoDs
2020 (English). In: Probabilistic Mapping of Spatial Motion Patterns for Mobile Robots, Springer, 2020, p. 115-141. Chapter in book (Refereed).
Abstract [en]

Maps of dynamics can be beneficial for motion planning. Information about motion patterns in the environment can lead to finding flow-aware paths, allowing robots to align better to the expected motion: either of other agents in the environment or the flow of air or another medium. The key idea of flow-aware motion planning is to include adherence to the flow represented in the MoD into the motion planning algorithm’s sub-units (i.e. cost function, sampling mechanism), thereby biasing the motion planner into obeying local and implicit traffic rules. 
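The cost-function bias described in the abstract can be sketched as a Dijkstra planner whose edge cost penalizes misalignment with the mapped flow (the penalty form, the weight `lam` and all names are illustrative assumptions, not the book's exact formulation):

```python
import heapq
import math

def flow_cost(step, flow, lam=1.0):
    """Cost of taking `step` (dx, dy) in a cell whose MoD flow direction is
    `flow` (radians): unit distance plus a misalignment penalty."""
    step_dir = math.atan2(step[1], step[0])
    return 1.0 + lam * (1.0 - math.cos(step_dir - flow))

def plan(start, goal, flow_map, lam=1.0):
    """Dijkstra over a 4-connected grid; flow_map maps cell -> flow angle."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue   # stale queue entry
        for s in steps:
            v = (u[0] + s[0], u[1] + s[1])
            if v not in flow_map:
                continue
            nd = d + flow_cost(s, flow_map[u], lam)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A 1x3 corridor whose flow points east (angle 0): planning west-to-east
# accrues no penalty, so the planner follows the flow.
flow_map = {(x, 0): 0.0 for x in range(3)}
east = plan((0, 0), (2, 0), flow_map)
```

With a larger map, raising `lam` makes the planner detour to stay aligned with the flow, which is the "obeying local and implicit traffic rules" effect the abstract describes.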

Place, publisher, year, edition, pages
Springer, 2020
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 40
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-81668 (URN), 10.1007/978-3-030-41808-3_5 (DOI), 2-s2.0-85083963960 (Scopus ID), 978-3-030-41807-6 (ISBN), 978-3-030-41808-3 (ISBN)
Available from: 2020-05-13. Created: 2020-05-13. Last updated: 2020-05-13. Bibliographically approved.
Hoang, D.-C., Lilienthal, A. & Stoyanov, T. (2020). Panoptic 3D Mapping and Object Pose Estimation Using Adaptively Weighted Semantic Information. IEEE Robotics and Automation Letters, 5(2), 1962-1969
Panoptic 3D Mapping and Object Pose Estimation Using Adaptively Weighted Semantic Information
2020 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, p. 1962-1969. Article in journal (Refereed). Published.
Abstract [en]

We present a system capable of reconstructing highly detailed object-level models and estimating the 6D pose of objects by means of an RGB-D camera. In this work, we integrate deep-learning-based semantic segmentation, instance segmentation, and 6D object pose estimation into a state-of-the-art RGB-D mapping system. We leverage the pipeline of ElasticFusion as a backbone and propose modifications of the registration cost function to make full use of the semantic class labels in the process. The proposed objective function features tunable weights for the depth, appearance, and semantic information channels, which are learned from data. A fast semantic segmentation and registration weight prediction convolutional neural network (Fast-RGBD-SSWP) suited to efficient computation is introduced. In addition, our approach explores performing 6D object pose estimation from multiple viewpoints supported by the high-quality reconstruction system. The developed method has been verified through experimental validation on the YCB-Video dataset and a dataset of warehouse objects. Our results confirm that the proposed system performs favorably in terms of surface reconstruction, segmentation quality, and accurate object pose estimation in comparison to other state-of-the-art systems. Our code and video are available at https://sites.google.com/view/panoptic-mope.
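The weighted multi-channel objective can be written as E = w_d·E_depth + w_a·E_appearance + w_s·E_semantic. A minimal sketch follows (fixed example weights and invented names; in the paper the weights are predicted per frame by the Fast-RGBD-SSWP network and the residuals come from the modified ElasticFusion registration):

```python
import numpy as np

def registration_cost(res_depth, res_appearance, res_semantic, weights):
    """Combined registration objective: a weighted sum of squared per-pixel
    residuals over the depth, appearance and semantic channels."""
    w_d, w_a, w_s = weights
    return (w_d * np.sum(res_depth ** 2)
            + w_a * np.sum(res_appearance ** 2)
            + w_s * np.sum(res_semantic ** 2))

# Perfect alignment (all residuals zero) costs nothing; any channel's
# residual raises the cost in proportion to its weight.
z = np.zeros(4)
perfect = registration_cost(z, z, z, (0.5, 0.3, 0.2))
depth_only = registration_cost(np.ones(4), z, z, (1.0, 0.0, 0.0))  # 4.0
```

Making `weights` image-dependent rather than fixed is precisely the "adaptively weighted" part of the paper's title.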

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
RGB-D perception, object detection, segmentation and categorization, mapping
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-81423 (URN), 10.1109/LRA.2020.2970682 (DOI), 000526520500038, 2-s2.0-85079819725 (Scopus ID)
Funder
EU, Horizon 2020
Available from: 2020-04-30. Created: 2020-04-30. Last updated: 2020-04-30. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326