Publications (10 of 66)
Chadalavada, R. T., Andreasson, H., Schindler, M., Palm, R. & Lilienthal, A. (2020). Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction. Robotics and Computer-Integrated Manufacturing, 61, Article ID 101830.
Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction
2020 (English) In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830. Article in journal (Refereed) Published
Abstract [en]

Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human gaze for early obstacle avoidance.

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Human-robot interaction (HRI), Mobile robots, Intention communication, Eye-tracking, Intention recognition, Spatial augmented reality, Stimulated recall interview, Obstacle avoidance, Safety, Logistics
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78358 (URN) 10.1016/j.rcim.2019.101830 (DOI) 000496834800002 (ISI) 2-s2.0-85070732550 (Scopus ID)
Note

Funding Agencies:

KKS SIDUS project AIR: "Action and Intention Recognition in Human Interaction with Autonomous Systems", 20140220

H2020 project ILIAD: "Intra-Logistics with Integrated Automatic Deployment: Safe and Scalable Fleets in Shared Spaces", 732737

Available from: 2019-12-03 Created: 2019-12-03 Last updated: 2019-12-03. Bibliographically approved
Della Corte, B., Andreasson, H., Stoyanov, T. & Grisetti, G. (2019). Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation. IEEE Robotics and Automation Letters, 4(2), 902-909
Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 902-909. Article in journal (Refereed) Published
Abstract [en]

The ability to maintain and continuously update geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In addition, our pipeline monitors the evolution of the parameters during long-term operation to detect possible changes in the parameter set. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Calibration and Identification
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-72756 (URN) 10.1109/LRA.2019.2892992 (DOI) 000458182100012 (ISI)
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS) 

Available from: 2019-02-25 Created: 2019-02-25 Last updated: 2019-02-25. Bibliographically approved
Pecora, F., Andreasson, H., Mansouri, M. & Petkov, V. (2018). A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control, ICAPS. In: Mathijs de Weerdt, Sven Koenig, Gabriele Röger, Matthijs Spaan (Ed.), Proceedings of the International Conference on Automated Planning and Scheduling. Paper presented at the International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, June 24-29, 2018 (pp. 485-493). Delft, The Netherlands: AAAI Press, 2018-June, Article ID 139850.
A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control, ICAPS
2018 (English) In: Proceedings of the International Conference on Automated Planning and Scheduling / [ed] Mathijs de Weerdt, Sven Koenig, Gabriele Röger, Matthijs Spaan, Delft, The Netherlands: AAAI Press, 2018, Vol. 2018-June, p. 485-493, article id 139850. Conference paper, Published paper (Refereed)
Abstract [en]

Deploying fleets of autonomous robots in real-world applications requires addressing three problems: motion planning, coordination, and control. Application-specific features of the environment and robots often narrow down the possible motion planning and control methods that can be used. This paper proposes a lightweight coordination method that implements a high-level controller for a fleet of potentially heterogeneous robots. Very few assumptions are made on robot controllers, which are required only to be able to accept set point updates and to report their current state. The approach can be used with any motion planning method for computing kinematically-feasible paths. Coordination uses heuristics to update priorities while robots are in motion, and a simple model of robot dynamics to guarantee dynamic feasibility. The approach avoids a priori discretization of the environment or of robot paths, allowing robots to “follow each other” through critical sections. We validate the method formally and experimentally with different motion planners and robot controllers, in simulation and with real robots.

Place, publisher, year, edition, pages
Delft, The Netherlands: AAAI Press, 2018
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64721 (URN) 000492986200059 (ISI) 2-s2.0-85054990876 (Scopus ID)
Conference
International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, June 24-29, 2018
Projects
Semantic Robots; ILIAD
Funder
Knowledge Foundation, 20140033; EU, Horizon 2020, 732737; Vinnova
Available from: 2018-01-31 Created: 2018-01-31 Last updated: 2019-11-12. Bibliographically approved
Chadalavada, R. T., Andreasson, H., Schindler, M., Palm, R. & Lilienthal, A. (2018). Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses. In: Case K. & Thorvald P. (Ed.), Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden. Paper presented at the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, University of Skövde, Sweden, September 11–13, 2018 (pp. 253-258). Amsterdam, Netherlands: IOS Press
Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses
2018 (English) In: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden / [ed] Case K. & Thorvald P., Amsterdam, Netherlands: IOS Press, 2018, p. 253-258. Conference paper, Published paper (Refereed)
Abstract [en]

Robots in human co-habited environments need human-aware task and motion planning, ideally responding to people’s motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. This paper investigates the possibility of human-to-robot implicit intention transference solely from eye gaze data.  We present experiments in which humans wearing eye-tracking glasses encountered a small forklift truck under various conditions. We evaluate how the observed eye gaze patterns of the participants related to their navigation decisions. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human eye gaze for early obstacle avoidance.

Place, publisher, year, edition, pages
Amsterdam, Netherlands: IOS Press, 2018
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528 ; 8
Keywords
Human-Robot Interaction (HRI), Eye-tracking, Eye-Tracking Glasses, Navigation Intent, Implicit Intention Transference, Obstacle avoidance.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70706 (URN) 10.3233/978-1-61499-902-7-253 (DOI) 000462212700041 (ISI) 2-s2.0-85057390000 (Scopus ID) 978-1-61499-901-0 (ISBN) 978-1-61499-902-7 (ISBN)
Conference
16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, University of Skövde, Sweden, September 11–13, 2018
Projects
Action and Intention Recognition (AIR); ILIAD
Available from: 2018-12-12 Created: 2018-12-12 Last updated: 2019-04-04. Bibliographically approved
Adolfsson, D., Lowry, S. & Andreasson, H. (2018). Improving Localisation Accuracy using Submaps in warehouses. Paper presented at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018.
Improving Localisation Accuracy using Submaps in warehouses
2018 (English). Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

This paper presents a method for localisation in hybrid metric-topological maps built using only local information, that is, only measurements that were captured by the robot when it was in a nearby location. The motivation is that observations are typically range- and viewpoint-dependent, and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects a submap based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving the localisation and increasing performance by up to 40%.

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71844 (URN)
Conference
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018
Projects
Iliad
Available from: 2019-01-28 Created: 2019-01-28 Last updated: 2019-01-28. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments. IEEE Robotics and Automation Letters, 3(2), 957-964
Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 957-964. Article in journal (Refereed) Published
Abstract [en]

This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visual-based navigation, recognition, localization
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64652 (URN) 10.1109/LRA.2018.2793308 (DOI) 000424646100015 (ISI)
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation

Available from: 2018-01-30 Created: 2018-01-30 Last updated: 2018-02-28. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). LOGOS: Local geometric support for high-outlier spatial verification. Paper presented at the IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 7262-7269). IEEE Computer Society
LOGOS: Local geometric support for high-outlier spatial verification
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68446 (URN) 000446394505077 (ISI)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS)

Available from: 2018-08-13 Created: 2018-08-13 Last updated: 2018-10-22. Bibliographically approved
Andreasson, H., Adolfsson, D., Stoyanov, T., Magnusson, M. & Lilienthal, A. (2017). Incorporating Ego-motion Uncertainty Estimates in Range Data Registration. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 1389-1395). Institute of Electrical and Electronics Engineers (IEEE)
Incorporating Ego-motion Uncertainty Estimates in Range Data Registration
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper, Published paper (Refereed)
Abstract [en]

Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62803 (URN) 10.1109/IROS.2017.8202318 (DOI) 000426978201108 (ISI) 2-s2.0-85041958720 (Scopus ID) 978-1-5386-2682-5 (ISBN) 978-1-5386-2683-2 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
Semantic Robots; ILIAD
Funder
Knowledge Foundation; EU, Horizon 2020, 732737
Available from: 2017-11-24 Created: 2017-11-24 Last updated: 2018-04-09. Bibliographically approved
Magnusson, M., Kucner, T. P., Gholami Shahbandi, S., Andreasson, H. & Lilienthal, A. (2017). Semi-Supervised 3D Place Categorisation by Descriptor Clustering. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 620-625). Institute of Electrical and Electronics Engineers (IEEE)
Semi-Supervised 3D Place Categorisation by Descriptor Clustering
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 620-625. Conference paper, Published paper (Refereed)
Abstract [en]

Place categorisation, i.e., learning to group perception data into categories based on appearance, typically uses supervised learning and either visual or 2D range data.

This paper shows place categorisation from 3D data without any training phase. We show that, by leveraging the NDT histogram descriptor to compactly encode 3D point cloud appearance, in combination with standard clustering techniques, it is possible to classify public indoor data sets with accuracy comparable to, and sometimes better than, previous supervised training methods. We also demonstrate the effectiveness of this approach to outdoor data, with an added benefit of being able to hierarchically categorise places into sub-categories based on a user-selected threshold.

This technique relieves users of providing relevant training data, and only requires them to adjust the sensitivity to the number of place categories, and provide a semantic label to each category after the process is completed.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-61903 (URN) 10.1109/IROS.2017.8202216 (DOI) 000426978201006 (ISI) 2-s2.0-85041949592 (Scopus ID) 978-1-5386-2682-5 (ISBN) 978-1-5386-2683-2 (ISBN)
Conference
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
ILIAD
Funder
EU, Horizon 2020, 732737
Note

Iliad Project: http://iliad-project.eu

Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2018-04-09. Bibliographically approved
Mielle, M., Magnusson, M., Andreasson, H. & Lilienthal, A. J. (2017). SLAM auto-complete: completing a robot map using an emergency map. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). Paper presented at the 15th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2017), ShanghaiTech University, China, October 11-13, 2017 (pp. 35-40). IEEE conference proceedings, Article ID 8088137.
SLAM auto-complete: completing a robot map using an emergency map
2017 (English) In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, p. 35-40, article id 8088137. Conference paper, Published paper (Refereed)
Abstract [en]

In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness might be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping mainly focused on accurate maps or aerial images, such data are not always possible to get, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to get and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The graph-SLAM is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

We typically have more than 50% wrong correspondences in the settings studied in this paper, and the method we propose handles them correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still get the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information it does not represent, such as closed doors or new walls.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2017
Keywords
SLAM, robotics, graph, graph SLAM, emergency map, rescue, exploration, auto complete
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62057 (URN) 10.1109/SSRR.2017.8088137 (DOI) 000426991900007 (ISI) 2-s2.0-85040221684 (Scopus ID) 978-1-5386-3923-8 (ISBN) 978-1-5386-3924-5 (ISBN)
Conference
15th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2017), ShanghaiTech University, China, October 11-13, 2017
Projects
EU H2020 project SmokeBot (ICT-23-2014, 645101)
Funder
Knowledge Foundation, 20140220
Note

Funding Agency:

EU ICT-23-2014, 645101

Available from: 2017-11-08 Created: 2017-11-08 Last updated: 2019-10-02. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2953-1564