Publications (10 of 65)
Della Corte, B., Andreasson, H., Stoyanov, T. & Grisetti, G. (2019). Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation. IEEE Robotics and Automation Letters, 4(2), 902-909
Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 902-909. Article in journal (Refereed) Published
Abstract [en]

The ability to maintain and continuously update the geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform's kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain more accurate parameter estimates. In addition, our pipeline monitors the evolution of the parameters during long-term operation to detect possible changes in the parameter set. Experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.
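
As an illustration only (not the authors' released framework), the following is a minimal sketch of the kind of joint least-squares problem the abstract describes, assuming a differential-drive platform, a single sensor reporting planar motion increments, and simplified motion and interpolation models:

```python
# Illustrative sketch only -- NOT the authors' released framework.
# Casts calibration as one least-squares problem over kinematic
# parameters (wheel radii, baseline), sensor extrinsics, and a
# time delay, for a differential-drive robot and one planar sensor.
import numpy as np
from scipy.optimize import least_squares

def se2(x, y, th):
    """Homogeneous 2D transform."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def residuals(params, odo, sensor, t):
    # params = [r_l, r_r, baseline, sx, sy, syaw, delay]
    r_l, r_r, b, sx, sy, syaw, delay = params
    S = se2(sx, sy, syaw)                    # sensor extrinsics
    res = []
    for k in range(len(odo)):
        tl, tr = odo[k]                      # encoder ticks per step
        d = 0.5 * (tl * r_l + tr * r_r)      # travelled arc length
        dth = (tr * r_r - tl * r_l) / b      # heading change
        U = se2(d * np.cos(dth / 2), d * np.sin(dth / 2), dth)
        P = np.linalg.inv(S) @ U @ S         # predicted sensor motion
        # Measured sensor motion shifted by the delay estimate via
        # linear interpolation over the sensor timestamps.
        m = np.array([np.interp(t[k] + delay, t, sensor[:, i])
                      for i in range(3)])
        E = np.linalg.inv(P) @ se2(*m)       # alignment error
        res += [E[0, 2], E[1, 2], np.arctan2(E[1, 0], E[0, 0])]
    return np.array(res)

# est = least_squares(residuals, x0, args=(odo, sensor, times)).x
```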

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Calibration and Identification
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-72756 (URN)
10.1109/LRA.2019.2892992 (DOI)
000458182100012 (ISI)
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS) 

Available from: 2019-02-25 Created: 2019-02-25 Last updated: 2019-02-25. Bibliographically approved
Pecora, F., Andreasson, H., Mansouri, M. & Petkov, V. (2018). A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control. Paper presented at the International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, June 24-29, 2018.
A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control
2018 (English) Conference paper, Published paper (Refereed)
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64721 (URN)
Conference
International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, June 24-29, 2018
Projects
Semantic Robots, ILIAD
Funder
Knowledge Foundation, 20140033; EU, Horizon 2020, 732737
Available from: 2018-01-31 Created: 2018-01-31 Last updated: 2018-06-11. Bibliographically approved
Chadalavada, R. T., Andreasson, H., Schindler, M., Palm, R. & Lilienthal, A. (2018). Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses. In: Case, K. & Thorvald, P. (Eds.), Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden. Paper presented at the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, University of Skövde, Sweden, September 11–13, 2018 (pp. 253-258). Amsterdam, Netherlands: IOS Press
Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses
2018 (English) In: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden / [ed] Case, K. & Thorvald, P., Amsterdam, Netherlands: IOS Press, 2018, p. 253-258. Conference paper, Published paper (Refereed)
Abstract [en]

Robots in human co-habited environments need human-aware task and motion planning, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what the trajectory and head pose of a person reveal. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. This paper investigates the possibility of human-to-robot implicit intention transference solely from eye gaze data. We present experiments in which humans wearing eye-tracking glasses encountered a small forklift truck under various conditions. We evaluate how the observed eye gaze patterns of the participants related to their navigation decisions. Our analysis shows that people primarily gazed at the side of the robot they ultimately decided to pass by. We discuss the implications of these results and relate them to a control approach that uses human eye gaze for early obstacle avoidance.
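
As a toy sketch only (not the study's actual analysis pipeline), one way to quantify the gaze-side finding above, assuming gaze points are already expressed in a robot-centred frame where y > 0 is the robot's left:

```python
# Toy analysis sketch, not the study's pipeline: check whether a
# participant's dominant gaze side matches the side they passed on.
# Assumes gaze samples are in a robot-centred frame (y > 0 = left).
import numpy as np

def dominant_gaze_side(gaze_xy):
    """Return 'left' or 'right' by majority vote over gaze samples."""
    lefts = np.sum(np.asarray(gaze_xy)[:, 1] > 0.0)
    return "left" if lefts > len(gaze_xy) / 2 else "right"

def gaze_pass_agreement(trials):
    """trials: list of (gaze_xy, pass_side) pairs; returns the
    fraction of trials where gaze side predicted the pass side."""
    hits = sum(dominant_gaze_side(g) == side for g, side in trials)
    return hits / len(trials)
```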

Place, publisher, year, edition, pages
Amsterdam, Netherlands: IOS Press, 2018
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528
Keywords
Human-Robot Interaction (HRI), Eye-tracking, Eye-Tracking Glasses, Navigation Intent, Implicit Intention Transference, Obstacle avoidance.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70706 (URN)
10.3233/978-1-61499-902-7-253 (DOI)
2-s2.0-85057390000 (Scopus ID)
978-1-61499-901-0 (ISBN)
978-1-61499-902-7 (ISBN)
Conference
16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, University of Skövde, Sweden, September 11–13, 2018
Projects
Action and Intention Recognition (AIR), ILIAD
Available from: 2018-12-12 Created: 2018-12-12 Last updated: 2018-12-18. Bibliographically approved
Adolfsson, D., Lowry, S. & Andreasson, H. (2018). Improving Localisation Accuracy using Submaps in warehouses. Paper presented at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018.
Improving Localisation Accuracy using Submaps in warehouses
2018 (English) Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

This paper presents a method for localisation in hybrid metric-topological maps built using only local information, that is, only measurements that were captured by the robot when it was in a nearby location. The motivation is that observations are typically range- and viewpoint-dependent, and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects submaps based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving localisation and increasing performance by up to 40%.
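
For illustration only, a minimal sketch of a submap-selection rule of the kind described above, assuming each submap stores the poses from which it was updated; the paper's exact selection criterion may differ:

```python
# Illustrative sketch only: pick the submap most likely to describe
# the robot's current surroundings, assuming each submap records the
# 2D positions from which it was updated (hypothetical field name
# "update_poses"). The paper's exact rule may differ.
import numpy as np

def select_submap(submaps, robot_xy, radius=5.0):
    """Score each submap by how many of its updates came from near
    the current robot position; return the index of the best one."""
    def score(update_poses):
        d = np.linalg.norm(np.asarray(update_poses) - robot_xy, axis=1)
        return np.sum(d < radius)
    return int(np.argmax([score(s["update_poses"]) for s in submaps]))
```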

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71844 (URN)
Conference
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018
Projects
ILIAD
Available from: 2019-01-28 Created: 2019-01-28 Last updated: 2019-01-28. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments. IEEE Robotics and Automation Letters, 3(2), 957-964
Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 957-964. Article in journal (Refereed) Published
Abstract [en]

This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
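If it helps to see the pipeline shape, here is a minimal sketch of VLAD aggregation followed by sign binarisation and Hamming matching, assuming precomputed local descriptors, a learned visual vocabulary, and a random projection; this is not the paper's exact pipeline:

```python
# Minimal sketch, not the paper's exact pipeline. Assumes local
# descriptors and a visual vocabulary are given; the projection
# matrix is a plain random-projection assumption.
import numpy as np

def vlad(descriptors, vocab):
    """Aggregate local descriptors into a normalised VLAD vector."""
    # Assign each descriptor to its nearest vocabulary centre.
    assign = np.argmin(
        np.linalg.norm(descriptors[:, None] - vocab[None], axis=2), axis=1)
    v = np.zeros_like(vocab)
    for k in range(len(vocab)):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - vocab[k]).sum(axis=0)  # residual sum
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def binary_signature(v, proj):
    """Project to a low dimension (e.g. 256) and keep only the signs,
    giving a compact binary place signature."""
    return proj @ v > 0

def hamming(a, b):
    """Place matching distance between two binary signatures."""
    return np.count_nonzero(a != b)

# proj = np.random.randn(256, vocab.size)   # 256 bits = 32 bytes/image
```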

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visual-based navigation, recognition, localization
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64652 (URN)
10.1109/LRA.2018.2793308 (DOI)
000424646100015 (ISI)
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation

Available from: 2018-01-30 Created: 2018-01-30 Last updated: 2018-02-28. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). LOGOS: Local geometric support for high-outlier spatial verification. Paper presented at the IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 7262-7269). IEEE Computer Society
LOGOS: Local geometric support for high-outlier spatial verification
2018 (English) Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.
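
As a rough sketch of the underlying idea only (LOGOS's actual scoring is more involved), neighbourhood agreement on scale and orientation change can flag likely inliers:

```python
# Rough sketch of the idea only -- LOGOS's actual scoring is more
# involved. A putative match is supported if its spatial neighbours
# agree on the relative scale and orientation change between images.
import numpy as np

def support(matches, k=10, scale_tol=0.3, angle_tol=0.35):
    """matches: array with rows [x, y, log_scale_change, angle_change].
    Returns a boolean inlier mask."""
    pts = matches[:, :2]
    inlier = np.zeros(len(matches), dtype=bool)
    for i, m in enumerate(matches):
        d = np.linalg.norm(pts - m[:2], axis=1)
        nbrs = matches[np.argsort(d)[1:k + 1]]   # k nearest neighbours
        dang = np.angle(np.exp(1j * (nbrs[:, 3] - m[3])))  # wrapped
        ok = (np.abs(nbrs[:, 2] - m[2]) < scale_tol) & \
             (np.abs(dang) < angle_tol)
        inlier[i] = ok.mean() > 0.5              # majority support
    return inlier
```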

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68446 (URN)
000446394505077 (ISI)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS)

Available from: 2018-08-13 Created: 2018-08-13 Last updated: 2018-10-22. Bibliographically approved
Andreasson, H., Adolfsson, D., Stoyanov, T., Magnusson, M. & Lilienthal, A. (2017). Incorporating Ego-motion Uncertainty Estimates in Range Data Registration. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 1389-1395). Institute of Electrical and Electronics Engineers (IEEE)
Incorporating Ego-motion Uncertainty Estimates in Range Data Registration
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper, Published paper (Refereed)
Abstract [en]

Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
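
As a conceptual sketch only (not the paper's implementation), one way to fold an ego-motion prior into a registration objective is a Mahalanobis penalty on deviation from the odometry estimate, here on a toy 2D point-to-point objective with known correspondences:

```python
# Conceptual sketch, not the paper's implementation: augment a toy
# 2D point-to-point registration objective with a Mahalanobis penalty
# that keeps the estimate close to the odometry prior, weighted by
# the ego-motion covariance.
import numpy as np
from scipy.optimize import minimize

def objective(pose, src, tgt, odo_pose, odo_cov):
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    moved = src @ R.T + [x, y]
    point_term = np.sum((moved - tgt) ** 2)      # assumes known pairs
    e = pose - odo_pose                          # deviation from odometry
    prior_term = e @ np.linalg.inv(odo_cov) @ e  # Mahalanobis distance
    return point_term + prior_term

# pose = minimize(objective, odo_pose,
#                 args=(src, tgt, odo_pose, odo_cov)).x
```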

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62803 (URN)
10.1109/IROS.2017.8202318 (DOI)
000426978201108 (ISI)
2-s2.0-85041958720 (Scopus ID)
978-1-5386-2682-5 (ISBN)
978-1-5386-2683-2 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
Semantic Robots, ILIAD
Funder
Knowledge Foundation; EU, Horizon 2020, 732737
Available from: 2017-11-24 Created: 2017-11-24 Last updated: 2018-04-09. Bibliographically approved
Magnusson, M., Kucner, T. P., Gholami Shahbandi, S., Andreasson, H. & Lilienthal, A. (2017). Semi-Supervised 3D Place Categorisation by Descriptor Clustering. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 620-625). Institute of Electrical and Electronics Engineers (IEEE)
Semi-Supervised 3D Place Categorisation by Descriptor Clustering
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 620-625. Conference paper, Published paper (Refereed)
Abstract [en]

Place categorisation, i.e., learning to group perception data into categories based on appearance, typically uses supervised learning and either visual or 2D range data.

This paper shows place categorisation from 3D data without any training phase. We show that, by leveraging the NDT histogram descriptor to compactly encode 3D point cloud appearance, in combination with standard clustering techniques, it is possible to classify public indoor data sets with accuracy comparable to, and sometimes better than, previous supervised training methods. We also demonstrate the effectiveness of this approach on outdoor data, with the added benefit of being able to hierarchically categorise places into sub-categories based on a user-selected threshold.

This technique relieves users of providing relevant training data, requiring them only to adjust the sensitivity that controls the number of place categories, and to provide a semantic label for each category after the process is completed.
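
To make the clustering stage concrete, here is a minimal sketch assuming one fixed-length appearance descriptor per 3D scan is already computed (the paper uses NDT histograms; the clustering choice below is a generic stand-in):

```python
# Minimal sketch of the clustering stage, assuming one appearance
# descriptor per 3D scan is given (the paper uses NDT histograms;
# any fixed-length descriptor works for this sketch).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def categorise(descriptors, distance_threshold):
    """Hierarchically cluster scans into place categories; the
    user-selected threshold controls how many categories emerge."""
    Z = linkage(descriptors, method="average", metric="euclidean")
    labels = fcluster(Z, t=distance_threshold, criterion="distance")
    return labels  # one category id per scan; semantic names come later
```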

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-61903 (URN)
10.1109/IROS.2017.8202216 (DOI)
000426978201006 (ISI)
2-s2.0-85041949592 (Scopus ID)
978-1-5386-2682-5 (ISBN)
978-1-5386-2683-2 (ISBN)
Conference
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
ILIAD
Funder
EU, Horizon 2020, 732737
Note

Iliad Project: http://iliad-project.eu

Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2018-04-09. Bibliographically approved
Mielle, M., Magnusson, M., Andreasson, H. & Lilienthal, A. J. (2017). SLAM auto-complete: completing a robot map using an emergency map. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). Paper presented at the 15th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2017), ShanghaiTech University, China, October 11-13, 2017 (pp. 35-40). IEEE conference proceedings, Article ID 8088137.
SLAM auto-complete: completing a robot map using an emergency map
2017 (English) In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, p. 35-40, article id 8088137. Conference paper, Published paper (Refereed)
Abstract [en]

In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness might be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping mainly focused on accurate maps or aerial images, such data are not always possible to get, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to get and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The graph-SLAM is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

We typically have more than 50% wrong correspondences in the settings studied in this paper, and the method we propose handles them correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still get the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information not represented in it, such as closed doors or new walls.
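
As a side note on the robust-kernel machinery mentioned above, here is a minimal sketch of the two weighting functions involved (Huber, and Dynamic Covariance Scaling in the standard form of Agarwal et al.); the constants are textbook defaults, not the paper's tuning:

```python
# Sketch of why robust kernels tolerate wrong correspondences: in each
# iteratively reweighted least-squares step, the kernel weight shrinks
# the influence of large residuals. Constants are generic defaults,
# not the paper's tuning.
import numpy as np

def huber_weight(residual_norm, delta=1.0):
    """IRLS weight: quadratic cost near zero, linear growth past delta."""
    r = np.maximum(residual_norm, 1e-12)
    return np.where(r <= delta, 1.0, delta / r)

def dcs_weight(chi2, phi=1.0):
    """Dynamic Covariance Scaling weight s^2,
    with s = min(1, 2*phi / (phi + chi2))."""
    s = np.minimum(1.0, 2.0 * phi / (phi + chi2))
    return s ** 2
```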

Place, publisher, year, edition, pages
IEEE conference proceedings, 2017
Keywords
SLAM, robotics, graph, graph SLAM, emergency map, rescue, exploration, auto-complete
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62057 (URN)
10.1109/SSRR.2017.8088137 (DOI)
000426991900007 (ISI)
2-s2.0-85040221684 (Scopus ID)
978-1-5386-3923-8 (ISBN)
978-1-5386-3924-5 (ISBN)
Conference
15th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2017), ShanghaiTech University, China, October 11-13, 2017
Projects
EU H2020 project SmokeBot (ICT-23-2014 645101)
Funder
Knowledge Foundation, 20140220
Note

Funding Agency:

EU ICT-23-2014, 645101

Available from: 2017-11-08 Created: 2017-11-08 Last updated: 2018-03-27. Bibliographically approved
Mielle, M., Magnusson, M., Andreasson, H. & Lilienthal, A. (2017). Using emergency maps to add not yet explored places into SLAM. Paper presented at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 24-28, 2017.
Using emergency maps to add not yet explored places into SLAM
2017 (English) Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

While using robots in search and rescue missions would help ensure the safety of first responders, a key issue is the time needed by the robot to operate. Even though SLAM is getting faster and faster, it might still be too slow to enable the use of robots in critical situations. One way to speed up operation time is to use prior information.

We aim at integrating emergency maps into SLAM to complete the SLAM map with information about not yet explored parts of the environment. By integrating prior information, we can speed up exploration time or provide valuable prior information for navigation, for example, in case of sensor blackout or failure. However, while extensively used by firemen in their operations, emergency maps are not easy to integrate into SLAM since they are often not up to date or have inconsistent scales.

The main challenge we tackle is dealing with the imperfect scale of rough emergency maps and integrating them with the online SLAM map, in addition to challenges due to incorrect matches between these two types of map. We have developed a formulation of graph-based SLAM that incorporates information from an emergency map, and propose a novel optimization process adapted to this formulation.

We extract corners from the emergency map and the SLAM map, between which we find correspondences using a distance measure. We then build a graph representation associating information from the emergency map and the SLAM map. Corners in the emergency map, corners in the robot map, and robot poses are added as nodes in the graph, while odometry, corner observations, walls in the emergency map, and corner associations are added as edges. To conserve the topology of the emergency map, but correct its possible errors in scale, edges representing the emergency map's walls are given a covariance such that they are easy to extend or shrink but hard to rotate. Correspondences between corners represent a zero transformation, so that the optimization matches them as closely as possible. The graph optimization uses a combination of robust kernels. We first use the Huber kernel, to converge toward a good solution, followed by Dynamic Covariance Scaling, to handle the remaining errors.

We demonstrate our system in an office environment, running SLAM online during the exploration. Using the map enhanced by information from the emergency map, the robot was able to plan the shortest path toward a place it had not yet explored. This capability can be a real asset in complex buildings where exploration can take a long time. It can also reduce exploration time by avoiding the exploration of dead-ends, or speed up the search for specific places, since the robot knows where it is in the emergency map.
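
For the "easy to stretch, hard to rotate" wall constraint described above, a minimal sketch of how an anisotropic information matrix can encode it, with purely illustrative numbers rather than the poster's actual values:

```python
# Sketch of the anisotropic wall constraint: in a frame aligned with
# the wall, a small information value along the wall lets it stretch
# or shrink, while large values across it and on rotation keep its
# direction fixed. Numbers are illustrative only.
import numpy as np

def wall_information(wall_angle, along=0.01, across=100.0, rot=100.0):
    """Information matrix for a 2D wall edge (x, y, theta), rotated
    from the wall-aligned frame into the map frame."""
    c, s = np.cos(wall_angle), np.sin(wall_angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    info_wall = np.diag([along, across, rot])  # x = along-wall direction
    return R @ info_wall @ R.T
```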

Keywords
Search and Rescue Robots, SLAM, Mapping
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-61905 (URN)
Conference
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 24-28, 2017
Projects
SmokeBot
Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2018-02-01. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2953-1564