Publications (10 of 42)
Gholami Shahbandi, S. & Magnusson, M. (2019). 2D map alignment with region decomposition. Autonomous Robots, 43(5), 1117-1136
2D map alignment with region decomposition
2019 (English) In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 43, no 5, p. 1117-1136. Article in journal (Refereed), Published
Abstract [en]

In many applications of autonomous mobile robots the following problem is encountered: two maps of the same environment are available, one a prior map and the other a sensor map built by the robot. To benefit from all available information in both maps, the robot must find the correct alignment between the two. Many approaches address this challenge; however, most previous methods rely on assumptions such as similar map modalities, the same scale, or the existence of an initial guess for the alignment. In this work we propose a decomposition-based method for 2D spatial map alignment which does not rely on those assumptions. Our proposed method is validated and compared with other approaches, including generic data association approaches and map alignment algorithms. Real-world examples of four different environments with thirty-six sensor maps and four layout maps are used for this analysis. The maps, along with an implementation of the method, are made publicly available online.
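The alignment sought between the two maps is a 2D similarity transformation (rotation, uniform scale, and translation; cf. the keywords). As a minimal, hedged sketch of what applying such an alignment involves, and not of the paper's decomposition-based estimation method itself, the function name and example values below are invented for illustration:

```python
import math

def similarity_transform(points, scale, theta, tx, ty):
    """Apply a 2D similarity transform (uniform scale, rotation by
    theta radians, then translation) to a list of (x, y) coordinates,
    e.g. mapping sensor-map points into a layout-map frame."""
    c, s = math.cos(theta), math.sin(theta)
    return [(scale * (c * x - s * y) + tx,
             scale * (s * x + c * y) + ty) for x, y in points]

# Example: rotate (1, 0) by 90 degrees, double the scale, shift by (1, 0).
aligned = similarity_transform([(1.0, 0.0)], 2.0, math.pi / 2, 1.0, 0.0)
```

Estimating the four parameters (scale, theta, tx, ty) from region correspondences is the hard part that the paper's region decomposition addresses.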

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Mobile robots, Mapping, Map alignment, Decomposition, 2D, Sensor map, Robot map, Layout map, Emergency map, Region segmentation, Similarity transformation
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71107 (URN)
10.1007/s10514-018-9785-7 (DOI)
000467543000002 (ISI)
2-s2.0-85050797708 (Scopus ID)
Projects
ILIAD
Funder
EU, Horizon 2020
Knowledge Foundation
Available from: 2019-01-04 Created: 2019-01-04 Last updated: 2019-06-18. Bibliographically approved
Mielle, M., Magnusson, M. & Lilienthal, A. (2019). A comparative analysis of radar and lidar sensing for localization and mapping. Paper presented at 9th European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019. IEEE
A comparative analysis of radar and lidar sensing for localization and mapping
2019 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Lidars and cameras are the sensors most commonly used for Simultaneous Localization And Mapping (SLAM). However, they are not effective in certain scenarios, e.g. when fire and smoke are present in the environment. While radars are much less affected by such conditions, radar and lidar have rarely been compared in terms of the achievable SLAM accuracy. We present a principled comparison of the accuracy of a novel radar sensor against that of a Velodyne lidar, for localization and mapping.

We evaluate the performance of both sensors by calculating the displacement in position and orientation relative to a ground-truth reference positioning system, over three experiments in an indoor lab environment. We used two different SLAM algorithms and found that the mean displacement in position when using the radar sensor was less than 0.037 m, compared to 0.011 m for the lidar. We show that, while producing slightly less accurate maps than a lidar, the radar can accurately perform SLAM and build a map of the environment, even including details such as corners and small walls.

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-76976 (URN)
Conference
9th European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019
Available from: 2019-10-02 Created: 2019-10-02 Last updated: 2019-10-02. Bibliographically approved
Mielle, M., Magnusson, M. & Lilienthal, A. (2019). The Auto-Complete Graph: Merging and Mutual Correction of Sensor and Prior Maps for SLAM. Robotics, 8(2), Article ID 40.
The Auto-Complete Graph: Merging and Mutual Correction of Sensor and Prior Maps for SLAM
2019 (English) In: Robotics, E-ISSN 2218-6581, Vol. 8, no 2, article id 40. Article in journal (Refereed), Published
Abstract [en]

Simultaneous Localization And Mapping (SLAM) usually assumes the robot starts without knowledge of the environment. While prior information, such as emergency maps or layout maps, is often available, integration is not trivial since such maps are often out of date and have uncertainty in local scale. Integration of prior map information is further complicated by sensor noise, drift in the measurements, and incorrect scan registrations in the sensor map. We present the Auto-Complete Graph (ACG), a graph-based SLAM method merging elements of sensor and prior maps into one consistent representation. After optimizing the ACG, the sensor map's errors are corrected thanks to the prior map, while the sensor map corrects the local scale inaccuracies in the prior map. We provide three datasets with associated prior maps: two recorded in campus environments, and one from a fireman training facility. Our method handled up to 40% of noise in odometry, was robust to varying levels of details between the prior and the sensor map, and could correct local scale errors of the prior. In field tests with ACG, users indicated points of interest directly on the prior before exploration. We did not record failures in reaching them.

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
SLAM, prior map, emergency map, layout map, graph-based SLAM, navigation, search and rescue
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75742 (URN)
10.3390/robotics8020040 (DOI)
000475325600017 (ISI)
2-s2.0-85069926702 (Scopus ID)
Funder
Knowledge Foundation, 20140220
Note

Funding Agency:

EU ICT-26-2016 732737, ICT-23-2014 645101

Available from: 2019-08-13 Created: 2019-08-13 Last updated: 2019-10-02. Bibliographically approved
Mielle, M., Magnusson, M. & Lilienthal, A. (2019). URSIM: Unique Regions for Sketch Map Interpretation and Matching. Robotics, 8(2), Article ID 43.
URSIM: Unique Regions for Sketch Map Interpretation and Matching
2019 (English) In: Robotics, E-ISSN 2218-6581, Vol. 8, no 2, article id 43. Article in journal (Refereed), Published
Abstract [en]

We present a method for matching sketch maps to a corresponding metric map, with the aim of later using the sketch as an intuitive interface for human-robot interactions. While sketch maps are not metrically accurate and many details deemed unnecessary are omitted, they represent the topology of the environment well and are typically accurate at key locations. Thus, for sketch map interpretation and matching, one cannot rely on metric information alone. Our matching method first finds the most distinguishable, or unique, regions of the two maps. The topology of the maps, the positions of the unique regions, and the size of all regions are used to build region descriptors. Finally, a sequential graph matching algorithm uses the region descriptors to find correspondences between regions of the sketch and metric maps. Our method obtained higher accuracy than both a state-of-the-art matching method for inaccurate map matching and our previous work on the subject. The state of the art was unable to match sketch maps, while our method performed only 10% worse than a human expert.

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Map matching, sketch, human-robot interaction, interface, graph matching, segmentation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75741 (URN)
10.3390/robotics8020043 (DOI)
000475325600020 (ISI)
2-s2.0-85069975721 (Scopus ID)
Funder
Knowledge Foundation, 20140220
Note

Funding Agency:

EU ICT-26-2016 732737

Available from: 2019-08-13 Created: 2019-08-13 Last updated: 2019-10-10. Bibliographically approved
Fan, H., Lu, D., Kucner, T. P., Magnusson, M. & Lilienthal, A. (2018). 2D Spatial Keystone Transform for Sub-Pixel Motion Extraction from Noisy Occupancy Grid Map. In: Proceedings of 21st International Conference on Information Fusion (FUSION). Paper presented at 21st International Conference on Information Fusion (FUSION), Cambridge, UK, July 10-13, 2018 (pp. 2400-2406).
2D Spatial Keystone Transform for Sub-Pixel Motion Extraction from Noisy Occupancy Grid Map
2018 (English) In: Proceedings of 21st International Conference on Information Fusion (FUSION), 2018, p. 2400-2406. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we propose a novel sub-pixel motion extraction method, called the Two-Dimensional Spatial Keystone Transform (2DS-KST), for motion detection and estimation from successive noisy Occupancy Grid Maps (OGMs). It extends the KST used in radar imaging for motion compensation to the 2D real spatial case, based on multiple hypotheses about the possible directions of moving obstacles. Simulation results show that 2DS-KST performs well on the extraction of sub-pixel motions in very noisy environments, especially for slowly moving obstacles.

Keywords
robotics, occupancy grid map, motion extraction, keystone transform, 2DS-KST, sub-pixel
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71953 (URN)
10.23919/ICIF.2018.8455274 (DOI)
978-0-9964527-6-2 (ISBN)
978-1-5386-4330-3 (ISBN)
Conference
21st International Conference on Information Fusion (FUSION), Cambridge, UK, July 10 - 13, 2018
Available from: 2019-01-30 Created: 2019-01-30 Last updated: 2019-02-01. Bibliographically approved
Fan, H., Kucner, T. P., Magnusson, M., Li, T. & Lilienthal, A. (2018). A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment. IEEE Transactions on Intelligent Transportation Systems, 19(9), 2977-2993
A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment
2018 (English) In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 9, p. 2977-2993. Article in journal (Refereed), Published
Abstract [en]

Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and the sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in connecting the idea of dynamic occupancy with the concepts of the phase space density in gas kinetics and the PHD in multiple target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, but has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model considering occlusions are proposed to further improve the filtering efficiency. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states and an ordinary particle filter with a time-varying number of particles in a continuous state space to process the static part and the dynamic part, respectively. This filter has a linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Mobile robot, occupancy filtering, PHD filter, BOF, particle filter, random finite set
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-63981 (URN)
10.1109/TITS.2017.2770152 (DOI)
000444611400021 (ISI)
2-s2.0-85038368968 (Scopus ID)
Note

Funding Agencies:

EU Project SPENCER 600877

Marie Sklodowska-Curie Individual Fellowship 709267

National Twelfth Five-Year Plan for Science and Technology Support of China 2014BAK12B03

Available from: 2018-01-09 Created: 2018-01-09 Last updated: 2018-09-28. Bibliographically approved
Mielle, M., Magnusson, M. & Lilienthal, A. J. (2018). A method to segment maps from different modalities using free space layout MAORIS: map of ripples segmentation. Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 4993-4999). IEEE Computer Society
A method to segment maps from different modalities using free space layout MAORIS: map of ripples segmentation
2018 (English) Conference paper, Published paper (Refereed)
Abstract [en]

How to divide floor plans or navigation maps into semantic representations, such as rooms and corridors, is an important research question in fields such as human-robot interaction, place categorization, or semantic mapping. While most works focus on segmenting robot-built maps, those are not the only types of map a robot, or its user, can use. We present a method for segmenting maps from different modalities, focusing on robot-built maps and hand-drawn sketch maps, and show better results than the state of the art for both types.

Our method segments the map by doing a convolution between the distance image of the map and a circular kernel, and grouping pixels of the same value. Segmentation is done by detecting ripple-like patterns where pixel values vary quickly, and merging neighboring regions with similar values.

We identify a flaw in the segmentation evaluation metric used in recent works and propose a metric based on Matthews correlation coefficient (MCC). We compare our results to ground-truth segmentations of maps from a publicly available dataset, on which we obtain a better MCC than the state of the art with 0.98 compared to 0.65 for a recent Voronoi-based segmentation method and 0.70 for the DuDe segmentation method.

We also provide a dataset of sketches of an indoor environment, with two possible sets of ground truth segmentations, on which our method obtains an MCC of 0.56 against 0.28 for the Voronoi-based segmentation method and 0.30 for DuDe.
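The Matthews correlation coefficient used in the evaluation above is computed from the confusion counts between a proposed segmentation and the ground truth. A minimal sketch of the binary-class formula follows (illustrative only; the paper's multi-region evaluation is more involved):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts
    (true/false positives and negatives). Returns a value in [-1, 1];
    1 is perfect agreement, 0 is chance-level, -1 is total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # conventional value when any marginal count is zero
    return (tp * tn - fp * fn) / denom

# A perfect classifier on a balanced toy example:
# mcc(50, 50, 0, 0) -> 1.0
```

Unlike plain accuracy, MCC stays near zero for a classifier that ignores a rare class, which is why it suits imbalanced pixel-wise segmentation comparisons.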

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Keywords
map segmentation, free space, layout
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-68421 (URN)
10.1109/ICRA.2018.8461128 (DOI)
000446394503114 (ISI)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Funder
EU, Horizon 2020, ICT-23-2014 645101 SmokeBot
Knowledge Foundation, 20140220
Available from: 2018-08-09 Created: 2018-08-09 Last updated: 2019-10-10. Bibliographically approved
Amigoni, F., Yu, W., Andre, T., Holz, D., Magnusson, M., Matteucci, M., . . . Madhavan, R. (2018). A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots. IEEE Robotics & Automation Magazine, 25(1), 65-76
A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots
2018 (English) In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 25, no 1, p. 65-76. Article in journal (Refereed), Published
Abstract [en]

The availability of environment maps for autonomous robots enables them to complete several tasks. A new IEEE standard, IEEE 1873-2015, Robot Map Data Representation for Navigation (MDR) [15], sponsored by the IEEE Robotics and Automation Society (RAS) and approved by the IEEE Standards Association Standards Board in September 2015, defines a common representation for two-dimensional (2-D) robot maps and is intended to facilitate interoperability among navigating robots. The standard defines an extensible markup language (XML) data format for exchanging maps between different systems. This article illustrates how metric maps, topological maps, and their combinations can be represented according to the standard.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Standards, Service robots, XML, Data models, Interoperability, Measurement
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64331 (URN)
10.1109/MRA.2017.2746179 (DOI)
000427426900012 (ISI)
2-s2.0-85040906777 (Scopus ID)
Available from: 2018-01-17 Created: 2018-01-17 Last updated: 2018-08-30. Bibliographically approved
Swaminathan, C. S., Kucner, T. P., Magnusson, M., Palmieri, L. & Lilienthal, A. (2018). Down the CLiFF: Flow-Aware Trajectory Planning under Motion Pattern Uncertainty. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): . Paper presented at 31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 7403-7409). Institute of Electrical and Electronics Engineers (IEEE)
Down the CLiFF: Flow-Aware Trajectory Planning under Motion Pattern Uncertainty
2018 (English) In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 7403-7409. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we address the problem of flow-aware trajectory planning in dynamic environments considering flow model uncertainty. Flow-aware planning aims to plan trajectories that adhere to existing flow motion patterns in the environment, with the goal of making robots more efficient, less intrusive, and safer. We use a statistical model called CLiFF-map that can map flow patterns for both continuous media and discrete objects. We propose novel cost and biasing functions for an RRT* planning algorithm, which exploit all the information available in the CLiFF-map model, including uncertainties due to flow variability or partial observability. Qualitatively, a benefit of our approach is that it can be tuned to yield trajectories with different qualities, such as exploratory or cautious, depending on application requirements. Quantitatively, we demonstrate that our approach produces more flow-compliant trajectories, compared to two baselines.
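To illustrate the idea of a flow-aware cost (not the paper's actual CLiFF-map-based cost and biasing functions, which use full directional distributions with uncertainty), a toy edge cost for a sampling-based planner might inflate an edge's Euclidean length by its deviation from the local mean flow direction. All names and the weighting below are invented:

```python
import math

def flow_aware_edge_cost(p, q, flow_angle, weight=1.0):
    """Toy cost for a planner edge from p to q: Euclidean length,
    inflated in proportion to how much the edge heading deviates
    from the local mean flow direction (flow_angle, in radians)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    heading = math.atan2(dy, dx)
    # Smallest signed angular difference, folded into [0, pi].
    dev = abs(math.atan2(math.sin(heading - flow_angle),
                         math.cos(heading - flow_angle)))
    return length * (1.0 + weight * dev / math.pi)
```

Moving with the flow leaves the cost equal to the plain edge length; moving directly against it doubles the cost (with weight=1.0), so an RRT*-style planner minimizing this cost prefers flow-compliant paths.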

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
Keywords
Trajectory, Robots, Planning, Cost function, Uncertainty, Vehicle dynamics, Aerospace electronics
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70143 (URN)
10.1109/IROS.2018.8593905 (DOI)
000458872706106 (ISI)
978-1-5386-8094-0 (ISBN)
978-1-5386-8095-7 (ISBN)
Conference
31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Projects
ILIAD
Funder
EU, Horizon 2020, 732737
Available from: 2018-11-12 Created: 2018-11-12 Last updated: 2019-03-14. Bibliographically approved
Almqvist, H., Magnusson, M., Kucner, T. P. & Lilienthal, A. (2018). Learning to detect misaligned point clouds. Journal of Field Robotics, 35(5), 662-677
Learning to detect misaligned point clouds
2018 (English) In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed), Published
Abstract [en]

Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible, because no existing matching algorithm provides a reliable method for detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.
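For context on the geometric consistency methods compared above, the simplest possible such check is a thresholded mean nearest-neighbour residual between the two clouds. The sketch below is illustrative only (the names and threshold are invented) and is far cruder than the NDT-based methods the article finds to perform best:

```python
def mean_nn_distance(cloud_a, cloud_b):
    """Mean distance from each point in cloud_a to its nearest
    neighbour in cloud_b (brute force; fine for small clouds)."""
    total = 0.0
    for ax, ay, az in cloud_a:
        total += min(((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
                     for bx, by, bz in cloud_b)
    return total / len(cloud_a)

def looks_aligned(cloud_a, cloud_b, threshold=0.05):
    """Classify a registration as aligned if the mean residual is small.
    The 0.05 m threshold is an arbitrary placeholder."""
    return mean_nn_distance(cloud_a, cloud_b) < threshold
```

A fixed residual threshold fails in exactly the cases the article studies (partial overlap, varying point density), which motivates both the comparison of more robust consistency measures and the learned combination of them.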

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
Keywords
perception, mapping, position estimation
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62985 (URN)
10.1002/rob.21768 (DOI)
000437836900002 (ISI)
2-s2.0-85037622789 (Scopus ID)
Projects
ILIAD
ALLO
Funder
EU, Horizon 2020, 732737
Knowledge Foundation, 20110214
Available from: 2017-12-05 Created: 2017-12-05 Last updated: 2018-07-27. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-8658-2985
