Örebro University Publications
The Auto-Complete Graph: Merging and Mutual Correction of Sensor and Prior Maps for SLAM
Mielle, Malcolm (Örebro University, School of Science and Technology). ORCID iD: 0000-0002-3079-0512
Magnusson, Martin (Örebro University, School of Science and Technology). ORCID iD: 0000-0001-8658-2985
Lilienthal, Achim J. (Örebro University, School of Science and Technology). ORCID iD: 0000-0003-0217-9326
2019 (English). In: Robotics, E-ISSN 2218-6581, Vol. 8, no. 2, article id 40. Article in journal (Refereed). Published.
Abstract [en]

Simultaneous Localization And Mapping (SLAM) usually assumes that the robot starts without knowledge of the environment. While prior information, such as emergency maps or layout maps, is often available, integrating it is not trivial, since such maps are often out of date and have uncertainty in local scale. Integration of prior map information is further complicated by sensor noise, drift in the measurements, and incorrect scan registrations in the sensor map. We present the Auto-Complete Graph (ACG), a graph-based SLAM method that merges elements of the sensor and prior maps into one consistent representation. After optimizing the ACG, the sensor map's errors are corrected thanks to the prior map, while the sensor map corrects the local scale inaccuracies of the prior map. We provide three datasets with associated prior maps: two recorded in campus environments and one from a fireman training facility. Our method handled up to 40% noise in the odometry, was robust to varying levels of detail between the prior and the sensor map, and could correct local scale errors of the prior map. In field tests with the ACG, users indicated points of interest directly on the prior map before exploration, and we recorded no failures in reaching them.
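
As an illustration of the mutual-correction idea described above, the following is a minimal one-dimensional sketch (in Python, using SciPy) of a graph optimization that combines odometry factors, landmark observations, and weakly weighted prior-map factors. It is not the ACG implementation from the paper; all poses, landmarks, noise levels, and weights are hypothetical, chosen only to show how the sensor data can correct the prior map's scale while the prior map anchors the drifting odometry.

    # A minimal 1-D toy of joint optimization over sensor and prior-map constraints.
    # NOT the ACG implementation; every value and weight below is hypothetical.
    import numpy as np
    from scipy.optimize import least_squares

    # Ground truth: the robot moves 1 m per step along x; two wall corners at x=2 and x=4.
    true_poses = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    true_landmarks = np.array([2.0, 4.0])

    rng = np.random.default_rng(0)
    odom = np.diff(true_poses) + rng.normal(0, 0.3, 4)   # noisy, drifting odometry
    prior_landmarks = true_landmarks * 1.2               # prior map with a local scale error
    obs = [(i, j, l - p + rng.normal(0, 0.05))           # range observations pose -> landmark
           for i, p in enumerate(true_poses)
           for j, l in enumerate(true_landmarks)]

    def residuals(x):
        poses, lms = x[:5], x[5:]
        r = [poses[0]]                                                        # anchor the first pose at 0
        r += [(poses[i + 1] - poses[i] - odom[i]) / 0.3 for i in range(4)]    # odometry factors
        r += [(lms[j] - poses[i] - z) / 0.05 for i, j, z in obs]              # observation factors
        r += [(lms[j] - prior_landmarks[j]) / 1.0 for j in range(2)]          # weak prior-map factors
        return np.array(r)

    x0 = np.concatenate([[0.0], np.cumsum(odom), prior_landmarks])   # initial guess: odometry + prior
    sol = least_squares(residuals, x0)
    print("optimized poses    :", np.round(sol.x[:5], 2))
    print("corrected landmarks:", np.round(sol.x[5:], 2))

In this toy setup, the tightly weighted observations pull the prior landmarks toward their true positions, while the weak prior factors and the anchored first pose keep the optimized trajectory from drifting.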

Place, publisher, year, edition, pages
MDPI, 2019. Vol. 8, no. 2, article id 40
Keywords [en]
SLAM, prior map, emergency map, layout map, graph-based SLAM, navigation, search and rescue
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-75742
DOI: 10.3390/robotics8020040
ISI: 000475325600017
Scopus ID: 2-s2.0-85069926702
OAI: oai:DiVA.org:oru-75742
DiVA, id: diva2:1342185
Funder
Knowledge Foundation, 20140220
Note

Funding agency: EU, ICT-26-2016 (grant 732737) and ICT-23-2014 (grant 645101)

Available from: 2019-08-13. Created: 2019-08-13. Last updated: 2020-02-06. Bibliographically approved.
In thesis
1. Helping robots help us: Using prior information for localization, navigation, and human-robot interaction
2019 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Maps are often used to provide information and guide people. Emergency maps or floor plans are often displayed on walls, and sketch maps can easily be drawn to give directions. However, robots typically assume that no knowledge of the environment is available before exploration, even though making use of prior maps could enhance robotic mapping. For example, prior maps can be used to provide map data of places that the robot has not yet seen, to correct errors in robot maps, as well as to transfer information between map representations.

I focus on two types of prior maps representing the walls of an indoor environment: layout maps and sketch maps. I study ways to relate the information in sketch or layout maps to an equivalent metric map, and how to use layout maps to improve the robot’s mapping. Compared to metric maps, such as sensor-built maps, layout and sketch maps can have local scale errors or miss elements of the environment, which makes matching and aligning such heterogeneous map types a hard problem.

I aim to answer three research questions: How to interpret prior maps by finding meaningful features? How to find correspondences between the features of a prior map and a metric map representing the same environment? How to integrate prior maps in SLAM so that both the prior map and the map built by the robot are improved?

The first contribution of this thesis is an algorithm that can find correspondences between regions of a hand-drawn sketch map and an equivalent metric map, achieving an overall accuracy within 10% of that of a human. The second contribution is a method that enables the integration of layout map data in SLAM and corrects errors in both the layout map and the sensor map.
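
As a toy illustration of the region-correspondence idea (and not the algorithm developed in the thesis, which relies on its own segmentation and matching scheme), the Python sketch below matches segmented regions of a sketch map and a metric map by comparing simple scale-normalized descriptors and solving an optimal one-to-one assignment. All regions, descriptors, and numbers are hypothetical.

    # A toy region-correspondence sketch: match segmented regions of a sketch map to
    # regions of a metric map using scale-normalized descriptors and an optimal
    # one-to-one assignment. NOT the thesis algorithm; all regions and numbers are made up.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def describe(regions):
        """regions: list of (area, (x, y) centroid). Returns scale-normalized descriptors."""
        areas = np.array([a for a, _ in regions], float)
        cents = np.array([c for _, c in regions], float)
        areas /= areas.sum()                             # area as a fraction of the whole map
        cents = (cents - cents.mean(0)) / cents.std(0)   # centroids in map-independent coordinates
        return np.hstack([areas[:, None], cents])

    # Hypothetical segmented "rooms", each in its own map's scale and coordinate frame.
    sketch_regions = [(4.0, (1.0, 1.0)), (9.0, (3.0, 1.0)), (2.0, (2.0, 3.0))]
    metric_regions = [(18.5, (6.1, 2.0)), (8.2, (2.1, 1.9)), (4.3, (4.0, 6.2))]

    # Pairwise descriptor distances, then the assignment that minimizes the total cost.
    cost = np.linalg.norm(describe(sketch_regions)[:, None, :]
                          - describe(metric_regions)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)))   # sketch region i <-> metric region cols[i]

In practice, robust matching also has to handle regions present in only one of the maps and differences in segmentation, which is what makes the problem hard and what the thesis addresses.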

These results provide ways to use prior maps with local scale errors and different levels of detail, whether they are close to metric maps, e.g. layout maps, or non-metric maps, e.g. sketch maps. The methods presented in this work were used in field tests with professional fire-fighters for search and rescue applications in low-visibility environments. A novel radar sensor was used to perform SLAM in smoke and, using a layout map as a prior map, users could indicate points of interest to the robot on the layout map, not only during and after exploration, but even before it took place.

Place, publisher, year, edition, pages
Örebro: Örebro University, 2019. p. 83
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 86
Keywords
graph-based SLAM, prior map, sketch map, emergency map, map matching, graph matching, segmentation, search and rescue
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-75877
ISBN: 978-91-7529-299-1
Public defence
2019-10-29, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 13:15 (English)
Available from: 2019-08-23. Created: 2019-08-23. Last updated: 2024-01-03. Bibliographically approved.

Open Access in DiVA

The Auto-Complete Graph: Merging and Mutual Correction of Sensor and Prior Maps for SLAM
File name: FULLTEXT01.pdf. File size: 6925 kB. Type: fulltext. Mimetype: application/pdf.
Checksum (SHA-512): 4eda6368cbfafc6fd77800b84cab6bcc04e1ffca059dff05b9e7ac168adf10e4dfb89ee414f47cd059e6458b120887d1edf48ab525553133688317898abbcd51

Publiceringsmedgivande (publication permission)
File name: FULLTEXT02.txt. File size: 10 kB. Type: fulltext. Mimetype: text/plain.
Checksum (SHA-512): 9e47a8f248842b3de3482c2301660cead4446148193c58e77fdc2d25ee05f26ec0aef5f1a14cb7d2d2d796a4bb1cb58e001565b73cec01c562f77b8ea997aca5

Other links
Publisher's full text; Scopus

Authority records

Mielle, Malcolm; Magnusson, Martin; Lilienthal, Achim J.
