A comparative analysis of radar and lidar sensing for localization and mapping
Mielle, Malcolm (ORCID iD: 0000-0002-3079-0512); Magnusson, Martin (ORCID iD: 0000-0001-8658-2985); Lilienthal, Achim J. (ORCID iD: 0000-0003-0217-9326)
All authors: Örebro University, School of Science and Technology, Center of Applied Autonomous Sensor Systems (AASS)
2019 (English). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper, published paper (refereed).
Abstract [en]

Lidars and cameras are the sensors most commonly used for Simultaneous Localization And Mapping (SLAM). However, they are not effective in certain scenarios, e.g. when fire and smoke are present in the environment. While radars are much less affected by such conditions, radar and lidar have rarely been compared in terms of the achievable SLAM accuracy. We present a principled comparison of the accuracy of a novel radar sensor against that of a Velodyne lidar, for localization and mapping.

We evaluate the performance of both sensors by calculating the displacement in position and orientation relative to a ground-truth reference positioning system, over three experiments in an indoor lab environment. Using two different SLAM algorithms, we found that the mean displacement in position was less than 0.037 m with the radar sensor, compared to 0.011 m with the lidar. We show that, while producing slightly less accurate maps than the lidar, the radar can accurately perform SLAM and build a map of the environment, even including details such as corners and small walls.
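To illustrate the kind of metric the abstract describes, here is a minimal sketch of computing the mean positional displacement between an estimated trajectory and time-aligned ground-truth poses. The pose format, the function name, and the assumption that the trajectories are already time-synchronized are illustrative choices, not details taken from the paper.

```python
import math

def mean_displacement(estimated, ground_truth):
    """Mean Euclidean displacement between time-aligned 2D poses.

    Each pose is a (x, y, theta) tuple; only the positional part
    (x, y) is used for this metric.
    """
    assert len(estimated) == len(ground_truth), "trajectories must be aligned"
    total = 0.0
    for (xe, ye, _), (xg, yg, _) in zip(estimated, ground_truth):
        total += math.hypot(xe - xg, ye - yg)
    return total / len(estimated)

# Toy example: the estimate is offset from ground truth by 0.03 m in x.
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(x + 0.03, y, t) for (x, y, t) in gt]
print(mean_displacement(est, gt))  # ~0.03, i.e. under the 0.037 m radar figure
```

An analogous statistic over orientation differences (wrapping angles to [-pi, pi]) would give the orientation displacement mentioned in the abstract.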

Place, publisher, year, edition, pages
IEEE, 2019.
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-76976
DOI: 10.1109/ECMR.2019.8870345
ISI: 000558081900002
Scopus ID: 2-s2.0-85074389854
ISBN: 978-1-7281-3605-9 (electronic)
ISBN: 978-1-7281-3606-6 (print)
OAI: oai:DiVA.org:oru-76976
DiVA id: diva2:1356645
Conference
9th European Conference on Mobile Robots (ECMR 2019), Prague, Czech Republic, September 4-6, 2019
Funder
Knowledge Foundation, 20140220
Note

Funding Agency:

EIT Raw Materials project FIREMII 18011

Available from: 2019-10-02. Created: 2019-10-02. Last updated: 2020-09-16. Bibliographically approved.
In thesis
1. Helping robots help us: Using prior information for localization, navigation, and human-robot interaction
2019 (English). Doctoral thesis, comprehensive summary (other academic).
Abstract [en]

Maps are often used to provide information and guide people. Emergency maps or floor plans are often displayed on walls and sketch maps can easily be drawn to give directions. However, robots typically assume that no knowledge of the environment is available before exploration even though making use of prior maps could enhance robotic mapping. For example, prior maps can be used to provide map data of places that the robot has not yet seen, to correct errors in robot maps, as well as to transfer information between map representations.

I focus on two types of prior maps representing the walls of an indoor environment: layout maps and sketch maps. I study ways to relate the information in sketch or layout maps to an equivalent metric map, and how to use layout maps to improve the robot's mapping. Compared to metric maps such as sensor-built maps, layout and sketch maps can have local scale errors or miss elements of the environment, which makes matching and aligning such heterogeneous map types a hard problem.

I aim to answer three research questions: How to interpret prior maps by finding meaningful features? How to find correspondences between the features of a prior map and a metric map representing the same environment? How to integrate prior maps in SLAM so that both the prior map and the map built by the robot are improved?

The first contribution of this thesis is an algorithm that can find correspondences between regions of a hand-drawn sketch map and an equivalent metric map and achieves an overall accuracy that is within 10% of that of a human. The second contribution is a method that enables the integration of layout map data in SLAM and corrects errors both in the layout and the sensor map.

These results provide ways to use prior maps with local scale errors and different levels of detail, whether they are close to metric maps, e.g. layout maps, or non-metric maps, e.g. sketch maps. The methods presented in this work were used in field tests with professional fire-fighters for search and rescue applications in low-visibility environments. A novel radar sensor was used to perform SLAM in smoke and, using a layout map as a prior map, users could indicate points of interest to the robot on the layout map, not only during and after exploration, but even before it took place.

Place, publisher, year, edition, pages
Örebro: Örebro University, 2019. p. 83
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 86
Keywords
graph-based SLAM, prior map, sketch map, emergency map, map matching, graph matching, segmentation, search and rescue
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-75877
ISBN: 978-91-7529-299-1
Public defence
2019-10-29, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 13:15 (English)
Available from: 2019-08-23. Created: 2019-08-23. Last updated: 2024-01-03. Bibliographically approved.

Open Access in DiVA

A comparative analysis of radar and lidar sensing for localization and mapping (5002 kB). 1722 downloads.
File information
File name: FULLTEXT01.pdf. File size: 5002 kB. Checksum (SHA-512):
c8b688440108380341ee4854c0163f08267c5d71be78d10e5ae002392f4551940d05524fc3cebae6b563ff7426086de84007d151835c195514c9a362cc89f70a
Type: fulltext. Mimetype: application/pdf.

Other links

Publisher's full text
Scopus

Authority records

Mielle, Malcolm; Magnusson, Martin; Lilienthal, Achim J.

Total: 1741 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 1290 hits