
Örebro University Publications (oru.se)
Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization
Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.
2018 (English). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no. 14, p. 750-765. Article in journal (Refereed). Published.
Abstract [en]

Semantic place categorization, one of the essential tasks for autonomous robots and vehicles, enables self-directed decision-making and navigation in unfamiliar environments. Outdoor places are more difficult targets than indoor ones due to perceptual variations, such as changing illumination over the course of a day and occlusions by cars and pedestrians. This paper presents a novel method of categorizing outdoor places using convolutional neural networks (CNNs), which take omnidirectional depth/reflectance images obtained by 3D LiDARs as input. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. The scans are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our approach outperforms traditional methods on the MPO dataset and demonstrates the benefit of using both depth and reflectance modalities. To analyze our trained deep networks, we visualize the learned features.
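The abstract describes converting panoramic 3D LiDAR scans into omnidirectional depth/reflectance images before feeding them to a CNN. The record does not specify how that projection is done; a common way to build such images is a spherical projection, where each point's azimuth and elevation angles index a pixel. The sketch below illustrates this idea under assumed parameters (image size, vertical field of view); the function name and all defaults are hypothetical, not taken from the paper.

```python
import numpy as np

def panoramic_projection(points, reflectance, h=64, w=360,
                         fov_up=np.radians(15.0), fov_down=np.radians(-25.0)):
    """Project a 3D point cloud onto panoramic depth and reflectance images.

    points: (N, 3) array of x, y, z coordinates in the sensor frame
    reflectance: (N,) array of per-point reflectance values
    Returns an (h, w) depth image and an (h, w) reflectance image.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)  # elevation angle

    # Map angles to pixel coordinates: azimuth -> column, elevation -> row.
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)

    depth_img = np.zeros((h, w), dtype=np.float32)
    refl_img = np.zeros((h, w), dtype=np.float32)
    # Where several points fall in one pixel, keep the nearest return:
    # write far points first so near points overwrite them.
    order = np.argsort(-depth)
    depth_img[v[order], u[order]] = depth[order]
    refl_img[v[order], u[order]] = reflectance[order]
    return depth_img, refl_img
```

The resulting two-channel (depth + reflectance) image can then be treated like an ordinary image by a CNN, which is the multi-modal setup the abstract credits for the performance gain.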

Place, publisher, year, edition, pages
Taylor & Francis, 2018. Vol. 32, no 14, p. 750-765
Keywords [en]
Outdoor place categorization, convolutional neural networks, multi-modal data, laser scanner
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:oru:diva-83662
DOI: 10.1080/01691864.2018.1501279
ISI: 000442278500003
Scopus ID: 2-s2.0-85051145962
OAI: oai:DiVA.org:oru-83662
DiVA, id: diva2:1447620
Available from: 2020-06-26. Created: 2020-06-26. Last updated: 2020-08-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Martinez Mozos, Oscar
