Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
Örebro University, Department of Technology. (AASS)
Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
Örebro University, Department of Technology. (AASS) ORCID iD: 0000-0003-0217-9326
2007 (English). In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, p. 17-24. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot could explore the area by itself, giving the robot the ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in the segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings, and to produce a ground-level semantic map that covers a larger area than could be built using the onboard sensors alone.
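The matching step described in the abstract — selecting aerial-image edges supported by ground-level wall estimates to seed the segmentation — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, grid registration, tolerance radius, and probability threshold are all assumed, and both inputs are taken to be aligned on the same grid.

```python
import numpy as np

def seed_edges_from_walls(wall_prob, aerial_edges, radius=1, min_prob=0.6):
    """Select aerial-image edge cells supported by ground-level wall estimates.

    wall_prob    : 2-D array of P(cell occupied by a building wall), from the robot
    aerial_edges : 2-D boolean array of edges detected in the aerial image
    radius       : matching tolerance in cells (hypothetical parameter)
    min_prob     : wall probability required to count as support (hypothetical)

    Returns a boolean mask of edge cells usable as segmentation seeds.
    """
    support = wall_prob >= min_prob
    # Dilate the wall support by `radius` cells to tolerate small registration
    # errors between the ground map and the aerial image. np.roll wraps at the
    # borders, which is acceptable for this sketch.
    dilated = support.copy()
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            dilated |= np.roll(np.roll(support, dr, axis=0), dc, axis=1)
    # Keep only aerial edges that coincide with (dilated) wall evidence.
    return aerial_edges & dilated
```

In this reading, the returned seed mask is what would then direct the region- and boundary-based segmentation of the aerial image, extending the building outlines beyond the robot's sensor range.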

Place, publisher, year, edition, pages
2007. p. 17-24
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
URN: urn:nbn:se:oru:diva-4262
OAI: oai:DiVA.org:oru-4262
DiVA, id: diva2:138561
Conference
IROS Workshop "From Sensors to Human Spatial Concepts", Nov., 2007, San Diego, CA, USA
Available from: 2007-12-13. Created: 2007-12-13. Last updated: 2018-06-12. Bibliographically approved.

Open Access in DiVA

Fusion of Aerial Images and Sensor Data from a Ground Vehicle for Improved Semantic Mapping (384 kB), 31 downloads
File information
File name: FULLTEXT01.pdf
File size: 384 kB
Checksum (SHA-512): d11f5545be8e6641989de759045dbc5b2aa36df24a0787621d48377136f518cb6b299a8919661fbdd278bd75fdc49f1c4688b15044b2faf00ec3b65517ae383a
Type: fulltext
Mimetype: application/pdf

Authority records

Persson, Martin; Lilienthal, Achim J.

Total: 31 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 340 hits