Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
Örebro University, Department of Technology (Learning Systems Lab)
University of Lincoln, UK (Department of Computing and Informatics)
Örebro University, Department of Technology (Learning Systems Lab). ORCID iD: 0000-0003-0217-9326

2007 (English). In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, p. 17-24. Conference paper, published paper (refereed).
Abstract [en]

This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot the ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in the segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings, and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors alone.
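The matching step described above could be sketched roughly as follows, assuming (as a simplification) that the robot's wall-probability grid and the aerial edge map have already been registered to a common grid, e.g. via the differential GPS. The function name, the grid representation, and the threshold are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def match_walls_to_edges(wall_prob, edge_map, threshold=0.5):
    """Keep aerial-image edge pixels that are supported by ground-level
    wall evidence; these could then seed a region- and boundary-based
    segmentation of the aerial image.

    wall_prob: 2D array, P(cell is occupied by a building wall),
               from the robot's ground-level semantic map.
    edge_map:  2D binary array of edges detected in the aerial image,
               assumed already registered to the same grid.
    Returns a boolean mask of edge pixels with ground-level support.
    """
    assert wall_prob.shape == edge_map.shape
    return (wall_prob > threshold) & (edge_map > 0)

# Toy example: one wall seen both from the ground and from the air.
wall_prob = np.zeros((5, 5))
wall_prob[:, 2] = 0.9              # robot is confident column 2 is a wall
edge_map = np.zeros((5, 5), dtype=int)
edge_map[:, 2] = 1                 # aerial edge detector fires on that column
edge_map[0, 4] = 1                 # spurious aerial edge, no ground support

seeds = match_walls_to_edges(wall_prob, edge_map)
# seeds is True along column 2 only; the spurious edge at (0, 4) is rejected.
```

In this sketch the fused mask discards aerial edges without ground-level support, which is one way to read the paper's claim that wall estimates "direct" the aerial segmentation toward buildings.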

Place, publisher, year, edition, pages
2007. p. 17-24
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
URN: urn:nbn:se:oru:diva-4262
OAI: oai:DiVA.org:oru-4262
DiVA id: diva2:138561
Conference
IROS Workshop "From Sensors to Human Spatial Concepts"
Available from: 2007-12-13. Created: 2007-12-13. Last updated: 2018-01-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Persson, Martin; Lilienthal, Achim J.
