Virtual sensors for human concepts: building detection by an outdoor mobile robot
Örebro University, Department of Technology. (AASS)
Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
Örebro University, Department of Technology (AASS). ORCID iD: 0000-0003-0217-9326
2007 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 5, p. 383-390. Article in journal (Refereed). Published.
Abstract [en]

In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator. (c) 2006 Elsevier B.V. All rights reserved.
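The abstract describes combining several weak image cues (edge orientation, edge configurations, grey-level clustering) into one building/non-building classifier with AdaBoost. The paper's exact features and parameters are not reproduced here; the following is only a minimal sketch of discrete AdaBoost with decision stumps over a generic feature vector. The function names (`train_adaboost`, `predict`) and the feature layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Discrete AdaBoost with decision stumps (one threshold on one feature).

    X: (n_samples, n_features) array of per-image features (illustrative)
    y: labels in {-1, +1}, e.g. +1 = building, -1 = non-building
    Returns a list of weak learners (feature_index, threshold, polarity, alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # uniform sample weights initially
    stumps = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # exhaustively pick the stump with lowest weighted error
        for j in range(d):
            for t in np.unique(X[:, j]):
                for polarity in (+1, -1):
                    pred = polarity * np.where(X[:, j] > t, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, t, polarity)
        err = max(best_err, 1e-10)          # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        j, t, polarity = best
        pred = polarity * np.where(X[:, j] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)      # boost weight of misclassified samples
        w /= w.sum()
        stumps.append((j, t, polarity, alpha))
    return stumps

def predict(stumps, X):
    """Sign of the alpha-weighted vote of all weak learners."""
    score = np.zeros(X.shape[0])
    for j, t, polarity, alpha in stumps:
        score += alpha * polarity * np.where(X[:, j] > t, 1, -1)
    return np.sign(score)
```

In the paper's setting, each row of `X` would hold the generic features extracted from one (sub-)image; the strong classifier's vote then labels that viewing direction as building or non-building.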

Place, publisher, year, edition, pages
Amsterdam, Netherlands: Elsevier, 2007. Vol. 55, no 5, p. 383-390
National Category
Computer Sciences
Research subject
Computer and Systems Science
Identifiers
URN: urn:nbn:se:oru:diva-19894
DOI: 10.1016/j.robot.2006.12.002
ISI: 000246609500004
Scopus ID: 2-s2.0-34247125634
OAI: oai:DiVA.org:oru-19894
DiVA id: diva2:447970
Conference
Workshop on From Sensors to Human Spatial Concepts, Beijing, People's Republic of China, October 2006
Available from: 2011-10-13. Created: 2011-10-12. Last updated: 2018-06-12. Bibliographically approved.

Open Access in DiVA

Virtual Sensors for Human Concepts: Building Detection by an Outdoor Mobile Robot (481 kB)
File information
File name: FULLTEXT01.pdf
File size: 481 kB
Checksum (SHA-512): fe100f60f526d5e88152abae20b9e3067b92b461c2444780e57d18e8a7812c41b681da4ea076d0aeaf816ffa453ded0355032752f2e3009c9991a8d4c17ddcbd
Type: fulltext
Mimetype: application/pdf

Other links
Publisher's full text
Scopus

Authority records
Persson, Martin; Lilienthal, Achim J.

