
Örebro University Publications
1 - 11 of 11
  • 1.
    Bothe, Hans-H.
    et al.
    Örebro University, Department of Technology.
    Persson, Martin
    Örebro University, Department of Technology.
    Biel, Lena
    Örebro University, Department of Technology.
    Rosenholm, Magnus
    Örebro University, Department of Technology.
    Multivariate sensor fusion by a neural network model. Manuscript (preprint) (Other academic)
  • 2.
    Persson, Martin
    Örebro University, Department of Technology.
    A simulation environment for visual servoing, 2002. Licentiate thesis, monograph (Other academic)
  • 3.
    Persson, Martin
    Örebro University, Department of Technology.
    Semantic mapping using virtual sensors and fusion of aerial images with sensor data from a ground vehicle, 2008. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, semantic mapping is understood to be the process of putting a tag or label on objects or regions in a map. This label should be interpretable by and have a meaning for a human. The use of semantic information has several application areas in mobile robotics. The largest area is in human-robot interaction where the semantics is necessary for a common understanding between robot and human of the operational environment. Other areas include localization through connection of human spatial concepts to particular locations, improving 3D models of indoor and outdoor environments, and model validation.

    This thesis investigates the extraction of semantic information for mobile robots in outdoor environments and the use of semantic information to link ground-level occupancy maps and aerial images. The thesis concentrates on three related issues: i) recognition of human spatial concepts in a scene, ii) the ability to incorporate semantic knowledge in a map, and iii) the ability to connect information collected by a mobile robot with information extracted from an aerial image.

    The first issue deals with a vision-based virtual sensor for classification of views (images). The images are fed into a set of learned virtual sensors, where each virtual sensor is trained for classification of a particular type of human spatial concept. The virtual sensors are evaluated with images from both ordinary cameras and an omni-directional camera, showing robust properties that cope with variations such as seasonal changes.

    In the second part a probabilistic semantic map is computed based on an occupancy grid map and the output from a virtual sensor. A local semantic map is built around the robot for each position where images have been acquired. This map is a grid map augmented with semantic information in the form of probabilities that the occupied grid cells belong to a particular class. The local maps are fused into a global probabilistic semantic map covering the area along the trajectory of the mobile robot.

    In the third part, information extracted from an aerial image is used to improve the mapping process. Region and object boundaries taken from the probabilistic semantic map are used to initialize segmentation of the aerial image. Algorithms for both local segmentation related to the borders and global segmentation of the entire aerial image, exemplified with the two classes ground and buildings, are presented. Ground-level semantic information allows the segmentation of the aerial image to be focused on the desired classes and the generation of a semantic map that covers a larger area than can be built using only the onboard sensors.

    Download full text, cover, and errata (pdf)
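
The second part of the thesis abstract above describes a grid map in which occupied cells carry the probability of belonging to a particular class, and local semantic maps built around the robot are fused into a global probabilistic semantic map. The sketch below is a minimal illustration of one common way to implement such a fusion, using a per-cell log-odds update for a single "building" class; the thesis's actual map representation and fusion rule may differ, and all names below are illustrative.

```python
import numpy as np

def prob_to_logodds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def logodds_to_prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

class GlobalSemanticMap:
    """Grid map storing, per cell, the log-odds that an occupied cell
    belongs to the 'building' class (illustrative representation)."""

    def __init__(self, width, height, prior=0.5):
        self.logodds = np.full((height, width), prob_to_logodds(prior))

    def fuse_local_map(self, cells, probs):
        """Fuse a local semantic map, given as (row, col) cells and the
        building probabilities assigned to them by the virtual sensor,
        by accumulating log-odds evidence (occupancy-grid style update)."""
        for (r, c), p in zip(cells, probs):
            p = min(max(p, 0.01), 0.99)   # avoid infinite log-odds
            self.logodds[r, c] += prob_to_logodds(p)

    def probabilities(self):
        return logodds_to_prob(self.logodds)

# Two local maps observed from different robot positions along the trajectory
global_map = GlobalSemanticMap(width=100, height=100)
global_map.fuse_local_map(cells=[(10, 20), (10, 21)], probs=[0.8, 0.7])
global_map.fuse_local_map(cells=[(10, 20)], probs=[0.9])
print(global_map.probabilities()[10, 20])  # rises after repeated positive evidence
```

The log-odds form keeps repeated observations of the same cell commutative and numerically stable, which is why it is a usual choice for this kind of incremental map fusion.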
  • 4.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping, 2007. In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, p. 17-24. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.

    Download full text (pdf)
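
The paper above matches wall estimates from the ground-level semantic map against edges detected in the aerial image and uses the result to direct a region- and boundary-based segmentation. The sketch below illustrates only a possible form of that matching step, under the simplifying assumption of a north-up, geo-referenced aerial image; the function names, thresholds and coordinate handling are hypothetical and not taken from the paper.

```python
import numpy as np
import cv2  # OpenCV, used here only for standard Canny edge detection

def world_to_pixel(x, y, origin, resolution):
    """Convert world coordinates (metres) to (row, col) pixel coordinates,
    assuming a north-up, geo-referenced aerial image (simplifying assumption)."""
    col = int((x - origin[0]) / resolution)
    row = int((origin[1] - y) / resolution)
    return row, col

def wall_seeds_near_edges(aerial_gray, wall_points, origin, resolution,
                          max_dist_px=3):
    """Keep the ground-level wall estimates (world coordinates of cells with a
    high wall probability) that lie close to an edge detected in the aerial
    image; the surviving pixels can seed a region/boundary-based segmentation."""
    edges = cv2.Canny(aerial_gray, 100, 200)        # aerial_gray: uint8 image
    edge_rows, edge_cols = np.nonzero(edges)
    edge_px = np.stack([edge_rows, edge_cols], axis=1)

    seeds = []
    for x, y in wall_points:
        if len(edge_px) == 0:
            break
        r, c = world_to_pixel(x, y, origin, resolution)
        dists = np.hypot(edge_px[:, 0] - r, edge_px[:, 1] - c)
        if dists.min() <= max_dist_px:
            seeds.append((r, c))
    return seeds
```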
  • 5.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Natural Sciences.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 6, p. 483-492. Article in journal (Refereed)
    Abstract [en]

    This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.

    Download full text (pdf)
  • 6.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information, 2008. In: Recent Progress in Robotics: Viable Robotic Service to Human, Berlin, Germany: Springer, 2008, p. 157-169. Conference paper (Other academic)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to “see” around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.

    Download full text (pdf)
  • 7.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information, 2007. In: Proceedings of the IEEE international conference on advanced robotics: ICAR 2007, 2007, p. 924-929. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. In the suggested approach a ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors along the robot trajectory.

    Download full text (pdf)
  • 8.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot, 2007. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 5, p. 383-390. Article in journal (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

    Download full text (pdf)
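
The article above builds the virtual sensor from generic features extracted from grey-level images (edge orientation, edge configurations, grey-level clustering) and combines them with AdaBoost. The following is a minimal sketch of that idea using simple stand-in features and scikit-learn's AdaBoostClassifier; the actual features, training data and parameters in the paper differ, so treat everything here as illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def generic_features(gray):
    """Simple stand-ins for the paper's features: an edge-orientation histogram
    over strong gradients plus a grey-level histogram, from a grey-level image."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    strong = magnitude > magnitude.mean()
    ori_hist, _ = np.histogram(orientation[strong], bins=8, range=(-np.pi, np.pi))
    grey_hist, _ = np.histogram(gray, bins=8, range=(0, 256))
    feats = np.concatenate([ori_hist, grey_hist]).astype(float)
    return feats / (feats.sum() + 1e-9)

def train_virtual_sensor(train_images, labels):
    """Train the building/non-building classifier; labels: 1 = building, 0 = non-building."""
    X = np.array([generic_features(img) for img in train_images])
    clf = AdaBoostClassifier(n_estimators=50)
    clf.fit(X, np.array(labels))
    return clf

def classify_view(clf, image):
    """Probability that the view in a given direction contains a building."""
    return clf.predict_proba(generic_features(image)[None, :])[0, 1]
```

Boosting many weak, cheap features in this way is what lets such a sensor generalise; the abstract reports exactly that kind of extrapolation to images collected under different conditions.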
  • 9.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot, 2006. In: Proceedings of the IROS 2006 workshop: From Sensors to Human Spatial Concepts, IEEE, 2006, p. 21-26. Conference paper (Refereed)
    Abstract [en]

    In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

    Download full text (pdf)
  • 10.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Valgren, Christoffer
    Department of Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Probabilistic semantic mapping with a virtual sensor for building/nature detection, 2007. In: Proceedings of the 2007 IEEE International symposium on computational intelligence in robotics and automation, CIRA 2007, New York, NY, USA: IEEE, 2007, p. 236-242, article id 4269870. Conference paper (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot), giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between "building" objects and "nature" objects.

    Download full text (pdf)
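
The paper above combines classifications of sub-images from a panoramic view with the robot's location and orientation and an occupancy map to obtain likely building locations. One plausible way to make that association is to cast a ray from the robot pose in each sub-image's viewing direction and attach the class probability to the first occupied cell it hits; the sketch below shows only that association step and is an assumption-laden illustration, not the paper's algorithm.

```python
import numpy as np

def first_occupied_cell(occupancy, robot_rc, bearing, max_range_cells=200):
    """Step along a ray from the robot's grid cell in the given bearing
    (radians, map frame) and return the first occupied cell, or None.
    occupancy is a boolean grid; the stepping scheme is deliberately simple."""
    r0, c0 = robot_rc
    dr, dc = -np.sin(bearing), np.cos(bearing)   # row index grows downwards
    for step in range(1, max_range_cells):
        r = int(round(r0 + step * dr))
        c = int(round(c0 + step * dc))
        if not (0 <= r < occupancy.shape[0] and 0 <= c < occupancy.shape[1]):
            return None
        if occupancy[r, c]:
            return (r, c)
    return None

def label_cells_from_panorama(occupancy, robot_rc, robot_heading,
                              sub_image_bearings, building_probs):
    """Attach each sub-image's building probability to the occupied cell its
    viewing direction hits; the result can then be fused into a probabilistic
    semantic map. Returns a {cell: probability} dictionary."""
    labels = {}
    for rel_bearing, p in zip(sub_image_bearings, building_probs):
        cell = first_occupied_cell(occupancy, robot_rc,
                                   robot_heading + rel_bearing)
        if cell is not None:
            labels[cell] = p
    return labels
```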
  • 11.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Wide, Peter
    Örebro University, Department of Technology.
    Using a sensor source intelligence cell to connect and distribute visual information from a commercial game engine in a disaster management exercise, 2007. In: IEEE instrumentation and measurement technology conference proceedings, IMTC 2007, New York: IEEE, 2007, p. 1-5. Conference paper (Refereed)
    Abstract [en]

    This paper presents a system in which different scenarios can be played out in a synthetic natural environment, in the form of a modified commercial game used for scenario simulation. This environment is connected to a command and control system that can visualize, process, store, and distribute sensor data and their interpretations within several command levels. It is specifically intended for mobile sensors used in remote sensing tasks. The system was used in a disaster management exercise, where it distributed information from a virtual accident to the different command levels involved in the crisis management. The information consisted of live and recorded video, reports and map objects.
