Persson, Martin
Publications (10 of 11)
Persson, M., Duckett, T. & Lilienthal, A. J. (2008). Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping. Paper presented at IROS Workshop on From Sensors to Human Spatial Concepts 2007, San Diego, CA, USA, Nov. 2007. Robotics and Autonomous Systems, 56(6), 483-492
Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
2008 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 6, p. 483-492. Article in journal (Refereed), Published
Abstract [en]

This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.
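A minimal sketch of the idea, assuming the wall-probability grid from the ground robot is already co-registered with the aerial image; the thresholds and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage

def segment_aerial(aerial_gray, wall_prob, wall_thresh=0.7, grad_thresh=20.0):
    """aerial_gray: HxW grey-level aerial image; wall_prob: HxW grid of
    P(cell occupied by a building wall), co-registered with the image."""
    # Seeds: cells the ground-level semantic map marks as probable walls.
    seeds = wall_prob > wall_thresh
    # Edge map of the aerial image; growing stops at strong intensity edges.
    gx, gy = np.gradient(aerial_gray.astype(float))
    edges = np.hypot(gx, gy) > grad_thresh
    # Grow building regions from the seeds across connected non-edge pixels.
    labels, _ = ndimage.label(~edges)
    seed_labels = np.unique(labels[seeds & ~edges])
    building = np.isin(labels, seed_labels[seed_labels > 0])
    # Pixels outside the grown regions are candidate (potentially drivable) ground.
    return building
```

The abstract distinguishes local and global segmentation of the aerial image; this sketch only mimics the global seeding idea, not the local segmentation around individual wall estimates.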

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2008
Keywords
Semantic Mapping, Aerial Images, Mobile Robotics
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-3274 (URN), 10.1016/j.robot.2008.03.002 (DOI), 000256986100002 (), 2-s2.0-44149107914 (Scopus ID)
Conference
IROS Workshop on From Sensors to Human Spatial Concepts 2007, San Diego, CA, USA, Nov. 2007
Available from: 2008-11-28 Created: 2008-11-28 Last updated: 2022-07-06. Bibliographically approved
Persson, M., Duckett, T. & Lilienthal, A. J. (2008). Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information. In: Recent Progress in Robotics: Viable Robotic Service to Human. Paper presented at 13th International Conference on Advanced Robotics, Jeju Island, South Korea (pp. 157-169). Berlin, Germany: Springer
Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information
2008 (English) In: Recent Progress in Robotics: Viable Robotic Service to Human, Berlin, Germany: Springer, 2008, p. 157-169. Conference paper, Published paper (Other academic)
Abstract [en]

This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to “see” around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.
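A minimal sketch of the edge-matching step, assuming aerial edge segments are already extracted as lists of pixel coordinates and the wall-probability grid is co-registered with the image; the scoring rule and names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def score_edge_segments(edge_segments, wall_prob, support_radius=2):
    """edge_segments: list of (N_i x 2) integer arrays of (row, col) pixels per
    aerial edge segment; wall_prob: grid of wall probabilities from the robot.
    Returns one score per segment: how well it agrees with the wall estimates."""
    h, w = wall_prob.shape
    scores = []
    for seg in edge_segments:
        support = []
        for r, c in seg:
            r0, r1 = max(r - support_radius, 0), min(r + support_radius + 1, h)
            c0, c1 = max(c - support_radius, 0), min(c + support_radius + 1, w)
            support.append(wall_prob[r0:r1, c0:c1].max())
        scores.append(float(np.mean(support)))
    return scores  # high-scoring edges are candidate building outlines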

Place, publisher, year, edition, pages
Berlin, Germany: Springer, 2008
Series
Lecture Notes in Control and Information Sciences, ISSN 0170-8643 ; 370
Keywords
Semantic Mapping, Aerial Images, Mobile Robotics
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-3297 (URN), 10.1007/978-3-540-76729-9_13 (DOI), 000252925100011 (), 2-s2.0-36749046368 (Scopus ID), 978-3-540-76728-2 (ISBN)
Conference
13th International Conference on Advanced Robotics, Jeju Island, South Korea
Available from: 2008-11-28 Created: 2008-11-28 Last updated: 2018-06-13. Bibliographically approved
Persson, M. (2008). Semantic mapping using virtual sensors and fusion of aerial images with sensor data from a ground vehicle. (Doctoral dissertation). Örebro: Örebro universitet
Semantic mapping using virtual sensors and fusion of aerial images with sensor data from a ground vehicle
2008 (English) Doctoral thesis, monograph (Other academic)
Abstract [en]

In this thesis, semantic mapping is understood to be the process of putting a tag or label on objects or regions in a map. This label should be interpretable by and have a meaning for a human. The use of semantic information has several application areas in mobile robotics. The largest area is human-robot interaction, where semantics is necessary for a common understanding of the operational environment between robot and human. Other areas include localization through connection of human spatial concepts to particular locations, improving 3D models of indoor and outdoor environments, and model validation.

This thesis investigates the extraction of semantic information for mobile robots in outdoor environments and the use of semantic information to link ground-level occupancy maps and aerial images. The thesis concentrates on three related issues: i) recognition of human spatial concepts in a scene, ii) the ability to incorporate semantic knowledge in a map, and iii) the ability to connect information collected by a mobile robot with information extracted from an aerial image.

The first issue deals with a vision-based virtual sensor for classification of views (images). The images are fed into a set of learned virtual sensors, where each virtual sensor is trained for classification of a particular type of human spatial concept. The virtual sensors are evaluated with images from both ordinary cameras and an omni-directional camera, showing robust properties that cope with variations such as seasonal changes.

In the second part a probabilistic semantic map is computed based on an occupancy grid map and the output from a virtual sensor. A local semantic map is built around the robot for each position where images have been acquired. This map is a grid map augmented with semantic information in the form of probabilities that the occupied grid cells belong to a particular class. The local maps are fused into a global probabilistic semantic map covering the area along the trajectory of the mobile robot.

In the third part information extracted from an aerial image is used to improve the mapping process. Region and object boundaries taken from the probabilistic semantic map are used to initialize segmentation of the aerial image. Algorithms for both local segmentation related to the borders and global segmentation of the entire aerial image, exemplified with the two classes ground and buildings, are presented. Ground-level semantic information allows focusing of the segmentation of the aerial image to desired classes and generation of a semantic map that covers a larger area than can be built using only the onboard sensors.
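A minimal sketch of the probabilistic fusion described in the second part, assuming the class probabilities are kept as per-cell log-odds; the update rule and all names are illustrative assumptions, not the method used in the thesis:

```python
import numpy as np

def fuse_local_into_global(global_logodds, local_prob, observed_mask):
    """global_logodds: running log-odds that each cell belongs to e.g. 'building'.
    local_prob: per-cell class probability from one local semantic map
    (occupancy grid + virtual sensor output at the current robot position).
    observed_mask: boolean grid of cells actually observed from that position."""
    eps = 1e-6
    local = np.clip(local_prob, eps, 1.0 - eps)
    update = np.log(local / (1.0 - local))
    global_logodds[observed_mask] += update[observed_mask]
    return global_logodds

def to_probability(logodds):
    """Convert the fused log-odds back to a probabilistic semantic map."""
    return 1.0 / (1.0 + np.exp(-logodds))
```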

Place, publisher, year, edition, pages
Örebro: Örebro universitet, 2008. p. 170
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 30
Keywords
semantic mapping, aerial image, mobile robot, supervised learning, semi-supervised learning
National Category
Engineering and Technology
Research subject
Industrial Measurement Technology
Identifiers
urn:nbn:se:oru:diva-2186 (URN), 978-91-7668-593-8 (ISBN)
Public defence
2008-09-12, Hörsal T, Teknikhuset, Örebro, 13:00 (English)
Available from: 2008-05-28 Created: 2008-05-28 Last updated: 2017-10-18. Bibliographically approved
Persson, M., Duckett, T. & Lilienthal, A. J. (2007). Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping. In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts". Paper presented at IROS Workshop "From Sensors to Human Spatial Concepts", Nov. 2007, San Diego, CA, USA (pp. 17-24).
Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
2007 (English) In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, p. 17-24. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.

National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-4262 (URN)
Conference
IROS Workshop "From Sensors to Human Spatial Concepts", Nov., 2007, San Diego, CA, USA
Available from: 2007-12-13 Created: 2007-12-13 Last updated: 2018-06-12. Bibliographically approved
Persson, M., Duckett, T. & Lilienthal, A. J. (2007). Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information. In: Proceedings of the IEEE international conference on advanced robotics: ICAR 2007. Paper presented at 13th IEEE International Conference on Advanced Robotics, ICAR 2007, Jeju Island, South Korea, Aug. 22-25, 2007 (pp. 924-929).
Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information
2007 (English) In: Proceedings of the IEEE international conference on advanced robotics: ICAR 2007, 2007, p. 924-929. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. In the suggested approach a ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors along the robot trajectory.

National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-4267 (URN)
Conference
13th IEEE International Conference on Advanced Robotics, ICAR 2007, Jeju Island, South Korea, Aug. 22-25, 2007
Available from: 2007-12-13 Created: 2007-12-13 Last updated: 2022-08-05. Bibliographically approved
Persson, M., Duckett, T., Valgren, C. & Lilienthal, A. J. (2007). Probabilistic semantic mapping with a virtual sensor for building/nature detection. In: Proceedings of the 2007 IEEE International symposium on computational intelligence in robotics and automation, CIRA 2007: . Paper presented at International symposium on computational intelligence in robotics and automation, CIRA 2007. 20-23 June 2007, Jacksonville, FL, USA (pp. 236-242). New York, NY, USA: IEEE, Article ID 4269870.
Probabilistic semantic mapping with a virtual sensor for building/nature detection
2007 (English) In: Proceedings of the 2007 IEEE International symposium on computational intelligence in robotics and automation, CIRA 2007, New York, NY, USA: IEEE, 2007, p. 236-242, article id 4269870. Conference paper, Published paper (Refereed)
Abstract [en]

In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot), giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between "building" objects and "nature" objects.
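A minimal sketch of how one panoramic sub-image classification could be projected into map cells along its viewing direction, assuming a simple ray model and a fixed cell size; the names and parameters are illustrative assumptions, not the paper's implementation:

```python
import math

def cells_along_bearing(robot_xy, robot_heading, subimage_bearing,
                        max_range=30.0, cell_size=0.5):
    """Return the grid cells crossed by the ray of one panoramic sub-image,
    given the robot pose and the sub-image's bearing relative to the robot."""
    x, y = robot_xy
    theta = robot_heading + subimage_bearing
    cells, steps = [], int(max_range / cell_size)
    for i in range(1, steps + 1):
        d = i * cell_size
        cx = math.floor((x + d * math.cos(theta)) / cell_size)
        cy = math.floor((y + d * math.sin(theta)) / cell_size)
        if not cells or cells[-1] != (cx, cy):
            cells.append((cx, cy))
    # Occupied cells among these would inherit the sub-image's class label
    # (e.g. "building" vs "nature") when fused with the occupancy map.
    return cells
```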

Place, publisher, year, edition, pages
New York, NY, USA: IEEE, 2007
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-4268 (URN), 10.1109/CIRA.2007.382870 (DOI), 000249266100034 (), 2-s2.0-34948902289 (Scopus ID), 978-1-4244-0789-7 (ISBN)
Conference
International symposium on computational intelligence in robotics and automation, CIRA 2007. 20-23 June 2007, Jacksonville, FL, USA
Note

Funding Agency: Swedish Defence Materiel Administration

Available from: 2007-12-13 Created: 2007-12-13 Last updated: 2022-08-05. Bibliographically approved
Persson, M. & Wide, P. (2007). Using a sensor source intelligence cell to connect and distribute visual information from a commercial game engine in a disaster management exercise. In: IEEE instrumentation and measurement technology conference proceedings, IMTC 2007. Paper presented at IEEE instrumentation and measurement technology conference, IMTC 2007, 1-3 May, Warsaw (pp. 1-5). New York: IEEE
Using a sensor source intelligence cell to connect and distribute visual information from a commercial game engine in a disaster management exercise
2007 (English) In: IEEE instrumentation and measurement technology conference proceedings, IMTC 2007, New York: IEEE, 2007, p. 1-5. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a system where different scenarios can be played in a synthetic natural environment in the form of a modified commercial game used for scenario simulation. This environment is connected to a command and control system that can visualize, process, store, and distribute sensor data and their interpretations within several command levels. It is specifically intended for mobile sensors used in remote sensing tasks. The system has been used in a disaster management exercise, where it distributed information from a virtual accident to the different command levels of the crisis management organisation. The information consisted of live and recorded video, reports and map objects.
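A minimal sketch of the kind of report object such a system might pass from the sensor source to the command levels; all field names are illustrative assumptions, not the exercise system's actual message format:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensorReport:
    source: str                  # e.g. a simulated mobile sensor in the game scenario
    timestamp: float
    position: tuple              # reported location of the observation
    media: List[str] = field(default_factory=list)        # URIs of live/recorded video
    map_objects: List[dict] = field(default_factory=list) # symbols to plot on the C2 map
    text: str = ""               # the operator's interpretation of the sensor data

def distribute(report: SensorReport, command_levels: List[str]) -> Dict[str, SensorReport]:
    """Fan the same report out to every subscribed command level."""
    return {level: report for level in command_levels}
```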

Place, publisher, year, edition, pages
New York: IEEE, 2007
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-4405 (URN), 10.1109/IMTC.2007.379463 (DOI), 1-4244-0588-2 (ISBN)
Conference
IEEE instrumentation and measurement technology conference, IMTC 2007, 1-3 May, Warsaw
Available from: 2008-02-25 Created: 2008-02-25 Last updated: 2018-01-13. Bibliographically approved
Persson, M., Duckett, T. & Lilienthal, A. J. (2007). Virtual sensors for human concepts: building detection by an outdoor mobile robot. Paper presented at Workshop on From Sensors to Human Spatial Concepts, Beijing, People's Republic of China, Oct. 2006. Robotics and Autonomous Systems, 55(5), 383-390
Virtual sensors for human concepts: building detection by an outdoor mobile robot
2007 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 5, p. 383-390. Article in journal (Refereed), Published
Abstract [en]

In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
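A minimal sketch of a virtual sensor of this kind, using crude grey-level and edge-orientation features and scikit-learn's AdaBoostClassifier as a stand-in for the paper's AdaBoost stage; the actual feature set in the paper is richer, and everything below is an illustrative assumption:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def simple_features(gray):
    """gray: 2D grey-level image patch covering one viewing direction."""
    gx, gy = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    strong = mag > 30.0  # fixed gradient-magnitude threshold (assumption)
    if strong.any():
        # Fraction of strong edges that are near-horizontal or near-vertical;
        # man-made structures tend to produce axis-aligned edges.
        axis_aligned = float((np.abs(np.sin(2 * ang[strong])) < 0.3).mean())
    else:
        axis_aligned = 0.0
    hist, _ = np.histogram(gray, bins=8, range=(0, 255), density=True)
    return np.concatenate(([axis_aligned, float(strong.mean())], hist))

# Training on labelled patches (X: stacked feature vectors, y: 1=building, 0=nature):
# clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
# p_building = clf.predict_proba(simple_features(new_patch).reshape(1, -1))[0, 1]
```

Trained on labelled building/nature patches, the classifier's probability output would play the role of the virtual sensor reading for one viewing direction.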

Place, publisher, year, edition, pages
Amsterdam, Netherlands: Elsevier, 2007
National Category
Computer Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-19894 (URN), 10.1016/j.robot.2006.12.002 (DOI), 000246609500004 (), 2-s2.0-34247125634 (Scopus ID)
Conference
Workshop on From Sensors to Human Spatial Concepts, Beijing, People's Republic of China, Oct. 2006
Available from: 2011-10-13 Created: 2011-10-12 Last updated: 2018-06-12. Bibliographically approved
Persson, M., Duckett, T. & Lilienthal, A. J. (2006). Virtual sensors for human concepts: building detection by an outdoor mobile robot. In: Proceedings of the IROS 2006 workshop: From Sensors to Human Spatial Concepts. Paper presented at IROS Workshop: From Sensors to Human Spatial Concepts, Beijing, China, October 10, 2006 (pp. 21-26). IEEE
Virtual sensors for human concepts: building detection by an outdoor mobile robot
2006 (English) In: Proceedings of the IROS 2006 workshop: From Sensors to Human Spatial Concepts, IEEE, 2006, p. 21-26. Conference paper, Published paper (Refereed)
Abstract [en]

In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

Place, publisher, year, edition, pages
IEEE, 2006
Keywords
Human–robot communication, Human concepts, Virtual sensor, Automatic building detection, AdaBoost
National Category
Engineering and Technology; Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-3958 (URN)
Conference
IROS Workshop: From Sensors to Human Spatial Concepts, Beijing, China, October 10, 2006
Available from: 2007-08-27 Created: 2007-08-27 Last updated: 2018-06-11. Bibliographically approved
Persson, M. (2002). A simulation environment for visual servoing. (Licentiate dissertation). Örebro: Örebro universitetsbibliotek
A simulation environment for visual servoing
2002 (English) Licentiate thesis, monograph (Other academic)
Place, publisher, year, edition, pages
Örebro: Örebro universitetsbibliotek, 2002. p. 87
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 4
National Category
Computer Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-4266 (URN), 9176683117 (ISBN)
Available from: 2007-07-08 Created: 2007-07-08 Last updated: 2021-03-03. Bibliographically approved