Örebro University Publications (oru.se)
1 - 11 of 11
  • 1.
    Bothe, Hans-H.
    et al.
    Örebro University, Department of Technology.
    Persson, Martin
    Örebro University, Department of Technology.
    Biel, Lena
    Örebro University, Department of Technology.
    Rosenholm, Magnus
    Örebro University, Department of Technology.
    Multivariate sensor fusion by a neural network model. Manuscript (preprint) (Other academic)
  • 2.
    Persson, Martin
    Örebro University, Department of Technology.
    A simulation environment for visual servoing, 2002. Licentiate thesis, monograph (Other academic)
  • 3.
    Persson, Martin
    Örebro University, Department of Technology.
    Semantic mapping using virtual sensors and fusion of aerial images with sensor data from a ground vehicle, 2008. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, semantic mapping is understood as the process of attaching a tag or label to objects or regions in a map. This label should be interpretable by, and meaningful to, a human. The use of semantic information has several application areas in mobile robotics. The largest is human-robot interaction, where semantics is necessary for a common understanding of the operational environment between robot and human. Other areas include localization through the connection of human spatial concepts to particular locations, improving 3D models of indoor and outdoor environments, and model validation.

    This thesis investigates the extraction of semantic information for mobile robots in outdoor environments and the use of semantic information to link ground-level occupancy maps and aerial images. The thesis concentrates on three related issues: i) recognition of human spatial concepts in a scene, ii) the ability to incorporate semantic knowledge in a map, and iii) the ability to connect information collected by a mobile robot with information extracted from an aerial image.

    The first issue deals with a vision-based virtual sensor for classification of views (images). The images are fed into a set of learned virtual sensors, where each virtual sensor is trained to classify a particular type of human spatial concept. The virtual sensors are evaluated with images from both ordinary cameras and an omni-directional camera, showing robust properties that cope with variations such as seasonal change.

    In the second part, a probabilistic semantic map is computed based on an occupancy grid map and the output from a virtual sensor. A local semantic map is built around the robot for each position where images have been acquired. This map is a grid map augmented with semantic information in the form of probabilities that the occupied grid cells belong to a particular class. The local maps are fused into a global probabilistic semantic map covering the area along the trajectory of the mobile robot.

    In the third part, information extracted from an aerial image is used to improve the mapping process. Region and object boundaries taken from the probabilistic semantic map are used to initialize segmentation of the aerial image. Algorithms for both local segmentation related to the borders and global segmentation of the entire aerial image, exemplified with the two classes ground and buildings, are presented. Ground-level semantic information makes it possible to focus the segmentation of the aerial image on the desired classes and to generate a semantic map that covers a larger area than could be built using only the onboard sensors.

    Download full text (pdf)
    FULLTEXT01
    Download (pdf)
    COVER01
    Download (pdf)
    ERRATA01
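    The second and third parts of the abstract describe grid cells that carry class probabilities and local maps that are fused into a global semantic map. As a rough illustration only, the per-cell fusion could look like the minimal Python sketch below, assuming independent observations and a plain Bayesian product update; the class set, array layout and function names are illustrative and not taken from the thesis.

    import numpy as np

    # Illustrative label set; the thesis distinguishes classes such as
    # buildings and nature/ground.
    CLASSES = ("building", "nature")

    def fuse_cell(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
        """Combine one cell's class distribution with a new virtual-sensor
        likelihood using a normalized Bayesian product."""
        post = prior * likelihood
        return post / post.sum()

    def fuse_local_map(global_map: np.ndarray, local_map: np.ndarray,
                       observed: np.ndarray) -> None:
        """Fuse a local semantic grid (built around one robot pose) into the
        global map over the cells the local map actually observed.

        global_map, local_map: (H, W, len(CLASSES)) class distributions
        observed: (H, W) boolean mask of observed cells
        """
        for y, x in zip(*np.nonzero(observed)):
            global_map[y, x] = fuse_cell(global_map[y, x], local_map[y, x])

    # Two consistent "building" observations shift an uninformed cell:
    cell = np.array([0.5, 0.5])
    cell = fuse_cell(cell, np.array([0.8, 0.2]))
    print(fuse_cell(cell, np.array([0.8, 0.2])))  # approx. [0.94, 0.06]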
  • 4.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping, 2007. In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, pp. 17-24. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot could explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in the segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than could be built using the onboard sensors.

    Download full text (pdf)
    Fusion of Aerial Images and Sensor Data from a Ground Vehicle for Improved Semantic Mapping
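    The matching step described in this abstract, wall estimates from the ground-level map against edges detected in the aerial image, could be prototyped along the following lines. This is a sketch that assumes both rasters share one georeferenced pixel grid; OpenCV's Canny detector stands in for whatever edge detector the paper used, and the function name and thresholds are illustrative.

    import numpy as np
    import cv2

    def match_wall_estimates(aerial_gray: np.ndarray,
                             wall_prob: np.ndarray,
                             prob_thresh: float = 0.7,
                             max_dist_px: int = 3) -> np.ndarray:
        """Return a boolean mask of aerial-image edge pixels lying close to
        high-probability wall cells from the ground-level semantic map.

        aerial_gray: uint8 grey-scale aerial image
        wall_prob: float array (same shape) of per-cell wall probabilities
        """
        edges = cv2.Canny(aerial_gray, 50, 150) > 0
        walls = (wall_prob > prob_thresh).astype(np.uint8)
        # Dilate the wall cells so edges within max_dist_px count as matches.
        kernel = np.ones((2 * max_dist_px + 1,) * 2, np.uint8)
        near_walls = cv2.dilate(walls, kernel).astype(bool)
        # Matched edges can seed region- and boundary-based segmentation.
        return edges & near_walls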
  • 5.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Natural Science.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 6, pp. 483-492. Article in journal (Refereed)
    Abstract [en]

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.

    Download full text (pdf)
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
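    Complementing the edge-matching sketch above, the local and global segmentation mentioned in this abstract could, in its simplest region-based form, grow regions from seed pixels that the ground-level wall estimates project into the aerial image. A toy seeded region-growing routine follows; the grey-level tolerance is a stand-in for the paper's actual region criterion, and the name and parameters are illustrative.

    import numpy as np
    from collections import deque

    def grow_region(image: np.ndarray, seed: tuple[int, int],
                    tol: float = 10.0) -> np.ndarray:
        """Grow a region from a seed pixel, absorbing 4-connected
        neighbours whose grey level is within tol of the seed's."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        ref = float(image[seed])
        queue = deque([seed])
        mask[seed] = True
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(float(image[ny, nx]) - ref) <= tol):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
        return mask  # candidate building (or ground) region around the seed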
  • 6.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information, 2008. In: Recent Progress in Robotics: Viable Robotic Service to Human, Berlin, Germany: Springer, 2008, pp. 157-169. Conference paper (Other academic)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot could explore the area by itself, giving the robot an ability to “see” around corners. At the same time, the approach can compensate for the absence of elevation data in the segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than could be built using the onboard sensors.

    Download full text (pdf)
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information
  • 7.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information, 2007. In: Proceedings of the IEEE International Conference on Advanced Robotics: ICAR 2007, 2007, pp. 924-929. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. In the suggested approach, a ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot could explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in the segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than could be built using the onboard sensors along the robot trajectory.

    Download full text (pdf)
    Improved Mapping and Image Segmentation by Using Semantic Information to Link Aerial Images and Ground-Level Information
  • 8.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot, 2007. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 5, pp. 383-390. Article in journal (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

    Download full text (pdf)
    Virtual Sensors for Human Concepts: Building Detection by an Outdoor Mobile Robot
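    The virtual sensor described above combines weak feature-based classifiers with AdaBoost. The following self-contained sketch shows the same pattern using scikit-learn's AdaBoostClassifier; the feature vector (a grey-level histogram plus a coarse edge-orientation histogram) is a simplified stand-in for the paper's features, and the random training images serve only to make the demo run.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def image_features(img: np.ndarray, bins: int = 8) -> np.ndarray:
        """Grey-level histogram plus a coarse edge-orientation histogram
        computed from image gradients."""
        grey, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
        gy, gx = np.gradient(img.astype(float))
        ori, _ = np.histogram(np.arctan2(gy, gx), bins=bins,
                              range=(-np.pi, np.pi), density=True)
        return np.concatenate([grey, ori])

    # Toy training data; the actual training used labelled outdoor images.
    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(40, 32, 32))
    labels = rng.integers(0, 2, size=40)   # 1 = building, 0 = non-building
    X = np.array([image_features(im) for im in images])

    clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)
    print(clf.predict(X[:3]))              # per-view building/non-building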
  • 9.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot, 2006. In: Proceedings of the IROS 2006 Workshop: From Sensors to Human Spatial Concepts, IEEE, 2006, pp. 21-26. Conference paper (Refereed)
    Abstract [en]

    In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

    Download full text (pdf)
    Virtual Sensors for Human Concepts: Building Detection by an Outdoor Mobile Robot
  • 10.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Valgren, Christoffer
    Department of Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Probabilistic semantic mapping with a virtual sensor for building/nature detection, 2007. In: Proceedings of the 2007 IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA 2007, New York, NY, USA: IEEE, 2007, pp. 236-242, article id 4269870. Conference paper (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator, and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot), giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and a clear distinction between "building" objects and "nature" objects.

    Download full text (pdf)
    Probabilistic Semantic Mapping with a Virtual Sensor for Building/Nature detection
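    The projection step in this abstract, combining the classification of a panoramic sub-image with the robot's location and orientation to label occupied grid cells, can be sketched as a simple ray walk over the occupancy grid. The averaging update below is a stand-in for the paper's probabilistic update, and all names and parameters are illustrative.

    import numpy as np

    def apply_classification(occ: np.ndarray, sem: np.ndarray,
                             pose_xy: tuple[float, float], bearing: float,
                             p_building: float, max_range: int = 100) -> None:
        """Walk the occupancy grid from the robot position along one
        panoramic sub-image's bearing; the first occupied cell hit
        receives that sub-image's building probability."""
        x, y = pose_xy
        dx, dy = np.cos(bearing), np.sin(bearing)
        for step in range(1, max_range):
            cx, cy = int(round(x + step * dx)), int(round(y + step * dy))
            if not (0 <= cy < occ.shape[0] and 0 <= cx < occ.shape[1]):
                return                      # ray left the map
            if occ[cy, cx]:                 # occupied cell: label it here
                sem[cy, cx] = 0.5 * (sem[cy, cx] + p_building)
                return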
  • 11.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Wide, Peter
    Örebro University, Department of Technology.
    Using a sensor source intelligence cell to connect and distribute visual information from a commercial game engine in a disaster management exercise, 2007. In: IEEE Instrumentation and Measurement Technology Conference Proceedings, IMTC 2007, New York: IEEE, 2007, pp. 1-5. Conference paper (Refereed)
    Abstract [en]

    This paper presents a system where different scenarios can be played out in a synthetic natural environment, in the form of a modified commercial game used for scenario simulation. This environment is connected to a command and control system that can visualize, process, store, and distribute sensor data and their interpretations across several command levels. It is specifically intended for mobile sensors used in remote sensing tasks. The system was used in a disaster management exercise, where it distributed information from a virtual accident to different command levels in the crisis management organization. The information consisted of live and recorded video, reports, and map objects.
