oru.se Publications
101 - 150 of 204
  • 101.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Nüchter, A.
    Jacobs University Bremen, Bremen, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Appearance-based loop detection from 3D laser data using the normal distributions transform, 2009. In: IEEE International Conference on Robotics and Automation 2009 (ICRA '09), IEEE conference proceedings, 2009, p. 23-28. Conference paper (Other academic)
    Abstract [en]

    We propose a new approach to appearance-based loop detection from metric 3D maps, exploiting the NDT surface representation. Locations are described with feature histograms based on surface orientation and smoothness, and loop closure can be detected by matching feature histograms. We also present a quantitative performance evaluation using two real-world data sets, showing that the proposed method works well in different environments. © 2009 IEEE.
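The histogram-matching step this abstract describes can be sketched in a few lines. This is a hedged illustration under stated assumptions, not the paper's implementation: the L1 distance, the threshold value, and all function names are invented stand-ins for the paper's NDT surface-shape descriptors and similarity measure.

```python
def l1_normalize(hist):
    """Scale a histogram so its bins sum to 1 (leaves an all-zero histogram as-is)."""
    total = sum(hist)
    return [h / total for h in hist] if total else list(hist)

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(l1_normalize(h1), l1_normalize(h2)))

def detect_loop(query_hist, map_hists, threshold=0.2):
    """Return the index of the best-matching stored place, or None if no
    stored histogram is similar enough (best distance above threshold)."""
    best_idx, best_dist = None, float("inf")
    for i, h in enumerate(map_hists):
        d = histogram_distance(query_hist, h)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx if best_dist <= threshold else None
```

A tighter threshold trades recall for a lower false-positive rate, which is the trade-off the abstract's evaluation measures.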

  • 102.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Nüchter, Andreas
    Jacobs University Bremen.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform, 2009. In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 26, no 11-12, p. 892-914. Article in journal (Refereed)
    Abstract [en]

    We propose a new approach to appearance-based loop detection for mobile robots, using three-dimensional (3D) laser scans. Loop detection is an important problem in the simultaneous localization and mapping (SLAM) domain, and, because it can be seen as the problem of recognizing previously visited places, it is an example of the data association problem. Without a flat-floor assumption, two-dimensional laser-based approaches are bound to fail in many cases. Two of the problems with 3D approaches that we address in this paper are how to handle the greatly increased amount of data and how to efficiently obtain invariance to 3D rotations. We present a compact representation of 3D point clouds that is still discriminative enough to detect loop closures without false positives (i.e., detecting loop closure where there is none). A low false-positive rate is very important because wrong data association could have disastrous consequences in a SLAM algorithm. Our approach uses only the appearance of 3D point clouds to detect loops and requires no pose information. We exploit the normal distributions transform surface representation to create feature histograms based on surface orientation and smoothness. The surface shape histograms compress the input data by two to three orders of magnitude. Because of the high compression rate, the histograms can be matched efficiently to compare the appearance of two scans. Rotation invariance is achieved by aligning scans with respect to dominant surface orientations. We also propose to use expectation maximization to fit a gamma mixture model to the output similarity measures in order to automatically determine the threshold that separates scans at loop closures from nonoverlapping ones. We discuss the problem of determining ground truth in the context of loop detection and the difficulties in comparing the results of the few available methods based on range information. Furthermore, we present quantitative performance evaluations using three real-world data sets, one of which is highly self-similar, showing that the proposed method achieves high recall rates (percentage of correctly identified loop closures) at low false-positive rates in environments with different characteristics.

  • 103.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kucner, Tomasz
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Quantitative Evaluation of Coarse-To-Fine Loading Strategies for Material Rehandling, 2015. In: Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), New York: IEEE conference proceedings, 2015, p. 450-455. Conference paper (Refereed)
    Abstract [en]

    Autonomous handling of piled materials is an emerging topic in automation science and engineering. A central question for material rehandling tasks (transporting materials that have been assembled in piles) is "where to dig in order to optimise performance?" In particular, we are interested in the application of autonomous wheel loaders to handle piles of gravel, although the methodology proposed in this paper also applies to granular materials in other applications. Although initial work on suggesting strategies for where to dig has been done by a few other groups, there has been a lack of structured evaluation of the usefulness of the proposed strategies. In an attempt to further the field, we present a quantitative evaluation of loading strategies: both coarse ones, aiming to maintain a good pile shape over long-term operation, and refined ones, aiming to detect the locally best attack pose for acquiring a good fill grade in the bucket. Using real-world data from a semi-automated test platform, we present an assessment of how previously proposed pile shape measures can be mapped to the amount of material in the bucket after loading. We also present experimental data for long-term strategies, using simulations based on real-world 3D scan data from a production site.

  • 104.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Gholami Shahbandi, Saeed
    IS lab, Halmstad University, Halmstad, Sweden.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Semi-Supervised 3D Place Categorisation by Descriptor Clustering, 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 620-625. Conference paper (Refereed)
    Abstract [en]

    Place categorisation, i.e., learning to group perception data into categories based on appearance, typically uses supervised learning and either visual or 2D range data.

    This paper shows place categorisation from 3D data without any training phase. We show that, by leveraging the NDT histogram descriptor to compactly encode 3D point cloud appearance, in combination with standard clustering techniques, it is possible to classify public indoor data sets with accuracy comparable to, and sometimes better than, previous supervised training methods. We also demonstrate the effectiveness of this approach on outdoor data, with the added benefit of being able to hierarchically categorise places into sub-categories based on a user-selected threshold.

    This technique relieves users of providing relevant training data, and only requires them to adjust the sensitivity that controls the number of place categories, and to provide a semantic label for each category after the process is completed.
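A minimal sketch of the training-free grouping described above, assuming Euclidean distances between descriptor vectors and greedy single-linkage clustering as a stand-in for the "standard clustering techniques" the abstract mentions; the function names and the threshold semantics are invented for illustration.

```python
def descriptor_distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_places(descriptors, threshold):
    """Greedy single-linkage clustering: a descriptor joins the first
    cluster containing a member closer than `threshold`, otherwise it
    starts a new place category."""
    clusters = []
    for d in descriptors:
        for c in clusters:
            if any(descriptor_distance(d, m) < threshold for m in c):
                c.append(d)
                break
        else:
            clusters.append([d])
    return clusters
```

Raising the threshold merges categories into coarser ones, mirroring how the user-selected threshold controls hierarchical sub-categorisation.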

  • 105.
    Magnusson, Martin
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Scan registration for autonomous mining vehicles using 3D-NDT, 2007. In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 24, no 10, p. 803-827. Article in journal (Refereed)
    Abstract [en]

    Scan registration is an essential sub-task when building maps based on range finder data from mobile robots. The problem is to deduce how the robot has moved between consecutive scans, based on the shape of overlapping portions of the scans. This paper presents a new algorithm for registration of 3D data. The algorithm is a generalisation and improvement of the normal distributions transform (NDT) for 2D data developed by Biber and Straßer, which allows for accurate registration using a memory-efficient representation of the scan surface. A detailed quantitative and qualitative comparison of the new algorithm with the 3D version of the popular ICP (iterative closest point) algorithm is presented. Results with actual mine data, some of which were collected with a new prototype 3D laser scanner, show that the presented algorithm is faster and slightly more reliable than the standard ICP algorithm for 3D registration, while using a more memory-efficient scan surface representation.
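The core NDT idea referenced above, modelling the points in each grid cell as a Gaussian and scoring how well transformed points fit those Gaussians, can be illustrated in 2D. This sketch shows only cell construction and point scoring; the actual registration optimises the summed score over the transform parameters, which is omitted here, and all names are illustrative.

```python
import math

def cell_gaussian(points):
    """Mean and 2x2 covariance (cxx, cxy, cyy) of the points in one NDT cell."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), (cxx, cxy, cyy)

def ndt_score(point, mean, cov):
    """Score exp(-0.5 * d^T C^-1 d) of one point against one cell's Gaussian."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    cxx, cxy, cyy = cov
    det = cxx * cyy - cxy * cxy
    # closed-form inverse of the 2x2 covariance
    ixx, ixy, iyy = cyy / det, -cxy / det, cxx / det
    m = dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy)
    return math.exp(-0.5 * m)
```

Degenerate (near-singular) cell covariances need regularisation in practice; this sketch assumes well-spread points per cell.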

  • 106.
    Magnusson, Martin
    et al.
    Örebro University, Department of Technology.
    Nüchter, Andreas
    Institute of Computer Science, University of Osnabrück, Osnabrück, Germany.
    Lörken, Christopher
    Institute of Computer Science, University of Osnabrück, Osnabrück, Germany.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Hertzberg, Joachim
    Institute of Computer Science, University of Osnabrück, Osnabrück, Germany.
    3D mapping the Kvarntorp mine: a field experiment for evaluation of 3D scan matching algorithms, 2008. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Workshop, 2008. Conference paper (Other academic)
    Abstract [en]

    This paper presents the results of a field experiment in the Kvarntorp mine outside Örebro in Sweden. 3D mapping of the underground mine has been used to compare two scan matching methods, namely the iterative closest point algorithm (ICP) and the normal distributions transform (NDT). The algorithms are compared experimentally in terms of robustness and speed. For robustness, we measure how reliably 3D scans are registered with respect to different starting pose estimates. Speed is evaluated by running the authors' best implementations on the same hardware, which leads to an unbiased comparison. In these experiments, NDT was shown to converge from a larger range of initial pose estimates than ICP, and to perform faster.

  • 107.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Nüchter, Andreas
    Jacobs University Bremen, Bremen, Germany; Knowledge Systems Research Group of the Institute of Computer Science, University of Osnabrück, Germany.
    Lörken, Christopher
    Institute of Computer Science, University of Osnabrück, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Hertzberg, Joachim
    Institute of Computer Science, University of Osnabrück, Germany.
    Evaluation of 3D registration reliability and speed: a comparison of ICP and NDT, 2009. In: Proceedings of the 2009 IEEE International Conference on Robotics and Automation, ICRA'09, IEEE conference proceedings, 2009, p. 2263-2268. Conference paper (Refereed)
    Abstract [en]

    To advance robotic science it is important to perform experiments that can be replicated by other researchers to compare different methods. However, these comparisons tend to be biased, since re-implementations of reference methods often lack thoroughness and do not include the hands-on experience obtained during the original development process. This paper presents a thorough comparison of 3D scan registration algorithms based on a 3D mapping field experiment, carried out by two research groups that are leading in the field of 3D robotic mapping. The iterative closest points algorithm (ICP) is compared to the normal distributions transform (NDT). We also present an improved version of NDT with a substantially larger valley of convergence than previously published versions.

  • 108.
    Mielle, Malcolm
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Using emergency maps to add not yet explored places into SLAM, 2017. Conference paper (Other academic)
    Abstract [en]

    While using robots in search and rescue missions would help ensure the safety of first responders, a key issue is the time needed by the robot to operate. Even though SLAM is getting faster, it might still be too slow to enable the use of robots in critical situations. One way to speed up operation time is to use prior information.

    We aim at integrating emergency maps into SLAM to complete the SLAM map with information about not yet explored parts of the environment. By integrating prior information, we can speed up exploration time or provide valuable prior information for navigation, for example in case of sensor blackout or failure. However, while extensively used by firemen in their operations, emergency maps are not easy to integrate into SLAM since they are often not up to date or drawn at inconsistent scales.

    The main challenge we tackle is dealing with the imperfect scale of the rough emergency maps and integrating them with the online SLAM map, in addition to the challenges posed by incorrect matches between these two types of map. We developed a formulation of graph-based SLAM incorporating information from an emergency map, and propose a novel optimization process adapted to this formulation.

    We extract corners from the emergency map and the SLAM map, between which we find correspondences using a distance measure. We then build a graph representation associating information from the emergency map and the SLAM map. Corners in the emergency map, corners in the robot map, and robot poses are added as nodes in the graph, while odometry, corner observations, walls in the emergency map, and corner associations are added as edges. To conserve the topology of the emergency map, but correct its possible errors in scale, edges representing the emergency map's walls are given a covariance such that they are easy to extend or shrink but hard to rotate. Correspondences between corners represent a zero transformation, so that the optimization matches them as closely as possible. The graph optimization uses a combination of robust kernels: we first use the Huber kernel, to converge toward a good solution, followed by Dynamic Covariance Scaling, to handle the remaining errors.
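The two robust kernels named in this paragraph have standard published forms; the snippet below shows them as they are typically used to down-weight residuals in iteratively reweighted graph-SLAM. The constants are illustrative, since the abstract does not give the paper's parameter choices.

```python
def huber_weight(error, k=1.0):
    """Huber kernel as an IRLS weight: quadratic (weight 1) near zero,
    linear (weight k/|e|) beyond the tuning constant k."""
    e = abs(error)
    return 1.0 if e <= k else k / e

def dcs_weight(chi2, phi=1.0):
    """Dynamic Covariance Scaling: s = min(1, 2*phi / (phi + chi2)),
    which scales down the information of edges with large chi-square error."""
    return min(1.0, 2.0 * phi / (phi + chi2))
```

Running Huber first pulls the graph toward a good basin of convergence, after which DCS effectively deactivates the remaining gross outliers, matching the two-stage schedule the paragraph describes.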

    We demonstrate our system in an office environment, running SLAM online during the exploration. Using the map enhanced by information from the emergency map, the robot was able to plan the shortest path toward a place it had not yet explored. This capability can be a real asset in complex buildings where exploration can take a long time. It can also reduce exploration time by avoiding the exploration of dead ends, or speed up the search for specific places, since the robot knows where it is in the emergency map.

  • 109.
    Mielle, Malcolm
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SLAM auto-complete: completing a robot map using an emergency map, 2017. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, p. 35-40, article id 8088137. Conference paper (Refereed)
    Abstract [en]

    In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness might be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping has mainly focused on accurate maps or aerial images, such data are not always possible to get, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to get and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

    We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The graph-SLAM is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

    We typically have more than 50% wrong correspondences in the settings studied in this paper, and the method we propose handles them correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still get the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information not previously represented, such as closed doors or new walls.

  • 110.
    Mielle, Malcolm
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Using sketch-maps for robot navigation: interpretation and matching, 2016. In: 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), New York: Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 252-257. Conference paper (Refereed)
    Abstract [en]

    We present a study on sketch-map interpretation and sketch-to-robot-map matching, where maps have non-uniform scale, different shapes, or can be incomplete. For humans, sketch-maps are an intuitive way to communicate navigation information, which makes it interesting to use sketch-maps for human-robot interaction, e.g., in emergency scenarios.

    To interpret the sketch-map, we propose to use a Voronoi diagram that is obtained from the distance image on which a thinning parameter is used to remove spurious branches. The diagram is extracted as a graph and an efficient error-tolerant graph matching algorithm is used to find correspondences, while keeping time and memory complexity low.

    A comparison against common algorithms for graph extraction shows that our method leads to twice as many good matches. For simple maps, our method gives 95% good matches even for heavily distorted sketches, and for a more complex real-world map, up to 58%. This paper is a first step toward using unconstrained sketch-maps in robot navigation.

  • 111.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Automatic relational scene representation for safe robotic manipulation tasks, 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new approach for automatically building symbolic relational descriptions of static configurations of objects to be manipulated by a robotic system. The main goal of our work is to provide advanced cognitive abilities for such robotic systems to make them more aware of the outcome of their actions. We describe how such symbolic relations are automatically extracted for configurations of box-shaped objects using notions from geometry and static equilibrium in classical mechanics. We also present extensive simulation results as well as some real-world experiments aimed at verifying the output of the proposed approach.

  • 112.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Probabilistic Relational Scene Representation and Decision Making Under Incomplete Information for Robotic Manipulation Tasks, 2014. In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE Robotics and Automation Society, 2014, p. 5685-5690. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose an approach for robotic manipulation systems to autonomously reason about their environments under incomplete information. The target application is to automate the task of unloading the content of shipping containers. Our goal is to capture possible support relations between objects in partially known static configurations. We employ support vector machines (SVM) to estimate the probability of a support relation between pairs of detected objects using features extracted from their geometrical properties and 3D sampled points of the scene. The set of probabilistic support relations is then used for reasoning about optimally selecting an object to be unloaded first. The proposed approach has been extensively tested and verified on data sets generated in simulation and from real world configurations.

  • 113.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J
    Örebro University, School of Science and Technology.
    Support relation analysis and decision making for safe robotic manipulation tasks, 2015. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 71, no SI, p. 99-117. Article in journal (Refereed)
    Abstract [en]

    In this article, we describe an approach to automatically building and using high-level symbolic representations that capture physical interactions between objects in static configurations. Our work targets robotic manipulation systems where objects need to be safely removed from piles that come in random configurations. We assume that a 3D visual perception module exists so that objects in the piles can be completely or partially detected. Depending on the outcome of the perception, we divide the problem into two cases: 1) all objects in the configuration are detected; 2) only a subset of the objects are correctly detected. For the first case, we use notions from geometry and static equilibrium in classical mechanics to automatically analyze and extract act and support relations between pairs of objects. For the second case, we use machine learning techniques to estimate the probability of objects supporting each other. Having the support relations extracted, a decision making process is used to identify which object to remove from the configuration so that an expected minimum cost is optimized. The proposed methods have been extensively tested and validated on data sets generated in simulation and from real world configurations for the scenario of unloading goods from shipping containers.
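The decision step described above, choosing which object to remove so that an expected cost is minimised, can be sketched as below. The data model is invented for illustration: support probabilities are given per ordered pair, each object has a fixed fall cost, and the paper's actual cost formulation is not reproduced.

```python
def expected_removal_cost(obj, support_prob, fall_cost):
    """Expected cost of removing `obj`: every object it (probably) supports
    may fall with the estimated probability, incurring that object's cost."""
    return sum(p * fall_cost[supported]
               for (supporter, supported), p in support_prob.items()
               if supporter == obj)

def safest_object(objects, support_prob, fall_cost):
    """The object whose removal has the minimum expected cost."""
    return min(objects,
               key=lambda o: expected_removal_cost(o, support_prob, fall_cost))
```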

  • 114.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A principle of minimum translation search approach for object pose refinement, 2015. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] IEEE, IEEE Press, 2015, p. 2897-2903. Conference paper (Refereed)
    Abstract [en]

    State-of-the-art object pose estimation approaches represent the set of detected poses together with their corresponding uncertainty. Inaccurate, noisy poses may result in a configuration of overlapping objects, especially in cluttered environments. Under a rigid-body assumption, such inter-penetrations between pairs of objects are geometrically inconsistent. In this paper, we propose the principle of minimum translation search, PROMTS, to find an inter-penetration-free configuration of the initially detected objects. The target application is to automate the task of unloading shipping containers, where a geometrically consistent configuration of objects is required for high-level reasoning and manipulation. We find that the proposed approach to resolving geometrical inconsistencies improves the overall pose estimation accuracy. We examine the utility of two selected search methods: A-star and depth-limited search. The performance of the search algorithms is tested on data sets generated in simulation and from real-world scenarios. The results show overall improvement of the estimated poses and suggest that depth-limited search offers the best overall performance.

  • 115.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Application Based 3D Sensor Evaluation: A Case Study in 3D Object Pose Estimation for Automated Unloading of Containers, 2013. In: Proceedings of the European Conference on Mobile Robots (ECMR), IEEE conference proceedings, 2013, p. 313-318. Conference paper (Other academic)
    Abstract [en]

    A fundamental task in the design process of a complex system that requires 3D visual perception is the choice of suitable 3D range sensors. Identifying the utility of 3D range sensors in an industrial application solely based on an evaluation of their distance accuracy and the noise level may lead to an inappropriate selection. To assess the actual effect on the performance of the system as a whole requires a more involved analysis. In this paper, we examine the problem of selecting a set of 3D range sensors when designing autonomous systems for specific industrial applications in a holistic manner. As an instance of this problem we present a case study with an experimental evaluation of the utility of four 3D range sensors for object pose estimation in the process of automation of unloading containers.

  • 116.
    Monroy, Javier G.
    et al.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Blanco, Jose Luis
    González-Jimenez, Javier
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Calibration of MOX gas sensors in open sampling systems based on Gaussian processes, 2012. In: Proceedings of the IEEE Sensors Conference, 2012, IEEE conference proceedings, 2012, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    Calibration of metal oxide (MOX) gas sensors for continuous monitoring is a complex problem due to the highly dynamic characteristics of the gas sensor signal when exposed to a natural environment (open sampling system, OSS). This work presents a probabilistic approach to the calibration of a MOX gas sensor based on Gaussian processes (GP). The proposed approach estimates, for every sensor measurement, a probability distribution of the gas concentration. This enables the calculation of confidence intervals for the predicted concentrations, which is particularly important since exact calibration is hard to obtain due to the chaotic nature that dominates gas dispersal. The proposed approach has been tested with an experimental setup where an array of MOX sensors and a photoionization detector (PID) are placed downwind w.r.t. the gas source. The PID is used to obtain ground-truth concentrations. A comparison with standard calibration methods demonstrates the advantage of the proposed approach.
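The probabilistic calibration outlined above can be sketched as plain Gaussian-process regression from sensor response to reference (PID) concentration, returning a predicted concentration together with its variance, from which confidence intervals follow. Kernel choice, hyperparameters, and all names are assumptions; the paper's exact model is not reproduced.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(x_train, y_train, x_star, noise=0.01, length=1.0):
    """GP posterior mean and variance at x_star (zero prior mean)."""
    n = len(x_train)
    K = [[rbf(x_train[i], x_train[j], length) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k_star = [rbf(x, x_star, length) for x in x_train]
    alpha = solve(K, y_train)                      # K^-1 y
    mean = sum(a * k for a, k in zip(alpha, k_star))
    v = solve(K, k_star)                           # K^-1 k*
    var = rbf(x_star, x_star, length) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, var
```

In a real calibration the hyperparameters (`length`, `noise`) would themselves be fitted, e.g. by maximising the marginal likelihood; note how the variance grows back toward the prior far from the training data.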

  • 117.
    Monroy, Javier
    et al.
    Machine Perception and Intelligent Robotics group (MAPIR), Instituto de Investigación Biomedica de Malaga (IBIMA), Universidad de Malaga, Malaga, Spain.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Gonzalez-Jimenez, Javier
    Machine Perception and Intelligent Robotics group (MAPIR), Instituto de Investigación Biomedica de Malaga (IBIMA), Universidad de Malaga, Malaga, Spain.
    GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments, 2017. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 7, p. 1479-1494. Article in journal (Refereed)
    Abstract [en]

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment.

  • 118.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery, 2014. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 14, no 10, p. 17952-17980. Article in journal (Refereed)
    Abstract [en]

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.

  • 119.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Multi-human Tracking using High-visibility Clothing for Industrial Safety2013In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, p. 638-644Conference paper (Refereed)
    Abstract [en]

    We propose and evaluate a system for detecting and tracking multiple humans wearing high-visibility clothing from vehicles operating in industrial work environments. We use a customized stereo camera setup equipped with IR flash and IR filter to detect the reflective material on the worker's garments and estimate their trajectories in 3D space. An evaluation in two distinct industrial environments with different degrees of complexity demonstrates the approach to be robust and accurate for tracking workers in arbitrary body poses, under occlusion, and under a wide range of different illumination settings.

  • 120.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Leibe, Bastian
    Aachen University, Aachen, Germany.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Multi-band Hough Forests for detecting humans with Reflective Safety Clothing from mobile machinery2015In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2015, p. 697-703Conference paper (Refereed)
    Abstract [en]

    We address the problem of human detection from heavy mobile machinery and robotic equipment operating at industrial working sites. Exploiting the fact that workers are typically obliged to wear high-visibility clothing with reflective markers, we propose a new recognition algorithm that specifically incorporates the highly discriminative features of the safety garments in the detection process. Termed Multi-band Hough Forest, our detector fuses the input from active near-infrared (NIR) and RGB color vision to learn a human appearance model that not only allows us to detect and localize industrial workers, but also to estimate their body orientation. We further propose an efficient pipeline for automated generation of training data with high-quality body part annotations that are used in training to increase detector performance. We report a thorough experimental evaluation on challenging image sequences from a real-world production environment, where persons appear in a variety of upright and non-upright body positions.

  • 121.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Inferring human body posture information from reflective patterns of protective work garments2016In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4131-4136Conference paper (Refereed)
    Abstract [en]

    We address the problem of extracting human body posture labels, upper body orientation and the spatial location of individual body parts from near-infrared (NIR) images depicting patterns of retro-reflective markers. The analyzed patterns originate from the observation of humans equipped with protective high-visibility garments that represent common safety equipment in the industrial sector. Exploiting the shape of the observed reflectors we adopt shape matching based on the chamfer distance and infer one of seven discrete body posture labels as well as the approximate upper body orientation with respect to the camera. We then proceed to analyze the NIR images on a pixel scale and estimate a figure-ground segmentation together with human body part labels using classification of densely extracted local image patches. Our results indicate a body posture classification accuracy of 80% and figure-ground segmentations with 87% accuracy.

  • 122.
    Neumann, Patrick
    et al.
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Asadi, Sahar
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Monitoring of CCS areas using micro unmanned aerial vehicles (MUAVs)2013In: Energy Procedia, ISSN 1876-6102, E-ISSN 1876-6102, Vol. 37, p. 4182-4190Article in journal (Refereed)
    Abstract [en]

    Carbon capture & storage (CCS) is one of the most promising technologies for greenhouse gas (GHG) management. However, an unsolved issue of CCS is the development of appropriate long-term monitoring systems for leak detection of the stored CO2. To complement already existing monitoring infrastructure for CO2 storage areas, and to increase the granularity of gas concentration measurements, a quickly deployable, mobile measurement device is needed. In this paper, we present an autonomous gas-sensitive micro-drone, which can be used to monitor GHG emissions, more specifically, CO2. Two different measurement strategies are proposed to address this task. First, the use of predefined sensing trajectories is evaluated for the task of gas distribution mapping using the micro-drone. Alternatively, we present an adaptive strategy, which suggests sampling points based on an artificial potential field (APF). The results of real-world experiments demonstrate the feasibility of using gas-sensitive micro-drones for GHG monitoring missions. Thus, we suggest a multi-layered surveillance system for CO2 storage areas.

  • 123.
    Neumann, Patrick
    et al.
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Asadi, Sahar
    Örebro University, School of Science and Technology.
    Schiller, Jochen H.
    Institute of Computer Science, Freie Universität Berlin, Berlin, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    An artificial potential field based sampling strategy for a gas-sensitive micro-drone2011Conference paper (Refereed)
    Abstract [en]

    This paper presents a sampling strategy for mobile gas sensors. Sampling points are selected using a modified artificial potential field (APF) approach, which balances multiple criteria to direct sensor measurements towards locations of high mean concentration, high concentration variance and areas for which the uncertainty about the gas distribution model is still large. By selecting in each step the most often suggested close-by measurement location, the proposed approach introduces a locality constraint that allows planning suitable paths for mobile gas sensors. Initial results in simulation and in real-world experiments with a gas-sensitive micro-drone demonstrate the suitability of the proposed sampling strategy for gas distribution mapping and its use for gas source localization.
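    A rough sketch of selecting the next sampling point under such a multi-criteria scheme with a locality constraint is given below; the grid model, criterion weights, and search radius are illustrative assumptions, not the paper's actual parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical gas-distribution model state on a 20x20 grid:
    # predicted mean concentration, concentration variance, and
    # remaining model uncertainty per cell.
    mean = rng.random((20, 20))
    var = rng.random((20, 20))
    unc = rng.random((20, 20))

    def next_sampling_point(pos, w_mean=1.0, w_var=1.0, w_unc=1.0, radius=3):
        """Score every cell by a weighted sum of the three criteria named
        in the abstract and return the best cell within `radius` of the
        current position (the locality constraint)."""
        score = w_mean * mean + w_var * var + w_unc * unc
        best, best_score = pos, -np.inf
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                x, y = pos[0] + dx, pos[1] + dy
                if 0 <= x < score.shape[0] and 0 <= y < score.shape[1]:
                    if score[x, y] > best_score:
                        best, best_score = (x, y), score[x, y]
        return best

    nxt = next_sampling_point((10, 10))
    ```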

  • 124.
    Neumann, Patrick
    et al.
    Federal Institute for Materials Research and Testing (BAM), Berlin, Germany.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    Federal Institute for Materials Research and Testing (BAM), Berlin, Germany.
    Schiller, Jochen H.
    Institute of Computer Science, Freie Universität, Berlin, Germany.
    Gas source localization with a micro-drone using bio-inspired and particle filter-based algorithms2013In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 27, no 9, p. 725-738Article in journal (Refereed)
    Abstract [en]

    Gas source localization (GSL) with mobile robots is a challenging task due to the unpredictable nature of gas dispersion, the limitations of current sensing technologies, and the mobility constraints of ground-based robots. This work proposes an integral solution for the GSL task, including source declaration. We present a novel pseudo-gradient-based plume tracking algorithm and a particle filter-based source declaration approach, and apply it on a gas-sensitive micro-drone. We compare the performance of the proposed system in simulations and real-world experiments against two commonly used tracking algorithms adapted for aerial exploration missions.

  • 125.
    Neumann, Patrick P.
    et al.
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Asadi, Sahar
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    Sensors and Measurement Systems Working Group, BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Schiller, Jochen H.
    Computer Systems and Telematics Working Group, Institute of Computer Science, Freie Universität, Berlin, Germany.
    Autonomous gas-sensitive microdrone wind vector estimation and gas distribution mapping2012In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 19, no 1, p. 50-61Article in journal (Refereed)
    Abstract [en]

    This article presents the development and validation of an autonomous, gas sensitive microdrone that is capable of estimating the wind vector in real time using only the onboard control unit of the microdrone and performing gas distribution mapping (DM). Two different sampling approaches are suggested to address this problem. On the one hand, a predefined trajectory is used to explore the target area with the microdrone in a real-world gas DM experiment. As an alternative sampling approach, we introduce an adaptive strategy that suggests next sampling points based on an artificial potential field (APF). Initial results in real-world experiments demonstrate the capability of the proposed adaptive sampling strategy for gas DM and its use for gas source localization.

  • 126.
    Neumann, Patrick P.
    et al.
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    From Insects to Micro Air Vehicles: A Comparison of Reactive Plume Tracking Strategies2016In: Intelligent Autonomous Systems 13, Springer, 2016, p. 1533-1548Conference paper (Refereed)
    Abstract [en]

    Insect behavior is a common source of inspiration for roboticists and computer scientists when designing gas-sensitive mobile robots. More specifically, tracking airborne odor plumes and localizing distant gas sources are abilities that suit practical applications such as leak localization and emission monitoring. Gas sensing with mobile robots has mostly been addressed with ground-based platforms and under simplified conditions, and thus there exists a significant gap between the outstanding insect abilities and state-of-the-art robotics systems. As a step toward practical applications, we evaluated the performance of three biologically inspired plume tracking algorithms. The evaluation is carried out not only with computer simulations, but also with real-world experiments in which a quadrocopter-based micro Unmanned Aerial Vehicle autonomously follows a methane trail toward the emitting source. Compared to ground robots, micro UAVs bring several advantages, such as superior steering capabilities and fewer mobility restrictions in complex terrains. The experimental evaluation shows that, under certain environmental conditions, insect-like behavior in gas-sensitive UAVs is feasible in real-world environments.

  • 127.
    Neumann, Patrick P.
    et al.
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Kohlhoff, Harald
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Hüllmann, Dino
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Kluge, Martin
    Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Bringing Mobile Robot Olfaction to the Next Dimension - UAV-based Remote Sensing of Gas Clouds and Source Localization2017In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 3910-3916Conference paper (Refereed)
    Abstract [en]

    This paper introduces a novel robotic platform for aerial remote gas sensing. Spectroscopic measurement methods for remote sensing of selected gases lend themselves for use on mini-copters, which offer a number of advantages for inspection and surveillance. No direct contact with the target gas is needed, and thus the influence of the aerial platform on the measured gas plume can be kept to a minimum, which overcomes one of the major issues with gas-sensitive mini-copters. On the other hand, remote gas sensors, most prominently Tunable Diode Laser Absorption Spectroscopy (TDLAS) sensors, have been too bulky given the payload and energy restrictions of mini-copters. Here, we introduce the Unmanned Aerial Vehicle for Remote Gas Sensing (UAV-REGAS), which combines a novel lightweight TDLAS sensor with a 3-axis stabilization gimbal for aiming, mounted on a versatile hexacopter. The proposed system can be deployed in scenarios that cannot be addressed by currently available robots and thus constitutes a significant step forward for the field of Mobile Robot Olfaction (MRO). It enables tomographic reconstruction of gas plumes and localization of gas sources. We also present first results showing the gas sensing and aiming capabilities under realistic conditions.

  • 128.
    Neumann, Patrick P.
    et al.
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Schnürmacher, Michael
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Schiller, Jochen
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    A Probabilistic Gas Patch Path Prediction Approach for Airborne Gas Source Localization in Non-Uniform Wind FieldsIn: Sensor Letters, ISSN 1546-198XArticle in journal (Refereed)
    Abstract [en]

    In this paper, we show that a micro unmanned aerial vehicle (UAV) equipped with commercially available gas sensors can address environmental monitoring and gas source localization (GSL) tasks. To account for the challenges of gas sensing under real-world conditions, we present a probabilistic approach to GSL that is based on a particle filter (PF). Simulation and real-world experiments demonstrate the suitability of this algorithm for micro UAV platforms.

  • 129.
    Neumann, Patrick P.
    et al.
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Schnürmacher, Michael
    Institute of Computer Science, FU University, Berlin, Germany.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    BAM Federal Institute for Materials Research and Testing, Berlin, Germany.
    Schiller, Jochen H.
    Institute of Computer Science, FU University, Berlin, Germany.
    A Probabilistic Gas Patch Path Prediction Approach for Airborne Gas Source Localization in Non-Uniform Wind Fields2014In: Sensor Letters, ISSN 1546-198X, Vol. 12, no 6-7, p. 1113-1118Article in journal (Refereed)
    Abstract [en]

    In this paper, we show that a micro unmanned aerial vehicle (UAV) equipped with commercially available gas sensors can address environmental monitoring and gas source localization (GSL) tasks. To account for the challenges of gas sensing under real-world conditions, we present a probabilistic approach to GSL that is based on a particle filter (PF). Simulation and real-world experiments demonstrate the suitability of this algorithm for micro UAV platforms.
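    The particle filter idea can be illustrated with a toy 2-D sketch; the measurement model, noise levels, and resampling details below are assumptions for illustration only, not the method of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative setup: the UAV measures gas at random positions in a
    # 10 m x 10 m area; concentration decays with distance to a hidden
    # source and carries turbulence-like noise.
    true_source = np.array([3.0, 7.0])

    def measure(pos):
        d = np.linalg.norm(pos - true_source)
        return np.exp(-0.5 * d) + 0.05 * rng.standard_normal()

    # Each particle is one hypothesis of the source location.
    n = 500
    particles = rng.uniform(0.0, 10.0, size=(n, 2))

    for _ in range(40):
        uav = rng.uniform(0.0, 10.0, size=2)
        z = measure(uav)
        pred = np.exp(-0.5 * np.linalg.norm(particles - uav, axis=1))
        w = np.exp(-((z - pred) ** 2) / (2 * 0.1 ** 2))
        w /= w.sum()
        # Systematic resampling plus jitter against particle degeneracy.
        idx = np.searchsorted(np.cumsum(w), (np.arange(n) + rng.random()) / n)
        particles = particles[idx] + 0.15 * rng.standard_normal((n, 2))

    estimate = particles.mean(axis=0)
    ```

    After enough measurements from different positions, the particle cloud concentrates around locations consistent with all observations, i.e. near the source.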

  • 130.
    Neumann, Patrick
    et al.
    BAM Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Schnürmacher, Michael
    BAM Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    BAM Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    Schiller, Jochen
    BAM Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.
    A Probabilistic Gas Patch Prediction Approach for Airborne Gas Source Localization in Non-Uniform Wind Fields2013In: Proceedings of the 15th ISOEN, 2013Conference paper (Refereed)
  • 131.
    Palm, Rainer
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Abdullah, Muhammad
    The university of Faisalabad, Faisalabad, Pakistan.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Navigation in Human-Robot and Robot-Robot Interaction using Optimization Methods2016In: SMC 2016: 2016 IEEE International Conference on Systems, Man, and Cybernetics, IEEE, 2016, p. 4489-4494Conference paper (Refereed)
    Abstract [en]

    Human-robot and robot-robot interaction and cooperation in shared spatial areas is a challenging field of research regarding safety, stability and performance. In this paper, collision avoidance between human and robot through extrapolation of human intentions and a suitable optimization of tracking velocities is discussed. Furthermore, for robot-robot interactions in a shared area, traffic rules and artificial potential force fields, optimized by a market-based optimization (MBO) approach, are applied for obstacle avoidance. For testing and verification, the navigation strategy is implemented and tested in simulations with more realistic vehicle models. Extensive simulation experiments are performed to examine the improvement of the traditional potential field (PF) method by the MBO strategy.

  • 132.
    Palm, Rainer
    et al.
    Örebro University, School of Science and Technology.
    Chadalavada, Ravi
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Fuzzy Modeling and Control for Intention Recognition in Human-Robot Systems2016In: Proceedings of the 8th International Joint Conference on Computational Intelligence (IJCCI 2016), Setúbal, Portugal: SciTePress, 2016, Vol. 2, p. 67-74Conference paper (Refereed)
    Abstract [en]

    The recognition of human intentions from trajectories in the framework of human-robot interaction is a challenging field of research. In this paper, some control problems of human-robot interaction, where agents may compete or cooperate in shared work spaces, are addressed, and the time schedule of the information flow is discussed. The expected human movements relative to the robot are summarized in a so-called "compass dial" from which fuzzy control rules for the robot's reactions are derived. To avoid collisions between robot and human at an early stage, the computation of collision times at predicted human-robot intersections is discussed and a switching controller for collision avoidance is proposed. In the context of recognizing human intentions to move to certain goals, pedestrian tracks are modeled by fuzzy clustering, lanes preferred by human agents are identified, and the identification of the degrees of membership of a pedestrian track to specific lanes is discussed. Computations based on simulated and experimental data show the applicability of the methods presented.

  • 133.
    Palm, Rainer
    et al.
    Örebro University, School of Science and Technology.
    Chadalavada, Ravi
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Recognition of Human-Robot Motion Intentions by Trajectory Observation2016In: 2016 9th International Conference on Human System Interactions, HSI 2016: Proceedings, New York: Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 229-235Conference paper (Refereed)
    Abstract [en]

    The intention of humans and autonomous robots to interact in shared spatial areas is a challenging field of research regarding human safety, system stability and performance of the system's behavior. In this paper, intention recognition between human and robot is addressed from the control point of view, and the time schedule of the exchanged signals is discussed. After a description of the kinematic and geometric relations between human and robot, a so-called 'compass dial' with the relative velocities is presented, from which suitable fuzzy control rules are derived. The computation of collision times at intersections and possible avoidance strategies are further discussed. Computations based on simulated and experimental data show the applicability of the methods presented.

  • 134.
    Palm, Rainer
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Long distance prediction and short distance control in Human-Robot Systems2017In: 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Institute of Electrical and Electronics Engineers (IEEE), 2017, article id 8015396Conference paper (Refereed)
    Abstract [en]

    The study of the interaction between autonomous robots and human agents in common working areas is an emerging field of research. Main points thereby are human safety, system stability, performance and optimality of the whole interaction process. Two approaches to deal with human-robot interaction can be distinguished: Long distance prediction which requires the recognition of intentions of other agents, and short distance control which deals with actions and reactions between agents and mutual reactive control of their motions and behaviors. In this context obstacle avoidance plays a prominent role. In this paper long distance prediction is represented by the identification of human intentions to use specific lanes by using fuzzy time clustering of pedestrian tracks. Another issue is the extrapolation of parts of both human and robot trajectories in the presence of scattered/uncertain measurements to guarantee a collision-free robot motion. Short distance control is represented by obstacle avoidance between agents using the method of velocity obstacles and both analytical and fuzzy control methods.

  • 135.
    Palmieri, Luigi
    et al.
    University of Freiburg, Computer Science Department, Germany.
    Kucner, Tomasz
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Arras, Kai
    Bosch Corporate Research, Stuttgart, Germany.
    Kinodynamic Motion Planning on Gaussian Mixture Fields2017In: IEEE International Conference on Robotics and Automation (ICRA 2017), 2017Conference paper (Refereed)
    Abstract [en]

    We present a mobile robot motion planning approach under kinodynamic constraints that exploits learned perception priors in the form of continuous Gaussian mixture fields. Our Gaussian mixture fields are statistical multi-modal motion models of discrete objects or continuous media in the environment that encode e.g. the dynamics of air or pedestrian flows. We approach this task using a recently proposed circular-linear flow field map based on semi-wrapped GMMs whose mixture components guide sampling and rewiring in an RRT* algorithm using a steer function for non-holonomic mobile robots. In our experiments with three alternative baselines, we show that this combination allows the planner to very efficiently generate high-quality solutions in terms of path smoothness, path length as well as natural yet minimum-control-effort motions through multi-modal representations of Gaussian mixture fields.
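    How mixture components can bias a sampling-based planner may be sketched as follows; the tiny flow map, its weights, and the plain wrapped-Gaussian sampling are illustrative assumptions, simpler than the semi-wrapped GMM map of the paper:

    ```python
    import math
    import random

    random.seed(42)

    # Hypothetical flow-field map: per grid cell, a mixture of
    # (weight, mean_heading, std) components standing in for the
    # learned circular-linear mixture components.
    flow_map = {
        (0, 0): [(0.7, 0.0, 0.3), (0.3, math.pi, 0.5)],
        (0, 1): [(1.0, math.pi / 2, 0.2)],
    }

    def sample_heading(cell):
        """Pick a mixture component by weight, sample a heading from it,
        and wrap to [-pi, pi). A planner like RRT* could bias its random
        samples with headings drawn this way instead of uniformly."""
        r, acc = random.random(), 0.0
        for w, mu, sigma in flow_map[cell]:
            acc += w
            if r <= acc:
                break
        theta = random.gauss(mu, sigma)
        return (theta + math.pi) % (2 * math.pi) - math.pi

    samples = [sample_heading((0, 1)) for _ in range(200)]
    ```

    Samples drawn for cell `(0, 1)` cluster around the cell's dominant flow direction (pi/2), which is exactly the bias a flow-aware planner wants.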

  • 136.
    Pashami, Sepideh
    et al.
    Örebro University, School of Science and Technology.
    Asadi, Sahar
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Integration of OpenFOAM Flow Simulation and Filament-Based Gas Propagation Models for Gas Dispersion Simulation2010Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a gas dispersal simulation package which integrates OpenFOAM flow simulation and a filament-based gas propagation model to simulate gas dispersion for compressible flows with a realistic turbulence model. Gas dispersal simulation can be useful for many applications. In this paper, we focus on the evaluation of statistical gas distribution models. Simulated data offer several advantages for this purpose, including the availability of ground truth information, the repetition of experiments under exactly the same conditions, and the avoidance of the intricate issues that come with using real gas sensors. Apart from simulation results obtained in a simulated wind tunnel (designed to be equivalent to its real-world counterpart), we present initial results with time-independent and time-dependent statistical modelling approaches applied to simulated and real-world data.

  • 137.
    Pashami, Sepideh
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    rTREFEX: Reweighting norms for detecting changes in the response of MOX gas sensors2014In: Sensor Letters, ISSN 1546-198X, E-ISSN 1546-1971, Vol. 12, no 6/7, p. 1123-1127Article in journal (Refereed)
    Abstract [en]

    The detection of changes in the response of metal oxide (MOX) gas sensors deployed in an open sampling system is a hard problem. It is relevant for applications such as gas leak detection in mines or large-scale pollution monitoring where it is impractical to continuously store or transfer sensor readings and reliable calibration is hard to achieve. Under these circumstances, it is desirable to detect points in the signal where a change indicates a significant event, e.g. the presence of gas or a sudden change of concentration. The key idea behind the proposed change detection approach is that a change in the emission modality of a gas source appears locally as an exponential function in the response of MOX sensors due to their long response and recovery times. The algorithm proposed in this paper, rTREFEX, is an extension of the previously proposed TREFEX algorithm. rTREFEX interprets the sensor response by fitting piecewise exponential functions with different time constants for the response and recovery phase. The number of exponentials, which has to be kept as low as possible, is determined automatically using an iterative approach that solves a sequence of convex optimization problems based on the l1-norm. The algorithm is evaluated with an experimental setup where a gas source changes in intensity, compound, and mixture ratio, and the gas source is delivered to the sensors exploiting natural advection and turbulence mechanisms. rTREFEX is compared against the previously proposed TREFEX, which already proved superior to other algorithms.

  • 138.
    Pashami, Sepideh
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    TREFEX: trend estimation and change detection in the response of mox gas sensors2013In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 13, no 6, p. 7323-7344Article in journal (Refereed)
    Abstract [en]

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time.
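    To give a flavor of the piecewise-exponential signal model, the sketch below fits a single exponential segment by grid search over the time constant; this is a deliberately simplified stand-in, not TREFEX's parameter-free convex formulation:

    ```python
    import numpy as np

    def fit_exponential(y, taus=np.geomspace(1.0, 100.0, 25)):
        """Fit y[t] ~ c + a * exp(-t / tau) by grid search over tau,
        solving linear least squares for (a, c) at each candidate."""
        t = np.arange(len(y), dtype=float)
        best_res, best_fit = np.inf, None
        for tau in taus:
            X = np.column_stack([np.exp(-t / tau), np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            res = np.sum((X @ coef - y) ** 2)
            if res < best_res:
                best_res, best_fit = res, (coef[0], coef[1], tau)
        return best_fit

    # A clean MOX-like decay segment with tau = 20 is recovered closely
    # (up to the resolution of the tau grid).
    t = np.arange(200)
    y = 0.2 + 1.5 * np.exp(-t / 20.0)
    a, c, tau = fit_exponential(y)
    ```

    In the full method, junctions between such fitted exponentials (with different time constants for response and recovery) are the change point candidates.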

  • 139.
    Pashami, Sepideh
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    A trend filtering approach for change point detection in MOX gas sensors2013Conference paper (Refereed)
    Abstract [en]

    Detecting changes in the response of metal oxide (MOX) gas sensors deployed in an open sampling system is a hard problem. It is relevant for applications such as gas leak detection in coal mines [1], [2] or large-scale pollution monitoring [3], [4], where it is impractical to continuously store or transfer sensor readings and reliable calibration is hard to achieve. Under these circumstances it is desirable to detect points in the signal where a change indicates a significant event, e.g. the presence of gas or a sudden change of concentration. The key idea behind the proposed change detection approach is that a change in the emission modality of a gas source appears locally as an exponential function in the response of MOX sensors due to their long response and recovery times. The proposed method interprets the sensor response by fitting piecewise exponential functions with different time constants for the response and recovery phase. The number of exponentials is determined automatically using an approximate method based on the L1-norm. This asymmetric exponential trend filtering problem is formulated as a convex optimization problem, which is particularly advantageous from the computational point of view. The algorithm is evaluated with an experimental setup where a gas source changes in intensity, compound, and mixture ratio, and it is compared against the previously proposed Generalized Likelihood Ratio (GLR) based algorithm [6].

  • 140.
    Pashami, Sepideh
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Change detection in an array of MOX sensors, 2012. Conference paper (Refereed)
    Abstract [en]

    In this article we present an algorithm for online detection of change points in the response of an array of metal oxide (MOX) gas sensors deployed in an open sampling system. True change points occur due to changes in the emission modality of the gas source. The main challenge for change point detection in an open sampling system is the chaotic nature of gas dispersion, which causes fluctuations in the sensor response that are not related to changes in the gas source. These fluctuations should not be considered change points in the sensor response. The presented algorithm is derived from the well-known Generalized Likelihood Ratio algorithm and is applied both to the output of a single sensor and to the output of two or more sensors of the array. The algorithm is evaluated with an experimental setup where a gas source changes in intensity, compound, or mixture ratio. The performance measures considered are the detection rate, the number of false alarms, and the detection delay.
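    The GLR idea can be sketched for a single sensor under a simplified model: Gaussian noise with known standard deviation and at most one shift in mean. This is a generic textbook form of the test, not the authors' exact array algorithm, and the function name is illustrative:

```python
import numpy as np

def glr_mean_change(x, sigma):
    """GLR statistic for a single mean shift under Gaussian noise
    with known standard deviation sigma.

    For each candidate change point k, the log-likelihood ratio of
    'mean shifts at k' against 'constant mean' is
        LR(k) = k*(n-k)/n * (mean(x[:k]) - mean(x[k:]))**2 / (2*sigma**2).
    Returns the best candidate k and its statistic; a change is
    declared when the statistic exceeds a chosen threshold."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_stat = 0, -np.inf
    for k in range(1, n):
        diff = x[:k].mean() - x[k:].mean()
        stat = k * (n - k) / n * diff ** 2 / (2.0 * sigma ** 2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# A noiseless step from 0 to 1 at sample 50 is located exactly.
signal = np.concatenate([np.zeros(50), np.ones(50)])
k_hat, stat = glr_mean_change(signal, sigma=0.1)
```

    In practice the statistic is computed over a sliding window, and the turbulence-induced fluctuations described above are what make threshold selection difficult.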

  • 141.
    Pashami, Sepideh
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Detecting changes of a distant gas source with an array of MOX gas sensors, 2012. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 12, no 12, p. 16404-16419. Article in journal (Refereed)
    Abstract [en]

    We address the problem of detecting changes in the activity of a distant gas source from the response of an array of metal oxide (MOX) gas sensors deployed in an open sampling system. The main challenge is the turbulent nature of gas dispersion and the response dynamics of the sensors. We propose a change point detection approach and evaluate it on individual gas sensors in an experimental setup where a gas source changes in intensity, compound, or mixture ratio. We also introduce an efficient sensor selection algorithm and evaluate the change point detection approach with the selected sensor array subsets.

  • 142.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping, 2007. In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, p. 17-24. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image to find buildings and produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.

  • 143.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Natural Sciences.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 6, p. 483-492. Article in journal (Refereed)
    Abstract [en]

    This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.

  • 144.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information, 2007. In: Proceedings of the IEEE International Conference on Advanced Robotics: ICAR 2007, 2007, p. 924-929. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. In the suggested approach a ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image to find buildings and produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors along the robot trajectory.

  • 145.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information, 2008. In: Recent Progress in Robotics: Viable Robotic Service to Human, Berlin, Germany: Springer, 2008, p. 157-169. Conference paper (Other academic)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to “see” around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image to find buildings and produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.

  • 146.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot, 2006. In: Proceedings of the IROS 2006 Workshop: From Sensors to Human Spatial Concepts, IEEE, 2006, p. 21-26. Conference paper (Refereed)
    Abstract [en]

    In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

  • 147.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot, 2007. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 5, p. 383-390. Article in journal (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator. (c) 2006 Elsevier B.V. All rights reserved.
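    The feature-combination step described above can be sketched with a minimal from-scratch AdaBoost using decision stumps as weak learners. This is a generic illustration; the virtual sensor's actual edge-orientation and grey-level features, training data, and parameters are not reproduced here:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=5):
    """Minimal AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # sample weights
    ensemble = []
    for _ in range(n_rounds):
        # Exhaustively pick the stump (feature, threshold, polarity)
        # with the lowest weighted classification error.
        best, best_err = None, np.inf
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best, best_err = (j, thr, pol), err
        err = min(max(best_err, 1e-12), 1.0 - 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)  # stump vote weight
        j, thr, pol = best
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)        # upweight mistakes
        w = w / w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    """Sign of the weighted vote of all stumps."""
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(score)

# Toy 1-D data: one threshold separates the two classes.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost(X, y, n_rounds=3)
labels = adaboost_predict(model, X)
```

    In the papers above, each feature type (edge orientation, edge configuration, grey-level clustering) would supply candidate weak classifiers, and boosting selects and weights them.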

  • 148.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Valgren, Christoffer
    Department of Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Probabilistic semantic mapping with a virtual sensor for building/nature detection, 2007. In: Proceedings of the 2007 IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA 2007, New York, NY, USA: IEEE, 2007, p. 236-242, article id 4269870. Conference paper (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real-world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot) giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between "building" objects and "nature" objects.
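    Fusing a cell's occupancy prior with the virtual sensor's classification can be sketched in log-odds form, as commonly done in occupancy grid mapping. This is a generic illustration under an independence assumption, not the exact update used in the paper:

```python
import math

def fuse_log_odds(prior_p, sensor_p):
    """Combine a cell's prior probability of being 'building' with an
    independent virtual-sensor estimate by adding their log-odds,
    then map back to a probability with the logistic function."""
    l = (math.log(prior_p / (1.0 - prior_p))
         + math.log(sensor_p / (1.0 - sensor_p)))
    return 1.0 / (1.0 + math.exp(-l))

# An uninformative prior (0.5) leaves the sensor estimate unchanged,
# while agreeing evidence pushes the posterior above either input.
p_neutral = fuse_log_odds(0.5, 0.8)
p_agree = fuse_log_odds(0.8, 0.8)
```

    Working in log-odds makes repeated updates additive, which is why it is the standard representation for probabilistic grid maps.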

  • 149. Petrovic, Ivan
    et al.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Proceedings of the 4th European Conference on Mobile Robots: ECMR'09, 2009. Conference proceedings (editor) (Other academic)
  • 150. Petrovic, Ivan
    et al.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Special issue ECMR 2009, 2011. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 59, no 5, p. 263-264. Article in journal (Refereed)