oru.se Publications
Search results 1-50 of 62
  • 1.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
Camera based navigation by mobile robots: local visual feature based localisation and mapping (2009). Book (Other academic)
    Abstract [en]

The most important property of a mobile robot is the fact that it is mobile. How to give a robot the skills required to navigate around its environment is therefore an important topic in mobile robotics. Navigation, both for robots and humans, typically involves a map. The map can be used, for example, to estimate a pose based on observations (localisation) or to determine a suitable path between two locations. Maps are available nowadays for us humans with few exceptions; however, maps suitable for mobile robots rarely exist. In addition, relating sensor readings to a map requires that the map content and the observations are compatible, i.e. different robots may require different maps for the same area. This book addresses some of the fundamental problems related to mobile robot navigation (registration, localisation and mapping) using cameras as the primary sensor input. Small salient regions (local visual features) are extracted from each camera image, where each region can be seen as a fingerprint. Many fingerprint matches imply a high likelihood that the corresponding images originate from a similar location, which is a central property utilised in this work.

  • 2.
    Andreasson, Henrik
    Örebro University, Department of Technology.
Local visual feature based localisation and mapping by mobile robots (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot’s sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation.

    The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information.

    Considering the omni-directional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to the changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach.

    The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.

  • 3.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Adolfsson, Daniel
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Incorporating Ego-motion Uncertainty Estimates in Range Data Registration (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper (Refereed)
    Abstract [en]

Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
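
    (Illustrative sketch, not the authors' implementation: the Python snippet below shows the general idea of folding an ego-motion prior into a registration objective, here as a 2D point-alignment cost plus a Mahalanobis penalty that pulls the pose toward the odometry estimate in proportion to its certainty. All variable names and the toy data are invented for the example.)

```python
import numpy as np
from scipy.optimize import minimize

def pose_matrix(x, y, theta):
    """2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def cost(pose, src, dst, odom, odom_info):
    """Point-alignment error plus a Mahalanobis penalty on deviating from odometry."""
    T = pose_matrix(*pose)
    aligned = (T @ np.c_[src, np.ones(len(src))].T).T[:, :2]
    align_err = np.sum((aligned - dst) ** 2)
    d = pose - odom
    return align_err + d @ odom_info @ d

# Toy data: a source scan, the true motion, and a noisy odometry estimate.
rng = np.random.default_rng(0)
src = rng.uniform(-5, 5, size=(100, 2))
true = np.array([0.4, 0.1, 0.05])
dst = (pose_matrix(*true) @ np.c_[src, np.ones(len(src))].T).T[:, :2]

odom = np.array([0.35, 0.05, 0.02])               # ego-motion estimate
odom_cov = np.diag([0.05**2, 0.05**2, 0.02**2])   # its uncertainty
res = minimize(cost, odom, args=(src, dst, odom, np.linalg.inv(odom_cov)))
print(res.x)  # close to the true motion, anchored by the odometry prior
```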

  • 4.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar Nikolaev
    INRIA - Grenoble, Meylan, France.
    Driankov, Dimiter
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saarinen, Jari Pekka
Örebro University, School of Science and Technology. Aalto University, Espoo, Finland.
    Sherikov, Aleksander
    Centre de recherche Grenoble Rhône-Alpes, Grenoble, France.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
Autonomous transport vehicles: where we are and what is missing (2015). In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 22, no 1, p. 64-75. Article in journal (Refereed)
    Abstract [en]

    In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.

  • 5.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Rögnvaldsson, Thorsteinn
    Örebro University, School of Science and Technology.
Gold-fish SLAM: an application of SLAM to localize AGVs (2012). In: Proceedings of the International Conference on Field and Service Robotics (FSR), July 2012. Conference paper (Other academic)
    Abstract [en]

The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 meters per second. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

  • 6.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Åstrand, Björn
CAISR Centre for Applied Intelligent Systems (IS-lab), Halmstad University, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    CAISR Centre for Applied Intelligent Systems (IS-lab), Halmstad University, Halmstad, Sweden.
Gold-Fish SLAM: An Application of SLAM to Localize AGVs (2014). In: Field and Service Robotics: Results of the 8th International Conference / [ed] Yoshida, Kazuya; Tadokoro, Satoshi, Heidelberg, Germany: Springer Berlin/Heidelberg, 2014, p. 585-598. Chapter in book (Refereed)
    Abstract [en]

The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

  • 7.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
A Minimalistic Approach to Appearance-Based Visual SLAM (2008). In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no 5, p. 991-1001. Article in journal (Refereed)
    Abstract [en]

This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using multiple view geometry, for example. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
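
    (A minimal sketch of the relaxation idea, assuming translation-only 2D poses for brevity; the paper's method handles full poses and derives link covariances from image similarity. Each sweep moves every node to the precision-weighted mean of the estimates implied by its neighbours.)

```python
import numpy as np

# Nodes: 2D positions; links: (i, j, relative measurement, covariance).
def relax(poses, links, sweeps=50):
    poses = {k: np.asarray(v, float) for k, v in poses.items()}
    for _ in range(sweeps):
        for node in list(poses)[1:]:          # node 0 anchors the map
            info_sum = np.zeros((2, 2))
            weighted = np.zeros(2)
            for i, j, rel, cov in links:
                if j == node:
                    est, info = poses[i] + rel, np.linalg.inv(cov)
                elif i == node:
                    est, info = poses[j] - rel, np.linalg.inv(cov)
                else:
                    continue
                info_sum += info
                weighted += info @ est
            # Precision-weighted mean of neighbour predictions.
            poses[node] = np.linalg.solve(info_sum, weighted)
    return poses

poses = {0: [0, 0], 1: [1.2, 0.1], 2: [2.1, -0.2]}
links = [
    (0, 1, np.array([1.0, 0.0]), 0.01 * np.eye(2)),
    (1, 2, np.array([1.0, 0.0]), 0.01 * np.eye(2)),
    (0, 2, np.array([2.0, 0.1]), 0.10 * np.eye(2)),  # weaker loop-closure link
]
print(relax(poses, links))
```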

  • 8.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Dept. of Computing & Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
Mini-SLAM: minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity (2007). In: 2007 IEEE International Conference on Robotics and Automation (ICRA), New York, NY, USA: IEEE, 2007, p. 4096-4101, article id 4209726. Conference paper (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in large-scale environments with minimal sensing and computational requirements. The approach is based on a graphical representation of robot poses and links between the poses. Links between the robot poses are established based on odometry and image similarity, then a relaxation algorithm is used to generate a globally consistent map. To estimate the covariance matrix for links obtained from the vision sensor, a novel method is introduced based on the relative similarity of neighbouring images, without requiring distances to image features or multiple view geometry. Indoor and outdoor experiments demonstrate that the approach scales well to large-scale environments, producing topologically correct and geometrically accurate maps at minimal computational cost. Mini-SLAM was found to produce consistent maps in an unstructured, large-scale environment (the total path length was 1.4 km) containing indoor and outdoor passages.

  • 9.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim
Örebro University, Department of Natural Sciences.
Vision aided 3D laser scanner based registration (2007). In: ECMR 2007: Proceedings of the European Conference on Mobile Robots, 2007, p. 192-197. Conference paper (Refereed)
    Abstract [en]

This paper describes a vision and 3D laser based registration approach which utilizes visual features to identify correspondences. Visual features are obtained from the images of a standard color camera and the depth of these features is determined by interpolating between the scanning points of a 3D laser range scanner, taking into consideration the visual information in the neighbourhood of the respective visual feature. The 3D laser scanner is also used to determine a position covariance estimate of the visual feature. To exploit these covariance estimates, an ICP algorithm based on the Mahalanobis distance is applied. Initial experimental results are presented in a real-world indoor laboratory environment.
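
    (Hedged illustration: the snippet below shows a Mahalanobis-distance alignment with known feature correspondences in 2D, whitening each residual with its per-feature covariance. It is a toy stand-in for the paper's 3D method; the data and the distance-dependent covariance model are invented.)

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, src, dst, whiten):
    """Whitened residuals: their squared norm is the Mahalanobis distance."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    err = (src @ R.T + [x, y]) - dst
    return np.einsum('nij,nj->ni', whiten, err).ravel()

rng = np.random.default_rng(1)
src = rng.uniform(-3, 3, size=(40, 2))
th = 0.1
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = src @ R.T + [0.5, -0.2]

# Per-correspondence covariances, e.g. larger for features far from the scanner.
covs = [np.eye(2) * (0.01 + 0.01 * np.linalg.norm(p)) for p in src]
# For cov C with inv(C) = L L^T, whitening with L^T gives e^T inv(C) e = |L^T e|^2.
whiten = np.array([np.linalg.cholesky(np.linalg.inv(c)).T for c in covs])

fit = least_squares(residuals, x0=np.zeros(3), args=(src, dst, whiten))
print(fit.x)  # approximately [0.5, -0.2, 0.1]
```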

  • 10.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
6D scan registration using depth-interpolated local image features (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 2, p. 157-165. Article in journal (Refereed)
    Abstract [en]

This paper describes a novel registration approach that is based on a combination of visual and 3D range information. To identify correspondences, local visual features obtained from images of a standard color camera are compared, and the depth of matching features (and their position covariance) is determined from the range measurements of a 3D laser scanner. The matched depth-interpolated image features allow us to apply registration with known correspondences. We compare several ICP variants in this paper and suggest an extension that considers the spatial distance between matching features to eliminate false correspondences. Experimental results are presented in both outdoor and indoor environments. In addition to pair-wise registration, we also propose a global registration method that registers all scan poses simultaneously.
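
    (A sketch of the mentioned false-correspondence filter, under the assumption that it exploits the fact that a rigid transform preserves pairwise distances; the threshold values are illustrative, not taken from the paper.)

```python
import numpy as np

def filter_by_distance_consistency(p, q, tol=0.05, min_votes=0.6):
    """Keep feature matches whose pairwise 3D distances agree between scans.

    p[i] and q[i] are matched 3D points from two scans. A rigid transform
    preserves distances, so |dist(p_i, p_j) - dist(q_i, q_j)| should be small
    for correct matches; each pair votes for or against its two members.
    """
    dp = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    dq = np.linalg.norm(q[:, None] - q[None, :], axis=-1)
    consistent = np.abs(dp - dq) < tol
    votes = (consistent.sum(axis=1) - 1) / (len(p) - 1)  # ignore self-pair
    return votes >= min_votes

rng = np.random.default_rng(2)
p = rng.uniform(0, 5, size=(30, 3))
q = p + [1.0, 0.0, 0.0]                # pure translation preserves all distances
q[::10] += rng.normal(0, 1, (3, 3))    # corrupt every 10th match
print(filter_by_distance_consistency(p, q))  # corrupted matches flagged False
```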

  • 11.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Germany.
Vision based interpolation of 3D laser scans (2006). In: Proceedings of the Third International Conference on Autonomous Robots and Agents, 2006, p. 455-460. Conference paper (Refereed)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern color camera. In this paper we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a color image. The main idea is to use color similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to color or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov Random Fields. The algorithms proposed in this paper are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data. Further, we suggest and evaluate four methods to determine a confidence measure for the accuracy of interpolated range values.
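
    (Illustrative sketch of the core idea, colour similarity as a proxy for depth similarity, in the style of a joint bilateral weight; the paper presents five specific interpolation methods, of which this is only a generic stand-in with invented parameters.)

```python
import numpy as np

def interpolate_depth(query_uv, query_rgb, laser_uv, laser_rgb, laser_depth,
                      sigma_px=10.0, sigma_rgb=20.0):
    """Depth at a camera pixel as a weighted mean of nearby laser returns.

    Weights combine image-plane proximity and colour similarity, so laser
    points on the far side of a colour edge contribute little.
    """
    d_px = np.linalg.norm(laser_uv - query_uv, axis=1)
    d_rgb = np.linalg.norm(laser_rgb - query_rgb, axis=1)
    w = np.exp(-0.5 * (d_px / sigma_px) ** 2) * np.exp(-0.5 * (d_rgb / sigma_rgb) ** 2)
    return np.sum(w * laser_depth) / np.sum(w)

# Toy scene: a dark near surface on the left, a bright far surface on the right.
laser_uv = np.array([[10, 50], [20, 50], [80, 50], [90, 50]], float)
laser_rgb = np.array([[30, 30, 30], [30, 30, 30],
                      [200, 200, 200], [200, 200, 200]], float)
laser_depth = np.array([1.0, 1.0, 4.0, 4.0])

# A dark query pixel near the depth edge snaps to the near (dark) surface.
print(interpolate_depth(np.array([48.0, 50.0]), np.array([35.0, 35.0, 35.0]),
                        laser_uv, laser_rgb, laser_depth))
```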

  • 12.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Magnusson, Martin
    Örebro University, Department of Technology.
    Lilienthal, Achim
    Örebro University, Department of Natural Sciences.
Has something changed here?: Autonomous difference detection for security patrol robots (2007). In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York, NY, USA: IEEE, 2007, p. 3429-3435, article id 4399381. Conference paper (Refereed)
    Abstract [en]

This paper presents a system for autonomous change detection with a security patrol robot. In an initial step a reference model of the environment is created, and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only.

  • 13.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Drive the Drive: From Discrete Motion Plans to Smooth Drivable Trajectories (2014). In: Robotics, E-ISSN 2218-6581, Vol. 3, no 4, p. 400-416. Article in journal (Refereed)
    Abstract [en]

Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper is a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. The proposed approach is evaluated in several industrially relevant scenarios and found to be both fast (less than 2 s per vehicle trajectory) and accurate (end-point pose errors below 0.01 m in translation and 0.005 radians in orientation).

  • 14.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology. SCANIA AB, Södertälje, Sweden.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Fast, continuous state path smoothing to improve navigation accuracy (2015). In: IEEE International Conference on Robotics and Automation (ICRA), 2015, IEEE Computer Society, 2015, p. 662-669. Conference paper (Refereed)
    Abstract [en]

Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper addresses this shortcoming by introducing a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. In real-world tests presented in this paper we demonstrate that the proposed approach is fast enough for online use (it computes trajectories faster than they can be driven) and highly accurate. In 100 repetitions we achieve mean end-point pose errors below 0.01 meters in translation and 0.002 radians in orientation. Even the maximum errors are very small: only 0.02 meters in translation and 0.008 radians in orientation.

  • 15.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
Real time registration of RGB-D data using local visual features and 3D-NDT registration (2012). In: Proc. of International Conference on Robotics and Automation (ICRA) Workshop on Semantic Perception, Mapping and Exploration (SPME), IEEE, 2012. Conference paper (Refereed)
    Abstract [en]

Recent increased popularity of RGB-D capable sensors in robotics has resulted in a surge of related RGB-D registration methods. This paper presents several RGB-D registration algorithms based on combinations of local visual features and geometric registration. Fast and accurate transformation refinement is obtained by using a recently proposed geometric registration algorithm, based on the Three-Dimensional Normal Distributions Transform (3D-NDT). Results obtained on standard data sets have demonstrated mean translational errors on the order of 1 cm and rotational errors below 1 degree, at frame processing rates of about 15 Hz.
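
    (A toy sketch of the 3D-NDT idea used for the refinement step: voxels summarise the reference cloud as Gaussians, and alignment quality is scored by the likelihood of new points under those Gaussians. Cell size, thresholds and data are invented; real NDT registration also optimises a transform against this score.)

```python
import numpy as np

def ndt_cells(points, cell=1.0):
    """Fit a Gaussian (mean, covariance) to the points in each voxel."""
    buckets = {}
    for key, p in zip(map(tuple, np.floor(points / cell).astype(int)), points):
        buckets.setdefault(key, []).append(p)
    cells = {}
    for key, pts in buckets.items():
        pts = np.array(pts)
        if len(pts) >= 5:  # need enough samples for a stable covariance
            cells[key] = (pts.mean(0), np.cov(pts.T) + 1e-6 * np.eye(3))
    return cells

def ndt_score(points, cells, cell=1.0):
    """Sum of Gaussian likelihood terms; higher means better alignment."""
    score = 0.0
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))
        if key in cells:
            mu, cov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score

rng = np.random.default_rng(3)
ref = rng.normal(0.0, 2.0, size=(2000, 3))
cells = ndt_cells(ref)
print(ndt_score(ref[:100], cells))         # well-aligned points
print(ndt_score(ref[:100] + 0.5, cells))   # a shifted scan typically scores lower
```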

  • 16.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro University, Department of Technology.
Localization for mobile robots using panoramic vision, local features and particle filter (2005). In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation: ICRA 2005, 2005, p. 3348-3353. Conference paper (Refereed)
    Abstract [en]

In this paper we present a vision-based approach to self-localization that uses a novel scheme to integrate feature-based matching of panoramic images with Monte Carlo localization. A specially modified version of Lowe’s SIFT algorithm is used to match features extracted from local interest points in the image, rather than using global features calculated from the whole image. Experiments conducted in a large, populated indoor environment (up to 5 persons visible) over a period of several months demonstrate the robustness of the approach, including kidnapping and occlusion of up to 90% of the robot’s field of view.
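
    (A minimal sketch of the Monte Carlo measurement update, assuming the common, simple sensor model in which a particle's likelihood grows with the number of feature matches against the reference image closest to it; numbers and the resampling rule are illustrative.)

```python
import numpy as np

def mcl_update(particles, weights, match_counts):
    """One Monte Carlo localization measurement update.

    particles: candidate poses; match_counts[i]: number of local features in
    the current image that match the reference image nearest to particle i.
    More matches -> higher likelihood under this simple sensor model.
    """
    likelihood = match_counts + 1e-6          # avoid all-zero weights
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.default_rng(4).choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])  # toy 2D poses
weights = np.full(3, 1.0 / 3.0)
match_counts = np.array([2.0, 40.0, 5.0])                    # SIFT matches per pose
print(mcl_update(particles, weights, match_counts))
```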

  • 17.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro University, Department of Technology.
Self-localization in non-stationary environments using omni-directional vision (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 7, p. 541-551. Article in journal (Refereed)
    Abstract [en]

This paper presents an image-based approach for localization in non-static environments using local feature descriptors, and its experimental evaluation in a large, dynamic, populated environment where the time interval between the collected data sets is up to two months. By using local features together with panoramic images, robustness and invariance to large changes in the environment can be achieved. Results from global place recognition with no evidence accumulation and a Monte Carlo localization method are shown. To test the approach even further, experiments were conducted with up to 90% virtual occlusion in addition to the dynamic changes in the environment.

  • 18.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Triebel, Rudolph
University of Freiburg.
    Burgard, Wolfram
    University of Freiburg.
Improving plane extraction from 3D data by fusing laser data and vision (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005, p. 2656-2661. Conference paper (Refereed)
    Abstract [en]

The problem of extracting three-dimensional structures from data acquired with mobile robots has received considerable attention over the past years. Robots that are able to perceive their three-dimensional environment are envisioned to more robustly perform tasks like navigation, rescue, and manipulation. In this paper we present an approach that simultaneously uses color and range information to cluster 3D points into planar structures. Our current system is also able to calibrate the camera and the laser based on the remission values provided by the range scanner and the brightness of the pixels in the image. It has been implemented on a mobile robot equipped with a manipulator that carries a range scanner and a camera for acquiring colored range scans. Several experiments carried out on real data and in simulations demonstrate that our approach yields highly accurate results, also in comparison with previous approaches.

  • 19.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Freiburg, Germany.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
Non-iterative Vision-based Interpolation of 3D Laser Scans (2007). In: Autonomous Robots and Agents / [ed] Mukhopadhyay, SC; Gupta, GS, Berlin/Heidelberg, Germany: Springer, 2007, Vol. 76, p. 83-90. Conference paper (Other academic)
    Abstract [en]

3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern colour camera. In this chapter we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a colour image. The main idea is to use colour similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to colour or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov random fields. The proposed algorithms are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data.

  • 20.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Halmstad University, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Halmstad, Sweden.
An autonomous robotic system for load transportation (2009). In: 2009 IEEE Conference on Emerging Technologies & Factory Automation (ETFA 2009), New York: IEEE conference proceedings, 2009, p. 1563-1566. Conference paper (Refereed)
    Abstract [en]

This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

  • 21.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Halmstad University.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Sweden.
MALTA: a system of multiple autonomous trucks for load transportation (2009). In: Proceedings of the 4th European conference on mobile robots (ECMR) / [ed] Ivan Petrovic, Achim J. Lilienthal, 2009, p. 93-98. Conference paper (Refereed)
    Abstract [en]

This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

  • 22.
    Bunz, Elsa
    et al.
    Örebro University, Örebro, Sweden.
    Chadalavada, Ravi Teja
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Spatial Augmented Reality and Eye Tracking for Evaluating Human Robot Interaction (2016). In: Proceedings of RO-MAN 2016 Workshop: Workshop on Communicating Intentions in Human-Robot Interaction, 2016. Conference paper (Refereed)
    Abstract [en]

Freely moving autonomous mobile robots may lead to anxiety when operating in workspaces shared with humans. Previous works have given evidence that communicating intentions using Spatial Augmented Reality (SAR) in the shared workspace will make humans more comfortable in the vicinity of robots. In this work, we conducted experiments with the robot projecting various patterns in order to convey its movement intentions during encounters with humans. In these experiments, the trajectories of both humans and robot were recorded with a laser scanner. Human test subjects were also equipped with an eye tracker. We analyzed the eye gaze patterns and the laser scan tracking data in order to understand how the robot’s intention communication affects the human movement behavior. Furthermore, we used retrospective recall interviews to aid in identifying the reasons that lead to behavior changes.

  • 23.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Empirical evaluation of human trust in an expressive mobile robot (2016). In: Proceedings of RSS Workshop "Social Trust in Autonomous Robots 2016", 2016. Conference paper (Refereed)
    Abstract [en]

A mobile robot communicating its intentions using Spatial Augmented Reality (SAR) on the shared floor space makes humans feel safer and more comfortable around the robot. Our previous work [1] and several other works established this fact. We built upon that work by adding adaptable information and control to the SAR module. An empirical study about how a mobile robot builds trust in humans by communicating its intentions was conducted. A novel way of evaluating that trust is presented, and it is experimentally shown that adaptation in the SAR module leads to natural interaction; the new evaluation system helped us discover that the comfort levels in human-robot interactions approached those of human-human interactions.

  • 24.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
That’s on my Mind!: Robot to Human Intention Communication through on-board Projection on Shared Floor Space (2015). In: 2015 European Conference on Mobile Robots (ECMR), New York: IEEE conference proceedings, 2015. Conference paper (Refereed)
    Abstract [en]

    The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional AGVs, which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. Here we address this issue and propose on-board intention projection on the shared floor space for communication from robot to human. We present a research prototype of a robotic fork-lift equipped with a LED projector to visualize internal state information and intents. We describe the projector system and discuss calibration issues. The robot’s ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. The results show that already adding simple information, such as the trajectory and the space to be occupied by the robot in the near future, is able to effectively improve human response to the robot.

  • 25.
    Cirillo, Marcello
    et al.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Uras, Tansel
    Department of Computer Science, University of Southern California, Los Angeles, USA.
    Koenig, Sven
    Department of Computer Science, University of Southern California, Los Angeles, USA.
Integrated Motion Planning and Coordination for Industrial Vehicles (2014). In: Proceedings of the 24th International Conference on Automated Planning and Scheduling, 2014. Conference paper (Refereed)
    Abstract [en]

    A growing interest in the industrial sector for autonomous ground vehicles has prompted significant investment in fleet management systems. Such systems need to accommodate on-line externally imposed temporal and spatial requirements, and to adhere to them even in the presence of contingencies. Moreover, a fleet management system should ensure correctness, i.e., refuse to commit to requirements that cannot be satisfied. We present an approach to obtain sets of alternative execution patterns (called trajectory envelopes) which provide these guarantees. The approach relies on a constraint-based representation shared among multiple solvers, each of which progressively refines trajectory envelopes following a least commitment principle.

  • 26.
    Fleck, Sven
    et al.
    University of Tübingen.
    Busch, Florian
    University of Tübingen.
    Biber, Peter
    University of Tübingen.
    Strasser, Wolfgang
    University of Tübingen.
    Andreasson, Henrik
    Örebro University, Department of Technology.
Omnidirectional 3D modeling on a mobile robot using graph cuts (2005). In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation: ICRA 2005, 2005, p. 1748-1754. Conference paper (Refereed)
    Abstract [en]

    For a mobile robot it is a natural task to build a 3D model of its environment. Such a model is not only useful for planning robot actions but also to provide a remote human surveillant a realistic visualization of the robot’s state with respect to the environment. Acquiring 3D models of environments is also an important task on its own with many possible applications like creating virtual interactive walkthroughs or as basis for 3D-TV.

    In this paper we present our method to acquire a 3D model using a mobile robot that is equipped with a laser scanner and a panoramic camera. The method is based on calculating dense depth maps for panoramic images using pairs of panoramic images taken from different positions using stereo matching. Traditional 2D-SLAM using laser-scan-matching is used to determine the needed camera poses. To receive high-quality results we use a high-quality stereo matching algorithm – the graph cut method. We describe the necessary modifications to handle panoramic images and specialized post-processing methods.

  • 27.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Bicchi, Antonio
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
On Using Optimization-based Control instead of Path-Planning for Robot Grasp Motion Generation (2015). In: IEEE International Conference on Robotics and Automation (ICRA) - Workshop on Robotic Hands, Grasping, and Manipulation, 2015. Conference paper (Refereed)
  • 28.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
The Next Step in Robot Commissioning: Autonomous Picking and Palletizing (2016). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no 1, p. 546-553. Article in journal (Refereed)
    Abstract [en]

So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automatizing the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.

  • 29.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 957-964. Article in journal (Refereed)
    Abstract [en]

    This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
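
    (Hedged sketch of the descriptor pipeline described above: residuals against a small vocabulary are aggregated into a VLAD vector and binarized by sign, giving a 256-bit signature compared by Hamming distance. Vocabulary size, feature dimension and data are invented for the example.)

```python
import numpy as np

def vlad_binary(descriptors, vocab):
    """Aggregate local descriptors into a VLAD vector, then binarize by sign.

    Each descriptor adds its residual to the nearest vocabulary centre; the
    concatenated residuals form the VLAD vector, and keeping only the sign of
    each dimension yields a compact binary signature.
    """
    assign = np.argmin(((descriptors[:, None] - vocab[None]) ** 2).sum(-1), axis=1)
    vlad = np.zeros_like(vocab)
    for d, a in zip(descriptors, assign):
        vlad[a] += d - vocab[a]
    return vlad.ravel() > 0   # 8 centres x 32 dims = 256 bits

def hamming(a, b):
    return np.count_nonzero(a != b)

rng = np.random.default_rng(5)
vocab = rng.normal(size=(8, 32))                  # assumed: 8 centres, 32-D features
place = rng.normal(size=(200, 32))
same = place + rng.normal(0, 0.1, place.shape)    # same place, slight appearance change
other = rng.normal(size=(200, 32))                # different place

sig = vlad_binary(place, vocab)
print(hamming(sig, vlad_binary(same, vocab)))     # small distance
print(hamming(sig, vlad_binary(other, vocab)))    # large distance
```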

  • 30.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
LOGOS: Local geometric support for high-outlier spatial verification (2018). Conference paper (Refereed)
    Abstract [en]

    This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.
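
    (A loose, simplified sketch in the spirit of LOGOS, assuming only keypoint orientations: matches whose relative orientation agrees with the circular mean of their spatial neighbours are kept as inliers. The actual method also uses scale and a more careful local support measure.)

```python
import numpy as np

def neighbourhood_inliers(xy, ori_a, ori_b, k=5, tol=np.deg2rad(10)):
    """Flag matches whose relative orientation agrees with their neighbours.

    xy: keypoint locations in image A; ori_a/ori_b: orientations of the
    matched features in images A and B. Under a consistent transform, nearby
    correct matches share a similar orientation change; outliers disagree
    with their local neighbourhood.
    """
    rel = np.angle(np.exp(1j * (ori_b - ori_a)))            # wrap to (-pi, pi]
    d = np.linalg.norm(xy[:, None] - xy[None], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]                  # k nearest, excluding self
    nbr_mean = np.angle(np.exp(1j * rel[nn]).mean(axis=1))  # circular mean
    return np.abs(np.angle(np.exp(1j * (rel - nbr_mean)))) < tol

rng = np.random.default_rng(6)
xy = rng.uniform(0, 100, (50, 2))
ori_a = rng.uniform(-np.pi, np.pi, 50)
ori_b = ori_a + 0.3                                         # consistent rotation
ori_b[::7] = rng.uniform(-np.pi, np.pi, len(ori_b[::7]))    # inject outliers
print(neighbourhood_inliers(xy, ori_a, ori_b).astype(int))
```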

  • 31.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
Visual place recognition techniques for pose estimation in changing environments (2016). In: Visual Place Recognition: What is it Good For? workshop, Robotics: Science and Systems (RSS) 2016, 2016. Conference paper (Other academic)
    Abstract [en]

    This paper investigates whether visual place recognition techniques can be used to provide pose estimation information for a visual SLAM system operating long-term in an environment where the appearance may change a great deal. It demonstrates that a combination of a conventional SURF feature detector and a condition-invariant feature descriptor such as HOG or conv3 can provide a method of determining the relative transformation between two images, even when there is both appearance change and rotation or viewpoint change.

  • 32.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Nüchter, A.
    Jacobs University Bremen, Bremen, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
Appearance-based loop detection from 3D laser data using the normal distributions transform (2009). In: IEEE International Conference on Robotics and Automation 2009 (ICRA '09), IEEE conference proceedings, 2009, p. 23-28. Conference paper (Other academic)
    Abstract [en]

We propose a new approach to appearance-based loop detection from metric 3D maps, exploiting the NDT surface representation. Locations are described with feature histograms based on surface orientation and smoothness, and loop closure can be detected by matching feature histograms. We also present a quantitative performance evaluation using two real-world data sets, showing that the proposed method works well in different environments. © 2009 IEEE.

  • 33.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Nüchter, Andreas
    Jacobs University Bremen.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform (2009). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 26, no 11-12, p. 892-914. Article in journal (Refereed)
    Abstract [en]

We propose a new approach to appearance-based loop detection for mobile robots, using three-dimensional (3D) laser scans. Loop detection is an important problem in the simultaneous localization and mapping (SLAM) domain, and, because it can be seen as the problem of recognizing previously visited places, it is an example of the data association problem. Without a flat-floor assumption, two-dimensional laser-based approaches are bound to fail in many cases. Two of the problems with 3D approaches that we address in this paper are how to handle the greatly increased amount of data and how to efficiently obtain invariance to 3D rotations. We present a compact representation of 3D point clouds that is still discriminative enough to detect loop closures without false positives (i.e., detecting loop closure where there is none). A low false-positive rate is very important because wrong data association could have disastrous consequences in a SLAM algorithm. Our approach uses only the appearance of 3D point clouds to detect loops and requires no pose information. We exploit the normal distributions transform surface representation to create feature histograms based on surface orientation and smoothness. The surface shape histograms compress the input data by two to three orders of magnitude. Because of the high compression rate, the histograms can be matched efficiently to compare the appearance of two scans. Rotation invariance is achieved by aligning scans with respect to dominant surface orientations. We also propose to use expectation maximization to fit a gamma mixture model to the output similarity measures in order to automatically determine the threshold that separates scans at loop closures from nonoverlapping ones. We discuss the problem of determining ground truth in the context of loop detection and the difficulties in comparing the results of the few available methods based on range information. Furthermore, we present quantitative performance evaluations using three real-world data sets, one of which is highly self-similar, showing that the proposed method achieves high recall rates (percentage of correctly identified loop closures) at low false-positive rates in environments with different characteristics.
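
    (A toy sketch of the appearance-histogram idea: binning surface-normal directions compresses a scan to a small descriptor, and candidate loop closures are ranked by histogram similarity. The real method uses NDT cell classes, smoothness, rotation alignment and a learned gamma-mixture threshold; everything below is invented for illustration.)

```python
import numpy as np

def orientation_histogram(normals, n_bins=8):
    """Histogram of surface-normal azimuths as a compact scan descriptor."""
    az = np.arctan2(normals[:, 1], normals[:, 0])
    hist, _ = np.histogram(az, bins=n_bins, range=(-np.pi, np.pi))
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection in [0, 1]; higher suggests the same place."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(7)
walls = np.array([[1, 0, 0], [0, 1, 0]], float)     # two dominant wall directions
scan_a = walls[rng.integers(0, 2, 500)] + rng.normal(0, 0.05, (500, 3))
scan_b = walls[rng.integers(0, 2, 500)] + rng.normal(0, 0.05, (500, 3))
other = rng.normal(size=(500, 3))                   # unstructured place

print(similarity(orientation_histogram(scan_a), orientation_histogram(scan_b)))  # high
print(similarity(orientation_histogram(scan_a), orientation_histogram(other)))   # lower
```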

  • 34.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Gholami Shahbandi, Saeed
    IS lab, Halmstad University, Halmstad, Sweden.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Semi-Supervised 3D Place Categorisation by Descriptor Clustering (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 620-625. Conference paper (Refereed)
    Abstract [en]

Place categorisation, i.e., learning to group perception data into categories based on appearance, typically uses supervised learning and either visual or 2D range data.

This paper shows place categorisation from 3D data without any training phase. We show that, by leveraging the NDT histogram descriptor to compactly encode 3D point cloud appearance, in combination with standard clustering techniques, it is possible to classify public indoor data sets with accuracy comparable to, and sometimes better than, previous supervised training methods. We also demonstrate the effectiveness of this approach on outdoor data, with the added benefit of being able to hierarchically categorise places into sub-categories based on a user-selected threshold.

This technique relieves users of providing relevant training data, and only requires them to adjust a sensitivity parameter that controls the number of place categories and to provide a semantic label for each category after the process is completed.
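
    (A minimal sketch of the clustering step with scikit-learn, assuming toy three-dimensional stand-ins for the NDT histogram descriptors; the distance threshold plays the role of the user-selected sensitivity that controls how many categories emerge.)

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy stand-ins for per-scan appearance descriptors: three place categories.
rng = np.random.default_rng(8)
descriptors = np.vstack([
    rng.normal([1, 0, 0], 0.05, (10, 3)),   # place category A
    rng.normal([0, 1, 0], 0.05, (10, 3)),   # place category B
    rng.normal([0, 0, 1], 0.05, (10, 3)),   # place category C
])

# No training phase: hierarchical clustering groups scans into place
# categories, with the distance threshold acting as the sensitivity knob.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.5).fit_predict(descriptors)
print(labels)   # three groups of ten scans each
```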

  • 35.
    Mansouri, Masoumeh
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
Hybrid Reasoning for Multi-robot Drill Planning in Open-pit Mines (2016). In: Acta Polytechnica, ISSN 1210-2709, E-ISSN 1805-2363, Vol. 56, no 1, p. 47-56. Article in journal (Refereed)
    Abstract [en]

Fleet automation often involves solving several strongly correlated sub-problems, including task allocation, motion planning, and coordination. Solutions need to account for very specific, domain-dependent constraints. In addition, several aspects of the overall fleet management problem become known only online. We propose a method for solving the fleet-management problem grounded on a heuristically-guided search in the space of mutually feasible solutions to sub-problems. We focus on a mining application which requires online contingency handling and accommodating many domain-specific constraints. As contingencies occur, efficient reasoning is performed to adjust the plan online for the entire fleet.

  • 36.
    Mansouri, Masoumeh
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
Pecora, Federico
    Örebro University, School of Science and Technology.
Towards Hybrid Reasoning for Automated Industrial Fleet Management (2015). In: 24th International Joint Conference on Artificial Intelligence, Workshop on Hybrid Reasoning, AAAI Press, 2015. Conference paper (Refereed)
    Abstract [en]

More and more industrial applications require fleets of autonomous ground vehicles. Today's solutions to the management of these fleets still largely rely on fixed set-ups of the system and manually specified ad-hoc rules. Our aim is to replace current practice with autonomous fleets and fleet management systems that are easily adaptable to new set-ups and environments, can accommodate human-intelligible rules, and guarantee feasible and meaningful behavior of the fleet. We propose to cast the problem of autonomous fleet management as a meta-CSP that integrates task allocation, coordination and motion planning. We discuss design choices of the approach, and how it caters to the need for hybrid reasoning in terms of symbolic, metric, temporal and spatial constraints. We also comment on a preliminary realization of the system.

  • 37.
    Mielle, Malcolm
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Using emergency maps to add not yet explored places into SLAM (2017). Conference paper (Other academic)
    Abstract [en]

While using robots in search and rescue missions would help ensure the safety of first responders, a key issue is the time needed by the robot to operate. Even though SLAM is getting faster and faster, it might still be too slow to enable the use of robots in critical situations. One way to speed up operation time is to use prior information.

We aim at integrating emergency maps into SLAM to complete the SLAM map with information about not yet explored parts of the environment. By integrating prior information, we can speed up exploration time or provide valuable prior information for navigation, for example in case of sensor blackout or failure. However, while extensively used by firemen in their operations, emergency maps are not easy to integrate into SLAM since they are often not up to date or drawn at inconsistent scales.

    The main challenge we tackle is dealing with the imperfect scale of the rough emergency map and integrating it with the online SLAM map, in addition to handling incorrect matches between these two types of map. We developed a formulation of graph-based SLAM that incorporates information from an emergency map, and we propose a novel optimization process adapted to this formulation.

    We extract corners from the emergency map and the SLAM map, between which we find correspondences using a distance measure. We then build a graph representation associating information from the emergency map and the SLAM map. Corners in the emergency map, corners in the robot map, and robot poses are added as nodes in the graph, while odometry, corner observations, walls in the emergency map, and corner associations are added as edges. To conserve the topology of the emergency map, but correct its possible errors in scale, edges representing the emergency map's walls are given a covariance such that they are easy to extend or shrink but hard to rotate. Correspondences between corners represent a zero transformation, so that the optimization matches them as closely as possible. The graph optimization uses a combination of robust kernels: we first use the Huber kernel, to converge toward a good solution, followed by Dynamic Covariance Scaling, to handle the remaining errors.

    We demonstrate our system in an office environment. We run SLAM online during the exploration. Using the map enhanced by information from the emergency map, the robot was able to plan the shortest path toward a place it had not yet explored. This capability can be a real asset in complex buildings where exploration can take a long time. It can also reduce exploration time by avoiding the exploration of dead ends, or speed up the search for specific places, since the robot knows where it is in the emergency map.
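
    A rough sketch of how the wall and correspondence edges described above could be encoded is given below. The containers and the information-matrix weights are illustrative assumptions on our part, not the authors' code: the key point is that the wall edge is soft along the wall direction (so scale errors can be corrected) and stiff across it (so the wall resists rotation).

    ```python
    # Illustrative encoding of the edge types described above; data
    # structures and weights are assumptions, not the authors' code.
    import numpy as np

    graph = {"nodes": [], "edges": []}

    def add_wall_edge(i, j, wall_vec):
        """Emergency-map wall: cheap to stretch or shrink, expensive to rotate."""
        d = wall_vec / np.linalg.norm(wall_vec)   # direction along the wall
        n = np.array([-d[1], d[0]])               # normal across the wall
        # Low information along the wall (scale may be wrong), high across it.
        info = 0.01 * np.outer(d, d) + 100.0 * np.outer(n, n)
        graph["edges"].append({"from": i, "to": j,
                               "measurement": wall_vec, "information": info})

    def add_association_edge(i, j):
        """Corner correspondence: a zero transform pulls matched corners together."""
        graph["edges"].append({"from": i, "to": j,
                               "measurement": np.zeros(2),
                               "information": np.eye(2)})
    ```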

  • 38.
    Mielle, Malcolm
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SLAM auto-complete: completing a robot map using an emergency map2017In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, p. 35-40, article id 8088137Conference paper (Refereed)
    Abstract [en]

    In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness might be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping mainly focused on accurate maps or aerial images, such data are not always possible to get, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to get and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

    We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The SLAM graph is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

    We typically have more than 50% wrong correspondences in the settings studied in this paper, and the method we propose handles them correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still get the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information not previously represented, such as closed doors or new walls.
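
    For reference, the two robust kernels used in this line of work can be written as per-edge weights on the squared error; a minimal sketch, with hypothetical default parameters:

    ```python
    # The two robust kernels, written as per-edge weights on the squared
    # error chi2; the parameter defaults are hypothetical.
    def huber_weight(chi2, k=1.0):
        # Quadratic near zero, linear in the tails: tolerates moderate outliers.
        return 1.0 if chi2 <= k * k else k / chi2 ** 0.5

    def dcs_weight(chi2, phi=1.0):
        # Dynamic Covariance Scaling: down-weights edges with large residuals.
        s = min(1.0, 2.0 * phi / (phi + chi2))
        return s * s
    ```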

  • 39.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    An Inexpensive Monocular Vision System for Tracking Humans in Industrial Environments2013In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2013, p. 5850-5857Conference paper (Refereed)
    Abstract [en]

    We report on a novel vision-based method for reliable human detection from vehicles operating in industrial environments in the vicinity of workers. Exploiting the fact that reflective vests are standard safety equipment on most industrial worksites, we use a single camera system and active IR illumination to detect humans by identifying the reflective vest markers. Adopting a sparse, feature-based approach, we classify vest markers against other reflective material and perform supervised learning of the object distance based on local image descriptors. The integration of the resulting per-feature 3D position estimates in a particle filter finally allows us to perform human tracking in conditions ranging from broad daylight to complete darkness.
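
    A minimal sketch of how per-feature 3D estimates might be fused in a particle filter follows; the random-walk motion model, Gaussian measurement noise, and all parameter values are our assumptions, not the paper's exact models.

    ```python
    # Minimal particle-filter sketch (hypothetical motion and noise models).
    import numpy as np

    rng = np.random.default_rng(0)

    def pf_step(particles, weights, measurement, motion_std=0.2, meas_std=0.5):
        """particles: (N, 3) positions; measurement: (3,) per-feature 3D estimate."""
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # Update: Gaussian likelihood of the measurement.
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std ** 2) + 1e-12
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights
    ```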

  • 40.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Estimating the 3D Position of Humans Wearing a Reflective Vest Using a Single Camera System2014In: Field and Service Robotics: Results of the 8th International Conference / [ed] Yoshida, Kazuya, Tadokoro, Satoshi, Springer Berlin/Heidelberg, 2014, p. 143-157Chapter in book (Refereed)
    Abstract [en]

    This chapter presents a novel possible solution for people detection and estimation of their 3D position in challenging shared environments. Addressing safety-critical applications in industrial environments, we make the basic assumption that people wear reflective vests. In order to detect these vests and to discriminate them from other reflective material, we propose an approach based on a single camera equipped with an IR flash. The camera acquires pairs of images, one with and one without IR flash, in short succession. The images forming a pair are then related to each other through feature tracking, which allows us to discard features for which the relative intensity difference is small and which are thus not believed to belong to a reflective vest. Next, the local neighbourhood of the remaining features is further analysed. First, a Random Forest classifier is used to discriminate between features caused by a reflective vest and features caused by other reflective materials. Second, the distance between the camera and the vest features is estimated using a Random Forest regressor. The proposed system was evaluated in one indoor and two challenging outdoor scenarios. Our results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor, even under direct exposure to sunlight.
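
    The two-stage classify-then-regress idea could be sketched with scikit-learn as follows; the descriptors, labels, and forest sizes below are random placeholders (our assumptions), not the authors' training setup.

    ```python
    # Schematic of the classify-then-regress stage with scikit-learn; the
    # descriptors, labels, and dimensions below are random placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    X = np.random.rand(1000, 64)           # local descriptors (e.g., SURF)
    is_vest = np.random.rand(1000) > 0.5   # vest vs. other reflective material
    dist_m = np.random.rand(1000) * 20.0   # camera-to-feature distance (metres)

    clf = RandomForestClassifier(n_estimators=100).fit(X, is_vest)
    reg = RandomForestRegressor(n_estimators=100).fit(X[is_vest], dist_m[is_vest])

    def estimate_distances(features):
        """Keep features classified as vest, then regress their distance."""
        vest = clf.predict(features).astype(bool)
        return reg.predict(features[vest])
    ```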

  • 41.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Estimating the 3D position of humans wearing a reflective vest using a single camera system2012In: Proceedings of the International Conference on Field and Service Robotics (FSR), Springer, 2012Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel possible solution for people detection and estimation of their 3D position in challenging shared environments. Addressing safety-critical applications in industrial environments, we make the basic assumption that people wear reflective vests. In order to detect these vests and to discriminate them from other reflective material, we propose an approach based on a single camera equipped with an IR flash. The camera acquires pairs of images, one with and one without IR flash, in short succession. The images forming a pair are then related to each other through feature tracking, which allows us to discard features for which the relative intensity difference is small and which are thus not believed to belong to a reflective vest. Next, the local neighbourhood of the remaining features is further analysed. First, a Random Forest classifier is used to discriminate between features caused by a reflective vest and features caused by other reflective materials. Second, the distance between the camera and the vest features is estimated using a Random Forest regressor. The proposed system was evaluated in one indoor and two challenging outdoor scenarios. Our results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor, even under direct exposure to sunlight.

  • 42.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery2014In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 14, no 10, p. 17952-17980Article in journal (Refereed)
    Abstract [en]

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
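
    In a calibrated stereo setup like the one described, the final 3D position of a detection follows from standard pinhole triangulation; a sketch with made-up camera parameters, not the paper's calibration:

    ```python
    # Pinhole stereo triangulation for a matched detection; all camera
    # parameters below are made-up values, not the paper's calibration.
    def triangulate(u_left, u_right, v, fx=700.0, baseline=0.12,
                    cx=320.0, cy=240.0):
        """Returns (x, y, z) in metres in the left camera frame."""
        disparity = u_left - u_right       # horizontal disparity in pixels
        z = fx * baseline / disparity      # depth from disparity
        x = (u_left - cx) * z / fx
        y = (v - cy) * z / fx              # assumes square pixels (fy == fx)
        return x, y, z
    ```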

  • 43.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Multi-human Tracking using High-visibility Clothing for Industrial Safety2013In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, p. 638-644Conference paper (Refereed)
    Abstract [en]

    We propose and evaluate a system for detecting and tracking multiple humans wearing high-visibility clothing from vehicles operating in industrial work environments. We use a customized stereo camera setup equipped with an IR flash and an IR filter to detect the reflective material on the workers' garments and estimate their trajectories in 3D space. An evaluation in two distinct industrial environments with different degrees of complexity demonstrates the approach to be robust and accurate for tracking workers in arbitrary body poses, under occlusion, and under a wide range of different illumination settings.

  • 44.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Leibe, Bastian
    Aachen University, Aachen, Germany.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Multi-band Hough Forests for detecting humans with Reflective Safety Clothing from mobile machinery2015In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2015, p. 697-703Conference paper (Refereed)
    Abstract [en]

    We address the problem of human detection from heavy mobile machinery and robotic equipment operating at industrial working sites. Exploiting the fact that workers are typically obliged to wear high-visibility clothing with reflective markers, we propose a new recognition algorithm that specifically incorporates the highly discriminative features of the safety garments in the detection process. Termed Multi-band Hough Forest, our detector fuses the input from active near-infrared (NIR) and RGB color vision to learn a human appearance model that not only allows us to detect and localize industrial workers, but also to estimate their body orientation. We further propose an efficient pipeline for automated generation of training data with high-quality body part annotations that are used in training to increase detector performance. We report a thorough experimental evaluation on challenging image sequences from a real-world production environment, where persons appear in a variety of upright and non-upright body positions.
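
    The core of Hough-forest-style detection is vote accumulation: image patches cast weighted votes for the object centre, and peaks in the vote map become detections. A toy version of that accumulation step (without the trained multi-band forest, which is the paper's actual contribution):

    ```python
    # Toy vote accumulation in the Hough-forest spirit: each patch casts a
    # weighted vote for the object centre; detections are peaks in the map.
    import numpy as np

    def hough_votes(patch_positions, offsets, weights, shape):
        """patch_positions, offsets: (N, 2) of (row, col); weights: (N,)."""
        votes = np.zeros(shape)
        centres = np.round(patch_positions + offsets).astype(int)
        for (r, c), w in zip(centres, weights):
            if 0 <= r < shape[0] and 0 <= c < shape[1]:
                votes[r, c] += w           # accumulate centre evidence
        return votes                       # detections = local maxima
    ```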

  • 45.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Inferring human body posture information from reflective patterns of protective work garments2016In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4131-4136Conference paper (Refereed)
    Abstract [en]

    We address the problem of extracting human body posture labels, upper body orientation, and the spatial location of individual body parts from near-infrared (NIR) images depicting patterns of retro-reflective markers. The analyzed patterns originate from the observation of humans equipped with protective high-visibility garments that represent common safety equipment in the industrial sector. Exploiting the shape of the observed reflectors, we adopt shape matching based on the chamfer distance and infer one of seven discrete body posture labels as well as the approximate upper body orientation with respect to the camera. We then proceed to analyze the NIR images on a pixel scale and estimate a figure-ground segmentation together with human body part labels using classification of densely extracted local image patches. Our results indicate a body posture classification accuracy of 80% and figure-ground segmentations with 87% accuracy.
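
    Chamfer matching scores a template against an observed binary image via a distance transform; a minimal SciPy-based sketch with hypothetical inputs:

    ```python
    # Chamfer scoring of a reflector template against an observed binary
    # edge map, via SciPy's distance transform; inputs are hypothetical.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_score(template_points, observed_mask):
        """template_points: (N, 2) int (row, col); observed_mask: boolean image.
        Returns the mean distance from template points to the nearest
        observed pixel (lower means a better match)."""
        dist_to_observed = distance_transform_edt(~observed_mask)
        rows, cols = template_points[:, 0], template_points[:, 1]
        return dist_to_observed[rows, cols].mean()
    ```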

  • 46.
    Pecora, Federico
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mansouri, Masoumeh
    Örebro University, School of Science and Technology.
    Petkov, Vilian
    Technical University of Varna, Varna, Bulgaria.
    A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control2018Conference paper (Refereed)
  • 47.
    Rituerto, Alejandro
    et al.
    Instituto de Investigación en Ingeniería de Aragón, Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, Zaragoza, Spain.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Murillo, Ana C.
    Instituto de Investigación en Ingeniería de Aragón, Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, Zaragoza, Spain.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Jesus Guerrero, Jose
    Instituto de Investigación en Ingeniería de Aragón, Departamento de Informática e Ingeniería de Sistemas, University of Zaragoza, Zaragoza, Spain.
    Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera2016In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 16, no 4, article id 493Article in journal (Refereed)
    Abstract [en]

    Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words. To address this, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the process of vocabulary construction and (2) geometric cues into the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points at the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit from the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available dataset.
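
    One simplified way to fold tracking information into vocabulary construction is to merge descriptors from the same feature track before clustering; the sketch below captures only that idea and omits the geometric cues from the paper's pipeline.

    ```python
    # Simplified take on using tracking information during vocabulary
    # construction: descriptors from one feature track are averaged before
    # clustering (the paper's geometric cues are omitted here).
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(descriptors, track_ids, k=500):
        """descriptors: (N, D); track_ids: (N,) linking tracked features.
        Assumes the number of distinct tracks is at least k."""
        merged = np.stack([descriptors[track_ids == t].mean(axis=0)
                           for t in np.unique(track_ids)])
        return KMeans(n_clusters=k, n_init=10).fit(merged).cluster_centers_
    ```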

  • 48.
    Saarinen, Jari
    et al.
    Department of Automation and Systems Technology, Aalto University, Aalto, Finland.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Independent Markov Chain Occupancy Grid Maps for Representation of Dynamic Environments2012In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York, USA: IEEE, 2012, p. 3489-3495Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new grid based approach to model a dynamic environment. Each grid cell is assumed to be an independent Markov chain (iMac) with two states. The state transition parameters are learned online and modeled as two Poisson processes. As a result, our representation not only encodes the expected occupancy of the cell, but also models the expected dynamics within the cell. The paper also presents a recency-weighting strategy to learn the model parameters from observations, which is able to deal with non-stationary cell dynamics. Moreover, an interpretation of the model parameters is presented, together with a discussion of the convergence rates of the cells. The proposed model is experimentally validated using offline data recorded with a Laser Guided Vehicle (LGV) system running in production use.
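
    A toy per-cell version of the two-state model with recency-weighted transition counts is sketched below; the decay factor, state encoding, and count initialization are our assumptions, not the paper's exact estimator.

    ```python
    # Toy per-cell two-state Markov chain with recency-weighted transition
    # counts; the decay factor and state encoding are assumptions.
    class IMacCell:
        def __init__(self, decay=0.99):
            self.decay = decay            # recency weighting: old evidence fades
            self.counts = {"free->occ": 1.0, "occ->free": 1.0,
                           "free->free": 1.0, "occ->occ": 1.0}
            self.prev = "free"

        def update(self, observed):       # observed is "free" or "occ"
            for key in self.counts:
                self.counts[key] *= self.decay
            self.counts[f"{self.prev}->{observed}"] += 1.0
            self.prev = observed

        def p_enter(self):                # estimate of P(occupied | was free)
            a = self.counts["free->occ"]
            return a / (a + self.counts["free->free"])
    ```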

  • 49.
    Saarinen, Jari
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Ala-Luhtala, Juha
    Aalto University of Technology, Aalto, Finland.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Normal distributions transform occupancy maps: application to large-scale online 3D mapping2013In: IEEE International Conference on Robotics and Automation, New York: IEEE conference proceedings, 2013, p. 2233-2238Conference paper (Refereed)
    Abstract [en]

    Autonomous vehicles operating in real-world industrial environments have to overcome numerous challenges, chief among which is the creation and maintenance of consistent 3D world models. This paper proposes to address the challenges of online real-world mapping by building upon previous work on compact spatial representation and formulating a novel 3D mapping approach: the Normal Distributions Transform Occupancy Map (NDT-OM). The presented algorithm enables accurate real-time 3D mapping in large-scale dynamic environments, employing a recursive update strategy. In addition, the proposed approach can seamlessly provide maps at multiple resolutions, allowing for fast utilization in high-level functions such as localization or path planning. Compared to previous approaches that use the NDT representation, the proposed NDT-OM provides an exact and efficient recursive update formulation and models the full occupancy of the map.
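
    The recursive flavour of a per-cell update can be illustrated with running sums, from which the cell's mean and covariance follow without storing past points; a simplification for intuition only, since the actual NDT-OM update also maintains occupancy and handles dynamics.

    ```python
    # Recursive per-cell Gaussian update: running sums give the mean and
    # covariance without storing past points (a simplification of NDT-OM).
    import numpy as np

    class NDTCell:
        def __init__(self, dim=3):
            self.n = 0
            self.s = np.zeros(dim)             # running sum of points
            self.q = np.zeros((dim, dim))      # running sum of outer products

        def add_points(self, pts):             # pts: (m, dim) array
            self.n += len(pts)
            self.s += pts.sum(axis=0)
            self.q += pts.T @ pts

        def mean(self):
            return self.s / self.n             # valid once n > 0

        def cov(self):
            m = self.mean()
            return self.q / self.n - np.outer(m, m)
    ```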

  • 50.
    Saarinen, Jari
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    3D normal distributions transform occupancy maps: an efficient representation for mapping in dynamic environments2013In: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 32, no 14, p. 1627-1644Article in journal (Refereed)
    Abstract [en]

    In order to enable long-term operation of autonomous vehicles in industrial environments, numerous challenges need to be addressed. A basic requirement for many applications is the creation and maintenance of consistent 3D world models. This article proposes a novel 3D spatial representation for online real-world mapping, building upon two known representations: normal distributions transform (NDT) maps and occupancy grid maps. The proposed normal distributions transform occupancy map (NDT-OM) combines the advantages of both representations: the compactness of NDT maps and the robustness of occupancy maps. One key contribution of this article is that we formulate exact recursive updates for NDT-OMs. We show that the recursive update equations provide natural support for multi-resolution maps. Next, we describe a modification of the recursive update equations that allows adaptation in dynamic environments. As a second key contribution, we formulate the occupancy update equations that allow building consistent maps in dynamic environments. The update of the occupancy values is based on an efficient probabilistic sensor model that is specially formulated for NDT-OMs. In several experiments with a total of 17 hours of data from a milk factory we demonstrate that NDT-OMs enable real-time performance in large-scale, long-term industrial setups.
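
    For intuition, the textbook log-odds occupancy update below shows the recursive form that such occupancy updates take; it is not the NDT-OM-specific sensor model from the article.

    ```python
    # Textbook log-odds occupancy update, shown only to make the recursive
    # form concrete; the article's NDT-OM sensor model is more specific.
    import math

    def update_occupancy(logodds, p_hit):
        """Fuse one observation; p_hit is P(occupied | measurement)."""
        return logodds + math.log(p_hit / (1.0 - p_hit))

    def occupancy_probability(logodds):
        return 1.0 - 1.0 / (1.0 + math.exp(logodds))
    ```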
