oru.se Publications
1 - 50 of 70
  • 1.
    Adolfsson, Daniel
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lowry, Stephanie
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Improving Localisation Accuracy using Submaps in warehouses (2018). Conference paper (Other academic)
    Abstract [en]

    This paper presents a method for localisation in hybrid metric-topological maps built using only local information, that is, only measurements that were captured by the robot when it was in a nearby location. The motivation is that observations are typically range and viewpoint dependent, and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects a submap based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving localisation and increasing performance by up to 40%.

    Download full text (pdf)
    Improving Localisation Accuracy using Submaps in warehouses
  • 2.
    Adolfsson, Daniel
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lowry, Stephanie
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality (2019). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    This paper targets high-precision robot localization. We address a general problem for voxel-based map representations: the expressiveness of the map is fundamentally limited by the resolution, since integration of measurements taken from different perspectives introduces imprecisions and thus reduces localization accuracy. We propose SuPer maps that contain one Submap per Perspective representing a particular view of the environment. For localization, a robot then selects the submap that best explains the environment from its perspective. We propose SuPer mapping as an offline refinement step between initial SLAM and deploying autonomous robots for navigation. We evaluate the proposed method on simulated and real-world data that represent an important use case of an industrial scenario with high accuracy requirements in a repetitive environment. Our results demonstrate a significantly improved localization accuracy, up to 46% better compared to localization in global maps, and up to 25% better compared to alternative submapping approaches.

    Download full text (pdf)
    A Submap per Perspective - Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality
  • 3.
    Andreasson, Henrik
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Camera based navigation by mobile robots: local visual feature based localisation and mapping (2009). Book (Other academic)
    Abstract [en]

    The most important property of a mobile robot is the fact that it is mobile. How to give a robot the skills required to navigate around its environment is therefore an important topic in mobile robotics. Navigation, both for robots and humans, typically involves a map. The map can be used, for example, to estimate a pose based on observations (localisation) or to determine a suitable path between two locations. Maps are available nowadays for us humans with few exceptions; however, maps suitable for mobile robots rarely exist. In addition, relating sensor readings to a map requires that the map content and the observations are compatible, i.e. different robots may require different maps for the same area. This book addresses some of the fundamental problems related to mobile robot navigation (registration, localisation and mapping) using cameras as the primary sensor input. Small salient regions (local visual features) are extracted from each camera image, where each region can be seen as a fingerprint. Many fingerprint matches imply a high likelihood that the corresponding images originate from a similar location, which is a central property utilised in this work.

  • 4.
    Andreasson, Henrik
    Örebro universitet, Institutionen för teknik.
    Local visual feature based localisation and mapping by mobile robots (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot’s sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation.

    The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information.

    Considering the omni-directional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to the changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach.

    The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.

    Download full text (pdf)
    FULLTEXT01
  • 5.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Adolfsson, Daniel
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Incorporating Ego-motion Uncertainty Estimates in Range Data Registration (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 1389-1395. Conference paper (Refereed)
    Abstract [en]

    Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.

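The core idea of the paper above, folding an ego-motion estimate and its uncertainty directly into the registration objective, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the simple quadratic (Mahalanobis) prior term are assumptions:

```python
import numpy as np

def registration_cost_with_prior(pose, scan_cost, odom_pose, odom_cov):
    """Augment a scan-matching cost with an ego-motion prior.

    pose, odom_pose: (x, y, theta) arrays; scan_cost: callable giving
    the pure scan-alignment cost of a pose. The prior term is the
    squared Mahalanobis distance of the pose from the odometry
    estimate: confident odometry (small covariance) pulls the optimum
    strongly towards it, uncertain odometry barely constrains it.
    """
    diff = np.asarray(pose) - np.asarray(odom_pose)
    prior = diff @ np.linalg.inv(odom_cov) @ diff
    return scan_cost(pose) + prior
```

Minimizing this combined cost over `pose` (e.g. with a generic optimizer) would then yield a registration that balances scan alignment against the odometry evidence.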
  • 6.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bouguerra, Abdelbaki
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Cirillo, Marcello
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Dimitrov, Dimitar Nikolaev
    INRIA - Grenoble, Meylan, France.
    Driankov, Dimiter
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Karlsson, Lars
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saarinen, Jari Pekka
    Örebro universitet, Institutionen för naturvetenskap och teknik. Aalto University, Espoo, Finland.
    Sherikov, Aleksander
    Centre de recherche Grenoble Rhône-Alpes, Grenoble, France .
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Autonomous transport vehicles: where we are and what is missing (2015). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 22, no. 1, pp. 64-75. Journal article (Refereed)
    Abstract [en]

    In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.

  • 7.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bouguerra, Abdelbaki
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Åstrand, Björn
    CAISR Centrum för tillämpade intelligenta system (IS-lab), Högskolan i Halmstad, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    CAISR Centrum för tillämpade intelligenta system (IS-lab), Högskolan i Halmstad, Halmstad, Sweden.
    Gold-Fish SLAM: An Application of SLAM to Localize AGVs (2014). In: Field and Service Robotics: Results of the 8th International Conference / [ed] Yoshida, Kazuya; Tadokoro, Satoshi, Heidelberg, Germany: Springer Berlin/Heidelberg, 2014, pp. 585-598. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

  • 8.
    Andreasson, Henrik
    et al.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Bouguerra, Abdelbaki
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Åstrand, Björn
    Rögnvaldsson, Thorsteinn
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Gold-fish SLAM: an application of SLAM to localize AGVs (2012). In: Proceedings of the International Conference on Field and Service Robotics (FSR), July 2012. Conference paper (Other academic)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 meters per second. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

  • 9.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    A Minimalistic Approach to Appearance-Based Visual SLAM (2008). In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no. 5, pp. 991-1001. Journal article (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using multiple view geometry, for example. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.

    Ladda ner fulltext (pdf)
    A Minimalistic Approach to Appearance based Visual SLAM
  • 10.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Dept. of Computing & Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Mini-SLAM: minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity (2007). In: 2007 IEEE International Conference on Robotics and Automation (ICRA), New York, NY, USA: IEEE, 2007, pp. 4096-4101, article id 4209726. Conference paper (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in large-scale environments with minimal sensing and computational requirements. The approach is based on a graphical representation of robot poses and links between the poses. Links between the robot poses are established based on odometry and image similarity, then a relaxation algorithm is used to generate a globally consistent map. To estimate the covariance matrix for links obtained from the vision sensor, a novel method is introduced based on the relative similarity of neighbouring images, without requiring distances to image features or multiple view geometry. Indoor and outdoor experiments demonstrate that the approach scales well to large-scale environments, producing topologically correct and geometrically accurate maps at minimal computational cost. Mini-SLAM was found to produce consistent maps in an unstructured, large-scale environment (the total path length was 1.4 km) containing indoor and outdoor passages.

    Download full text (pdf)
    Mini-SLAM: Minimalistic Visual SLAM in Large-Scale Environments Based on a New Interpretation of Image Similarity
  • 11.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap.
    Vision aided 3D laser scanner based registration (2007). In: ECMR 2007: Proceedings of the European Conference on Mobile Robots, 2007, pp. 192-197. Conference paper (Refereed)
    Abstract [en]

    This paper describes a vision and 3D laser based registration approach which utilizes visual features to identify correspondences. Visual features are obtained from the images of a standard color camera and the depth of these features is determined by interpolating between the scanning points of a 3D laser range scanner, taking into consideration the visual information in the neighbourhood of the respective visual feature. The 3D laser scanner is also used to determine a position covariance estimate of the visual feature. To exploit these covariance estimates, an ICP algorithm based on the Mahalanobis distance is applied. Initial experimental results are presented in a real-world indoor laboratory environment.

    Download full text (pdf)
    Vision Aided 3D Laser Scanner Based Registration
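The Mahalanobis-distance ICP cost described in the abstract above can be sketched as follows, assuming known feature correspondences with per-correspondence covariances (2D for brevity; the names and structure are illustrative, not the paper's code):

```python
import numpy as np

def mahalanobis_icp_cost(R, t, src, dst, covs):
    """Cost of a rigid 2D transform (rotation R, translation t) over
    matched visual-feature positions.

    src, dst: lists of corresponding 2D points; covs: per-pair position
    covariances (e.g. estimated from the range data). Each residual is
    weighted by the inverse covariance, so uncertain feature positions
    contribute less to the alignment.
    """
    total = 0.0
    for p, q, cov in zip(src, dst, covs):
        r = R @ np.asarray(p) + t - np.asarray(q)
        total += r @ np.linalg.inv(cov) @ r
    return total
```

An ICP variant along these lines would alternate between re-matching correspondences and minimizing this cost over (R, t).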
  • 12.
    Andreasson, Henrik
    et al.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    6D scan registration using depth-interpolated local image features (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no. 2, pp. 157-165. Journal article (Refereed)
    Abstract [en]

    This paper describes a novel registration approach that is based on a combination of visual and 3D range information. To identify correspondences, local visual features obtained from images of a standard color camera are compared, and the depth of matching features (and their position covariance) is determined from the range measurements of a 3D laser scanner. The matched depth-interpolated image features make it possible to apply registration with known correspondences. We compare several ICP variants in this paper and suggest an extension that considers the spatial distance between matching features to eliminate false correspondences. Experimental results are presented in both outdoor and indoor environments. In addition to pair-wise registration, we also propose a global registration method that registers all scan poses simultaneously.

    Download full text (pdf)
    FULLTEXT01
  • 13.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Germany.
    Vision based interpolation of 3D laser scans (2006). In: Proceedings of the Third International Conference on Autonomous Robots and Agents, 2006, pp. 455-460. Conference paper (Refereed)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern color camera. In this paper we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a color image. The main idea is to use color similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to color or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov Random Fields. The algorithms proposed in this paper are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data. Further, we suggest and evaluate four methods to determine a confidence measure for the accuracy of interpolated range values.

    Download full text (pdf)
    Vision based Interpolation of 3D Laser Scans
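The core idea above, colour similarity as a proxy for depth similarity, can be illustrated with a simple Gaussian-weighted interpolation. This is one plausible scheme among several; the weighting function and the `sigma` value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def interpolate_depth(pixel_color, neighbors):
    """Estimate depth at a camera pixel from nearby laser points.

    neighbors: list of (color, depth) pairs for laser points projected
    into the image. Neighbours whose colour is close to the query
    pixel's colour get exponentially higher weight, reflecting the
    observation that depth discontinuities often coincide with colour
    changes. sigma (colour-distance falloff) is an illustrative choice.
    """
    sigma = 10.0
    weights, depths = [], []
    for color, depth in neighbors:
        dist = np.linalg.norm(np.asarray(pixel_color, float) - np.asarray(color, float))
        weights.append(np.exp(-dist ** 2 / (2 * sigma ** 2)))
        depths.append(depth)
    weights = np.asarray(weights)
    return float(np.dot(weights, depths) / weights.sum())
```

Applied at every camera pixel, this yields a depth image at the camera's resolution from the sparse scanner returns.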
  • 14.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap.
    Has something changed here?: Autonomous difference detection for security patrol robots (2007). In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York, NY, USA: IEEE, 2007, pp. 3429-3435, article id 4399381. Conference paper (Refereed)
    Abstract [en]

    This paper presents a system for autonomous change detection with a security patrol robot. In an initial step a reference model of the environment is created and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only.

    Download full text (pdf)
    Has Something Changed Here?: Autonomous Difference Detection for Security Patrol Robots
  • 15.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saarinen, Jari
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Cirillo, Marcello
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Drive the Drive: From Discrete Motion Plans to Smooth Drivable Trajectories (2014). In: Robotics, E-ISSN 2218-6581, Vol. 3, no. 4, pp. 400-416. Journal article (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper is a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. The proposed approach is evaluated in several industrially relevant scenarios and found to be both fast (less than 2 s per vehicle trajectory) and accurate (end-point pose errors below 0.01 m in translation and 0.005 radians in orientation).

    Download full text (pdf)
    fulltext
  • 16.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saarinen, Jari
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Cirillo, Marcello
    Örebro universitet, Institutionen för naturvetenskap och teknik. SCANIA AB, Södertälje, Sweden.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fast, continuous state path smoothing to improve navigation accuracy (2015). In: IEEE International Conference on Robotics and Automation (ICRA), 2015, IEEE Computer Society, 2015, pp. 662-669. Conference paper (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper addresses this shortcoming by introducing a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. In real-world tests presented in this paper we demonstrate that the proposed approach is fast enough for online use (it computes trajectories faster than they can be driven) and highly accurate. In 100 repetitions we achieve mean end-point pose errors below 0.01 meters in translation and 0.002 radians in orientation. Even the maximum errors are very small: only 0.02 meters in translation and 0.008 radians in orientation.

  • 17.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Real time registration of RGB-D data using local visual features and 3D-NDT registration (2012). In: Proc. of International Conference on Robotics and Automation (ICRA) Workshop on Semantic Perception, Mapping and Exploration (SPME), IEEE, 2012. Conference paper (Refereed)
    Abstract [en]

    Recent increased popularity of RGB-D capable sensors in robotics has resulted in a surge of related RGB-D registration methods. This paper presents several RGB-D registration algorithms based on combinations of local visual feature matching and geometric registration. Fast and accurate transformation refinement is obtained by using a recently proposed geometric registration algorithm based on the Three-Dimensional Normal Distributions Transform (3D-NDT). Results obtained on standard data sets have demonstrated mean translational errors on the order of 1 cm and rotational errors below 1 degree, at frame processing rates of about 15 Hz.

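The 3D-NDT representation mentioned above models each voxel as a Gaussian over the points that fell inside it; a point set is then scored by how well it fits those Gaussians, and a transform is optimised against that score. A single-cell sketch (illustrative only; the real method uses a voxel grid of such cells):

```python
import numpy as np

def ndt_cell(points):
    """Fit the Gaussian (mean, covariance) of one NDT voxel."""
    pts = np.asarray(points, dtype=float)
    return pts.mean(axis=0), np.cov(pts.T)

def ndt_score(points, mean, cov):
    """Sum of Gaussian likelihood terms of points under one cell;
    higher means the points fit the cell's distribution better."""
    cov_inv = np.linalg.inv(cov)
    return sum(float(np.exp(-0.5 * (p - mean) @ cov_inv @ (p - mean)))
               for p in np.asarray(points, dtype=float))
```

Registration then amounts to searching for the rigid transform of the incoming points that maximises the summed score over all cells.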
  • 18.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro universitet, Institutionen för teknik.
    Localization for mobile robots using panoramic vision, local features and particle filter (2005). In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), 2005, pp. 3348-3353. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a vision-based approach to self-localization that uses a novel scheme to integrate feature-based matching of panoramic images with Monte Carlo localization. A specially modified version of Lowe’s SIFT algorithm is used to match features extracted from local interest points in the image, rather than using global features calculated from the whole image. Experiments conducted in a large, populated indoor environment (up to 5 persons visible) over a period of several months demonstrate the robustness of the approach, including kidnapping and occlusion of up to 90% of the robot’s field of view.

  • 19.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro universitet, Institutionen för teknik.
    Self-localization in non-stationary environments using omni-directional vision (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 7, pp. 541-551. Journal article (Refereed)
    Abstract [en]

    This paper presents an image-based approach for localization in non-static environments using local feature descriptors, and its experimental evaluation in a large, dynamic, populated environment where the time interval between the collected data sets is up to two months. Using local features together with panoramic images provides robustness and invariance to large changes in the environment. Results from global place recognition with no evidence accumulation and a Monte Carlo localization method are shown. To test the approach even further, experiments were conducted with up to 90% virtual occlusion in addition to the dynamic changes in the environment.

  • 20.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Triebel, Rudolph
    University of Freiburg.
    Burgard, Wolfram
    University of Freiburg.
    Improving plane extraction from 3D data by fusing laser data and vision (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005, pp. 2656-2661. Conference paper (Refereed)
    Abstract [en]

    The problem of extracting three-dimensional structures from data acquired with mobile robots has received considerable attention over the past years. Robots that are able to perceive their three-dimensional environment are envisioned to more robustly perform tasks like navigation, rescue, and manipulation. In this paper we present an approach that simultaneously uses color and range information to cluster 3D points into planar structures. Our current system is also able to calibrate the camera and the laser based on the remission values provided by the range scanner and the brightness of the pixels in the image. It has been implemented on a mobile robot equipped with a manipulator that carries a range scanner and a camera for acquiring colored range scans. Several experiments carried out on real data and in simulations demonstrate that our approach yields highly accurate results, also in comparison with previous approaches.

  • 21.
    Andreasson, Henrik
    et al.
    Örebro universitet, Institutionen för teknik.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Freiburg, Germany.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Non-iterative Vision-based Interpolation of 3D Laser Scans (2007). In: Autonomous Agents and Robots / [ed] Mukhopadhyay, S. C., Gupta, G. S., Berlin/Heidelberg, Germany: Springer, 2007, Vol. 76, pp. 83-90, article id 4399381. Conference paper (Other academic)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern colour camera. In this chapter we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a colour image. The main idea is to use colour similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to colour or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov random fields. The proposed algorithms are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data.
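    The core idea, colour similarity as an indication of depth similarity, can be sketched as a colour-weighted average of nearby laser returns. This is a minimal sketch only, assuming greyscale intensities and a Gaussian colour weight; the chapter's five interpolation methods differ in their exact weighting, and the parameter names here are illustrative.

```python
import math

def interpolate_depth(color, sparse_depth, x, y, radius=2, sigma_c=20.0):
    """Estimate depth at (x, y) from nearby laser returns, weighting each
    neighbour by colour similarity to the query pixel: colour discontinuities
    often coincide with depth discontinuities, so similar colours dominate.

    `color` is a 2D grid of grey values; `sparse_depth` is a same-sized grid
    with None where no laser measurement exists.
    """
    num, den = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(color) and 0 <= nx < len(color[0]):
                d = sparse_depth[ny][nx]
                if d is not None:
                    # Gaussian weight in colour space (assumed form).
                    w = math.exp(-((color[y][x] - color[ny][nx]) ** 2)
                                 / (2 * sigma_c ** 2))
                    num += w * d
                    den += w
    return num / den if den > 0 else None
```

    Near a colour edge, measurements from the far side receive near-zero weight, so the interpolated depth does not bleed across the discontinuity.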

  • 22.
    Bouguerra, Abdelbaki
    et al.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Åstrand, Björn
    Halmstad University, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Halmstad, Sweden.
    An autonomous robotic system for load transportation (2009). In: 2009 IEEE Conference on Emerging Technologies & Factory Automation (ETFA 2009), New York: IEEE conference proceedings, 2009, pp. 1563-1566. Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

  • 23.
    Bouguerra, Abdelbaki
    et al.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Åstrand, Björn
    Halmstad University.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Sweden.
    MALTA: a system of multiple autonomous trucks for load transportation (2009). In: Proceedings of the 4th European conference on mobile robots (ECMR) / [ed] Ivan Petrovic, Achim J. Lilienthal, 2009, pp. 93-98. Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

  • 24.
    Bunz, Elsa
    et al.
    Örebro University, Örebro, Sweden.
    Chadalavada, Ravi Teja
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Krug, Robert
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schindler, Maike
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Spatial Augmented Reality and Eye Tracking for Evaluating Human Robot Interaction (2016). In: Proceedings of RO-MAN 2016 Workshop: Workshop on Communicating Intentions in Human-Robot Interaction, 2016. Conference paper (Refereed)
    Abstract [en]

    Freely moving autonomous mobile robots may lead to anxiety when operating in workspaces shared with humans. Previous works have given evidence that communicating intentions using Spatial Augmented Reality (SAR) in the shared workspace will make humans more comfortable in the vicinity of robots. In this work, we conducted experiments with the robot projecting various patterns in order to convey its movement intentions during encounters with humans. In these experiments, the trajectories of both humans and robot were recorded with a laser scanner. Human test subjects were also equipped with an eye tracker. We analyzed the eye gaze patterns and the laser scan tracking data in order to understand how the robot’s intention communication affects the human movement behavior. Furthermore, we used retrospective recall interviews to aid in identifying the reasons that lead to behavior changes.

  • 25.
    Chadalavada, Ravi Teja
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Krug, Robert
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Empirical evaluation of human trust in an expressive mobile robot (2016). In: Proceedings of RSS Workshop "Social Trust in Autonomous Robots 2016", 2016. Conference paper (Refereed)
    Abstract [en]

    A mobile robot communicating its intentions using Spatial Augmented Reality (SAR) on the shared floor space makes humans feel safer and more comfortable around the robot. Our previous work [1] and several other works established this fact. We build upon that work by adding adaptable information and control to the SAR module. An empirical study of how a mobile robot builds trust in humans by communicating its intentions was conducted. A novel way of evaluating that trust is presented, and it is experimentally shown that adaptation in the SAR module leads to natural interaction; the new evaluation system helped us discover that comfort levels in human-robot interactions approached those of human-human interactions.

  • 26.
    Chadalavada, Ravi Teja
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Krug, Robert
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    That’s on my Mind!: Robot to Human Intention Communication through on-board Projection on Shared Floor Space (2015). In: 2015 European Conference on Mobile Robots (ECMR), New York: IEEE conference proceedings, 2015. Conference paper (Refereed)
    Abstract [en]

    The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional AGVs, which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. Here we address this issue and propose on-board intention projection on the shared floor space for communication from robot to human. We present a research prototype of a robotic fork-lift equipped with a LED projector to visualize internal state information and intents. We describe the projector system and discuss calibration issues. The robot’s ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. The results show that even adding simple information, such as the trajectory and the space to be occupied by the robot in the near future, effectively improves human response to the robot.

  • 27.
    Chadalavada, Ravi Teja
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany, Cologne, Gemany.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Implicit intention transference using eye-tracking glasses for improved safety in human-robot interaction (2019). Conference paper (Refereed)
    Abstract [en]

    Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. We propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, an implicit intention transference system was developed and implemented. The robot was given access to human eye gaze data; it responds to this data in real-time through spatial augmented reality projections on the shared floor space, and it can also adapt its path. This allows proactive safety approaches in HRI, for example by attempting to get the human's attention when they are in the vicinity of a moving robot. A study was conducted with workers at an industrial warehouse. The time taken to understand the behavior of the system was recorded. Electrodermal activity and pupil diameter were recorded to measure the increase in stress and cognitive load while interacting with an autonomous system, using these measurements as a proxy to quantify trust in autonomous systems.

  • 28.
    Chadalavada, Ravi Teja
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schindler, Maike
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Palm, Rainer
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses (2018). In: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden / [ed] Case K. & Thorvald P., Amsterdam, Netherlands: IOS Press, 2018, pp. 253-258. Conference paper (Refereed)
    Abstract [en]

    Robots in human co-habited environments need human-aware task and motion planning, ideally responding to people’s motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. This paper investigates the possibility of human-to-robot implicit intention transference solely from eye gaze data.  We present experiments in which humans wearing eye-tracking glasses encountered a small forklift truck under various conditions. We evaluate how the observed eye gaze patterns of the participants related to their navigation decisions. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human eye gaze for early obstacle avoidance.

  • 29.
    Chadalavada, Ravi Teja
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany.
    Palm, Rainer
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction (2020). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830. Journal article (Refereed)
    Abstract [en]

    Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human gaze for early obstacle avoidance.

  • 30.
    Cirillo, Marcello
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Uras, Tansel
    Department of Computer Science, University of Southern California, Los Angeles, USA.
    Koenig, Sven
    Department of Computer Science, University of Southern California, Los Angeles, USA.
    Integrated Motion Planning and Coordination for Industrial Vehicles (2014). In: Proceedings of the 24th International Conference on Automated Planning and Scheduling, AAAI Press, 2014. Conference paper (Refereed)
    Abstract [en]

    A growing interest in the industrial sector for autonomous ground vehicles has prompted significant investment in fleet management systems. Such systems need to accommodate on-line externally imposed temporal and spatial requirements, and to adhere to them even in the presence of contingencies. Moreover, a fleet management system should ensure correctness, i.e., refuse to commit to requirements that cannot be satisfied. We present an approach to obtain sets of alternative execution patterns (called trajectory envelopes) which provide these guarantees. The approach relies on a constraint-based representation shared among multiple solvers, each of which progressively refines trajectory envelopes following a least commitment principle.

  • 31.
    Della Corte, Bartolomeo
    et al.
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, pp. 902-909. Journal article (Refereed)
    Abstract [en]

    The ability to maintain and continuously update geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In combination with that, our pipeline observes the parameter evolution in long-term operation to detect possible changes in the parameter values. The experiments conducted on real data show a smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.

  • 32.
    Fleck, Sven
    et al.
    University of Tübingen.
    Busch, Florian
    University of Tübingen.
    Biber, Peter
    University of Tübingen.
    Strasser, Wolfgang
    University of Tübingen.
    Andreasson, Henrik
    Örebro universitet, Institutionen för teknik.
    Omnidirectional 3D modeling on a mobile robot using graph cuts (2005). In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation: ICRA 2005, 2005, pp. 1748-1754. Conference paper (Refereed)
    Abstract [en]

    For a mobile robot it is a natural task to build a 3D model of its environment. Such a model is not only useful for planning robot actions but also for providing a remote human supervisor with a realistic visualization of the robot’s state with respect to the environment. Acquiring 3D models of environments is also an important task in its own right, with many possible applications such as creating virtual interactive walkthroughs or serving as a basis for 3D-TV.

    In this paper we present our method for acquiring a 3D model using a mobile robot equipped with a laser scanner and a panoramic camera. The method is based on calculating dense depth maps for panoramic images by stereo matching pairs of panoramic images taken from different positions. Traditional 2D-SLAM using laser-scan-matching is used to determine the needed camera poses. To obtain high-quality results we use a state-of-the-art stereo matching algorithm – the graph cut method – and describe the necessary modifications to handle panoramic images, along with specialized post-processing methods.

  • 33.
    Krug, Robert
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Tincani, Vinicio
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mosberger, Rafael
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fantoni, Gualtiero
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Bicchi, Antonio
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    On Using Optimization-based Control instead of Path-Planning for Robot Grasp Motion Generation (2015). In: IEEE International Conference on Robotics and Automation (ICRA) - Workshop on Robotic Hands, Grasping, and Manipulation, 2015. Conference paper (Refereed)
  • 34.
    Krug, Robert
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mosberger, Rafael
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Next Step in Robot Commissioning: Autonomous Picking and Palletizing (2016). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no. 1, pp. 546-553. Journal article (Refereed)
    Abstract [en]

    So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.

  • 35.
    Kurtser, Polina
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Ringdahl, Ola
    Department of Computing science, Umeå University, Umeå, Sweden.
    Rotstein, Nati
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik. Centre for Applied Autonomous Sensor Systems.
    PointNet and geometric reasoning for detection of grape vines from single frame RGB-D data in outdoor conditions (2020). In: Proceedings of the Northern Lights Deep Learning Workshop, NLDL, 2020, Vol. 1, pp. 1-6. Conference paper (Refereed)
    Abstract [en]

    In this paper we present the usage of PointNet, a deep neural network that consumes raw unordered point clouds, for detection of grape vine clusters in outdoor conditions. We investigate the added value of feeding the detection network with both RGB and depth, contrary to the common practice in agricultural robotics of relying on RGB only. A total of 5057 point clouds (1033 manually annotated and 4024 annotated using geometric reasoning) were collected in a field experiment conducted in outdoor conditions on 9 grape vines and 5 plants. The detection results show an overall accuracy of 91% (average class accuracy of 74%, precision 53%, recall 48%) for RGBXYZ data and a significant drop in recall for RGB or XYZ data only. These results suggest that the usage of depth cameras for vision in agricultural robotics is crucial for crops where the color contrast between the crop and the background is complex. The results also suggest that geometric reasoning can be used to increase training set size, a major bottleneck in the development of agricultural vision systems.
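    The RGBXYZ input described above is simply geometry and colour stacked per point. A minimal sketch, assuming colours are rescaled to [0, 1] so the two modalities have comparable ranges (the rescaling convention is an assumption of this sketch, not taken from the paper):

```python
import numpy as np

def make_rgbxyz(points_xyz, colors_rgb):
    """Stack XYZ geometry and RGB colour into a 6-channel per-point array,
    the RGBXYZ representation fed to a PointNet-style network."""
    xyz = np.asarray(points_xyz, dtype=np.float32)
    rgb = np.asarray(colors_rgb, dtype=np.float32) / 255.0  # scale to [0, 1]
    return np.concatenate([xyz, rgb], axis=1)  # shape (N, 6)
```

    Dropping the last three columns yields the XYZ-only ablation and dropping the first three the RGB-only one, which is the comparison the paper reports.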

  • 36.
    Liao, Qianfang
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Sun, Da
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Point Set Registration for 3D Range Scans Using Fuzzy Cluster-based Metric and Efficient Global Optimization. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539. Journal article (Refereed)
    Abstract [en]

    This study presents a new point set registration method to align 3D range scans. In our method, fuzzy clusters are utilized to represent a scan, and the registration of two given scans is realized by minimizing a fuzzy weighted sum of the distances between their fuzzy cluster centers. This fuzzy cluster-based metric has a broad basin of convergence and is robust to noise. Moreover, this metric provides analytic gradients, allowing standard gradient-based algorithms to be applied for optimization. Based on this metric, the outlier issues are addressed. In addition, for the first time in rigid point set registration, a registration quality assessment in the absence of ground truth is provided. Furthermore, given specified rotation and translation spaces, we derive the upper and lower bounds of the fuzzy cluster-based metric and develop a branch-and-bound (BnB)-based optimization scheme, which can globally minimize the metric regardless of the initialization. This optimization scheme is performed in an efficient coarse-to-fine fashion: First, fuzzy clustering is applied to describe each of the two given scans by a small number of fuzzy clusters. Then, a global search, which integrates BnB and gradient-based algorithms, is implemented to achieve a coarse alignment for the two scans. During the global search, the registration quality assessment offers a beneficial stop criterion to detect whether a good result is obtained. Afterwards, a relatively large number of points of the two scans are directly taken as the fuzzy cluster centers, and then, the coarse solution is refined to be an exact alignment using the gradient-based local convergence. Compared to existing counterparts, this optimization scheme makes a large improvement in terms of robustness and efficiency by virtue of the fuzzy cluster-based metric and the registration quality assessment. In the experiments, the registration results of several 3D range scan pairs demonstrate the accuracy and effectiveness of the proposed method, as well as its superiority to state-of-the-art registration approaches.
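    The "fuzzy weighted sum of distances between cluster centers" that the method minimizes can be sketched as follows. This is an illustrative stand-in, not the paper's exact formulation: `memberships` here is a generic fuzzy weight matrix linking cluster centers of the two scans, and the squared-distance form is an assumption of the sketch.

```python
import numpy as np

def fuzzy_metric(centers_a, centers_b, memberships, R, t):
    """Fuzzy-weighted sum of squared distances between the transformed cluster
    centers of scan A and the cluster centers of scan B.

    memberships[i, j] is the fuzzy degree linking center i of A to center j
    of B; a registration method minimizes this value over (R, t).
    """
    moved = centers_a @ R.T + t                       # rigidly transform A's centers
    diff = moved[:, None, :] - centers_b[None, :, :]  # (Na, Nb, 3) pairwise residuals
    return float(np.sum(memberships * np.sum(diff ** 2, axis=2)))
```

    At a perfect alignment with one-to-one memberships the metric is zero, and any translation offset raises it, which is the basin-of-convergence behaviour a gradient-based or BnB optimizer exploits.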

  • 37.
    Lowry, Stephanie
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, pp. 957-964. Journal article (Refereed)
    Abstract [en]

    This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
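    The compression from a high-dimensional descriptor to a compact binary signature, and matching by Hamming distance, can be sketched as follows. This is a generic illustration of sign binarization, assuming the details (projection, bit budget, thresholding) of the paper's 256-bit signatures are elided.

```python
def binarize(vec):
    """Reduce a real-valued descriptor (e.g. a VLAD vector) to a compact
    binary signature by keeping only the sign of each component."""
    return [1 if v > 0 else 0 for v in vec]

def hamming(sig_a, sig_b):
    """Hamming distance between two binary signatures; place matching then
    reduces to finding the database signature with the smallest distance."""
    return sum(a != b for a, b in zip(sig_a, sig_b))
```

    Hamming distances over short bit strings are why such signatures give a large speed-up over direct feature matching: comparison is a XOR and a popcount rather than a float-vector distance.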

  • 38.
    Lowry, Stephanie
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    LOGOS: Local geometric support for high-outlier spatial verification (2018). Conference paper (peer-reviewed)
    Abstract [en]

    This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.
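
    A much-simplified editorial sketch of the underlying intuition: correct matches share consistent geometric change, so voting for the dominant change and keeping agreeing matches separates inliers from outliers. Note that LOGOS proper uses scale and orientation support from local feature neighbourhoods; the toy below only takes a single global orientation vote, and all values are invented:

```python
def orientation_vote_inliers(matches, tol=10.0):
    """Toy inlier selection: histogram-vote for the dominant relative
    orientation change (degrees) and keep matches that agree with it."""
    bins = [0] * 36
    for d in matches:
        bins[int(d % 360) // 10] += 1
    mode = max(range(36), key=lambda i: bins[i]) * 10 + 5  # bin centre
    # circular distance of each match's orientation change to the mode
    return [abs(((d - mode + 180) % 360) - 180) <= tol for d in matches]

diffs = [31, 28, 33, 30, 150, 275, 29]  # mostly ~30 deg, two outliers
print(orientation_vote_inliers(diffs))
```

Even this crude vote is robust to a high proportion of outliers, which hints at why neighbourhood-support methods can outperform RANSAC when outliers dominate.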

  • 39.
    Lowry, Stephanie
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Visual place recognition techniques for pose estimation in changing environments (2016). In: Visual Place Recognition: What is it Good For? workshop, Robotics: Science and Systems (RSS) 2016, 2016. Conference paper (other academic)
    Abstract [en]

    This paper investigates whether visual place recognition techniques can be used to provide pose estimation information for a visual SLAM system operating long-term in an environment where the appearance may change a great deal. It demonstrates that a combination of a conventional SURF feature detector and a condition-invariant feature descriptor such as HOG or conv3 can provide a method of determining the relative transformation between two images, even when there is both appearance change and rotation or viewpoint change.
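
    Once cross-condition feature matches are available, the relative transformation the abstract mentions can be recovered in closed form. Below is a generic 2D rigid-alignment sketch (an editorial illustration using the standard centroid-plus-rotation, Kabsch-style solution, not the paper's code); the synthetic points are invented:

```python
import math

def relative_transform_2d(pts_a, pts_b):
    """Estimate the rigid transform (rotation theta, translation t) that
    maps matched 2D points pts_a onto pts_b, via centred correspondences."""
    ca = [sum(p[i] for p in pts_a) / len(pts_a) for i in (0, 1)]
    cb = [sum(p[i] for p in pts_b) / len(pts_b) for i in (0, 1)]
    s_dot = s_cross = 0.0
    for a, b in zip(pts_a, pts_b):
        ax, ay = a[0] - ca[0], a[1] - ca[1]
        bx, by = b[0] - cb[0], b[1] - cb[1]
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # t = centroid_b - R * centroid_a
    tx = cb[0] - (c * ca[0] - s * ca[1])
    ty = cb[1] - (s * ca[0] + c * ca[1])
    return theta, (tx, ty)

# Synthetic check: rotate by 45 degrees and translate by (1, 2).
a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
th = math.radians(45)
b = [(math.cos(th) * x - math.sin(th) * y + 1.0,
      math.sin(th) * x + math.cos(th) * y + 2.0) for x, y in a]
theta, t = relative_transform_2d(a, b)
print(round(math.degrees(theta), 3), round(t[0], 3), round(t[1], 3))
```

The paper's contribution is upstream of this step: obtaining matches that survive both appearance change and viewpoint change, e.g. SURF keypoints described with HOG or conv3 features.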

    Download full text (pdf)
    Visual place recognition techniques for pose estimation in changing environments
  • 40.
    Magnusson, Martin
    et al.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Nüchter, A.
    Jacobs University Bremen, Bremen, Germany.
    Lilienthal, Achim J.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Appearance-based loop detection from 3D laser data using the normal distributions transform (2009). In: IEEE International Conference on Robotics and Automation 2009 (ICRA '09), IEEE conference proceedings, 2009, pp. 23-28. Conference paper (other academic)
    Abstract [en]

    We propose a new approach to appearance-based loop detection from metric 3D maps, exploiting the NDT surface representation. Locations are described with feature histograms based on surface orientation and smoothness, and loop closure can be detected by matching feature histograms. We also present a quantitative performance evaluation using two real-world data sets, showing that the proposed method works well in different environments. © 2009 IEEE.

    Download full text (pdf)
    Appearance-Based Loop Detection from 3D Laser Data Using the Normal Distributions Transform
  • 41.
    Magnusson, Martin
    et al.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Nüchter, Andreas
    Jacobs University Bremen.
    Lilienthal, Achim J.
    Örebro universitet, Akademin för naturvetenskap och teknik.
    Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform (2009). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 26, no. 11-12, pp. 892-914. Journal article (peer-reviewed)
    Abstract [en]

    We propose a new approach to appearance-based loop detection for mobile robots, using three-dimensional (3D) laser scans. Loop detection is an important problem in the simultaneous localization and mapping (SLAM) domain, and, because it can be seen as the problem of recognizing previously visited places, it is an example of the data association problem. Without a flat-floor assumption, two-dimensional laser-based approaches are bound to fail in many cases. Two of the problems with 3D approaches that we address in this paper are how to handle the greatly increased amount of data and how to efficiently obtain invariance to 3D rotations. We present a compact representation of 3D point clouds that is still discriminative enough to detect loop closures without false positives (i.e., detecting loop closure where there is none). A low false-positive rate is very important because wrong data association could have disastrous consequences in a SLAM algorithm. Our approach uses only the appearance of 3D point clouds to detect loops and requires no pose information. We exploit the normal distributions transform surface representation to create feature histograms based on surface orientation and smoothness. The surface shape histograms compress the input data by two to three orders of magnitude. Because of the high compression rate, the histograms can be matched efficiently to compare the appearance of two scans. Rotation invariance is achieved by aligning scans with respect to dominant surface orientations. We also propose to use expectation maximization to fit a gamma mixture model to the output similarity measures in order to automatically determine the threshold that separates scans at loop closures from nonoverlapping ones. We discuss the problem of determining ground truth in the context of loop detection and the difficulties in comparing the results of the few available methods based on range information. Furthermore, we present quantitative performance evaluations using three real-world data sets, one of which is highly self-similar, showing that the proposed method achieves high recall rates (percentage of correctly identified loop closures) at low false-positive rates in environments with different characteristics.
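
    To make the histogram-matching step concrete, here is an editorial toy sketch (not from the paper) of comparing two normalised surface-shape histograms and thresholding the difference. The histograms and the hand-set threshold are invented; in the paper the decision threshold is instead learned automatically by fitting a gamma mixture model with EM:

```python
def histogram_difference(h1, h2):
    """L1 difference between two histograms after normalisation; a small
    difference suggests the two scans show the same place (loop candidate)."""
    n1, n2 = sum(h1), sum(h2)
    return sum(abs(a / n1 - b / n2) for a, b in zip(h1, h2))

# Toy appearance histograms (e.g. counts of surface-orientation classes).
scan_here = [40, 10, 5, 45]
scan_revisit = [38, 12, 6, 44]    # same place, slightly different counts
scan_elsewhere = [5, 50, 40, 5]   # different place
threshold = 0.2                   # hand-set here; learned via EM in the paper
print(histogram_difference(scan_here, scan_revisit) < threshold,
      histogram_difference(scan_here, scan_elsewhere) < threshold)
```

Because each histogram compresses a full 3D scan by two to three orders of magnitude, such comparisons are cheap enough to run against every previously visited place.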

    Download full text (pdf)
  • 42.
    Magnusson, Martin
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kucner, Tomasz Piotr
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Gholami Shahbandi, Saeed
    IS lab, Halmstad University, Halmstad, Sweden.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Semi-Supervised 3D Place Categorisation by Descriptor Clustering (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 620-625. Conference paper (peer-reviewed)
    Abstract [en]

    Place categorisation, i.e., learning to group perception data into categories based on appearance, typically uses supervised learning and either visual or 2D range data.

    This paper shows place categorisation from 3D data without any training phase. We show that, by leveraging the NDT histogram descriptor to compactly encode 3D point cloud appearance, in combination with standard clustering techniques, it is possible to classify public indoor data sets with accuracy comparable to, and sometimes better than, previous supervised training methods. We also demonstrate the effectiveness of this approach on outdoor data, with the added benefit of being able to hierarchically categorise places into sub-categories based on a user-selected threshold.

    This technique relieves users of providing relevant training data, and only requires them to adjust the sensitivity to the number of place categories and to provide a semantic label for each category after the process is completed.
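
    The clustering step can be illustrated with a minimal greedy sketch (editorial; the paper applies standard clustering techniques to NDT histogram descriptors, and the exact algorithm may differ). The `threshold` argument plays the role of the user-selected sensitivity to the number of categories:

```python
def cluster_descriptors(descs, threshold):
    """Greedy single-pass clustering: each descriptor joins the nearest
    existing category if it is within `threshold` (Euclidean distance),
    otherwise it founds a new category. Returns one label per descriptor."""
    labels = []
    reps = []  # one representative descriptor per category
    for d in descs:
        best, best_dist = None, None
        for i, r in enumerate(reps):
            dist = sum((a - b) ** 2 for a, b in zip(d, r)) ** 0.5
            if best_dist is None or dist < best_dist:
                best, best_dist = i, dist
        if best is not None and best_dist <= threshold:
            labels.append(best)
        else:
            reps.append(d)
            labels.append(len(reps) - 1)
    return labels

descs = [(0.1, 0.0), (0.15, 0.05), (0.9, 1.0), (0.95, 0.9), (0.12, 0.02)]
print(cluster_descriptors(descs, 0.3))   # coarse threshold: two categories
print(cluster_descriptors(descs, 0.05))  # fine threshold: more sub-categories
```

Lowering the threshold splits categories into sub-categories, which mirrors the hierarchical categorisation behaviour described in the abstract.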

    Download full text (pdf)
  • 43.
    Mansouri, Masoumeh
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Hybrid Reasoning for Multi-robot Drill Planning in Open-pit Mines (2016). In: Acta Polytechnica, ISSN 1210-2709, E-ISSN 1805-2363, Vol. 56, no. 1, pp. 47-56. Journal article (peer-reviewed)
    Abstract [en]

    Fleet automation often involves solving several strongly correlated sub-problems, including task allocation, motion planning, and coordination. Solutions need to account for very specific, domain-dependent constraints. In addition, several aspects of the overall fleet management problem become known only online. We propose a method for solving the fleet-management problem grounded in a heuristically guided search in the space of mutually feasible solutions to sub-problems. We focus on a mining application which requires online contingency handling and the accommodation of many domain-specific constraints. As contingencies occur, efficient reasoning is performed to adjust the plan online for the entire fleet.

    Download full text (pdf)
  • 44.
    Mansouri, Masoumeh
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pecora, Frederico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Towards Hybrid Reasoning for Automated Industrial Fleet Management (2015). In: 24th International Joint Conference on Artificial Intelligence, Workshop on Hybrid Reasoning, AAAI Press, 2015. Conference paper (peer-reviewed)
    Abstract [en]

    More and more industrial applications require fleets of autonomous ground vehicles. Today's solutions to the management of these fleets still largely rely on fixed set-ups of the system and manually specified ad-hoc rules. Our aim is to replace current practice with autonomous fleets and fleet management systems that are easily adaptable to new set-ups and environments, can accommodate human-intelligible rules, and guarantee feasible and meaningful behavior of the fleet. We propose to cast the problem of autonomous fleet management as a meta-CSP that integrates task allocation, coordination, and motion planning. We discuss the design choices of the approach and how it caters to the need for hybrid reasoning in terms of symbolic, metric, temporal, and spatial constraints. We also comment on a preliminary realization of the system.

    Download full text (pdf)
  • 45.
    Mielle, Malcolm
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Using emergency maps to add not yet explored places into SLAM (2017). Conference paper (other academic)
    Abstract [en]

    While using robots in search and rescue missions would help ensure the safety of first responders, a key issue is the time needed by the robot to operate. Even though SLAM is getting faster and faster, it might still be too slow to enable the use of robots in critical situations. One way to speed up operation time is to use prior information.

    We aim at integrating emergency maps into SLAM to complete the SLAM map with information about not yet explored parts of the environment. By integrating prior information, we can speed up exploration or provide valuable prior information for navigation, for example in case of sensor blackout or failure. However, while extensively used by firemen in their operations, emergency maps are not easy to integrate into SLAM since they are often not up to date or drawn at inconsistent scales.

    The main challenge we are tackling is dealing with the imperfect scale of the rough emergency maps and integrating them with the online SLAM map, in addition to challenges due to incorrect matches between these two types of map. We developed a formulation of graph-based SLAM that incorporates information from an emergency map, and propose a novel optimization process adapted to this formulation.

    We extract corners from the emergency map and the SLAM map, between which we find correspondences using a distance measure. We then build a graph representation associating information from the emergency map and the SLAM map. Corners in the emergency map, corners in the robot map, and robot poses are added as nodes in the graph, while odometry, corner observations, walls in the emergency map, and corner associations are added as edges. To conserve the topology of the emergency map, but correct its possible errors in scale, edges representing the emergency map's walls are given a covariance so that they are easy to extend or shrink but hard to rotate. Correspondences between corners represent a zero transformation, so that the optimization matches them as closely as possible. The graph optimization uses a combination of robust kernels: we first use the Huber kernel, to converge toward a good solution, followed by Dynamic Covariance Scaling, to handle the remaining errors.

    We demonstrate our system in an office environment, running SLAM online during exploration. Using the map enhanced by information from the emergency map, the robot was able to plan the shortest path toward a place it had not yet explored. This capability can be a real asset in complex buildings where exploration can take a long time. It can also reduce exploration time by avoiding the exploration of dead-ends, or the search for specific places, since the robot knows where it is in the emergency map.
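
    The anisotropic wall-edge covariance described above (easy to extend or shrink, hard to rotate) can be illustrated with an editorial toy cost function; the sigma values and wall vectors below are invented for illustration:

```python
import math

def wall_edge_cost(expected, observed, sigma_len=1.0, sigma_perp=0.05):
    """Mahalanobis-style cost of an observed wall vector against the
    expected one from the emergency map. The along-wall direction gets a
    large standard deviation (cheap to stretch or shrink) while the
    perpendicular direction gets a small one (expensive to rotate)."""
    ex, ey = expected
    length = math.hypot(ex, ey)
    ux, uy = ex / length, ey / length   # unit vector along the wall
    dx, dy = observed[0] - ex, observed[1] - ey
    along = dx * ux + dy * uy           # error component along the wall
    perp = -dx * uy + dy * ux           # error component across the wall
    return (along / sigma_len) ** 2 + (perp / sigma_perp) ** 2

wall = (2.0, 0.0)       # expected wall vector from the emergency map
stretched = (2.5, 0.0)  # same direction, different length: cheap
rotated = (2.0, 0.5)    # rotated wall: expensive
print(wall_edge_cost(wall, stretched) < wall_edge_cost(wall, rotated))  # True
```

In the graph, such costs let the optimizer rescale the emergency map's rooms while preserving its topology.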

  • 46.
    Mielle, Malcolm
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    SLAM auto-complete: completing a robot map using an emergency map (2017). In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, pp. 35-40, article id 8088137. Conference paper (peer-reviewed)
    Abstract [en]

    In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness might be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping mainly focused on accurate maps or aerial images, such data are not always possible to get, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to get and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

    We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The graph-SLAM is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

    We typically have more than 50% wrong correspondences in the settings studied in this paper, and the method we propose handles them correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still get the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information it did not represent, such as closed doors or new walls.
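
    The "combination of robust kernels" is, according to the companion abstract above, the Huber kernel followed by Dynamic Covariance Scaling (DCS). As an editorial sketch, their per-residual down-weighting behaviour looks as follows; the kernel parameters are illustrative, not the paper's:

```python
def huber_weight(e, k=1.0):
    """IRLS weight of the Huber kernel: full weight for small residuals e,
    linearly down-weighted beyond the threshold k."""
    a = abs(e)
    return 1.0 if a <= k else k / a

def dcs_weight(e, phi=1.0):
    """Dynamic Covariance Scaling weight: the scaling s = min(1, 2*phi /
    (phi + e^2)) is applied to the residual, so the effective weight is
    s squared; gross outliers are almost entirely ignored."""
    s = min(1.0, 2.0 * phi / (phi + e * e))
    return s ** 2

# Small residuals keep (almost) full weight; gross outliers are suppressed.
print(huber_weight(0.5), huber_weight(10.0))   # 1.0 vs 0.1
print(dcs_weight(0.5), round(dcs_weight(10.0), 4))
```

This is consistent with the abstract's claim of tolerating a majority of wrong correspondences: under DCS, a wildly wrong corner association contributes almost nothing to the optimization.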

    Download full text (pdf)
    SLAM auto-complete: completing a robot map using an emergency map
  • 47.
    Mosberger, Rafael
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    An Inexpensive Monocular Vision System for Tracking Humans in Industrial Environments (2013). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2013, pp. 5850-5857. Conference paper (peer-reviewed)
    Abstract [en]

    We report on a novel vision-based method for reliable human detection from vehicles operating in industrial environments in the vicinity of workers. By exploiting the fact that reflective vests are standard safety equipment on most industrial worksites, we use a single camera system and active IR illumination to detect humans by identifying the reflective vest markers. Adopting a sparse feature-based approach, we classify vest markers against other reflective material and perform supervised learning of the object distance based on local image descriptors. The integration of the resulting per-feature 3D position estimates in a particle filter finally allows human tracking in conditions ranging from broad daylight to complete darkness.

  • 48.
    Mosberger, Rafael
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Estimating the 3d position of humans wearing a reflective vest using a single camera system (2012). In: Proceedings of the International Conference on Field and Service Robotics (FSR), Springer, 2012. Conference paper (peer-reviewed)
    Abstract [en]

    This paper presents a novel possible solution for people detection and estimation of their 3D position in challenging shared environments. Addressing safety critical applications in industrial environments, we make the basic assumption that people wear reflective vests. In order to detect these vests and to discriminate them from other reflective material, we propose an approach based on a single camera equipped with an IR flash. The camera acquires pairs of images, one with and one without IR flash, in short succession. The images forming a pair are then related to each other through feature tracking, which makes it possible to discard features for which the relative intensity difference is small and which are thus not believed to belong to a reflective vest. Next, the local neighbourhood of the remaining features is further analysed. First, a Random Forest classifier is used to discriminate between features caused by a reflective vest and features caused by some other reflective materials. Second, the distance between the camera and the vest features is estimated using a Random Forest regressor. The proposed system was evaluated in one indoor and two challenging outdoor scenarios. Our results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor, even under direct exposure to sunlight.
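
    The flash/no-flash filtering step can be sketched as follows (editorial toy example; the real system tracks sparse features between the two images rather than comparing fixed indices, and the intensity values and threshold below are invented):

```python
def vest_candidates(flash, no_flash, min_rel_diff=0.5):
    """Keep the indices of tracked features whose intensity rises strongly
    when the IR flash fires: retro-reflective vest markers return far more
    light than the rest of the scene. min_rel_diff is an invented threshold."""
    keep = []
    for i, (f, n) in enumerate(zip(flash, no_flash)):
        # a zero no-flash reading means any flash response is a huge rise
        if n == 0 or (f - n) / max(f, n) >= min_rel_diff:
            keep.append(i)
    return keep

flash    = [200, 120, 250, 90, 230]  # feature intensities with the IR flash
no_flash = [190,  20, 240, 80,  30]  # the same tracked features, flash off
print(vest_candidates(flash, no_flash))  # indices of likely vest markers
```

In the paper, the surviving features are then passed to a Random Forest classifier (vest vs. other reflective material) and a Random Forest distance regressor.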

  • 49.
    Mosberger, Rafael
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Estimating the 3D Position of Humans Wearing a Reflective Vest Using a Single Camera System (2014). In: Field and Service Robotics: Results of the 8th International Conference / [ed] Yoshida, Kazuya; Tadokoro, Satoshi, Springer Berlin/Heidelberg, 2014, pp. 143-157. Book chapter, part of anthology (peer-reviewed)
    Abstract [en]

    This chapter presents a novel possible solution for people detection and estimation of their 3D position in challenging shared environments. Addressing safety critical applications in industrial environments, we make the basic assumption that people wear reflective vests. In order to detect these vests and to discriminate them from other reflective material, we propose an approach based on a single camera equipped with an IR flash. The camera acquires pairs of images, one with and one without IR flash, in short succession. The images forming a pair are then related to each other through feature tracking, which makes it possible to discard features for which the relative intensity difference is small and which are thus not believed to belong to a reflective vest. Next, the local neighbourhood of the remaining features is further analysed. First, a Random Forest classifier is used to discriminate between features caused by a reflective vest and features caused by some other reflective materials. Second, the distance between the camera and the vest features is estimated using a Random Forest regressor. The proposed system was evaluated in one indoor and two challenging outdoor scenarios. Our results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor, even under direct exposure to sunlight.

  • 50.
    Mosberger, Rafael
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery (2014). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 14, no. 10, pp. 17952-17980. Journal article (peer-reviewed)
    Abstract [en]

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
