Örebro University Publications

1 - 50 of 95
  • 1.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Castellano-Quero, Manuel
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl: Introspection for robust radar and lidar perception in diverse environments using differential entropy (2022). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 155, article id 104136. Article in journal (Refereed)
    Abstract [en]

    Robust perception is an essential component to enable long-term operation of mobile robots. It depends on failure resilience through reliable sensor data and pre-processing, as well as failure awareness through introspection, for example the ability to self-assess localization performance. This paper presents CorAl: a principled, intuitive, and generalizable method to measure the quality of alignment between pairs of point clouds, which learns to detect alignment errors in a self-supervised manner. CorAl compares the differential entropy in the point clouds separately with the entropy in their union to account for the entropy inherent to the scene. By making use of dual entropy measurements, we obtain a quality metric that is highly sensitive to small alignment errors and still generalizes well to unseen environments. In this work, we extend our previous work on lidar-only CorAl to radar data by proposing a two-step filtering technique that produces high-quality point clouds from noisy radar scans. Thus, we target robust perception in two ways: by introducing a method that introspectively assesses alignment quality, and by applying it to an inherently robust sensor modality. We show that our filtering technique combined with CorAl can be applied to the problem of alignment classification, and that it detects small alignment errors in urban settings with up to 98% accuracy, and with up to 96% accuracy when trained only in a different environment. Our lidar and radar experiments demonstrate that CorAl outperforms previous methods both on the ETH lidar benchmark, which includes several indoor and outdoor environments, and on the large-scale Oxford and MulRan radar data sets for urban traffic scenarios. The results also demonstrate that CorAl generalizes very well across substantially different environments without the need for retraining.
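
    The dual-entropy measurement described above lends itself to a compact illustration. The following Python fragment is a minimal sketch of the idea, not the authors' implementation; the fixed neighbourhood radius and the use of SciPy's KD-tree are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_entropy(points, centers, radius=0.5):
    """Mean differential entropy of Gaussians fitted to the neighbourhood
    of each center: H = 0.5 * ln((2*pi*e)^3 * |Sigma|)."""
    tree = cKDTree(points)
    entropies = []
    for c in centers:
        idx = tree.query_ball_point(c, radius)
        if len(idx) < 4:  # too few points for a stable covariance estimate
            continue
        cov = np.cov(points[idx].T)
        _, logdet = np.linalg.slogdet(2.0 * np.pi * np.e * cov)
        entropies.append(0.5 * logdet)
    return np.mean(entropies) if entropies else float("nan")

def coral_quality(cloud_a, cloud_b, radius=0.5):
    """Joint minus separate entropy; larger values suggest misalignment,
    since a correct alignment should add little entropy beyond the scene's own."""
    joint = np.vstack([cloud_a, cloud_b])
    h_joint = local_entropy(joint, joint, radius)
    h_separate = 0.5 * (local_entropy(cloud_a, cloud_a, radius)
                        + local_entropy(cloud_b, cloud_b, radius))
    return h_joint - h_separate
```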

  • 2.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Mattias
    MRO Lab of the AASS Research Centre, Örebro University, Örebro, Sweden.
    Kubelka, Vladimír
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    TBV Radar SLAM - Trust but Verify Loop Candidates (2023). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no 6, p. 3613-3620. Article in journal (Refereed)
    Abstract [en]

    Robust SLAM in large-scale environments requires fault resilience and awareness at multiple stages, from sensing and odometry estimation to loop closure. In this work, we present TBV (Trust But Verify) Radar SLAM, a method for radar SLAM that introspectively verifies loop closure candidates. TBV Radar SLAM achieves a high correct-loop-retrieval rate by combining multiple place-recognition techniques: tightly coupled place similarity and odometry uncertainty search, creating loop descriptors from origin-shifted scans, and delaying loop selection until after verification. Robustness to false constraints is achieved by carefully verifying and selecting the most likely ones from multiple loop constraints. Importantly, the verification and selection are carried out after registration, when additional sources of loop evidence can easily be computed. We integrate our loop retrieval and verification method with a robust odometry pipeline within a pose graph framework. In evaluations on public benchmarks, we found that TBV Radar SLAM achieves 65% lower error than the previous state of the art. We also show that it generalizes across environments without needing to change any parameters. We provide the open-source implementation at https://github.com/dan11003/tbv_slam_public

    The full text will be freely available from 2025-06-01 00:00
  • 3.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Lowry, Stephanie
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Improving Localisation Accuracy using Submaps in warehouses (2018). Conference paper (Other academic)
    Abstract [en]

    This paper presents a method for localisation in hybrid metric-topological maps built using only local information, that is, only measurements that were captured by the robot when it was in a nearby location. The motivation is that observations are typically range and viewpoint dependent, and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects a submap based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving localisation performance by up to 40%.

  • 4.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Lowry, Stephanie
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality (2019). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    This paper targets high-precision robot localization. We address a general problem of voxel-based map representations: the expressiveness of the map is fundamentally limited by its resolution, since integrating measurements taken from different perspectives introduces imprecisions and thus reduces localization accuracy. We propose SuPer maps that contain one Submap per Perspective, representing a particular view of the environment. For localization, a robot then selects the submap that best explains the environment from its perspective. We propose SuPer mapping as an offline refinement step between initial SLAM and deploying autonomous robots for navigation. We evaluate the proposed method on simulated and real-world data that represent an important use case of an industrial scenario with high accuracy requirements in a repetitive environment. Our results demonstrate a significantly improved localization accuracy, up to 46% better compared to localization in global maps, and up to 25% better compared to alternative submapping approaches.

  • 5.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CFEAR Radarodometry - Conservative Filtering for Efficient and Accurate Radar Odometry (2021). In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), IEEE, 2021, p. 5462-5469. Conference paper (Refereed)
    Abstract [en]

    This paper presents the accurate, highly efficient, and learning-free method CFEAR Radarodometry for large-scale radar odometry estimation. By using a filtering technique that keeps the k strongest returns per azimuth and by additionally filtering the radar data in Cartesian space, we are able to compute a sparse set of oriented surface points for efficient and accurate scan matching. Registration is carried out by minimizing a point-to-line metric, and robustness to outliers is achieved using a Huber loss. We were able to additionally reduce drift by jointly registering the latest scan to a history of keyframes, and found that our odometry method generalizes to different sensor models and datasets without changing a single parameter. We evaluate our method in three widely different environments and demonstrate an improvement over the spatially cross-validated state of the art, with an overall translation error of 1.76% in a public urban radar odometry benchmark, running at 55 Hz on a single laptop CPU thread.
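
    The k-strongest filtering step described above is easy to sketch. The fragment below is a minimal illustration rather than the authors' code; the polar (azimuth x range-bin) intensity layout and the noise-floor parameter are assumptions:

```python
import numpy as np

def k_strongest(polar_scan, k=12, min_intensity=0.0):
    """Keep the k strongest returns per azimuth of a polar radar scan.

    polar_scan: (num_azimuths, num_range_bins) intensity array (assumed layout).
    Returns a boolean mask marking the retained returns.
    """
    mask = np.zeros_like(polar_scan, dtype=bool)
    for a in range(polar_scan.shape[0]):
        row = polar_scan[a]
        top = np.argsort(row)[-k:]            # indices of the k strongest bins
        keep = top[row[top] > min_intensity]  # drop bins at/below the noise floor
        mask[a, keep] = True
    return mask
```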

    Download full text (pdf)
    CFEAR Radarodometry - Conservative Filtering for Efficient and Accurate Radar Odometry
  • 6.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, Örebro, Sweden; Computer Engineering Department, University of Baghdad, Baghdad, Iraq.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments (2023). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no 2, p. 1476-1495. Article in journal (Refereed)
    Abstract [en]

    This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves our state-of-the-art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.

  • 7.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Oriented surface points for efficient and accurate radar odometry (2021). Conference paper (Refereed)
    Abstract [en]

    This paper presents an efficient and accurate radar odometry pipeline for large-scale localization. We propose a radar filter that keeps only the strongest reflections per azimuth that exceed the expected noise level. The filtered radar data are used to incrementally estimate odometry by registering the current scan with a nearby keyframe. By modeling local surfaces, we were able to register scans by minimizing a point-to-line metric and accurately estimate odometry from sparse point sets, hence improving efficiency. Specifically, we found that a point-to-line metric yields significant improvements compared to a point-to-point metric when matching sparse sets of surface points. Preliminary results from an urban odometry benchmark show that our odometry pipeline is accurate and efficient compared to existing methods, with an overall translation error of 2.05%, down from 2.78% for the previously best published method, running at 12.5 ms per frame without the need for environment-specific training.
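
    The point-to-line objective mentioned above can be written compactly. The sketch below uses our own notation, not necessarily the paper's exact formulation: p_i is a filtered surface point in the current scan, T the sought rigid transform, q_i the nearest oriented surface point in the keyframe, n_i its normal, and L_delta a robust (Huber) loss as used in the follow-up CFEAR work:

```latex
\min_{T} \sum_{i} L_{\delta}\!\left( n_i^{\top} \left( T p_i - q_i \right) \right),
\qquad
L_{\delta}(r) =
\begin{cases}
  \tfrac{1}{2} r^{2}, & |r| \le \delta \\
  \delta \left( |r| - \tfrac{1}{2}\delta \right), & |r| > \delta
\end{cases}
```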

  • 8.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl – Are the point clouds Correctly Aligned? (2021). In: 10th European Conference on Mobile Robots (ECMR 2021), IEEE, 2021, Vol. 10. Conference paper (Refereed)
    Abstract [en]

    In robotics perception, numerous tasks rely on point cloud registration. However, currently there is no method that can automatically detect misaligned point clouds reliably and without environment-specific parameters. We propose "CorAl", an alignment quality measure and alignment classifier for point cloud pairs, which facilitates the ability to introspectively assess the performance of registration. CorAl compares the joint and the separate entropy of the two point clouds. The separate entropy provides a measure of the entropy that can be expected to be inherent to the environment. The joint entropy should therefore not be substantially higher if the point clouds are properly aligned. Computing the expected entropy makes the method sensitive also to small alignment errors, which are particularly hard to detect, and applicable in a range of different environments. We found that CorAl is able to detect small alignment errors in previously unseen environments with an accuracy of 95% and achieves a substantial improvement over previous methods.
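
    In equation form (our notation, a sketch of the idea rather than the paper's exact definitions): with Sigma_k the sample covariance of the neighbourhood of point p_k, the per-point differential entropy and the resulting alignment measure are

```latex
H(p_k) = \tfrac{1}{2} \ln\!\left( (2\pi e)^{3} \,\lvert \Sigma_k \rvert \right),
\qquad
\Delta H = \bar{H}_{\mathrm{joint}} - \tfrac{1}{2}\left( \bar{H}_{A} + \bar{H}_{B} \right),
```

    where bars denote averages over points. A small Delta H suggests the clouds are well aligned, since a correct alignment should add little entropy beyond what is inherent to the scene.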

  • 9.
    Alhashimi, Anas
    et al.
    School of Science and Technology, Örebro University, Örebro, Sweden; Computer Engineering Department, University of Baghdad, Baghdad, Iraq.
    Adolfsson, Daniel
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    BFAR – Bounded False Alarm Rate detector for improved radar odometry estimation (2021). Conference paper (Refereed)
    Abstract [en]

    This paper presents a new detector for filtering noise from true detections in radar data, which improves the state of the art in radar odometry. Scanning Frequency-Modulated Continuous Wave (FMCW) radars can be useful for localisation and mapping in low visibility, but return a lot of noise compared to (more commonly used) lidar, which makes the detection task more challenging. Our Bounded False-Alarm Rate (BFAR) detector differs from the classical Constant False-Alarm Rate (CFAR) detector in that it applies an affine transformation to the estimated noise level, after which the parameters that minimize the estimation error can be learned. BFAR is an optimized combination of CFAR and fixed-level thresholding. Only a single parameter needs to be learned from a training dataset. We apply BFAR to the use case of radar odometry, and adapt a state-of-the-art odometry pipeline (CFEAR), replacing its original conservative filtering with BFAR. In this way we reduce the state-of-the-art translation/rotation odometry errors from 1.76%/0.5°/100 m to 1.55%/0.46°/100 m; an improvement of 12.5%.
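
    A minimal sketch of the BFAR idea, an affine threshold on a CA-CFAR-style noise estimate, is given below; this is our illustration, not the authors' code, and the window sizes and placeholder scale/offset values are assumptions:

```python
import numpy as np

def bfar_detect(row, guard=2, window=10, scale=1.0, offset=20.0):
    """BFAR-style detection along one azimuth of intensities.

    The noise level is estimated as in cell-averaging CFAR from 'window'
    training cells on each side of the cell under test, skipping 'guard'
    cells; the detection threshold is an affine function of that estimate
    (in the paper, the affine parameters are learned from data).
    """
    n = len(row)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        left = row[max(0, i - guard - window): max(0, i - guard)]
        right = row[i + guard + 1: i + guard + 1 + window]
        train = np.concatenate([left, right])
        if train.size == 0:
            continue
        noise = train.mean()                             # CA-CFAR noise estimate
        detections[i] = row[i] > scale * noise + offset  # affine threshold
    return detections
```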

  • 10.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Camera based navigation by mobile robots: local visual feature based localisation and mapping (2009). Book (Other academic)
    Abstract [en]

    The most important property of a mobile robot is the fact that it is mobile. How to give a robot the skills required to navigate around its environment is therefore an important topic in mobile robotics. Navigation, both for robots and humans, typically involves a map. The map can be used, for example, to estimate a pose based on observations (localisation) or to determine a suitable path between two locations. Maps are nowadays available to us humans with few exceptions; however, maps suitable for mobile robots rarely exist. In addition, relating sensor readings to a map requires that the map content and the observations are compatible, i.e. different robots may require different maps for the same area. This book addresses some of the fundamental problems related to mobile robot navigation (registration, localisation and mapping) using cameras as the primary sensor input. Small salient regions (local visual features) are extracted from each camera image, where each region can be seen as a fingerprint. Many fingerprint matches imply a high likelihood that the corresponding images originate from similar locations, which is a central property utilised in this work.

  • 11.
    Andreasson, Henrik
    Örebro University, Department of Technology.
    Local visual feature based localisation and mapping by mobile robots (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot’s sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation.

    The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information.

    Considering the omni-directional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to the changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach.

    The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.

  • 12.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Adolfsson, Daniel
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Incorporating Ego-motion Uncertainty Estimates in Range Data Registration (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper (Refereed)
    Abstract [en]

    Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
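
    One way to read "incorporate ego-motion estimates, including uncertainty, into the objective function" is as an additive Mahalanobis prior. The sketch below is in our own notation, not necessarily the paper's formulation: s(T) is the scan-matching score, T_odo the odometry-predicted pose, Sigma_odo its covariance, the circled minus a pose difference, and lambda a weighting assumption:

```latex
\min_{T} \; s(T) \;+\; \lambda \,
\left( T \ominus T_{\mathrm{odo}} \right)^{\top} \Sigma_{\mathrm{odo}}^{-1}
\left( T \ominus T_{\mathrm{odo}} \right)
```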

  • 13.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar Nikolaev
    INRIA - Grenoble, Meylan, France.
    Driankov, Dimiter
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saarinen, Jari Pekka
    Örebro University, School of Science and Technology. Aalto University, Espoo, Finland.
    Sherikov, Aleksander
    Centre de recherche Grenoble Rhône-Alpes, Grenoble, France.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Autonomous transport vehicles: where we are and what is missing (2015). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 22, no 1, p. 64-75. Article in journal (Refereed)
    Abstract [en]

    In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.

  • 14.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    CAISR Center for Applied Intelligent Systems Research (IS-lab), Halmstad University, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    CAISR Center for Applied Intelligent Systems Research (IS-lab), Halmstad University, Halmstad, Sweden.
    Gold-Fish SLAM: An Application of SLAM to Localize AGVs (2014). In: Field and Service Robotics: Results of the 8th International Conference / [ed] Yoshida, Kazuya; Tadokoro, Satoshi, Heidelberg, Germany: Springer Berlin/Heidelberg, 2014, p. 585-598. Chapter in book (Refereed)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks, which can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

  • 15.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Rögnvaldsson, Thorsteinn
    Örebro University, School of Science and Technology.
    Gold-fish SLAM: an application of SLAM to localize AGVs (2012). In: Proceedings of the International Conference on Field and Service Robotics (FSR), July 2012. Conference paper (Other academic)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks, which can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 meters per second. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

  • 16.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    A Minimalistic Approach to Appearance-Based Visual SLAM (2008). In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no 5, p. 991-1001. Article in journal (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in indoor / outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using multiple view geometry, for example. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.

  • 17.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Dept. of Computing & Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Mini-SLAM: minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity (2007). In: 2007 IEEE International Conference on Robotics and Automation (ICRA), New York, NY, USA: IEEE, 2007, p. 4096-4101, article id 4209726. Conference paper (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in large-scale environments with minimal sensing and computational requirements. The approach is based on a graphical representation of robot poses and links between the poses. Links between the robot poses are established based on odometry and image similarity, then a relaxation algorithm is used to generate a globally consistent map. To estimate the covariance matrix for links obtained from the vision sensor, a novel method is introduced based on the relative similarity of neighbouring images, without requiring distances to image features or multiple view geometry. Indoor and outdoor experiments demonstrate that the approach scales well to large-scale environments, producing topologically correct and geometrically accurate maps at minimal computational cost. Mini-SLAM was found to produce consistent maps in an unstructured, large-scale environment (the total path length was 1.4 km) containing indoor and outdoor passages.

  • 18.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Larsson, Jonas
    ABB Corporate Research, Västerås, Sweden.
    Lowry, Stephanie
    Örebro University, School of Science and Technology.
    A Local Planner for Accurate Positioning for a Multiple Steer-and-Drive Unit Vehicle Using Non-Linear Optimization (2022). In: Sensors, E-ISSN 1424-8220, Vol. 22, no 7, article id 2588. Article in journal (Refereed)
    Abstract [en]

    This paper presents a local planning approach that is targeted for pseudo-omnidirectional vehicles: that is, vehicles that can drive sideways and rotate on the spot. This local planner, MSDU, is based on optimal control and formulates a non-linear optimization problem that exploits the omni-motion capabilities of the vehicle to drive it to the goal in a smooth and efficient manner while avoiding obstacles and singularities. MSDU is designed for a real platform for mobile manipulation, where one key function is the capability to drive in narrow and confined areas. The real-world evaluations show that MSDU planned paths that were smoother and more accurate than those of a comparable local path planner, Timed Elastic Band (TEB), with a mean (translational, angular) error for MSDU of (0.0028 m, 0.0010 rad) compared to (0.0033 m, 0.0038 rad) for TEB. MSDU also generated paths that were consistently shorter than TEB's, with a mean (translational, angular) distance traveled of (0.6026 m, 1.6130 rad) for MSDU compared to (0.7346 m, 3.7598 rad) for TEB.

  • 19.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim
    Örebro University, Department of Natural Sciences.
    Vision aided 3D laser scanner based registration (2007). In: ECMR 2007: Proceedings of the European Conference on Mobile Robots, 2007, p. 192-197. Conference paper (Refereed)
    Abstract [en]

    This paper describes a vision and 3D laser based registration approach which utilizes visual features to identify correspondences. Visual features are obtained from the images of a standard color camera, and the depth of these features is determined by interpolating between the scanning points of a 3D laser range scanner, taking into consideration the visual information in the neighbourhood of the respective visual feature. The 3D laser scanner is also used to determine a position covariance estimate of the visual feature. To exploit these covariance estimates, an ICP algorithm based on the Mahalanobis distance is applied. Initial experimental results are presented in a real-world indoor laboratory environment.

  • 20.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    6D scan registration using depth-interpolated local image features (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 2, p. 157-165. Article in journal (Refereed)
    Abstract [en]

    This paper describes a novel registration approach that is based on a combination of visual and 3D range information. To identify correspondences, local visual features obtained from images of a standard color camera are compared, and the depth of matching features (and their position covariance) is determined from the range measurements of a 3D laser scanner. The matched depth-interpolated image features allow us to apply registration with known correspondences. We compare several ICP variants in this paper and suggest an extension that considers the spatial distance between matching features to eliminate false correspondences. Experimental results are presented in both outdoor and indoor environments. In addition to pair-wise registration, we also propose a global registration method that registers all scan poses simultaneously.
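
    The extension that "considers the spatial distance between matching features" can be read as a rigidity check: a rigid transform preserves pairwise distances, so matches whose inter-feature distances disagree between the two scans can be rejected. A minimal sketch under that reading (the tolerance and the support threshold are our assumptions):

```python
import numpy as np

def filter_matches_by_rigidity(pts_a, pts_b, tol=0.1):
    """Reject feature matches that violate pairwise-distance consistency.

    pts_a, pts_b: (N, 3) arrays of matched 3D feature positions in scan A
    and scan B. For correct matches (i, j), a rigid transform implies
    ||a_i - a_j|| ~= ||b_i - b_j||. Matches supported by many others are kept.
    """
    d_a = np.linalg.norm(pts_a[:, None] - pts_a[None, :], axis=-1)
    d_b = np.linalg.norm(pts_b[:, None] - pts_b[None, :], axis=-1)
    consistent = np.abs(d_a - d_b) < tol   # (N, N) pairwise agreement
    support = consistent.sum(axis=1)       # votes per match
    return support >= 0.5 * support.max()  # keep well-supported matches
```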

  • 21.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Germany.
    Vision based interpolation of 3D laser scans (2006). In: Proceedings of the Third International Conference on Autonomous Robots and Agents, 2006, p. 455-460. Conference paper (Refereed)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern color camera. In this paper we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a color image. The main idea is to use color similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to color or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov Random Fields. The algorithms proposed in this paper are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data. Further, we suggest and evaluate four methods to determine a confidence measure for the accuracy of interpolated range values.
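
    The core idea, using colour similarity as an indication of depth similarity, fits in a few lines. Below is a minimal sketch of one such interpolation; the Gaussian colour kernel and its sigma are our assumptions (the paper itself compares five methods, including a parameter-free one):

```python
import numpy as np

def color_weighted_depth(pixel_rgb, neighbor_rgbs, neighbor_depths, sigma=10.0):
    """Interpolate depth for one high-resolution pixel from nearby sparse
    laser returns, trusting returns whose projected image colour resembles
    the pixel's colour, since depth discontinuities often coincide with
    colour or brightness changes."""
    diff = neighbor_rgbs - pixel_rgb                       # (N, 3) colour offsets
    w = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))  # similarity weights
    return np.sum(w * neighbor_depths) / np.sum(w)
```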

  • 22.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Magnusson, Martin
    Örebro University, Department of Technology.
    Lilienthal, Achim
    Örebro University, Department of Natural Sciences.
    Has something changed here?: Autonomous difference detection for security patrol robots (2007). In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York, NY, USA: IEEE, 2007, p. 3429-3435, article id 4399381. Conference paper (Refereed)
    Abstract [en]

    This paper presents a system for autonomous change detection with a security patrol robot. In an initial step a reference model of the environment is created, and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data, and also to detect changes that occur in colour space and are not observable using range values only.

  • 23.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Drive the Drive: From Discrete Motion Plans to Smooth Drivable Trajectories (2014). In: Robotics, E-ISSN 2218-6581, Vol. 3, no 4, p. 400-416. Article in journal (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper is a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. The proposed approach is evaluated in several industrially relevant scenarios and found to be both fast (less than 2 s per vehicle trajectory) and accurate (end-point pose errors below 0.01 m in translation and 0.005 radians in orientation).

  • 24.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology. SCANIA AB, Södertälje, Sweden.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Fast, continuous state path smoothing to improve navigation accuracy (2015). In: IEEE International Conference on Robotics and Automation (ICRA), 2015, IEEE Computer Society, 2015, p. 662-669. Conference paper (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper addresses this shortcoming by introducing a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. In real-world tests presented in this paper we demonstrate that the proposed approach is fast enough for online use (it computes trajectories faster than they can be driven) and highly accurate. In 100 repetitions we achieve mean end-point pose errors below 0.01 meters in translation and 0.002 radians in orientation. Even the maximum errors are very small: only 0.02 meters in translation and 0.008 radians in orientation.

  • 25.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Real time registration of RGB-D data using local visual features and 3D-NDT registration (2012). In: Proc. of International Conference on Robotics and Automation (ICRA) Workshop on Semantic Perception, Mapping and Exploration (SPME), IEEE, 2012. Conference paper (Refereed)
    Abstract [en]

    Recent increased popularity of RGB-D capable sensors in robotics has resulted in a surge of related RGB-D registration methods. This paper presents several RGB-D registration algorithms based on combinations of local visual features and geometric registration. Fast and accurate transformation refinement is obtained by using a recently proposed geometric registration algorithm based on the Three-Dimensional Normal Distributions Transform (3D-NDT). Results obtained on standard data sets have demonstrated mean translational errors on the order of 1 cm and rotational errors below 1 degree, at frame processing rates of about 15 Hz.

  • 26.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro University, Department of Technology.
    Localization for mobile robots using panoramic vision, local features and particle filter (2005). In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), 2005, p. 3348-3353. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a vision-based approach to self-localization that uses a novel scheme to integrate feature-based matching of panoramic images with Monte Carlo localization. A specially modified version of Lowe's SIFT algorithm is used to match features extracted from local interest points in the image, rather than using global features calculated from the whole image. Experiments conducted in a large, populated indoor environment (up to 5 persons visible) over a period of several months demonstrate the robustness of the approach, including kidnapping and occlusion of up to 90% of the robot's field of view.
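
    The coupling of feature matching and Monte Carlo localization can be sketched as a particle-weighting step. This is a simplified reading, not the paper's exact scheme; map_features and match_fn are hypothetical accessors for stored descriptors and descriptor matching:

```python
import numpy as np

def weight_particles(particles, observed_descriptors, map_features, match_fn):
    """Reweight localization particles by local-feature support.

    particles: pose hypotheses (x, y, theta).
    map_features(pose): descriptors stored for the map region near pose
    (hypothetical accessor); match_fn counts matches against the current image.
    """
    weights = np.array(
        [match_fn(observed_descriptors, map_features(p)) for p in particles],
        dtype=float,
    )
    weights += 1e-9                 # avoid an all-zero weight vector
    return weights / weights.sum()  # normalized importance weights
```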

  • 27.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro University, Department of Technology.
    Self-localization in non-stationary environments using omni-directional vision (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 7, p. 541-551. Article in journal (Refereed)
    Abstract [en]

    This paper presents an image-based approach for localization in non-static environments using local feature descriptors, and its experimental evaluation in a large, dynamic, populated environment where the time interval between the collected data sets is up to two months. By using local features together with panoramic images, robustness and invariance to large changes in the environment can be achieved. Results from global place recognition with no evidence accumulation and from a Monte Carlo localization method are shown. To test the approach even further, experiments were conducted with up to 90% virtual occlusion in addition to the dynamic changes in the environment.

  • 28.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    University of Freiburg.
    Burgard, Wolfram
    University of Freiburg.
    Improving plane extraction from 3D data by fusing laser data and vision (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005, p. 2656-2661. Conference paper (Refereed)
    Abstract [en]

    The problem of extracting three-dimensional structures from data acquired with mobile robots has received considerable attention over the past years. Robots that are able to perceive their three-dimensional environment are envisioned to more robustly perform tasks like navigation, rescue, and manipulation. In this paper we present an approach that simultaneously uses color and range information to cluster 3D points into planar structures. Our current system is also able to calibrate the camera and the laser based on the remission values provided by the range scanner and the brightness of the pixels in the image. It has been implemented on a mobile robot equipped with a manipulator that carries a range scanner and a camera for acquiring colored range scans. Several experiments carried out on real data and in simulations demonstrate that our approach yields highly accurate results, also in comparison with previous approaches.

  • 29.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Freiburg, Germany.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Non-iterative Vision-based Interpolation of 3D Laser Scans (2007). In: Autonomous Agents and Robots / [ed] Mukhopadhyay, SC; Gupta, GS, Berlin/Heidelberg, Germany: Springer, 2007, Vol. 76, p. 83-90, article id 4399381. Conference paper (Other academic)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern colour camera. In this chapter we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a colour image. The main idea is to use colour similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to colour or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov random fields. The proposed algorithms are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real-world data and analyse both indoor and outdoor data.

  • 30.
    Arunachalam, Ajay
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    MSI-RPi: Affordable, Portable, and Modular Multispectral Imaging Prototype Suited to Operate in UV, Visible and Mid-Infrared Regions (2022). In: Journal of Mobile Multimedia, ISSN 1550-4646, E-ISSN 1550-4654, Vol. 18, no 3, p. 723-742. Article in journal (Refereed)
    Abstract [en]

    Digital plant inventory provides critical growth insights, given that the associated data quality is good. Stable and high-quality image acquisition is critical for further examination. In this work, we showcase an affordable, portable, and modular spectral camera prototype, designed with open hardware and open-source software. The image sensors used were color and infrared Pi micro-cameras. The designed prototype has the advantage of being low-cost and modular with respect to other commercially available spectral devices. The micro-size connected sensors make it a compact instrument that can be used for any general spectral acquisition purpose, along with the provision of custom selection of the bands, making the presented prototype design a Plug-and-Play (PnP) setup that can be used in a wide range of application areas. The images acquired from our custom-built prototype were back-tested by performing image analysis and qualitative assessments. The image acquisition software and processing algorithm have been programmed and are bundled with our developed system. Further, an end-to-end automation script is integrated for users to readily leverage the services on demand. The design files, schematics, and all related materials of the spectral block design are open-sourced under an open-hardware license and made available at https://github.com/ajayarunachalam/Multi-Spectral-Imaging-RaspberryPi-Design. The automated data acquisition scripts and the spectral image analysis are made available at https://github.com/ajayarunachalam/SI-RPi.

  • 31.
    Arunachalam, Ajay
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    RaspberryPi-Arduino (RPA) powered smart mirrored and reconfigurable IoT facility for plant science research (2022). In: Internet Technology Letters, E-ISSN 2476-1508, Vol. 5, no 1, article id e272. Article in journal (Refereed)
    Abstract [en]

    Continuous monitoring of crops is critical for the sustainability of agriculture. The effects of changes in temperature, light intensity, humidity, pH, soil moisture, gas intensities, etc. have an overall impact on plant growth. Growth chambers are environmentally controlled facilities which need to be monitored round-the-clock. To improve both the reproducibility and the maintenance of such facilities, remote monitoring plays a pivotal role. An automated, re-configurable, persistent mirrored-storage-based remote monitoring system is developed with low-cost open-source hardware and software. The system automates sensor deployment and storage (database, logs), and provides an elegant dashboard to visualize the real-time continuous data stream. We propose a new smart AGRO IoT system with a robust data acquisition mechanism, and also propose two software component nodes (i.e., Mirroring and Reconfiguration) running as instances of the whole IoT facility. The former aims to minimize or avoid downtime, while the latter aims to leverage the available cores for better utilization of the computational resources. Our system can be easily deployed in growth chambers, greenhouses, CNC farming test-bed setups, and cultivation plots, and can further be extended to support large farms, either by using multiple standalone setups as heterogeneous instances of this facility, or by extending it to a master-slave cluster configuration communicating as a single homogeneous instance. Our RaspberryPi-Arduino (RPA) powered solution is scalable, and provides stability for monitoring any environment continuously at ease.

  • 32.
    Arunachalam, Ajay
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Real-time plant phenomics under robotic farming setup: A vision-based platform for complex plant phenotyping tasks (2021). In: Computers & Electrical Engineering, ISSN 0045-7906, E-ISSN 1879-0755, Vol. 92, article id 107098. Article in journal (Refereed)
    Abstract [en]

    Plant phenotyping in general refers to quantitative estimation of the plant's anatomical, ontogenetical, physiological and biochemical properties. Analyzing big data is challenging and non-trivial, given the different complexities involved. Efficient processing and analysis pipelines are the need of the hour with the increasing popularity of phenotyping technologies and sensors. Through this work, we largely address the overlapping object segmentation and localization problem. Further, we dwell upon multi-plant pipelines, which pose challenges as detection and multi-object tracking become critical for single frames or sets of frames aimed towards uniform tagging and visual feature extraction. A plant phenotyping tool named RTPP (Real-Time Plant Phenotyping) is presented that can aid in the detection of single- and multi-plant traits, modeling, and visualization for agricultural settings. We compare our system with the plantCV platform. The relationship between the digital estimations and the measured plant traits is discussed, which provides a vital roadmap towards precision farming and/or plant breeding.

  • 33.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Halmstad University, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Halmstad, Sweden.
    An autonomous robotic system for load transportation2009In: 2009 IEEE Conference on Emerging Technologies & Factory Automation (ETFA 2009), New York: IEEE conference proceedings, 2009, p. 1563-1566Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of the objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

    Download full text (pdf)
    FULLTEXT01
  • 34.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Halmstad University.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Sweden.
    MALTA: a system of multiple autonomous trucks for load transportation2009In: Proceedings of the 4th European conference on mobile robots (ECMR) / [ed] Ivan Petrovic, Achim J. Lilienthal, 2009, p. 93-98Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of the objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

    Download full text (pdf)
    Fulltext
  • 35.
    Bunz, Elsa
    et al.
    Örebro University, Örebro, Sweden.
    Chadalavada, Ravi Teja
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Spatial Augmented Reality and Eye Tracking for Evaluating Human Robot Interaction2016In: Proceedings of RO-MAN 2016 Workshop: Workshop on Communicating Intentions in Human-Robot Interaction, 2016Conference paper (Refereed)
    Abstract [en]

    Freely moving autonomous mobile robots may lead to anxiety when operating in workspaces shared with humans. Previous works have given evidence that communicating intentions using Spatial Augmented Reality (SAR) in the shared workspace will make humans more comfortable in the vicinity of robots. In this work, we conducted experiments with the robot projecting various patterns in order to convey its movement intentions during encounters with humans. In these experiments, the trajectories of both humans and robot were recorded with a laser scanner. Human test subjects were also equipped with an eye tracker. We analyzed the eye gaze patterns and the laser scan tracking data in order to understand how the robot's intention communication affects the human movement behavior. Furthermore, we used retrospective recall interviews to aid in identifying the reasons that led to behavior changes.

    Download full text (pdf)
    fulltext
  • 36.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Empirical evaluation of human trust in an expressive mobile robot2016In: Proceedings of RSS Workshop "Social Trust in Autonomous Robots 2016", 2016Conference paper (Refereed)
    Abstract [en]

    A mobile robot communicating its intentions using Spatial Augmented Reality (SAR) on the shared floor space makes humans feel safer and more comfortable around the robot. Our previous work [1] and several other works established this fact. We built upon that work by adding adaptable information and control to the SAR module. We conducted an empirical study of how a mobile robot builds trust in humans by communicating its intentions, and we present a novel way of evaluating that trust. We show experimentally that adaptation in the SAR module leads to natural interaction, and the new evaluation system helped us discover that the comfort levels in human-robot interactions approached those of human-human interactions.

    Download full text (pdf)
    fulltext
  • 37.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    That’s on my Mind!: Robot to Human Intention Communication through on-board Projection on Shared Floor Space2015In: 2015 European Conference on Mobile Robots (ECMR), New York: IEEE conference proceedings , 2015Conference paper (Refereed)
    Abstract [en]

    The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional AGVs, which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. Here we address this issue and propose on-board intention projection on the shared floor space for communication from robot to human. We present a research prototype of a robotic forklift equipped with a LED projector to visualize internal state information and intents. We describe the projector system and discuss calibration issues. The robot's ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. The results show that even adding simple information, such as the trajectory and the space to be occupied by the robot in the near future, effectively improves the human response to the robot.
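
    An illustrative computation of the "space to be occupied" mentioned above: sweep the robot's rectangular footprint along the next few planned poses and collect the floor-space corners a projector could render. Footprint dimensions and pose values are invented; the paper's projector pipeline and calibration are not reproduced.

        # Sketch: future floor-space footprint of a robot along its plan.
        import numpy as np

        def footprint_at(pose, length=2.0, width=1.2):
            """Corners of a length x width rectangle at pose = (x, y, heading)."""
            x, y, th = pose
            corners = np.array([[ length/2,  width/2], [ length/2, -width/2],
                                [-length/2, -width/2], [-length/2,  width/2]])
            rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
            return corners @ rot.T + np.array([x, y])

        # Planned poses over the next couple of seconds (hypothetical values).
        trajectory = [(0.0, 0.0, 0.0), (0.5, 0.05, 0.1), (1.0, 0.15, 0.2)]
        swept = np.vstack([footprint_at(p) for p in trajectory])
        print(swept.round(2))  # corner points to project onto the shared floor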

  • 38.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Cologne, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Implicit intention transference using eye-tracking glasses for improved safety in human-robot interaction2019Conference paper (Refereed)
    Abstract [en]

    Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. We propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, an implicit intention transference system was developed and implemented. The robot was given access to human eye gaze data and responds to it in real time through spatial augmented reality projections on the shared floor space; the robot could also adapt its path. This allows proactive safety approaches in HRI, for example by attempting to get the human's attention when they are in the vicinity of a moving robot. A study was conducted with workers at an industrial warehouse. The time taken to understand the behavior of the system was recorded. Electrodermal activity and pupil diameter were recorded to measure the increase in stress and cognitive load while interacting with an autonomous system, using these measurements as a proxy to quantify trust in autonomous systems.

    Download full text (pdf)
    Implicit intention transference using eye-tracking glasses for improved safety in human-robot interaction
  • 39.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Örebro University, School of Science and Technology.
    Palm, Rainer
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses2018In: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden / [ed] Case K. &Thorvald P., Amsterdam, Netherlands: IOS Press, 2018, p. 253-258Conference paper (Refereed)
    Abstract [en]

    Robots in human co-habited environments need human-aware task and motion planning, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. This paper investigates the possibility of human-to-robot implicit intention transference solely from eye gaze data. We present experiments in which humans wearing eye-tracking glasses encountered a small forklift truck under various conditions. We evaluate how the observed eye gaze patterns of the participants related to their navigation decisions. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human eye gaze for early obstacle avoidance.

    Download full text (pdf)
    Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses
  • 40.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany.
    Palm, Rainer
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction2020In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830Article in journal (Refereed)
    Abstract [en]

    Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human gaze for early obstacle avoidance.
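
    A small sketch of the reported gaze analysis: the side of the robot on which a gaze point falls can be decided by the sign of the 2D cross product between the robot's heading vector and the vector to the gaze point. Coordinates below are made up; this is not the authors' analysis code.

        # Which side of the robot's travel direction does a gaze point fall on?
        import numpy as np

        def gaze_side(robot_pos, robot_heading, gaze_point):
            """Return 'left' or 'right' of the robot's travel direction."""
            h = np.array([np.cos(robot_heading), np.sin(robot_heading)])
            to_gaze = np.asarray(gaze_point) - np.asarray(robot_pos)
            cross = h[0] * to_gaze[1] - h[1] * to_gaze[0]
            return "left" if cross > 0 else "right"

        # Per the finding above, a person who mostly gazes on the robot's right
        # is likely to pass the forklift on its right.
        print(gaze_side((0.0, 0.0), 0.0, (2.0, -0.5)))  # -> right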

  • 41.
    Cirillo, Marcello
    et al.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Uras, Tansel
    Department of Computer Science, University of Southern California, Los Angeles, USA.
    Koenig, Sven
    Department of Computer Science, University of Southern California, Los Angeles, USA.
    Integrated Motion Planning and Coordination for Industrial Vehicles2014In: Proceedings of the 24th International Conference on Automated Planning and Scheduling, AAAI Press, 2014Conference paper (Refereed)
    Abstract [en]

    A growing interest in the industrial sector for autonomous ground vehicles has prompted significant investment in fleet management systems. Such systems need to accommodate on-line externally imposed temporal and spatial requirements, and to adhere to them even in the presence of contingencies. Moreover, a fleet management system should ensure correctness, i.e., refuse to commit to requirements that cannot be satisfied. We present an approach to obtain sets of alternative execution patterns (called trajectory envelopes) which provide these guarantees. The approach relies on a constraint-based representation shared among multiple solvers, each of which progressively refines trajectory envelopes following a least commitment principle.
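
    A toy rendering of the trajectory-envelope idea: each robot commits to a set of (spatial cell, time interval) pairs rather than a single timed path, and two envelopes conflict only if they share a cell with overlapping intervals. Purely illustrative; the paper's constraint-based solvers and least-commitment refinement are far richer.

        # Toy conflict check between two trajectory envelopes.
        def conflicts(env_a, env_b):
            """env_* maps cell -> (t_start, t_end); report overlapping cells."""
            clashes = []
            for cell, (a0, a1) in env_a.items():
                if cell in env_b:
                    b0, b1 = env_b[cell]
                    if a0 < b1 and b0 < a1:   # strict interval overlap test
                        clashes.append(cell)
            return clashes

        robot1 = {(3, 4): (0.0, 5.0), (3, 5): (4.0, 9.0)}
        robot2 = {(3, 5): (8.0, 12.0), (4, 5): (0.0, 3.0)}
        print(conflicts(robot1, robot2))  # [(3, 5)] -> needs temporal refinement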

  • 42.
    Della Corte, Bartolomeo
    et al.
    Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Rome, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Rome, Italy.
    Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation2019In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 4, no 2, p. 902-909Article in journal (Refereed)
    Abstract [en]

    The ability to maintain and continuously update the geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain more accurate parameter estimates. In combination with that, our pipeline observes the evolution of the parameters in long-term operation to detect possible changes in the parameter set. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.
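
    A minimal sketch of one slice of this calibration problem: recovering a sensor's time delay by least-squares alignment of its yaw-rate signal against the platform's odometry yaw rate. The full method jointly estimates kinematics and extrinsics as well; this isolates only the delay term, on synthetic data, and assumes SciPy is available.

        # Time-delay estimation sketch via nonlinear least squares.
        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 10.0, 500)
        odom_rate = np.sin(t)                    # platform yaw rate (ground truth)
        true_delay = 0.35
        sensor_rate = np.sin(t - true_delay)     # sensor observes the motion late

        def residual(params):
            (delay,) = params
            # Shift odometry by the candidate delay, compare to the sensor signal.
            return np.interp(t - delay, t, odom_rate) - sensor_rate

        sol = least_squares(residual, x0=[0.0])
        print(f"estimated delay: {sol.x[0]:.3f} s (true {true_delay} s)")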

  • 43.
    Fleck, Sven
    et al.
    University of Tübingen.
    Busch, Florian
    University of Tübingen.
    Biber, Peter
    University of Tübingen.
    Strasser, Wolfgang
    University of Tübingen.
    Andreasson, Henrik
    Örebro University, Department of Technology.
    Omnidirectional 3D modeling on a mobile robot using graph cuts2005In: Proceedings of the 2005 IEEE International Converence on Robotics and Automation: ICRA - 2005, 2005, p. 1748-1754Conference paper (Refereed)
    Abstract [en]

    For a mobile robot it is a natural task to build a 3D model of its environment. Such a model is not only useful for planning robot actions but also to provide a remote human surveillant a realistic visualization of the robot’s state with respect to the environment. Acquiring 3D models of environments is also an important task on its own with many possible applications like creating virtual interactive walkthroughs or as basis for 3D-TV.

    In this paper we present our method to acquire a 3D model using a mobile robot that is equipped with a laser scanner and a panoramic camera. The method is based on calculating dense depth maps for panoramic images using pairs of panoramic images taken from different positions using stereo matching. Traditional 2D-SLAM using laser-scan-matching is used to determine the needed camera poses. To receive high-quality results we use a high-quality stereo matching algorithm – the graph cut method. We describe the necessary modifications to handle panoramic images and specialized post-processing methods.
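
    The graph-cut stereo matcher used here is too involved to sketch faithfully, so as a rough stand-in the snippet below computes a dense disparity map with OpenCV's semi-global block matcher (a different algorithm, named plainly) on a synthetic pair with a known 8-pixel shift, to illustrate the depth-from-stereo step on ordinary (non-panoramic) images. It assumes OpenCV is installed.

        # Depth-from-stereo stand-in: semi-global matching on a synthetic pair.
        import numpy as np
        import cv2

        left = np.tile(np.random.randint(0, 255, (1, 256), np.uint8), (128, 1))
        right = np.roll(left, -8, axis=1)        # uniform 8-pixel disparity

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=16,
                                        blockSize=9)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0
        print(f"median disparity: {np.median(disparity[:, 16:-16]):.1f} px")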

  • 44.
    Forte, Paolo
    et al.
    Örebro University, School of Science and Technology.
    Mannucci, Anna
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Construction Site Automation: Open Challenges for Planning and Robotics2021In: Proceedings of the 9th ICAPS Workshop on Planning and Robotics (PlanRob), 2021Conference paper (Refereed)
  • 45.
    Forte, Paolo
    et al.
    Örebro University, School of Science and Technology.
    Mannucci, Anna
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Online Task Assignment and Coordination in Multi-Robot Fleets2021In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 6, no 3, p. 4584-4591Article in journal (Refereed)
    Abstract [en]

    We propose a loosely-coupled framework for integrated task assignment, motion planning, coordination and control of heterogeneous fleets of robots subject to non-cooperative tasks. The approach accounts for the important real-world requirement that tasks can be posted asynchronously. We exploit systematic search for optimal task assignment, where interference is considered as a cost and estimated with knowledge of the kinodynamic models and current state of the robots. Safety is guaranteed by an online coordination algorithm, where the absence of collisions is treated as a hard constraint. The relation between the weight of the interference cost in task assignment and the computational overhead is analyzed empirically, and the approach is compared against alternative realizations using local search algorithms for task assignment.
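
    A toy version of the assignment step: systematic search over all robot-task assignments, scoring each by travel cost plus a pairwise "interference" penalty. The positions and the penalty model are invented; the paper estimates interference from kinodynamic models and handles coordination online.

        # Exhaustive task assignment with a hypothetical interference cost.
        from itertools import permutations
        import math

        robots = {"r1": (0, 0), "r2": (5, 0)}
        tasks = {"t1": (1, 1), "t2": (6, 2)}

        def travel(r, t):
            return math.dist(robots[r], tasks[t])

        def interference(assignment):
            """Penalize robots sent to nearby goals (illustrative model)."""
            goals = [tasks[t] for t in assignment.values()]
            penalty = 0.0
            for i in range(len(goals)):
                for j in range(i + 1, len(goals)):
                    d = math.dist(goals[i], goals[j])
                    penalty += max(0.0, 3.0 - d)   # only close goals interfere
            return penalty

        best = min((dict(zip(robots, perm)) for perm in permutations(tasks)),
                   key=lambda a: sum(travel(r, t) for r, t in a.items())
                                 + interference(a))
        print(best)  # {'r1': 't1', 'r2': 't2'}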

  • 46.
    Gupta, Himanshu
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology. Perception for Intelligent Systems, Technical University of Munich, Munich, Germany.
    Kurtser, Polina
    Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden; Department of Radiation Science, Radiation Physics, Umeå University, Umeå, Sweden.
    Robust Scan Registration for Navigation in Forest Environment Using Low-Resolution LiDAR Sensors2023In: Sensors, E-ISSN 1424-8220, Vol. 23, no 10, article id 4736Article in journal (Refereed)
    Abstract [en]

    Automated forest machines are becoming important due to the complex and dangerous working conditions faced by human operators, which lead to a labor shortage. This study proposes a new method for robust SLAM and tree mapping using low-resolution LiDAR sensors in forestry conditions. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution LiDAR sensors (16Ch, 32Ch) or narrow field-of-view solid-state LiDARs, without additional sensory modalities like GPS or IMU. We evaluate our approach on three datasets, including two private and one public dataset, and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared to current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms like Fast Point Feature Histogram, with a reduction in RMSE of more than 3 m for the 16-channel LiDAR sensor. For solid-state LiDAR, the algorithm achieves a similar RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared to the current approach of using fixed radius-search parameters for pre-processing. Our automated tree trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.
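
    A hedged sketch of one sub-step reported above, trunk diameter estimation: fit a circle by least squares (the Kasa method) to the 2D cross-section of a detected tree cluster. The data are synthetic; the paper's detection and SLAM pipeline is not reproduced.

        # Trunk diameter from a 2D point cross-section via Kasa circle fit.
        import numpy as np

        def fit_circle(xy):
            """Kasa fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b)."""
            A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
            b = (xy ** 2).sum(axis=1)
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            radius = np.sqrt(c + cx ** 2 + cy ** 2)
            return (cx, cy), radius

        # Simulated LiDAR hits on a 0.40 m diameter trunk, with 5 mm noise;
        # the scanner only sees one side of the trunk.
        angles = np.linspace(0, np.pi, 40)
        pts = np.column_stack([0.2 * np.cos(angles) + 3.0,
                               0.2 * np.sin(angles) + 1.0])
        pts += np.random.normal(0, 0.005, pts.shape)
        center, r = fit_circle(pts)
        print(f"estimated diameter: {2 * r:.3f} m")   # ~0.40 m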

  • 47.
    Gupta, Himanshu
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Julier, Simon
    Department of Computer Science, University College London, London, England.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology. Perception for Intelligent Systems, Technical University of Munich, Munich, Germany.
    Revisiting Distribution-Based Registration Methods2023In: 2023 European Conference on Mobile Robots (ECMR) / [ed] Marques, L.; Markovic, I., IEEE , 2023, p. 43-48Conference paper (Refereed)
    Abstract [en]

    Normal Distribution Transform (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate with the ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first modification improves likelihood estimates when matching distributions of small population sizes; the second reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements over the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions "heavily broadened likelihood NDT" (HBL-NDT) (34.7% success rate) and "overlapping grid cells NDT" (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented in this work. We found that HBL-NDT worked best in scenarios with easy initial pose difficulty, making it suitable for consecutive point cloud registration in SLAM applications.
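
    A minimal sketch of the "heavily broadened likelihood" idea tested above: build one NDT cell (mean and covariance of the points inside it) and score a query point under the cell's Gaussian, with the covariance inflated to soften discretization artifacts. Numbers are illustrative, and this covers a single cell rather than a full registration.

        # One NDT cell with an optionally inflated covariance.
        import numpy as np

        def ndt_cell(points):
            mu = points.mean(axis=0)
            cov = np.cov(points.T)
            return mu, cov

        def score(point, mu, cov, inflation=1.0):
            """Unnormalized Gaussian likelihood with an inflated covariance."""
            cov = cov * inflation + 1e-6 * np.eye(len(mu))  # keep it invertible
            d = point - mu
            return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d)))

        cell_pts = np.random.normal([1.0, 2.0], 0.1, size=(50, 2))
        mu, cov = ndt_cell(cell_pts)
        q = np.array([1.25, 2.0])
        print(score(q, mu, cov))                 # sharp classical NDT response
        print(score(q, mu, cov, inflation=4.0))  # broadened tails, gentler falloff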

  • 48.
    Gupta, Himanshu
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology. Perception for Intelligent Systems, Technical University of Munich, Munich, Germany.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Kurtser, Polina
    Centre for Applied Autonomous Sensor Systems, School of Science and Technology, Örebro University, Örebro, Sweden; Department of Radiation Science, Radiation Physics, Umeå University, Umeå, Sweden.
    NDT-6D for color registration in agri-robotic applications2023In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 40, no 6, p. 1603-1619Article in journal (Refereed)
    Abstract [en]

    Registration of point cloud data containing both depth and color information is critical for a variety of applications, including in-field robotic plant manipulation, crop growth modeling, and autonomous navigation. However, current state-of-the-art registration methods often fail in challenging agricultural field conditions due to factors such as occlusions, plant density, and variable illumination. To address these issues, we propose the NDT-6D registration method, which is a color-based variation of the Normal Distribution Transform (NDT) registration approach for point clouds. Our method computes correspondences between point clouds using both geometric and color information and minimizes the distance between these correspondences using only the three-dimensional (3D) geometric dimensions. We evaluate the method using the GRAPES3D data set, collected with a commercial-grade RGB-D sensor mounted on a mobile platform in a vineyard. Results show that registration methods relying only on depth information fail to provide quality registration for the tested data set. The proposed color-based variation outperforms state-of-the-art methods, with a root mean square error (RMSE) of 1.1-1.6 cm for NDT-6D compared with 1.1-2.3 cm for other color-information-based methods and 1.2-13.7 cm for non-color-information-based methods. The method is shown to be robust against noise using the TUM RGBD data set with artificially added noise representative of an outdoor scenario: the relative pose error (RPE) increased by approximately 14% for our method compared to an increase of approximately 75% for the best-performing alternative registration method. The obtained average accuracy suggests that the NDT-6D registration method can be used for in-field precision agriculture applications, for example crop detection, size-based maturity estimation, and growth modeling.
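
    A sketch of the core NDT-6D idea as described: find correspondences in a joint space of geometry and (scaled) color, then estimate the rigid motion from the 3D coordinates alone via the Kabsch/SVD method. The data are synthetic points, whereas the actual method operates on NDT distributions; the color weight is an assumed parameter.

        # 6D (xyz + weighted rgb) correspondences, 3D-only motion estimate.
        import numpy as np

        def correspond_6d(src_xyz, src_rgb, dst_xyz, dst_rgb, color_weight=0.5):
            """Nearest neighbor in [xyz | w*rgb]; returns dst index per src point."""
            a = np.hstack([src_xyz, color_weight * src_rgb])
            b = np.hstack([dst_xyz, color_weight * dst_rgb])
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return d2.argmin(axis=1)

        def kabsch(src, dst):
            """Best-fit rotation R and translation t with dst ~ R @ src + t."""
            sc, dc = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:             # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dc - R @ sc

        rng = np.random.default_rng(0)
        xyz = rng.uniform(0, 1, (30, 3))
        rgb = rng.uniform(0, 1, (30, 3))
        t_true = np.array([0.1, -0.05, 0.02])
        idx = correspond_6d(xyz + t_true, rgb, xyz, rgb)
        R, t = kabsch(xyz + t_true, xyz[idx])
        print(t)   # ~ -t_true, recovered from the 3D geometry only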

  • 49.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    Interdepartmental Research Center "E. Piaggio", University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    Interdepartmental Research Center "E. Piaggio", University of Pisa, Pisa, Italy.
    Bicchi, Antonio
    Interdepartmental Research Center "E. Piaggio", University of Pisa, Pisa, Italy.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    On Using Optimization-based Control instead of Path-Planning for Robot Grasp Motion Generation2015In: IEEE International Conference on Robotics and Automation (ICRA) - Workshop on Robotic Hands, Grasping, and Manipulation, 2015Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 50.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    The Next Step in Robot Commissioning: Autonomous Picking and Palletizing2016In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no 1, p. 546-553Article in journal (Refereed)
    Abstract [en]

    So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.
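
    An illustrative take on a grasp representation with redundancy: instead of one gripper pose, keep a family of orientations obtained by rotating about the approach axis, letting the controller pick whichever is reachable. The axis and pose values are invented for the example; this is not the paper's actual scheme.

        # A redundant grasp family: rotations about a shared approach axis.
        import numpy as np

        def rotation_about(axis, angle):
            """Rodrigues' formula for a rotation matrix about a unit axis."""
            axis = axis / np.linalg.norm(axis)
            K = np.array([[0, -axis[2], axis[1]],
                          [axis[2], 0, -axis[0]],
                          [-axis[1], axis[0], 0]])
            return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

        def grasp_family(R_nominal, approach_axis, n=8):
            """Sample n gripper orientations that all share the same approach."""
            return [rotation_about(approach_axis, a) @ R_nominal
                    for a in np.linspace(0, 2 * np.pi, n, endpoint=False)]

        poses = grasp_family(np.eye(3), np.array([0.0, 0.0, 1.0]))
        print(len(poses), "candidate orientations for one grasp")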

    Download full text (pdf)
    fulltext