Örebro University Publications
1–50 of 298
  • 1.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Castellano-Quero, Manuel
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl: Introspection for robust radar and lidar perception in diverse environments using differential entropy (2022). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 155, article id 104136. Article in journal (Refereed)
    Abstract [en]

    Robust perception is an essential component to enable long-term operation of mobile robots. It depends on failure resilience through reliable sensor data and pre-processing, as well as failure awareness through introspection, for example the ability to self-assess localization performance. This paper presents CorAl: a principled, intuitive, and generalizable method to measure the quality of alignment between pairs of point clouds, which learns to detect alignment errors in a self-supervised manner. CorAl compares the differential entropy in the point clouds separately with the entropy in their union to account for entropy inherent to the scene. By making use of dual entropy measurements, we obtain a quality metric that is highly sensitive to small alignment errors and still generalizes well to unseen environments. In this work, we extend our previous work on lidar-only CorAl to radar data by proposing a two-step filtering technique that produces high-quality point clouds from noisy radar scans. Thus, we target robust perception in two ways: by introducing a method that introspectively assesses alignment quality, and by applying it to an inherently robust sensor modality. We show that our filtering technique combined with CorAl can be applied to the problem of alignment classification, and that it detects small alignment errors in urban settings with up to 98% accuracy, and with up to 96% if trained only in a different environment. Our lidar and radar experiments demonstrate that CorAl outperforms previous methods both on the ETH lidar benchmark, which includes several indoor and outdoor environments, and the large-scale Oxford and MulRan radar data sets for urban traffic scenarios. The results also demonstrate that CorAl generalizes very well across substantially different environments without the need for retraining.
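
    As a concrete illustration of the dual-entropy measurement described above, the following minimal Python sketch (not the authors' implementation) fits a Gaussian to each point's local neighborhood and compares the mean per-point differential entropy of each cloud on its own with the entropy measured in the union of the two clouds; the search radius, minimum neighbor count, and covariance regularizer are assumed values, and the full method additionally learns an alignment classifier from such measures.

    import numpy as np
    from scipy.spatial import cKDTree

    def mean_differential_entropy(points, query, radius=0.5):
        """Mean entropy of Gaussians fitted around each query point."""
        tree = cKDTree(points)
        entropies = []
        for q in query:
            idx = tree.query_ball_point(q, r=radius)
            if len(idx) < 5:  # too few neighbors to fit a covariance
                continue
            cov = np.cov(points[idx].T) + 1e-9 * np.eye(3)  # regularized 3x3 covariance
            # Differential entropy of a 3D Gaussian: 0.5 * ln((2*pi*e)^3 * det(cov))
            entropies.append(0.5 * np.log((2 * np.pi * np.e) ** 3 * np.linalg.det(cov)))
        return float(np.mean(entropies))

    def coral_quality(cloud_a, cloud_b):
        """Joint minus separate entropy; near zero when the clouds are well aligned."""
        joint = np.vstack([cloud_a, cloud_b])
        h_joint = 0.5 * (mean_differential_entropy(joint, cloud_a)
                         + mean_differential_entropy(joint, cloud_b))
        h_sep = 0.5 * (mean_differential_entropy(cloud_a, cloud_a)
                       + mean_differential_entropy(cloud_b, cloud_b))
        return h_joint - h_sep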

  • 2.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Lowry, Stephanie
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality (2019). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    This paper targets high-precision robot localization. We address a general problem of voxel-based map representations: the expressiveness of the map is fundamentally limited by its resolution, since integrating measurements taken from different perspectives introduces imprecisions and thus reduces localization accuracy. We propose SuPer maps, which contain one Submap per Perspective, representing a particular view of the environment. For localization, a robot then selects the submap that best explains the environment from its perspective. We propose SuPer mapping as an offline refinement step between initial SLAM and deploying autonomous robots for navigation. We evaluate the proposed method on simulated and real-world data that represent an important use case: an industrial scenario with high accuracy requirements in a repetitive environment. Our results demonstrate significantly improved localization accuracy, up to 46% better compared to localization in global maps, and up to 25% better compared to alternative submapping approaches.

  • 3.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CFEAR Radarodometry - Conservative Filtering for Efficient and Accurate Radar Odometry (2021). In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), IEEE, 2021, p. 5462-5469. Conference paper (Refereed)
    Abstract [en]

    This paper presents the accurate, highly efficient, and learning-free method CFEAR Radarodometry for large-scale radar odometry estimation. By using a filtering technique that keeps the k strongest returns per azimuth and by additionally filtering the radar data in Cartesian space, we are able to compute a sparse set of oriented surface points for efficient and accurate scan matching. Registration is carried out by minimizing a point-to-line metric, and robustness to outliers is achieved using a Huber loss. We were able to additionally reduce drift by jointly registering the latest scan to a history of keyframes, and found that our odometry method generalizes to different sensor models and datasets without changing a single parameter. We evaluate our method in three widely different environments and demonstrate an improvement over the spatially cross-validated state of the art, with an overall translation error of 1.76% in a public urban radar odometry benchmark, running at 55 Hz on a single laptop CPU thread.
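
    The "k strongest returns per azimuth" filtering step lends itself to a compact sketch. The following Python fragment assumes the raw sweep is given as a (num_azimuths, num_range_bins) power array; the values of k and the noise floor z_min are illustrative, not the paper's settings. The surviving returns would then be converted to Cartesian coordinates and grouped into oriented surface points for the point-to-line scan matching described above.

    import numpy as np

    def k_strongest_filter(scan, k=12, z_min=60.0):
        """Keep the k strongest returns per azimuth that exceed the noise floor."""
        keep_az, keep_r = [], []
        for az, row in enumerate(scan):
            top = np.argpartition(row, -k)[-k:]  # indices of the k largest range bins
            top = top[row[top] > z_min]          # discard bins at or below the noise floor
            keep_az.extend([az] * len(top))
            keep_r.extend(top.tolist())
        return np.array(keep_az), np.array(keep_r)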

  • 4.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, Örebro, Sweden; Computer Engineering Department, University of Baghdad, Baghdad, Iraq.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments (2023). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no 2, p. 1476-1495. Article in journal (Refereed)
    Abstract [en]

    This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves our state-of-the-art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.

  • 5.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Oriented surface points for efficient and accurate radar odometry (2021). Conference paper (Refereed)
    Abstract [en]

    This paper presents an efficient and accurate radar odometry pipeline for large-scale localization. We propose a radar filter that keeps only the strongest reflections per azimuth that exceed the expected noise level. The filtered radar data are used to incrementally estimate odometry by registering the current scan with a nearby keyframe. By modeling local surfaces, we were able to register scans by minimizing a point-to-line metric and accurately estimate odometry from sparse point sets, hence improving efficiency. Specifically, we found that a point-to-line metric yields significant improvements compared to a point-to-point metric when matching sparse sets of surface points. Preliminary results from an urban odometry benchmark show that our odometry pipeline is accurate and efficient compared to existing methods, with an overall translation error of 2.05%, down from 2.78% for the previously best published method, running at 12.5 ms per frame without the need for environment-specific training.
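
    The difference between the two residuals is easy to state concretely. In the hypothetical 2D sketch below, q is a matched surface point with estimated unit normal n; the point-to-line residual penalizes only the component of the error along the normal, which lets sparse scans slide along locally flat structures instead of being pulled toward individual sample points.

    import numpy as np

    def point_to_point_residual(p, q):
        """Euclidean distance between scan point p and its matched point q."""
        return np.linalg.norm(p - q)

    def point_to_line_residual(p, q, n):
        """Distance from p to the local surface through q with unit normal n."""
        return abs(np.dot(n, p - q))

    p, q, n = np.array([1.2, 0.1]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
    print(point_to_point_residual(p, q))    # 0.2236... (penalizes tangential motion too)
    print(point_to_line_residual(p, q, n))  # 0.1 (penalizes only the normal component)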

  • 6.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl – Are the point clouds Correctly Aligned? (2021). In: 10th European Conference on Mobile Robots (ECMR 2021), IEEE, 2021, Vol. 10. Conference paper (Refereed)
    Abstract [en]

    In robotics perception, numerous tasks rely on point cloud registration. However, there is currently no method that can reliably detect misaligned point clouds automatically and without environment-specific parameters. We propose "CorAl", an alignment quality measure and alignment classifier for point cloud pairs, which enables introspective assessment of registration performance. CorAl compares the joint and the separate entropy of the two point clouds. The separate entropy provides a measure of the entropy that can be expected to be inherent to the environment. The joint entropy should therefore not be substantially higher if the point clouds are properly aligned. Computing the expected entropy makes the method sensitive also to small alignment errors, which are particularly hard to detect, and applicable in a range of different environments. We found that CorAl is able to detect small alignment errors in previously unseen environments with an accuracy of 95% and achieves a substantial improvement over previous methods.

  • 7.
    Alhashimi, Anas
    et al.
    School of Science and Technology, Örebro University, Örebro, Sweden; Computer Engineering Department, University of Baghdad, Baghdad, Iraq.
    Adolfsson, Daniel
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    BFAR – Bounded False Alarm Rate detector for improved radar odometry estimation (2021). Conference paper (Refereed)
    Abstract [en]

    This paper presents a new detector for filtering noise from true detections in radar data, which improves the state of the art in radar odometry. Scanning Frequency-Modulated Continuous Wave (FMCW) radars can be useful for localisation and mapping in low visibility, but return a lot of noise compared to (more commonly used) lidar, which makes the detection task more challenging. Our Bounded False-Alarm Rate (BFAR) detector differs from the classical Constant False-Alarm Rate (CFAR) detector in that it applies an affine transformation to the estimated noise level, after which the parameters that minimize the estimation error can be learned. BFAR is an optimized combination of CFAR and fixed-level thresholding. Only a single parameter needs to be learned from a training dataset. We apply BFAR to the use case of radar odometry, and adapt a state-of-the-art odometry pipeline (CFEAR), replacing its original conservative filtering with BFAR. In this way we reduce the state-of-the-art translation/rotation odometry errors from 1.76%/0.5°/100 m to 1.55%/0.46°/100 m; an improvement of 12.5%.
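
    The affine-threshold idea can be sketched in a few lines. The fragment below assumes a 1D power signal and cell-averaging (CA) noise estimation; the window sizes and the values of (a, b) are illustrative, whereas the paper learns its parameters from a training dataset. With b = 0 the rule reduces to classical CA-CFAR scaling, and with a = 0 to fixed-level thresholding, matching the description of BFAR as an optimized combination of the two.

    import numpy as np

    def bfar_detect(signal, a=1.2, b=40.0, train=16, guard=4):
        """Return indices whose power exceeds an affine function of the local noise level."""
        detections = []
        for i in range(train + guard, len(signal) - train - guard):
            # Training cells on both sides of the cell under test, excluding guard cells
            left = signal[i - guard - train: i - guard]
            right = signal[i + guard + 1: i + guard + 1 + train]
            z = np.mean(np.concatenate([left, right]))  # CA-CFAR noise estimate
            if signal[i] > a * z + b:                   # affine (BFAR) threshold
                detections.append(i)
        return detections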

  • 8.
    Almeida, Tiago
    et al.
    Örebro University, School of Science and Technology.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Schreiter, Tim
    Örebro University, School of Science and Technology.
    Zhu, Yufei
    Örebro University, School of Science and Technology.
    Gutiérrez Maestro, Eduardo
    Örebro University, School of Science and Technology.
    Morillo-Mendez, Lucas
    Örebro University, School of Science and Technology.
    Kucner, Tomasz P.
    Mobile Robotics Group, Department of Electrical Engineering and Automation, Aalto University, Finland; FCAI, Finnish Center for Artificial Intelligence, Finland.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Palmieri, Luigi
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction (2023). In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, p. 2200-2209. Conference paper (Refereed)
    Abstract [en]

    Autonomous systems that need to operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors which influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue on their future motion, actions, and intentions. In this work we adapt several popular deep learning models for trajectory prediction with labels corresponding to the roles of the people. To this end we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments we show that the role-conditioned LSTM, Transformer, GAN and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
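
    Role conditioning itself is a small architectural change, illustrated by the hypothetical PyTorch sketch below: a learned role embedding is concatenated to every observed (x, y) step before an LSTM encoder. All dimensions, names, and the one-step prediction head are assumptions for illustration, not the models evaluated in the paper.

    import torch
    import torch.nn as nn

    class RoleConditionedLSTM(nn.Module):
        def __init__(self, num_roles=5, role_dim=8, hidden=64):
            super().__init__()
            self.role_emb = nn.Embedding(num_roles, role_dim)
            self.lstm = nn.LSTM(2 + role_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)  # predicts the next (x, y) offset

        def forward(self, traj, role_id):
            # traj: (batch, T, 2) observed positions; role_id: (batch,) semantic labels
            role = self.role_emb(role_id).unsqueeze(1).expand(-1, traj.size(1), -1)
            out, _ = self.lstm(torch.cat([traj, role], dim=-1))
            return self.head(out[:, -1])  # one-step prediction from the last hidden state

    model = RoleConditionedLSTM()
    pred = model(torch.randn(4, 8, 2), torch.tensor([0, 1, 2, 3]))  # shape (4, 2)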

  • 9.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Learning to detect misaligned point clouds (2018). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed)
    Abstract [en]

    Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible, because no existing matching algorithm provides a reliable means of detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.

  • 10.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improving Point Cloud Accuracy Obtained from a Moving Platform for Consistent Pile Attack Pose Estimation (2014). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 75, no 1, p. 101-128. Article in journal (Refereed)
    Abstract [en]

    We present a perception system for enabling automated loading with waist-articulated wheel loaders. To enable autonomous loading of piled materials, using either above-ground wheel loaders or underground load-haul-dump vehicles, 3D data of the pile shape is needed. However, using common 3D scanners, the scan data is distorted while the wheel loader is moving towards the pile. Existing methods that make use of 3D scan data (for autonomous loading as well as tasks such as mapping, localisation, and object detection) typically assume that each 3D scan is accurate. For autonomous robots moving over rough terrain, it is often the case that the vehicle moves a substantial amount during the acquisition of one 3D scan, in which case the scan data will be distorted. We present a study of auto-loading methods, and how to locate piles in real-world scenarios with nontrivial ground geometry. We have compared how consistently each method performs for live scans acquired in motion, and also how the methods perform with different view points and scan configurations. The system described in this paper uses a novel method for improving the quality of distorted 3D scans made from a vehicle moving over uneven terrain. The proposed method for improving scan quality is capable of increasing the accuracy of point clouds without assuming any specific features of the environment (such as planar walls), without resorting to a “stop-scan-go” approach, and without relying on specialised and expensive hardware. Each new 3D scan is registered to the preceding one using the normal-distributions transform (NDT). After each registration, a mini-loop closure is performed with a local, per-scan, graph-based SLAM method. To verify the impact of the quality improvement, we present data that shows how auto-loading methods benefit from the corrected scans. The presented methods are validated on data from an autonomous wheel loader, as well as with simulated data. The proposed scan-correction method increases the accuracy of both the vehicle trajectory and the point cloud. We also show that it increases the reliability of pile-shape measures used to plan an efficient attack pose when performing autonomous loading.

  • 11.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improving Point-Cloud Accuracy from a Moving Platform in Field Operations (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2013, p. 733-738. Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for improving the quality of distorted 3D point clouds made from a vehicle equipped with a laser scanner moving over uneven terrain. Existing methods that use 3D point-cloud data (for tasks such as mapping, localisation, and object detection) typically assume that each point cloud is accurate. For autonomous robots moving in rough terrain, it is often the case that the vehicle moves a substantial amount during the acquisition of one point cloud, in which case the data will be distorted. The method proposed in this paper is capable of increasing the accuracy of 3D point clouds, without assuming any specific features of the environment (such as planar walls), without resorting to a "stop-scan-go" approach, and without relying on specialised and expensive hardware. Each new point cloud is matched to the previous one using normal-distributions-transform (NDT) registration, after which a mini-loop closure is performed with a local, per-scan, graph-based SLAM method. The proposed method increases the accuracy of both the measured platform trajectory and the point cloud. The method is validated on both real-world and simulated data.
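
    For readers unfamiliar with the normal-distributions transform, the minimal Python sketch below shows the representation that such registration builds on: points are binned into voxels, each voxel is summarized by a Gaussian, and a candidate alignment is scored by the Mahalanobis distances of points under the cell Gaussians. The cell size, minimum point count, and scoring form are illustrative simplifications, not the paper's implementation.

    import numpy as np
    from collections import defaultdict

    def build_ndt(points, cell_size=1.0):
        """Map voxel index -> (mean, covariance) for voxels with enough points."""
        cells = defaultdict(list)
        for p in points:
            cells[tuple(np.floor(p / cell_size).astype(int))].append(p)
        ndt = {}
        for idx, pts in cells.items():
            pts = np.asarray(pts)
            if len(pts) >= 5:  # need enough samples for a meaningful covariance
                ndt[idx] = (pts.mean(axis=0), np.cov(pts.T) + 1e-9 * np.eye(3))
        return ndt

    def ndt_score(ndt, points, cell_size=1.0):
        """Mahalanobis-style misfit of points under the NDT model; registration
        would minimize this score over a rigid transform of the points."""
        score = 0.0
        for p in points:
            cell = ndt.get(tuple(np.floor(p / cell_size).astype(int)))
            if cell is not None:
                mu, cov = cell
                d = p - mu
                score += d @ np.linalg.solve(cov, d)
        return score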

  • 12.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Adolfsson, Daniel
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Incorporating Ego-motion Uncertainty Estimates in Range Data Registration (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper (Refereed)
    Abstract [en]

    Local scan registration approaches commonly utilize ego-motion estimates (e.g. odometry) only as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
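
    The idea can be illustrated with a toy 2D objective (assumed names and pose parameterization, not the paper's exact formulation): the usual alignment cost is augmented with a Mahalanobis penalty on deviation from the odometry estimate. In a feature-poor corridor the alignment term is nearly flat along the direction of travel, and the ego-motion prior is what constrains the solution there.

    import numpy as np

    def registration_cost(x, source, match_fn, x_odom, P_odom):
        """x = (tx, ty, theta); match_fn(p) returns the matched target point for p."""
        c, s = np.cos(x[2]), np.sin(x[2])
        R = np.array([[c, -s], [s, c]])
        align = sum(np.sum((R @ p + x[:2] - match_fn(R @ p + x[:2])) ** 2)
                    for p in source)              # standard alignment term
        dx = x - x_odom
        prior = dx @ np.linalg.solve(P_odom, dx)  # ego-motion prior, weighted by
        return align + prior                      # the inverse odometry covariance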

  • 13.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar Nikolaev
    INRIA - Grenoble, Meylan, France.
    Driankov, Dimiter
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saarinen, Jari Pekka
    Örebro University, School of Science and Technology. Aalto University, Espoo, Finland.
    Sherikov, Aleksander
    Centre de recherche Grenoble Rhône-Alpes, Grenoble, France.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Autonomous transport vehicles: where we are and what is missing (2015). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 22, no 1, p. 64-75. Article in journal (Refereed)
    Abstract [en]

    In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.

  • 14.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    A Minimalistic Approach to Appearance-Based Visual SLAM (2008). In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no 5, p. 991-1001. Article in journal (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in indoor / outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using multiple view geometry, for example. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.

  • 15.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Dept. of Computing & Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Mini-SLAM: minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity (2007). In: 2007 IEEE International Conference on Robotics and Automation (ICRA), New York, NY, USA: IEEE, 2007, p. 4096-4101, article id 4209726. Conference paper (Refereed)
    Abstract [en]

    This paper presents a vision-based approach to SLAM in large-scale environments with minimal sensing and computational requirements. The approach is based on a graphical representation of robot poses and links between the poses. Links between the robot poses are established based on odometry and image similarity, then a relaxation algorithm is used to generate a globally consistent map. To estimate the covariance matrix for links obtained from the vision sensor, a novel method is introduced based on the relative similarity of neighbouring images, without requiring distances to image features or multiple view geometry. Indoor and outdoor experiments demonstrate that the approach scales well to large-scale environments, producing topologically correct and geometrically accurate maps at minimal computational cost. Mini-SLAM was found to produce consistent maps in an unstructured, large-scale environment (the total path length was 1.4 km) containing indoor and outdoor passages.

  • 16.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim
    Örebro University, Department of Natural Sciences.
    Vision aided 3D laser scanner based registration (2007). In: ECMR 2007: Proceedings of the European Conference on Mobile Robots, 2007, p. 192-197. Conference paper (Refereed)
    Abstract [en]

    This paper describes a vision and 3D laser based registration approach which utilizes visual features to identify correspondences. Visual features are obtained from the images of a standard color camera, and the depth of these features is determined by interpolating between the scanning points of a 3D laser range scanner, taking into consideration the visual information in the neighbourhood of the respective visual feature. The 3D laser scanner is also used to determine a position covariance estimate of the visual feature. To exploit these covariance estimates, an ICP algorithm based on the Mahalanobis distance is applied. Initial experimental results are presented in a real-world indoor laboratory environment.

  • 17.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    6D scan registration using depth-interpolated local image features (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 2, p. 157-165. Article in journal (Refereed)
    Abstract [en]

    This paper describes a novel registration approach that is based on a combination of visual and 3D range information. To identify correspondences, local visual features obtained from images of a standard color camera are compared, and the depth of matching features (and their position covariance) is determined from the range measurements of a 3D laser scanner. The matched depth-interpolated image features allow us to apply registration with known correspondences. We compare several ICP variants in this paper and suggest an extension that considers the spatial distance between matching features to eliminate false correspondences. Experimental results are presented in both outdoor and indoor environments. In addition to pair-wise registration, we also propose a global registration method that registers all scan poses simultaneously.

  • 18.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Germany.
    Vision based interpolation of 3D laser scans (2006). In: Proceedings of the Third International Conference on Autonomous Robots and Agents, 2006, p. 455-460. Conference paper (Refereed)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern color camera. In this paper we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a color image. The main idea is to use color similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to color or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov Random Fields. The algorithms proposed in this paper are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data. Further, we suggest and evaluate four methods to determine a confidence measure for the accuracy of interpolated range values.
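
    One of the simplest instances of the color-similarity idea is a joint-bilateral-style interpolation, sketched below with assumed parameters (window size and the two bandwidths). The paper itself proposes and compares five interpolation methods, of which this is only representative.

    import numpy as np

    def interpolate_depth(sparse_depth, image, window=7, sigma_px=3.0, sigma_col=20.0):
        """sparse_depth: (H, W) with NaN where no laser return; image: (H, W, 3) RGB."""
        H, W = sparse_depth.shape
        out = np.full((H, W), np.nan)
        r = window // 2
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        w_px = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_px ** 2))  # spatial falloff
        for y in range(r, H - r):
            for x in range(r, W - r):
                patch_d = sparse_depth[y - r:y + r + 1, x - r:x + r + 1]
                valid = ~np.isnan(patch_d)
                if not valid.any():
                    continue  # no laser support in this window
                patch_c = image[y - r:y + r + 1, x - r:x + r + 1].astype(float)
                dc = np.linalg.norm(patch_c - image[y, x].astype(float), axis=-1)
                w = w_px * np.exp(-dc ** 2 / (2 * sigma_col ** 2))  # color similarity
                out[y, x] = np.sum(w[valid] * patch_d[valid]) / np.sum(w[valid])
        return out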

  • 19.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Magnusson, Martin
    Örebro University, Department of Technology.
    Lilienthal, Achim
    Örebro University, Department of Natural Sciences.
    Has something changed here?: Autonomous difference detection for security patrol robots (2007). In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York, NY, USA: IEEE, 2007, p. 3429-3435, article id 4399381. Conference paper (Refereed)
    Abstract [en]

    This paper presents a system for autonomous change detection with a security patrol robot. In an initial step a reference model of the environment is created and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only.

  • 20.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Drive the Drive: From Discrete Motion Plans to Smooth Drivable Trajectories (2014). In: Robotics, E-ISSN 2218-6581, Vol. 3, no 4, p. 400-416. Article in journal (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper is a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. The proposed approach is evaluated in several industrially relevant scenarios and found to be both fast (less than 2 s per vehicle trajectory) and accurate (end-point pose errors below 0.01 m in translation and 0.005 radians in orientation).
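
    As a generic illustration of what smoothing a discrete planner output involves (explicitly not the paper's method, which in addition respects non-holonomic vehicle constraints and produces drivable trajectories), the sketch below pulls interior waypoints toward their neighbors' midpoint while a fidelity term keeps them near the lattice planner's output and the endpoints stay fixed.

    import numpy as np

    def smooth_path(path, alpha=0.4, beta=0.3, iters=100):
        """path: (N, 2) waypoints from a discrete planner; endpoints are preserved."""
        p = path.astype(float)
        for _ in range(iters):
            for i in range(1, len(p) - 1):
                # Pull toward the neighbor midpoint (smoothness) and the original point (fidelity)
                p[i] += alpha * (0.5 * (p[i - 1] + p[i + 1]) - p[i]) + beta * (path[i] - p[i])
        return p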

  • 21.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology. SCANIA AB, Södertälje, Sweden.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Fast, continuous state path smoothing to improve navigation accuracy (2015). In: IEEE International Conference on Robotics and Automation (ICRA), 2015, IEEE Computer Society, 2015, p. 662-669. Conference paper (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper addresses this shortcoming by introducing a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. In real world tests presented in this paper we demonstrate that the proposed approach is fast enough for online use (it computes trajectories faster than they can be driven) and highly accurate. In 100 repetitions we achieve mean end-point pose errors below 0.01 meters in translation and 0.002 radians in orientation. Even the maximum errors are very small: only 0.02 meters in translation and 0.008 radians in orientation.

  • 22.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Triebel, Rudolph
    Department of Computer Science, University of Freiburg, Freiburg, Germany.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Non-iterative Vision-based Interpolation of 3D Laser Scans (2007). In: Autonomous Agents and Robots / [ed] Mukhopadhyay, SC, Gupta, GS, Berlin/Heidelberg, Germany: Springer, 2007, Vol. 76, p. 83-90, article id 4399381. Conference paper (Other academic)
    Abstract [en]

    3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane is typically much lower than the resolution of a modern colour camera. In this chapter we focus on methods to derive a high-resolution depth image from a low-resolution 3D range sensor and a colour image. The main idea is to use colour similarity as an indication of depth similarity, based on the observation that depth discontinuities in the scene often correspond to colour or brightness changes in the camera image. We present five interpolation methods and compare them with an independently proposed method based on Markov random fields. The proposed algorithms are non-iterative and include a parameter-free vision-based interpolation method. In contrast to previous work, we present ground truth evaluation with real world data and analyse both indoor and outdoor data.

  • 23.
    Arain, Muhammad Asif
    et al.
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology. Scania AB, Södertälje, Sweden.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Efficient Measurement Planning for Remote Gas Sensing with Mobile Robots (2015). In: 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington, USA: IEEE, 2015, p. 3428-3434. Conference paper (Refereed)
    Abstract [en]

    The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and surveillance. In this paper we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose a novel method based on convex relaxation for quickly finding an exploration plan that guarantees a complete coverage of the environment. Our method proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. We validate our approach both in simulation and in real environments, thus demonstrating its applicability to real-world problems.
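
    The convex-relaxation idea can be sketched as a relaxed set-cover linear program; the formulation below (using scipy) is illustrative, and the paper's actual formulation and rounding scheme are more elaborate. Binary pose-selection variables are relaxed to [0, 1], the LP selects a fractional cover, and thresholding plus a greedy repair recovers an integer plan that covers every cell.

    import numpy as np
    from scipy.optimize import linprog

    def plan_coverage(cover, threshold=0.5):
        """cover: (num_cells, num_poses) boolean matrix; cover[i, j] is True if
        sensing pose j observes cell i. Returns indices of selected poses."""
        num_cells, num_poses = cover.shape
        # minimize sum(x) subject to cover @ x >= 1 and 0 <= x <= 1 (relaxed set cover)
        res = linprog(c=np.ones(num_poses),
                      A_ub=-cover.astype(float), b_ub=-np.ones(num_cells),
                      bounds=[(0, 1)] * num_poses)
        chosen = set(np.where(res.x > threshold)[0].tolist())
        for i in range(num_cells):  # greedy repair after rounding
            if not any(cover[i, j] for j in chosen):
                chosen.add(int(np.argmax(cover[i])))  # assumes every cell is observable
        return sorted(chosen)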

  • 24.
    Arain, Muhammad Asif
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improving Gas Tomography With Mobile Robots: An Evaluation of Sensing Geometries in Complex Environments (2017). In: 2017 ISOCS/IEEE International Symposium on Olfaction and Electronic Nose (ISOEN 2017) Proceedings, IEEE, 2017, article id 7968895. Conference paper (Refereed)
    Abstract [en]

    An accurate model of gas emissions is of high importance in several real-world applications related to monitoring and surveillance. Gas tomography is a non-intrusive optical method to estimate the spatial distribution of gas concentrations using remote sensors. The choice of sensing geometry, which is the arrangement of sensing positions to perform gas tomography, directly affects the reconstruction quality of the obtained gas distribution maps. In this paper, we present an investigation of criteria for determining suitable sensing geometries for gas tomography. We consider an actuated remote gas sensor installed on a mobile robot, and evaluate a large number of sensing configurations. Experiments in complex settings were conducted using a state-of-the-art CFD-based filament gas dispersal simulator. Our quantitative comparison yields preferred sensing geometries for sensor planning, which allows gas distributions to be reconstructed more accurately.

  • 25.
    Arain, Muhammad Asif
    et al.
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Mobile Robotics and Olfaction (MRO) Lab, Center for Applied Autonomous Sensor Systems (AASS), School of Science and Technology, Örebro University, Örebro, Sweden.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Sniffing out fugitive methane emissions: autonomous remote gas inspection with a mobile robot (2021). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 40, no 4-5, p. 782-814. Article in journal (Refereed)
    Abstract [en]

    Air pollution causes millions of premature deaths every year, and fugitive emissions of, e.g., methane are major causes of global warming. Correspondingly, air pollution monitoring systems are urgently needed. Mobile, autonomous monitoring can provide adaptive and higher spatial resolution compared with traditional monitoring stations and allows fast deployment and operation in adverse environments. We present a mobile robot solution for autonomous gas detection and gas distribution mapping using remote gas sensing. Our "Autonomous Remote Methane Explorer" (ARMEx) is equipped with an actuated spectroscopy-based remote gas sensor, which collects integral gas measurements along up to 30 m long optical beams. State-of-the-art 3D mapping and robot localization allow the precise location of the optical beams to be determined, which then facilitates gas tomography (tomographic reconstruction of local gas distributions from sets of integral gas measurements). To autonomously obtain informative sampling strategies for gas tomography, we reduce the search space for gas inspection missions by defining a sweep of the remote gas sensor over a selectable field of view as a sensing configuration. We describe two different ways to find sequences of sensing configurations that optimize the criteria for gas detection and gas distribution mapping while minimizing the number of measurements and distance traveled. We evaluated an ARMEx prototype deployed in a large, challenging indoor environment with eight gas sources. In comparison with human experts teleoperating the platform from a distant building, the autonomous strategy produced better gas maps with a lower number of sensing configurations and a slightly longer route.

  • 26.
    Arain, Muhammad Asif
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    The Right Direction to Smell: Efficient Sensor Planning Strategies for Robot Assisted Gas Tomography (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), New York, USA: IEEE Robotics and Automation Society, 2016, p. 4275-4281. Conference paper (Refereed)
    Abstract [en]

    Creating an accurate model of gas emissions is an important task in monitoring and surveillance applications. A promising solution for a range of real-world applications are gas-sensitive mobile robots with spectroscopy-based remote sensors that are used to create a tomographic reconstruction of the gas distribution. The quality of these reconstructions depends crucially on the chosen sensing geometry. In this paper we address the problem of sensor planning by investigating sensing geometries that minimize reconstruction errors, and then formulate an optimization algorithm that chooses sensing configurations accordingly. The algorithm decouples sensor planning for single high concentration regions (hotspots) and subsequently fuses the individual solutions to a global solution consisting of sensing poses and the shortest path between them. The proposed algorithm compares favorably to a template matching technique in a simple simulation and in a real-world experiment. In the latter, we also compare the proposed sensor planning strategy to the sensing strategy of a human expert and find indications that the quality of the reconstructed map is higher with the proposed algorithm.

  • 27.
    Arain, Muhammad Asif
    et al.
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Global coverage measurement planning strategies for mobile robots equipped with a remote gas sensor (2015). In: Sensors, E-ISSN 1424-8220, Vol. 15, no 3, p. 6845-6871. Article in journal (Refereed)
    Abstract [en]

    The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions.

  • 28.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Badica, Costin
    University of Craiova, Craiova, Romania.
    Comes, Tina
    Karlsruhe Institute of Technology, Karlsruhe, Germany.
    Conrado, Claudine
    Thales Research and Technology, Delft, The Netherlands.
    Evers, Vanessa
    University of Amsterdam, Amsterdam, The Netherlands.
    Groen, Frans
    University of Amsterdam, Amsterdam, The Netherlands.
    Illie, Sorin
    University of Craiova, Craiova, Romania.
    Steen Jensen, Jan
    Danish Emergency Management Agency (DEMA), Birkerød, Denmark.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Milan, Bianca
    DCMR, Delft, The Netherlands.
    Neidhart, Thomas
    Space Applications Services, Zaventem, Belgium.
    Nieuwenhuis, Kees
    Thales Research and Technology, Delft, The Netherlands.
    Pashami, Sepideh
    Örebro University, School of Science and Technology.
    Pavlin, Gregor
    Thales Research and Technology, Delft, The Netherlands.
    Pehrsson, Jan
    Prolog Development Center, Brøndby Copenhagen, Denmark.
    Pinchuk, Rani
    Space Applications and Services, Zaventem, Belgium.
    Scafes, Mihnea
    University of Craiova, Craiova, Romania.
    Schou-Jensen, Leo
    DCMR, Brøndby Copenhagen, Denmark.
    Schultmann, Frank
    Karlsruhe Institute of Technology, Karlsruhe, Germany.
    Wijngaards, Niek
    Thales Research and Technology, Delft, the Netherlands.
    ICT solutions supporting collaborative information acquisition, situation assessment and decision making in contemporary environmental management problems: the DIADEM approach (2011). In: Proceedings of the 25th EnviroInfo Conference "Environmental Informatics", Herzogenrath: Shaker Verlag, 2011, p. 920-931. Conference paper (Refereed)
    Abstract [en]

    This paper presents a framework of ICT solutions developed in the EU research project DIADEM that supports environmental management with an enhanced capacity to assess population exposure and health risks, to alert relevant groups and to organize efficient response. The emphasis is on advanced solutions which are economically feasible and maximally exploit the existing communication, computing and sensing resources. This approach enables efficient situation assessment in complex environmental management problems by exploiting relevant information obtained from citizens via the standard communication infrastructure as well as heterogeneous data acquired through dedicated sensing systems. This is achieved through a combination of (i) advanced approaches to gas detection and gas distribution modelling, (ii) a novel service-oriented approach supporting seamless integration of human-based and automated reasoning processes in large-scale collaborative sense making processes and (iii) solutions combining Multi-Criteria Decision Analysis, Scenario-Based Reasoning and advanced human-machine interfaces. This paper presents the basic principles of the DIADEM solutions, explains how different techniques are combined into a coherent decision support system and briefly discusses evaluation principles and activities in the DIADEM project.

  • 29.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Time-dependent gas distribution modelling (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, p. 157-170. Article in journal (Refereed)
    Abstract [en]

    Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used for e.g. air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time-constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot, for example, adequately model evolving gas plumes or major changes in gas dispersion due to a sudden change in the environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time-dependency and a relation to a time-scale in generating the gas distribution model either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as on several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative to derive an estimate of the current gas distribution as long as a sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art gas distribution modelling approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM in different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.
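
    The recency weighting can be illustrated with a minimal kernel-regression sketch; the exponential recency function and both bandwidths below are assumptions, and TD Kernel DM+V additionally estimates predictive variance, which is omitted here. Each measurement's spatial kernel weight is multiplied by a factor that decays with the measurement's age relative to the prediction time.

    import numpy as np

    def td_kernel_mean(cell_xy, xs, cs, ts, t_pred, sigma=1.0, tau=60.0):
        """Concentration estimate at cell_xy from sample positions xs (N, 2),
        concentrations cs (N,), and sample times ts (N,) with ts <= t_pred."""
        d2 = np.sum((xs - np.asarray(cell_xy)) ** 2, axis=1)
        w_space = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian spatial kernel
        w_time = np.exp(-(t_pred - ts) / tau)     # recency weight: newer samples count more
        w = w_space * w_time
        return np.sum(w * cs) / (np.sum(w) + 1e-12)  # weighted mean concentration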

  • 30.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Approaches to Time-Dependent Gas Distribution Modelling (2015). In: 2015 European Conference on Mobile Robots (ECMR), New York: IEEE conference proceedings, 2015, article id 7324215. Conference paper (Refereed)
    Abstract [en]

    Mobile robot olfaction solutions for gas distribution modelling offer a number of advantages, among them autonomous monitoring in different environments, the mobility to select sampling locations, and the ability to cooperate with other systems. However, most data-driven, statistical gas distribution modelling approaches assume that the gas distribution is generated by a time-invariant random process. Such time-invariant approaches cannot model developing plumes or fundamental changes in the gas distribution well. In this paper, we discuss approaches that explicitly consider the measurement time, either by sub-sampling according to a given time-scale or by introducing a recency weight that relates measurement and prediction time. We evaluate the performance of these time-dependent approaches in simulation and in real-world experiments using mobile robots. The results demonstrate that in dynamic scenarios improved gas distribution models can be obtained with time-dependent approaches.

  • 31.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Pashami, Sepideh
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    TD Kernel DM+V: time-dependent statistical gas distribution modelling on simulated measurements2011In: Olfaction and Electronic Nose: proceedings of the 14th International Symposium on Olfaction and Electronic Nose (ISOEN) / [ed] Perena Gouma, Springer Science+Business Media B.V., 2011, p. 281-282Conference paper (Refereed)
    Abstract [en]

    To study gas dispersion, several statistical gas distribution modelling approaches have been proposed recently. A crucial assumption in these approaches is that gas distribution models are learned from measurements that are generated by a time-invariant random process. While a time-independent random process can capture certain fluctuations in the gas distribution, more accurate models can be obtained by modelling changes in the random process over time. In this work we propose a recency function with a time-scale parameter that relates the age of measurements to their validity for building the gas distribution model. The parameters of the recency function define a time-scale and can be learned. The time-scale represents a compromise between two conflicting requirements for obtaining accurate gas distribution models: using as many measurements as possible and using only very recent measurements. We have studied several recency functions in a time-dependent extension of the Kernel DM+V algorithm (TD Kernel DM+V). Based on real-world experiments and simulations of gas dispersal (presented in this paper), we demonstrate that TD Kernel DM+V improves the obtained gas distribution models in dynamic situations. This represents an important step towards statistical modelling of evolving gas distributions.

    Download full text (pdf)
    fulltext
  • 32.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Reggente, Matteo
    Örebro University, School of Science and Technology.
    Stachniss, Cyrill
    University of Freiburg, Freiburg, Germany.
    Plagemann, Christian
    Stanford University, Stanford CA, USA.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Statistical gas distribution modeling using kernel methods2011In: Intelligent systems for machine olfaction: tools and methodologies / [ed] E. L. Hines and M. S. Leeson, IGI Global, 2011, 1, p. 153-179Chapter in book (Refereed)
    Abstract [en]

    Gas distribution models can provide comprehensive information about a large number of gas concentration measurements, highlighting, for example, areas of unusual gas accumulation. They can also help to locate gas sources and to plan where future measurements should be carried out. Current physical modeling methods, however, are computationally expensive and not applicable for real-world scenarios with real-time and high-resolution demands. This chapter reviews kernel methods that statistically model gas distribution. Gas measurements are treated as random variables, and the gas distribution is predicted at unseen locations using either a kernel density estimation or a kernel regression approach. The resulting statistical models do not make strong assumptions about the functional form of the gas distribution, such as the number or locations of gas sources, for example. The major focus of this chapter is on two-dimensional models that provide estimates for the means and predictive variances of the distribution. Furthermore, three extensions to the presented kernel density estimation algorithm are described, which make it possible to include wind information, to extend the model to three dimensions, and to reflect time-dependent changes of the random process that generates the gas distribution measurements. All methods are discussed based on experimental validation using real sensor data.
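    For readers who want the flavour of the kernel approach in code, here is a rough Python sketch of kernel-weighted mean and variance maps. This is not the authors' implementation; the confidence estimate and other components of the full Kernel DM+V algorithm are deliberately omitted:

        import numpy as np

        def kernel_maps(positions, readings, grid, sigma=0.5, eps=1e-12):
            """Simplified kernel regression for gas distribution mapping.
            positions: (N, 2) sample locations; readings: (N,) concentrations;
            grid: (M, 2) cell centres. Returns mean and variance per cell."""
            d2 = np.sum((grid[:, None, :] - positions[None, :, :]) ** 2, axis=2)
            w = np.exp(-d2 / (2.0 * sigma ** 2))           # (M, N) kernel weights
            wsum = w.sum(axis=1) + eps
            mean = (w @ readings) / wsum                   # predictive mean map
            var = (w @ readings ** 2) / wsum - mean ** 2   # kernel-weighted variance
            return mean, np.maximum(var, 0.0)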

  • 33.
    Baumanns, Lukas
    et al.
    Technical University Dortmund, Dortmund, Germany.
    Pitta-Pantazi, Demetra
    University of Cyprus, Nicosia, Cyprus.
    Demosthenous, Eleni
    University of Cyprus, Nicosia, Cyprus.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology. TU Munich, Munich, Germany.
    Christou, Constantinos
    University of Cyprus, Nicosia, Cyprus.
    Schindler, Maike
    University of Cologne, Cologne, Germany.
    Pattern-Recognition Processes of First-Grade Students: An Explorative Eye-Tracking Study2024In: International Journal of Science and Mathematics Education, ISSN 1571-0068, E-ISSN 1573-1774Article in journal (Refereed)
    Abstract [en]

    Recognizing patterns is an essential skill in early mathematics education. However, first graders often have difficulties with tasks such as extending patterns of the form ABCABC. Studies show that this pattern-recognition ability is a good predictor of later pre-algebraic skills and mathematical achievement in general on the one hand, and of the development of mathematical difficulties on the other. To be able to foster children's pattern-recognition ability, it is crucial to investigate and understand their pattern-recognition processes early on. However, only a few studies have investigated the processes used to recognize patterns and how these processes are adapted to different patterns. These studies used external observations or relied on children's self-reports, yet young students often lack the ability to properly report their strategies. This paper presents the results of an empirical study using eye-tracking technology to investigate the pattern-recognition processes of 22 first-grade students. In particular, we investigated students with and without the risk of developing mathematical difficulties. The analyses of the students' eye movements reveal that the students used four different processes to recognize patterns, a finding that refines knowledge about pattern-recognition processes from previous research. In addition, we found that for patterns with different units of repeat (i.e. ABABAB versus ABCABCABC), the pattern-recognition processes used differed significantly for students at risk of developing mathematical difficulties, but not for students without such risk. Our study contributes to a better understanding of the pattern-recognition processes of first-grade students, laying the foundation for enhanced, targeted support, especially for students at risk of developing mathematical difficulties.

  • 34.
    Blanco, Jose Luis
    et al.
    University of Màlaga, Màlaga, Spain.
    Monroy, Javier G.
    University of Màlaga, Màlaga, Spain.
    Gonzalez-Jimenez, Javier
    University of Màlaga, Màlaga, Spain.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A Kalman Filter Based Approach To Probabilistic Gas Distribution Mapping2013Conference paper (Refereed)
    Abstract [en]

    Building a model of gas concentrations has important industrial and environmental applications, and mobile robots, on their own or in cooperation with stationary sensors, play an important role in this task. Since an exact analytical description of turbulent flow remains an intractable problem, we propose an approximate approach which not only estimates the concentrations but also their variances for each location. Our point of view is that of sequential Bayesian estimation given a lattice of 2D cells treated as hidden variables. We first discuss how a simple Kalman filter provides a solution to the estimation problem. To overcome the quadratic computational complexity in the mapped area exhibited by a straightforward application of Kalman filtering, we introduce a sparse implementation which runs in constant time. Experimental results for a real robot validate the proposed method.
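    To make the sequential-estimation viewpoint concrete, here is a toy per-cell Kalman update in Python. Keeping only per-cell variances (dropping all cross-covariances) is a simplification chosen for brevity; it is an extreme version of the sparsity idea, not the paper's actual sparse filter:

        import numpy as np

        class GasGridKF:
            """Per-cell Kalman filter over a 2D lattice of gas concentrations."""
            def __init__(self, shape, prior_var=1.0, process_noise=1e-3, meas_noise=0.05):
                self.mean = np.zeros(shape)
                self.var = np.full(shape, prior_var)
                self.q = process_noise   # models drift of the gas field over time
                self.r = meas_noise      # sensor noise variance

            def predict(self):
                self.var += self.q       # uncertainty grows between measurements

            def update(self, cell, z):
                i, j = cell
                k = self.var[i, j] / (self.var[i, j] + self.r)   # Kalman gain
                self.mean[i, j] += k * (z - self.mean[i, j])
                self.var[i, j] *= (1.0 - k)                      # O(1) per measurement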

    Download full text (pdf)
    fulltext
  • 35.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Halmstad University, Halmstad, Sweden.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Halmstad, Sweden.
    An autonomous robotic system for load transportation2009In: 2009 IEEE Conference on Emerging Technologies & Factory Automation (EFTA 2009), New York: IEEE conference proceedings, 2009, p. 1563-1566Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of the objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

    Download full text (pdf)
    FULLTEXT01
  • 36.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Åstrand, Björn
    Halmstad University.
    Rögnvaldsson, Thorsteinn
    Halmstad University, Sweden.
    MALTA: a system of multiple autonomous trucks for load transportation2009In: Proceedings of the 4th European conference on mobile robots (ECMR) / [ed] Ivan Petrovic, Achim J. Lilienthal, 2009, p. 93-98Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic material handling system. The goal of the system is to extend the functionalities of traditional AGVs to operate in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of the objects to handle are unknown, which requires that the system be able to detect and track object positions at runtime. Another requirement of the system is to be able to generate trajectories dynamically, which is uncommon in industrial AGV systems.

    Download full text (pdf)
    Fulltext
  • 37.
    Bunz, Elsa
    et al.
    Örebro University, Örebro, Sweden.
    Chadalavada, Ravi Teja
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Spatial Augmented Reality and Eye Tracking for Evaluating Human Robot Interaction2016In: Proceedings of RO-MAN 2016 Workshop: Workshop on Communicating Intentions in Human-Robot Interaction, 2016Conference paper (Refereed)
    Abstract [en]

    Freely moving autonomous mobile robots may lead to anxiety when operating in workspaces shared with humans. Previous works have given evidence that communicating intentions using Spatial Augmented Reality (SAR) in the shared workspace will make humans more comfortable in the vicinity of robots. In this work, we conducted experiments with the robot projecting various patterns in order to convey its movement intentions during encounters with humans. In these experiments, the trajectories of both humans and robot were recorded with a laser scanner. Human test subjects were also equipped with an eye tracker. We analyzed the eye gaze patterns and the laser scan tracking data in order to understand how the robot's intention communication affects the human movement behavior. Furthermore, we used retrospective recall interviews to aid in identifying the reasons that lead to behavior changes.

    Download full text (pdf)
    fulltext
  • 38.
    Burgues, Javier
    et al.
    Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology, Barcelona, Spain; Department of Electronics and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Marco, Santiago
    Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology, Barcelona, Spain; Department of Electronics and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain.
    Gas Distribution Mapping and Source Localization Using a 3D Grid of Metal Oxide Semiconductor Sensors2020In: Sensors and actuators. B, Chemical, ISSN 0925-4005, E-ISSN 1873-3077, Vol. 304, article id 127309Article in journal (Refereed)
    Abstract [en]

    The difficulty of obtaining ground truth (i.e. empirical evidence) about how a gas disperses in an environment is one of the major hurdles in the field of mobile robotic olfaction (MRO), impairing our ability to develop efficient gas source localization strategies and to validate gas distribution maps produced by autonomous mobile robots. Previous ground truth measurements of gas dispersion have been mostly based on expensive tracer optical methods or 2D chemical sensor grids deployed only at ground level. With the ever-increasing trend towards gas-sensitive aerial robots, 3D measurements of gas dispersion become necessary to characterize the environment these platforms can explore. This paper presents ten different experiments performed with a 3D grid of 27 metal oxide semiconductor (MOX) sensors to visualize the temporal evolution of gas distribution produced by an evaporating ethanol source placed at different locations in an office room, including variations in height, release rate and air flow. We also studied which features of the MOX sensor signals are optimal for predicting the source location, considering different lengths of the measurement window. We found strongly time-varying and counter-intuitive gas distribution patterns that disprove some assumptions commonly held in the MRO field, such as that heavy gases disperse along ground level. Correspondingly, ground-level gas distributions were rarely useful for localizing the gas source, and elevated measurements were much more informative. We make the dataset and the code publicly available to enable the community to develop, validate, and compare new approaches related to gas sensing in complex environments.

  • 39.
    Burgués, Javier
    et al.
    Institute for Bioengineering of Catalonia (IBEC),The Barcelona Institute of Science and Technology, Baldiri Reixac, Barcelona, Spain; Department of Electronics and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Marco, Santiago
    Institute for Bioengineering of Catalonia (IBEC),The Barcelona Institute of Science and Technology, Baldiri Reixac, Barcelona, Spain; Department of Electronics and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain.
    Smelling Nano Aerial Vehicle for Gas Source Localization and Mapping2019In: Sensors, E-ISSN 1424-8220, Vol. 19, no 3, article id 478Article in journal (Refereed)
    Abstract [en]

    This paper describes the development and validation of the currently smallest aerial platform with olfaction capabilities. The developed Smelling Nano Aerial Vehicle (SNAV) is based on a lightweight commercial nano-quadcopter (27 g) equipped with a custom gas sensing board that can host up to two in situ metal oxide semiconductor (MOX) gas sensors. Due to its small form factor, the SNAV is not a hazard for humans, enabling its use in public areas or inside buildings. It can autonomously carry out gas sensing missions in hazardous environments inaccessible to terrestrial robots and bigger drones, for example searching for victims and hazardous gas leaks inside pockets that form within the wreckage of collapsed buildings in the aftermath of an earthquake or explosion. The first contribution of this work is assessing the impact of the nano-propellers on the MOX sensor signals at different distances to a gas source. A second contribution is adapting the 'bout' detection algorithm proposed by Schmuker et al. (2016), which extracts specific features from the derivative of the MOX sensor response, for real-time operation. The third and main contribution is the experimental validation of the SNAV for gas source localization (GSL) and mapping in a large indoor environment (160 m2) with a gas source placed in positions challenging for the drone, for example hidden in the ceiling of the room or inside a power outlet box. Two GSL strategies are compared, one based on the instantaneous gas sensor response and the other based on the bout frequency. From the measurements collected (in motion) along a predefined sweeping path we built (in less than 3 min) a 3D map of the gas distribution and identified the most likely source location. Using the bout frequency yielded on average a higher localization accuracy than using the instantaneous gas sensor response (1.38 m versus 2.05 m error); however, accurate tuning of an additional parameter (the noise threshold) is required in the former case. The main conclusion of this paper is that a nano-drone has the potential to perform gas sensing tasks in complex environments.
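    A simplified Python sketch of the bout-counting idea referred to above. The exponential smoothing, the threshold handling and all parameter values are assumptions made for illustration; this is not the authors' real-time adaptation of Schmuker et al.'s algorithm:

        import numpy as np

        def bouts_per_second(signal, dt, tau=0.3, noise_thresh=0.01):
            """Count positive excursions ('bouts') of the smoothed derivative
            of a MOX sensor signal that exceed a noise threshold."""
            deriv = np.gradient(signal, dt)
            smooth = np.zeros_like(deriv)
            alpha = dt / (tau + dt)                     # exponential smoothing factor
            for i in range(1, len(deriv)):
                smooth[i] = smooth[i - 1] + alpha * (deriv[i] - smooth[i - 1])
            rising = np.flatnonzero((smooth[:-1] <= 0) & (smooth[1:] > 0))
            falling = np.flatnonzero((smooth[:-1] > 0) & (smooth[1:] <= 0))
            count = 0
            for s in rising:                            # one candidate bout per rising crossing
                e = falling[falling > s]
                segment = smooth[s:e[0] + 1] if e.size else smooth[s:]
                if segment.max() > noise_thresh:        # keep only bouts above the noise floor
                    count += 1
            return count / (len(signal) * dt)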

    Download full text (pdf)
    Smelling Nano Aerial Vehicle for Gas Source Localization and Mapping
  • 40.
    Burgués, Javier
    et al.
    Department of Electronic and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain; Institute for Bioengineering of Catalonia (IBEC), Barcelona, Spain.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Marco, Santiago
    Department of Electronic and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain; Institute for Bioengineering of Catalonia (IBEC), Barcelona, Spain.
    3D Gas Distribution with and without Artificial Airflow: An Experimental Study with a Grid of Metal Oxide Semiconductor Gas Sensors2018In: Proceedings, E-ISSN 2504-3900, Vol. 2, no 13, article id 911Article in journal (Refereed)
    Abstract [en]

    Gas distribution modelling can provide potentially life-saving information when assessing the hazards of gaseous emissions and for localization of explosives and toxic or flammable chemicals. In this work, we deployed a three-dimensional (3D) grid of metal oxide semiconductor (MOX) gas sensors in an office room, which allows for novel insights into the complex patterns of indoor gas dispersal. Twelve independent experiments were carried out to better understand the dispersion patterns of a single gas source placed at different locations in the room, including variations in height, release rate and air flow profiles. This dataset is denser and richer than what is currently available, i.e., 2D datasets recorded in wind tunnels. We make it publicly available to enable the community to develop, validate, and compare new approaches related to gas sensing in complex environments.

    Download full text (pdf)
    3D Gas Distribution with and without Artificial Airflow: An Experimental Study with a Grid of Metal Oxide Semiconductor Gas Sensors
  • 41.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
    Compressed Voxel-Based Mapping Using Unsupervised Learning2017In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15Article in journal (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
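    As an illustration of the PCA side of the comparison, a minimal codec for flattened TSDF blocks in Python (the block size, component count and function names are assumptions; the paper additionally evaluates nonlinear auto-encoder networks):

        import numpy as np

        def fit_pca_basis(blocks, n_components=32):
            """Learn a low-dimensional basis from flattened TSDF blocks,
            e.g. 8x8x8 blocks stacked as (N, 512) row vectors."""
            mean = blocks.mean(axis=0)
            _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
            return mean, vt[:n_components]              # principal directions as rows

        def compress(block, mean, basis):
            return basis @ (block - mean)               # low-dimensional code

        def decompress(code, mean, basis):
            return mean + basis.T @ code                # lossy reconstruction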

  • 42.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs2016In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no 2, p. 1148-1155Article in journal (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.
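    For orientation, a sketch of the 3D Harris response computed directly from TSDF gradients (central differences and a plain box filter are simplifying assumptions; the paper compares several ways of obtaining the gradients):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def harris_response_3d(tsdf, k=0.04, win=2):
            """Harris corner response per voxel from the structure tensor of
            the TSDF gradient, box-filtered over a (2*win+1)^3 neighbourhood."""
            gx, gy, gz = np.gradient(tsdf)
            size = 2 * win + 1
            sxx, syy, szz, sxy, sxz, syz = (
                uniform_filter(p, size=size)
                for p in (gx * gx, gy * gy, gz * gz, gx * gy, gx * gz, gy * gz))
            det = (sxx * (syy * szz - syz ** 2)
                   - sxy * (sxy * szz - syz * sxz)
                   + sxz * (sxy * syz - syy * sxz))
            trace = sxx + syy + szz
            return det - k * trace ** 3                 # large values indicate corners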

    Download full text (pdf)
    fulltext
  • 43.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improved local shape feature stability through dense model tracking2013In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, p. 3203-3209Conference paper (Refereed)
    Abstract [en]

    In this work we propose a method to effectively remove noise from depth images obtained with a commodity structured light sensor. The proposed approach fuses data into a consistent frame of reference over time, thus utilizing prior depth measurements and viewpoint information in the noise removal process. The effectiveness of the approach is compared to two state-of-the-art single-frame denoising methods in the context of feature descriptor matching and keypoint detection stability. To make more general statements about the effect of noise removal in these applications, we extend a method for evaluating local image gradient feature descriptors to the domain of 3D shape descriptors. We perform a comparative study of three classes of such descriptors: Normal Aligned Radial Features, Fast Point Feature Histograms and Depth Kernel Descriptors, and evaluate their performance on a real-world industrial application data set. We demonstrate that noise removal enabled by the dense map representation results in major improvements in matching across all classes of descriptors, as well as having a substantial positive impact on keypoint detection reliability.
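    A minimal sketch of the principle behind fusion-based denoising: a running weighted average over co-registered depth frames. The actual method fuses into a dense 3D model and re-renders from the camera viewpoint, so treat this as an illustration under the assumption of pre-aligned frames:

        import numpy as np

        class FusedDepth:
            """Running weighted average of aligned depth maps; zeros mark
            missing measurements in an incoming frame."""
            def __init__(self, shape):
                self.depth = np.zeros(shape)
                self.weight = np.zeros(shape)

            def fuse(self, frame, w=1.0):
                valid = frame > 0
                total = self.weight[valid] + w
                self.depth[valid] = (self.weight[valid] * self.depth[valid]
                                     + w * frame[valid]) / total
                self.weight[valid] = total
                return self.depth                       # denoised estimate so far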

  • 44.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SDF tracker: a parallel algorithm for on-line pose estimation and scene reconstruction from depth images2013In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, p. 3671-3676Conference paper (Refereed)
    Abstract [en]

    Ego-motion estimation and environment mapping are two recurring problems in the field of robotics. In this work we propose a simple on-line method for tracking the pose of a depth camera in six degrees of freedom and simultaneously maintaining an updated 3D map, represented as a truncated signed distance function. The distance function representation implicitly encodes surfaces in 3D space and is used directly to define a cost function for accurate registration of new data. The proposed algorithm is highly parallel and achieves good accuracy compared to state-of-the-art methods. It is suitable for reconstructing single household items, workspace environments and small rooms at near real-time rates, making it practical for use on modern CPU hardware.
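    To illustrate how a signed distance field can serve directly as a registration cost, a Python sketch (nearest-voxel lookup and a plain squared norm are simplifications; the names and parameters are assumptions, not the paper's implementation):

        import numpy as np

        def sdf_cost(points, R, t, sdf, voxel_size, origin):
            """Sum of squared distance-field values at the transformed points;
            the field is zero exactly on the reconstructed surface, so
            minimizing this over (R, t) registers new depth data to the map."""
            p = points @ R.T + t                            # apply candidate pose
            idx = np.round((p - origin) / voxel_size).astype(int)
            idx = np.clip(idx, 0, np.array(sdf.shape) - 1)  # stay inside the volume
            d = sdf[idx[:, 0], idx[:, 1], idx[:, 2]]
            return np.sum(d ** 2)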

  • 45.
    Canelhas, Daniel Ricão
    et al.
    Univrses AB, Strängnäs, Sweden.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry2018In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA),, IEEE Computer Society, 2018, p. 6337-6343Conference paper (Refereed)
    Abstract [en]

    Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature we often find that authors use either trilinear interpolation or nearest neighbors, and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular, we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as might be expected, with voxels containing as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.
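    For reference, the standard 8-sample trilinear interpolation that serves as the baseline here, in Python (boundary handling is omitted for brevity):

        import numpy as np

        def trilinear(vol, p):
            """Interpolate a voxel volume at continuous coordinate p = (x, y, z)
            using its 8 surrounding samples; p must lie strictly inside vol."""
            x0, y0, z0 = np.floor(p).astype(int)
            fx, fy, fz = p - np.floor(p)
            c = vol[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]    # the 8 corner samples
            c = c[0] * (1 - fx) + c[1] * fx             # collapse along x
            c = c[0] * (1 - fy) + c[1] * fy             # collapse along y
            return c[0] * (1 - fz) + c[1] * fz          # collapse along z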

    Download full text (pdf)
    A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry
  • 46.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Empirical evaluation of human trust in an expressive mobile robot2016In: Proceedings of RSS Workshop "Social Trust in Autonomous Robots 2016", 2016Conference paper (Refereed)
    Abstract [en]

    A mobile robot communicating its intentions using Spatial Augmented Reality (SAR) on the shared floor space makes humans feel safer and more comfortable around the robot. Our previous work [1] and several other works established this fact. We built upon that work by adding adaptable information and control to the SAR module. We conducted an empirical study of how a mobile robot builds trust in humans by communicating its intentions, and we present a novel way of evaluating that trust. We show experimentally that adaptation in the SAR module leads to more natural interaction, and the new evaluation system helped us discover that comfort levels in human-robot interactions approached those of human-human interactions.

    Download full text (pdf)
    fulltext
  • 47.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    That’s on my Mind!: Robot to Human Intention Communication through on-board Projection on Shared Floor Space2015In: 2015 European Conference on Mobile Robots (ECMR), New York: IEEE conference proceedings , 2015Conference paper (Refereed)
    Abstract [en]

    The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional AGVs, which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. Here we address this issue and propose on-board intention projection on the shared floor space for communication from robot to human. We present a research prototype of a robotic fork-lift equipped with a LED projector to visualize internal state information and intents. We describe the projector system and discuss calibration issues. The robot’s ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. The results show that already adding simple information, such as the trajectory and the space to be occupied by the robot in the near future, is able to effectively improve human response to the robot.

  • 48.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany, Cologne, Gemany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Implicit intention transference using eye-tracking glasses for improved safety in human-robot interaction2019Conference paper (Refereed)
    Abstract [en]

    Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. We propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, an implicit intention transference system was developed and implemented. The robot was given access to human eye gaze data and responded to it in real time through spatial augmented reality projections on the shared floor space; the robot could also adapt its path. This allows proactive safety approaches in HRI, for example by attempting to get the human's attention when they are in the vicinity of a moving robot. A study was conducted with workers at an industrial warehouse. The time taken to understand the behavior of the system was recorded. Electrodermal activity and pupil diameter were recorded to measure the increase in stress and cognitive load while interacting with an autonomous system, using these measurements as a proxy to quantify trust in autonomous systems.

    Download full text (pdf)
    Implicit intention transference using eye-tracking glasses for improved safety in human-robot interaction
  • 49.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Örebro University, School of Science and Technology.
    Palm, Rainer
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses2018In: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden / [ed] Case K. &Thorvald P., Amsterdam, Netherlands: IOS Press, 2018, p. 253-258Conference paper (Refereed)
    Abstract [en]

    Robots in human co-habited environments need human-aware task and motion planning, ideally responding to people’s motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. This paper investigates the possibility of human-to-robot implicit intention transference solely from eye gaze data.  We present experiments in which humans wearing eye-tracking glasses encountered a small forklift truck under various conditions. We evaluate how the observed eye gaze patterns of the participants related to their navigation decisions. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human eye gaze for early obstacle avoidance.

    Download full text (pdf)
    Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses
  • 50.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany.
    Palm, Rainer
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction2020In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830Article in journal (Refereed)
    Abstract [en]

    Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human gaze for early obstacle avoidance.
