Publications (10 of 206)
Fan, H., Hernandez Bennetts, V., Schaffernicht, E. & Lilienthal, A. (2018). A cluster analysis approach based on exploiting density peaks for gas discrimination with electronic noses in open environments. Sensors and actuators. B, Chemical, 259, 183-203
A cluster analysis approach based on exploiting density peaks for gas discrimination with electronic noses in open environments
2018 (English). In: Sensors and actuators. B, Chemical, ISSN 0925-4005, E-ISSN 1873-3077, Vol. 259, p. 183-203. Article in journal (Refereed). Published.
Abstract [en]

Gas discrimination in open and uncontrolled environments based on smart low-cost electro-chemical sensor arrays (e-noses) is of great interest in several applications, such as exploration of hazardous areas, environmental monitoring, and industrial surveillance. Gas discrimination for e-noses is usually based on supervised pattern recognition techniques. However, the difficulty and high cost of obtaining extensive and representative labeled training data limits the applicability of supervised learning. Thus, to deal with the lack of information regarding target substances and unknown interferents, unsupervised gas discrimination is an advantageous solution. In this work, we present a cluster-based approach that can infer the number of different chemical compounds, and provide a probabilistic representation of the class labels for the acquired measurements in a given environment. Our approach is validated with the samples collected in indoor and outdoor environments using a mobile robot equipped with an array of commercial metal oxide sensors. Additional validation is carried out using a multi-compound data set collected with stationary sensor arrays inside a wind tunnel under various airflow conditions. The results show that accurate class separation can be achieved with a low sensitivity to the selection of the only free parameter, namely the neighborhood size, which is used for density estimation in the clustering process.
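To make the clustering idea concrete, the density-peaks principle can be sketched in generic form. This is a minimal illustration of Rodriguez–Laio-style density-peaks clustering with the neighborhood size k as the only free parameter, not the paper's implementation; the function name, the cutoff heuristic, and the hard-coded two-cluster selection are illustrative assumptions:

```python
import numpy as np

def density_peaks_cluster(X, k=10):
    """Toy density-peaks clustering.

    For each point: local density rho = number of neighbors within a
    cutoff derived from the mean k-th nearest-neighbor distance, and
    delta = distance to the nearest point of higher density.  Cluster
    centers are points where both rho and delta are large; all other
    points inherit the label of their nearest higher-density neighbor.
    """
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # cutoff distance from the neighborhood size k (the free parameter)
    dc = np.mean(np.sort(D, axis=1)[:, k])
    rho = (D < dc).sum(axis=1) - 1          # local density (exclude self)
    delta = np.zeros(n)
    nearest_higher = np.full(n, -1)
    order = np.argsort(-rho)                # descending density
    for i, p in enumerate(order):
        if i == 0:
            delta[p] = D[p].max()           # densest point: conventional value
            continue
        higher = order[:i]
        j = higher[np.argmin(D[p, higher])]
        delta[p] = D[p, j]
        nearest_higher[p] = j
    # pick centers where rho * delta is largest (two clusters assumed here)
    centers = np.argsort(-(rho * delta))[:2]
    labels = np.full(n, -1)
    for c_idx, c in enumerate(centers):
        labels[c] = c_idx
    for p in order:                         # assign in density order
        if labels[p] == -1:
            labels[p] = labels[nearest_higher[p]]
    return labels
```

Points with both high local density and a large distance to any denser point stand out as cluster centers, which is what makes the approach attractive when the number of compounds is unknown; in practice the number of centers would be inferred from the rho·delta ranking rather than fixed.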

Place, publisher, year, edition, pages
Amsterdam, Netherlands: Elsevier, 2018
Keywords
Gas discrimination, environmental monitoring, metal oxide sensors, cluster analysis, unsupervised learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-63468 (URN); 10.1016/j.snb.2017.10.063 (DOI); 000424877600023; 2-s2.0-85038032167 (Scopus ID)
Projects
SmokBot
Funder
EU, Horizon 2020, 645101
Available from: 2017-12-19. Created: 2017-12-19. Last updated: 2018-09-17. Bibliographically approved.
Fan, H., Kucner, T. P., Magnusson, M., Li, T. & Lilienthal, A. (2018). A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment. IEEE transactions on intelligent transportation systems (Print), 19(9), 2977-2993
A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment
2018 (English). In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 9, p. 2977-2993. Article in journal (Refereed). Published.
Abstract [en]

Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in the connection of the idea of dynamic occupancy with the concepts of the phase space density in gas kinetics and the PHD in multiple target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, but has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model considering occlusions are proposed to further improve the filtering efficiency. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states and an ordinary particle filter with a time-varying number of particles in a continuous state space to process the static part and the dynamic part, respectively. This filter has a linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Mobile robot, occupancy filtering, PHD filter, BOF, particle filter, random finite set
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-63981 (URN); 10.1109/TITS.2017.2770152 (DOI); 000444611400021; 2-s2.0-85038368968 (Scopus ID)
Note

Funding Agencies:

EU Project SPENCER, 600877

Marie Sklodowska-Curie Individual Fellowship, 709267

National Twelfth Five-Year Plan for Science and Technology Support of China, 2014BAK12B03

Available from: 2018-01-09. Created: 2018-01-09. Last updated: 2018-09-28. Bibliographically approved.
Mielle, M., Magnusson, M. & Lilienthal, A. J. (2018). A method to segment maps from different modalities using free space layout MAORIS: map of ripples segmentation. Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 4993-4999). IEEE Computer Society
A method to segment maps from different modalities using free space layout MAORIS: map of ripples segmentation
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

How to divide floor plans or navigation maps into semantic representations, such as rooms and corridors, is an important research question in fields such as human-robot interaction, place categorization, and semantic mapping. While most works focus on segmenting robot-built maps, those are not the only types of map a robot, or its user, can use. We present a method for segmenting maps from different modalities, focusing on robot-built maps and hand-drawn sketch maps, and show better results than the state of the art for both types.

Our method segments the map by convolving the distance image of the map with a circular kernel and grouping pixels of the same value. Segmentation is done by detecting ripple-like patterns where pixel values vary quickly, and merging neighboring regions with similar values.

We identify a flaw in the segmentation evaluation metric used in recent works and propose a metric based on Matthews correlation coefficient (MCC). We compare our results to ground-truth segmentations of maps from a publicly available dataset, on which we obtain a better MCC than the state of the art with 0.98 compared to 0.65 for a recent Voronoi-based segmentation method and 0.70 for the DuDe segmentation method.

We also provide a dataset of sketches of an indoor environment, with two possible sets of ground truth segmentations, on which our method obtains an MCC of 0.56 against 0.28 for the Voronoi-based segmentation method and 0.30 for DuDe.
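The Matthews correlation coefficient used as the evaluation metric above can be computed from a binary confusion matrix; a minimal sketch (the paper's per-region matching of segments to ground truth is not reproduced here):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix.

    Ranges from -1 (total disagreement) through 0 (chance level) to +1
    (perfect agreement), and, unlike plain accuracy, stays informative
    under heavy class imbalance.  Returns 0 for degenerate cases where
    the denominator vanishes.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

MCC only approaches 1 when both classes are predicted well, which is why it is a safer segmentation metric than accuracy when region sizes are highly imbalanced.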

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Keywords
map segmentation, free space, layout
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-68421 (URN); 000446394503114
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Funder
EU, Horizon 2020, ICT-23-2014 645101 SmokeBot; Knowledge Foundation, 20140220
Available from: 2018-08-09. Created: 2018-08-09. Last updated: 2018-10-22. Bibliographically approved.
Canelhas, D. R., Stoyanov, T. & Lilienthal, A. J. (2018). A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018 (pp. 6337-6343). IEEE Computer Society
A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry
2018 (English). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 6337-6343. Conference paper, Published paper (Refereed).
Abstract [en]

Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature we often find that authors use either trilinear interpolation or nearest neighbors, and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular, we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as may be expected, with voxel values stored with as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.
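The two extremes of the sampling spectrum studied here — 8-sample trilinear interpolation and 1-sample nearest-neighbor lookup — can be sketched as follows. This is a generic illustration, not the paper's code; the function names are made up, and the query point is assumed to lie inside the volume bounds:

```python
import numpy as np

def trilinear(vol, p):
    """Standard 8-sample trilinear interpolation in a voxel volume."""
    x0, y0, z0 = np.floor(p).astype(int)
    fx, fy, fz = p - np.floor(p)
    c = vol[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]  # the 2x2x2 neighborhood
    c = c[0] * (1 - fx) + c[1] * fx           # collapse along x
    c = c[0] * (1 - fy) + c[1] * fy           # collapse along y
    return c[0] * (1 - fz) + c[1] * fz        # collapse along z

def nearest(vol, p):
    """1-sample nearest-neighbor lookup, the cheap extreme."""
    i, j, k = np.round(p).astype(int)
    return vol[i, j, k]
```

Intermediate options trade off between these extremes: fetching fewer than 8 samples cuts memory traffic per lookup, which is where the reported framerate gains come from.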

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Keywords
Voxels, Compression, Interpolation, TSDF, Visual Odometry
National Category
Robotics; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-67850 (URN); 000446394504116
Conference
IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018
Projects
H2020 ILIAD; H2020 Roblog
Funder
EU, Horizon 2020, 732737
Available from: 2018-07-11. Created: 2018-07-11. Last updated: 2018-10-22. Bibliographically approved.
Chadalavada, R. T., Andreasson, H., Schindler, M., Palm, R. & Lilienthal, A. (2018). Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses. In: Case K. & Thorvald P. (Ed.), Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden. Paper presented at 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, University of Skövde, Sweden, September 11–13, 2018 (pp. 253-258). Amsterdam, Netherlands: IOS Press
Accessing your navigation plans! Human-Robot Intention Transfer using Eye-Tracking Glasses
2018 (English). In: Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, September 11–13, 2018, University of Skövde, Sweden / [ed] Case K. & Thorvald P., Amsterdam, Netherlands: IOS Press, 2018, p. 253-258. Conference paper, Published paper (Refereed).
Abstract [en]

Robots in human co-habited environments need human-aware task and motion planning, ideally responding to people’s motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. This paper investigates the possibility of human-to-robot implicit intention transference solely from eye gaze data. We present experiments in which humans wearing eye-tracking glasses encountered a small forklift truck under various conditions. We evaluate how the observed eye gaze patterns of the participants related to their navigation decisions. Our analysis shows that people primarily gazed at the side of the robot that they ultimately decided to pass by. We discuss implications of these results and relate them to a control approach that uses human eye gaze for early obstacle avoidance.

Place, publisher, year, edition, pages
Amsterdam, Netherlands: IOS Press, 2018
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528
Keywords
Human-Robot Interaction (HRI), Eye-tracking, Eye-Tracking Glasses, Navigation Intent, Implicit Intention Transference, Obstacle avoidance.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70706 (URN); 10.3233/978-1-61499-902-7-253 (DOI); 2-s2.0-85057390000 (Scopus ID); 978-1-61499-901-0 (ISBN); 978-1-61499-902-7 (ISBN)
Conference
16th International Conference on Manufacturing Research, incorporating the 33rd National Conference on Manufacturing Research, University of Skövde, Sweden, September 11–13, 2018
Projects
Action and Intention Recognition (AIR); ILIAD
Available from: 2018-12-12. Created: 2018-12-12. Last updated: 2018-12-18. Bibliographically approved.
Swaminathan, C. S., Kucner, T. P., Magnusson, M., Palmieri, L. & Lilienthal, A. (2018). Down the CLiFF: Flow-Aware Trajectory Planning under Motion Pattern Uncertainty. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 7403-7409). Institute of Electrical and Electronics Engineers (IEEE)
Down the CLiFF: Flow-Aware Trajectory Planning under Motion Pattern Uncertainty
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 7403-7409. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we address the problem of flow-aware trajectory planning in dynamic environments considering flow model uncertainty. Flow-aware planning aims to plan trajectories that adhere to existing flow motion patterns in the environment, with the goal of making robots more efficient, less intrusive, and safer. We use a statistical model called CLiFF-map that can map flow patterns for both continuous media and discrete objects. We propose novel cost and biasing functions for an RRT* planning algorithm, which exploit all the information available in the CLiFF-map model, including uncertainties due to flow variability or partial observability. Qualitatively, a benefit of our approach is that it can be tuned to yield trajectories with different qualities, such as exploratory or cautious, depending on application requirements. Quantitatively, we demonstrate that our approach produces more flow-compliant trajectories compared to two baselines.
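As a rough illustration of what a flow-aware edge cost can look like, the sketch below penalizes planner edges that move against a locally dominant flow direction, down-weighted where the flow model is uncertain. This is a hypothetical stand-in, not the paper's CLiFF-map cost or biasing functions; the function name, the `flow_trust` scalar, and the specific cost form are all assumptions:

```python
import numpy as np

def flow_aware_edge_cost(p_from, p_to, flow_mean, flow_trust, w=1.0):
    """Hypothetical edge cost for a sampling-based planner.

    Combines Euclidean length with a penalty for moving against the
    locally dominant flow direction `flow_mean` (a unit 2D vector).
    `flow_trust` in [0, 1] down-weights the penalty where the flow
    model is uncertain, loosely mirroring the idea of exploiting
    mapped motion patterns together with their uncertainty.
    """
    step = np.asarray(p_to, float) - np.asarray(p_from, float)
    length = np.linalg.norm(step)
    if length == 0:
        return 0.0
    # 0 when perfectly aligned with the flow, 2 when directly opposed
    misalignment = 1.0 - np.dot(step / length, flow_mean)
    return length * (1.0 + w * flow_trust * misalignment)
```

Plugged into the edge-cost term of an RRT*-style planner, such a cost steers solutions toward flow-compliant trajectories while leaving regions with an uncertain flow model essentially cost-neutral.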

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Trajectory, Robots, Planning, Cost function, Uncertainty, Vehicle dynamics, Aerospace electronics
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70143 (URN); 10.1109/IROS.2018.8593905 (DOI)
Conference
31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Projects
ILIAD
Available from: 2018-11-12. Created: 2018-11-12. Last updated: 2019-01-08. Bibliographically approved.
Palm, R. & Lilienthal, A. (2018). Fuzzy logic and control in Human-Robot Systems: geometrical and kinematic considerations. In: IEEE (Ed.), WCCI 2018: 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). Paper presented at FUZZ-IEEE 2018, Rio de Janeiro, Brazil, 8-13 July, 2018 (pp. 827-834). IEEE
Fuzzy logic and control in Human-Robot Systems: geometrical and kinematic considerations
2018 (English). In: WCCI 2018: 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / [ed] IEEE, IEEE, 2018, p. 827-834. Conference paper, Published paper (Refereed).
Abstract [en]

The interaction between humans and mobile robots in shared areas requires adequate control for both humans and robots. The online path planning of the robot, depending on the estimated or intended movement of the person, is crucial for obstacle avoidance and close cooperation between them. The velocity obstacles method and its fuzzification optimize the relationship between the velocities of a robot and a human agent during the interaction. To find the estimated intersection between robot and human when positions/orientations are disturbed by noise, analytical and fuzzified versions are presented. The orientation of a person is estimated by eye tracking, with the help of which the intersection area is calculated. Eye tracking yields clusters of fixations that are condensed into cluster centers by fuzzy time clustering to detect the intention and attention of humans.

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Human-robot interaction, fuzzy control, obstacle avoidance, eye tracking
National Category
Robotics
Research subject
Human-Computer Interaction
Identifiers
urn:nbn:se:oru:diva-68021 (URN); 978-1-5090-6020-7 (ISBN)
Conference
FUZZ-IEEE 2018, Rio de Janeiro, Brazil, 8-13 July, 2018
Available from: 2018-07-23. Created: 2018-07-23. Last updated: 2018-09-04. Bibliographically approved.
Neumann, P. P., Hüllmann, D., Krentel, D., Kluge, M., Kohlhoff, H. & Lilienthal, A. (2018). Gas Tomography Up In The Air! In: Proceedings of the IEEE Sensors 2018. Paper presented at IEEE Sensors 2018, New Delhi, India, 28-31 October, 2018.
Gas Tomography Up In The Air!
2018 (English). In: Proceedings of the IEEE Sensors 2018, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we present an autonomous aerial robot to reconstruct tomographic 2D slices of gas plumes in outdoor environments. Our platform, the so-called Unmanned Aerial Vehicle for Remote Gas Sensing (UAV-REGAS), combines a lightweight Tunable Diode Laser Absorption Spectroscopy (TDLAS) sensor with a 3-axis aerial stabilization gimbal for aiming, mounted on a versatile octocopter. The TDLAS sensor provides integral gas concentration measurements but no information regarding the distance traveled by the laser diode's beam or the distribution of the gas along the optical path. We complement the set-up with a laser rangefinder and apply principles of Computed Tomography (CT) to create a model of the spatial gas distribution from these integral concentration measurements. To allow for a rudimentary ground-truth evaluation of the applied gas tomography algorithm, we set up a unique outdoor test environment based on two 3D ultrasonic anemometers and a distributed array of 10 infrared gas transmitters. We present first results showing the 2D plume reconstruction capabilities of the system under realistic conditions.

Keywords
Aerial robot, gas tomography, plume, TDLAS
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71523 (URN); 978-1-5386-4707-3 (ISBN)
Conference
IEEE Sensors 2018, New Delhi, India, 28-31 October, 2018
Available from: 2019-01-17. Created: 2019-01-17. Last updated: 2019-01-22. Bibliographically approved.
Almqvist, H., Magnusson, M., Kucner, T. P. & Lilienthal, A. (2018). Learning to detect misaligned point clouds. Journal of Field Robotics, 35(5), 662-677
Learning to detect misaligned point clouds
2018 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed). Published.
Abstract [en]

Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible, because no existing matching algorithm provides a reliable way to detect misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
Keywords
perception, mapping, position estimation
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62985 (URN); 10.1002/rob.21768 (DOI); 000437836900002; 2-s2.0-85037622789 (Scopus ID)
Projects
ILIAD; ALLO
Funder
EU, Horizon 2020, 732737; Knowledge Foundation, 20110214
Available from: 2017-12-05. Created: 2017-12-05. Last updated: 2018-07-27. Bibliographically approved.
Hüllmann, D., Paul, N., Kohlhoff, H., Neumann, P. P. & Lilienthal, A. (2018). Measuring rotor speed for wind vector estimation on multirotor aircraft. Paper presented at 34th Danubia Adria Symposium on Advances in Experimental Mechanics, 19-22 September, Trieste, Italy. Materials Today: Proceedings, 5(13), 26703-26708
Measuring rotor speed for wind vector estimation on multirotor aircraft
2018 (English). In: Materials Today: Proceedings, E-ISSN 2214-7853, Vol. 5, no 13, p. 26703-26708. Article in journal (Refereed). Published.
Abstract [en]

For several applications involving multirotor aircraft, it is crucial to know both the direction and speed of the ambient wind. In this paper, an approach to wind vector estimation based on an equilibrium of the principal forces acting on the aircraft is shown. As the thrust force generated by the rotors depends on their rotational speed, a sensor to measure this quantity is required. Two concepts for such a sensor are presented: One is based on tapping the signal carrying the speed setpoint for the motor controllers, the other one uses phototransistors placed underneath the rotor blades. While some complications were encountered with the first approach, the second yields accurate measurement data. This is shown by an experiment comparing the proposed speed sensor to a commercial non-contact tachometer.
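The force-equilibrium idea behind the wind estimate can be illustrated with a simple steady-state model: in hover, the horizontal thrust component m·g·tan(tilt) balances aerodynamic drag ½·ρ·Cd·A·v². The sketch below is a hypothetical simplification, not the paper's estimator (which works from measured rotor speeds); the function name and the drag-area value are made up:

```python
import math

def wind_speed_from_tilt(tilt_rad, mass, rho_air=1.225, cd_area=0.05):
    """Hypothetical steady-state wind-speed estimate for a hovering
    multirotor.

    Assumes equilibrium: the horizontal thrust component m*g*tan(tilt)
    balances aerodynamic drag 0.5*rho*Cd*A*v**2, so the wind speed
    follows from the measured tilt angle.  Cd*A must be identified for
    the specific airframe; the default here is an arbitrary placeholder.
    """
    g = 9.81
    drag = mass * g * math.tan(tilt_rad)    # horizontal thrust = drag
    return math.sqrt(2.0 * drag / (rho_air * cd_area))
```

In the paper's setting, thrust is computed from the measured rotational speed of the rotors rather than assumed from a hover attitude alone, which is one reason an accurate rotor-speed sensor matters.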

Place, publisher, year, edition, pages
Amsterdam, Netherlands: Elsevier, 2018
Keywords
Rotor speed, tachometer, UAV, wind vector estimation
National Category
Robotics; Signal Processing
Research subject
Mechanical Engineering; Electrical Engineering
Identifiers
urn:nbn:se:oru:diva-71526 (URN); 10.1016/j.matpr.2018.08.139 (DOI)
Conference
34th Danubia Adria Symposium on Advances in Experimental Mechanics, 19-22 September, Trieste, Italy
Available from: 2019-01-17. Created: 2019-01-17. Last updated: 2019-01-17. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326