Publications (10 of 201)
Fan, H., Hernandez Bennetts, V., Schaffernicht, E. & Lilienthal, A. (2018). A cluster analysis approach based on exploiting density peaks for gas discrimination with electronic noses in open environments. Sensors and actuators. B, Chemical, 259, 183-203
2018 (English). In: Sensors and actuators. B, Chemical, ISSN 0925-4005, E-ISSN 1873-3077, Vol. 259, p. 183-203. Article in journal (Refereed). Published
Abstract [en]

Gas discrimination in open and uncontrolled environments based on smart low-cost electro-chemical sensor arrays (e-noses) is of great interest in several applications, such as exploration of hazardous areas, environmental monitoring, and industrial surveillance. Gas discrimination for e-noses is usually based on supervised pattern recognition techniques. However, the difficulty and high cost of obtaining extensive and representative labeled training data limits the applicability of supervised learning. Thus, to deal with the lack of information regarding target substances and unknown interferents, unsupervised gas discrimination is an advantageous solution. In this work, we present a cluster-based approach that can infer the number of different chemical compounds, and provide a probabilistic representation of the class labels for the acquired measurements in a given environment. Our approach is validated with the samples collected in indoor and outdoor environments using a mobile robot equipped with an array of commercial metal oxide sensors. Additional validation is carried out using a multi-compound data set collected with stationary sensor arrays inside a wind tunnel under various airflow conditions. The results show that accurate class separation can be achieved with a low sensitivity to the selection of the only free parameter, namely the neighborhood size, which is used for density estimation in the clustering process.
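To make the clustering idea above concrete, here is a minimal, hypothetical sketch of density-peaks clustering with a k-nearest-neighbour density estimate (in the spirit of Rodriguez and Laio, 2014). The function name, the density estimate, and the way cluster centres are picked are illustrative assumptions, not the authors' exact algorithm; in particular, the number of clusters is fixed here for simplicity, whereas the paper infers the number of compounds from the data. The only free parameter of the density estimate is the neighbourhood size k, mirroring the abstract.

```python
# Hypothetical sketch of density-peaks clustering with a kNN density estimate;
# an illustration of the general idea only, not the algorithm from the paper
# (which, among other things, infers the number of clusters automatically).
import numpy as np
from scipy.spatial.distance import cdist

def density_peaks_cluster(X, k=10, n_clusters=3):
    """Cluster the rows of X (n_samples x n_features) around density peaks."""
    d = cdist(X, X)                               # pairwise distances
    knn = np.sort(d, axis=1)[:, 1:k + 1]          # distances to the k nearest neighbours
    rho = 1.0 / (knn.mean(axis=1) + 1e-12)        # kNN density estimate

    order = np.argsort(-rho)                      # points sorted by decreasing density
    delta = np.zeros(len(X))                      # distance to the nearest denser point
    nearest_denser = np.full(len(X), -1)
    delta[order[0]] = d[order[0]].max()
    for i, idx in enumerate(order[1:], start=1):
        denser = order[:i]
        j = denser[np.argmin(d[idx, denser])]
        delta[idx], nearest_denser[idx] = d[idx, j], j

    centres = np.argsort(-(rho * delta))[:n_clusters]   # high density AND far from denser points
    labels = np.full(len(X), -1)
    labels[centres] = np.arange(n_clusters)
    if labels[order[0]] == -1:                    # guard: the densest point must carry a label
        labels[order[0]] = 0
    for idx in order:                             # propagate labels down the density ranking
        if labels[idx] == -1:
            labels[idx] = labels[nearest_denser[idx]]
    return labels
```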

Place, publisher, year, edition, pages
Amsterdam, Netherlands: Elsevier, 2018
Keywords
Gas discrimination, environmental monitoring, metal oxide sensors, cluster analysis, unsupervised learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-63468 (URN), 10.1016/j.snb.2017.10.063 (DOI), 000424877600023 (), 2-s2.0-85038032167 (Scopus ID)
Projects
SmokBot
Funder
EU, Horizon 2020, 645101
Available from: 2017-12-19. Created: 2017-12-19. Last updated: 2018-09-17. Bibliographically approved.
Fan, H., Kucner, T. P., Magnusson, M., Li, T. & Lilienthal, A. (2018). A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment. IEEE transactions on intelligent transportation systems (Print), 19(9), 2977-2993
2018 (English). In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 9, p. 2977-2993. Article in journal (Refereed). Published
Abstract [en]

Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and the sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in connecting the idea of dynamic occupancy with the concepts of phase space density in gas kinetics and the PHD in multiple-target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, but has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model considering occlusions are proposed to further improve the filtering efficiency. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states and an ordinary particle filter with a time-varying number of particles in a continuous state space to process the static part and the dynamic part, respectively. This filter has a linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.
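As a rough, purely illustrative companion to the abstract, the toy sketch below maintains the static/dynamic decomposition the DPHD filter builds on: a log-odds grid for the static part and a set of weighted, velocity-carrying particles for the dynamic part. All names, noise levels, and update rules are assumptions; this is not the DPHD filter or its box-particle implementation.

```python
# Toy illustration of splitting occupancy into a static grid and dynamic
# particles with velocities; NOT the DPHD filter, only the decomposition idea.
import numpy as np

class StaticDynamicOccupancy:
    def __init__(self, shape=(200, 200), cell_size=0.2, n_particles=5000):
        self.cell_size = cell_size
        self.static_logodds = np.zeros(shape)                 # static part (log-odds grid)
        extent = np.array(shape) * cell_size
        self.particles = np.zeros((n_particles, 4))           # dynamic part: [x, y, vx, vy]
        self.particles[:, :2] = np.random.uniform(0, extent, (n_particles, 2))
        self.particles[:, 2:] = np.random.normal(0, 1.0, (n_particles, 2))
        self.weights = np.full(n_particles, 1.0 / n_particles)

    def predict(self, dt):
        """Constant-velocity prediction with process noise for the dynamic part."""
        self.particles[:, :2] += self.particles[:, 2:] * dt
        self.particles[:, 2:] += np.random.normal(0, 0.1, (len(self.particles), 2))

    def update(self, hits):
        """hits: (N, 2) world-frame obstacle detections, assumed to lie inside the grid."""
        cells = (hits / self.cell_size).astype(int)
        np.add.at(self.static_logodds, (cells[:, 0], cells[:, 1]), 0.4)  # static evidence
        # re-weight dynamic particles by distance to the closest detection
        d = np.linalg.norm(self.particles[:, None, :2] - hits[None, :, :], axis=2).min(axis=1)
        self.weights *= np.exp(-0.5 * (d / 0.5) ** 2)
        self.weights /= self.weights.sum() + 1e-12
```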

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-63981 (URN), 10.1109/TITS.2017.2770152 (DOI)
Available from: 2018-01-09. Created: 2018-01-09. Last updated: 2018-09-17. Bibliographically approved.
Mielle, M., Magnusson, M. & Lilienthal, A. J. (2018). A method to segment maps from different modalities using free space layout MAORIS: map of ripples segmentation. Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, May 21 - May 25, 2018.
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

How to divide floor plans or navigation maps into semantic representations, such as rooms and corridors, is an important research question in fields such as human-robot interaction, place categorization, or semantic mapping. While most works focus on segmenting robot-built maps, those are not the only types of map a robot, or its user, can use. We present a method for segmenting maps from different modalities, focusing on robot-built maps and hand-drawn sketch maps, and show better results than the state of the art for both types.

Our method segments the map by doing a convolution between the distance image of the map and a circular kernel, and grouping pixels of the same value. Segmentation is done by detecting ripple-like patterns where pixel values vary quickly, and merging neighboring regions with similar values.

We identify a flaw in the segmentation evaluation metric used in recent works and propose a metric based on Matthews correlation coefficient (MCC). We compare our results to ground-truth segmentations of maps from a publicly available dataset, on which we obtain a better MCC than the state of the art with 0.98 compared to 0.65 for a recent Voronoi-based segmentation method and 0.70 for the DuDe segmentation method.

We also provide a dataset of sketches of an indoor environment, with two possible sets of ground truth segmentations, on which our method obtains an MCC of 0.56 against 0.28 for the Voronoi-based segmentation method and 0.30 for DuDe.
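As an illustration of the free-space "ripple" idea in the second paragraph of the abstract, the sketch below computes the distance image of the free space, convolves it with a circular kernel, and groups pixels of similar value into labelled regions. The kernel radius, the quantisation into value bands, and all names are assumptions for illustration; this is not the reference MAORIS implementation, and the merging of neighbouring regions with similar values described above is omitted.

```python
# Illustrative sketch of the ripple idea: distance image of the free space,
# convolved with a circular kernel, then pixels of similar value grouped into
# regions. Parameters and names are assumptions; not the MAORIS reference code.
import numpy as np
from scipy import ndimage

def ripple_segments(free_space, radius=5, n_bins=20):
    """free_space: 2D bool array, True where the map is free."""
    dist = ndimage.distance_transform_edt(free_space)         # distance image

    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]   # circular (disk) kernel
    disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    smoothed = ndimage.convolve(dist, disk / disk.sum(), mode="nearest")

    # quantise values, then label connected components within each value band
    bands = np.digitize(smoothed, np.linspace(0, smoothed.max() + 1e-9, n_bins))
    segments = np.zeros_like(bands)
    next_id = 1
    for b in np.unique(bands[free_space]):
        mask = (bands == b) & free_space
        lab, n = ndimage.label(mask)
        segments[mask] = lab[mask] + next_id - 1
        next_id += n
    return segments                                           # 0 = occupied, >0 = region id
```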

Keywords
map segmentation, free space, layout
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-68421 (URN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, May 21 - May 25, 2018
Funder
EU, Horizon 2020, ICT-23-2014 645101 SmokeBot
Available from: 2018-08-09. Created: 2018-08-09. Last updated: 2018-08-13. Bibliographically approved.
Canelhas, D. R., Stoyanov, T. & Lilienthal, A. J. (2018). A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21 - 25, 2018.
2018 (English). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature we often find that authors use either trilinear interpolation or nearest neighbors, and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular, we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as may be expected, with voxels containing as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.
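For readers unfamiliar with the two extremes discussed in the abstract, the sketch below contrasts a 1-sample nearest-neighbour lookup with the standard 8-sample trilinear interpolation in a dense TSDF volume; the intermediate 2- to 4-sample schemes surveyed in the paper are not shown. The array layout and names are illustrative assumptions.

```python
# Minimal sketch contrasting nearest-neighbour and 8-sample trilinear
# interpolation in a dense voxel volume (e.g. a TSDF). Illustration only.
import numpy as np

def tsdf_nearest(vol, p):
    """Nearest-neighbour lookup at continuous voxel coordinates p = (x, y, z)."""
    i, j, k = np.round(p).astype(int)
    return vol[i, j, k]

def tsdf_trilinear(vol, p):
    """Standard 8-sample trilinear interpolation at continuous voxel coordinates."""
    i, j, k = np.floor(p).astype(int)
    fx, fy, fz = p - np.array([i, j, k])
    c = vol[i:i + 2, j:j + 2, k:k + 2]            # the 8 surrounding samples
    cx = c[0] * (1 - fx) + c[1] * fx              # interpolate along x -> shape (2, 2)
    cy = cx[0] * (1 - fy) + cx[1] * fy            # interpolate along y -> shape (2,)
    return cy[0] * (1 - fz) + cy[1] * fz          # interpolate along z -> scalar

vol = np.random.randn(32, 32, 32).astype(np.float32)   # dummy stand-in for a TSDF volume
p = np.array([10.3, 5.7, 20.1])
print(tsdf_nearest(vol, p), tsdf_trilinear(vol, p))
```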

Keywords
Voxels, Compression, Interpolation, TSDF, Visual Odometry
National Category
Robotics; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-67850 (URN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21 - 25, 2018
Projects
H2020 ILIAD; H2020 Roblog
Funder
EU, Horizon 2020, 732737
Available from: 2018-07-11. Created: 2018-07-11. Last updated: 2018-08-30. Bibliographically approved.
Palm, R. & Lilienthal, A. (2018). Fuzzy logic and control in Human-Robot Systems: geometrical and kinematic considerations. In: IEEE (Ed.), WCCI 2018: 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). Paper presented at FUZZ-IEEE 2018, Rio de Janeiro, Brazil, 8-13 July, 2018 (pp. 827-834). IEEE
2018 (English). In: WCCI 2018: 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / [ed] IEEE, IEEE, 2018, p. 827-834. Conference paper, Published paper (Refereed)
Abstract [en]

The interaction between humans and mobile robots in shared areas requires adequate control for both humans and robots. The online path planning of the robot, depending on the estimated or intended movement of the person, is crucial for obstacle avoidance and close cooperation between them. The velocity obstacles method and its fuzzification optimize the relationship between the velocities of a robot and a human agent during the interaction. In order to find the estimated intersection between robot and human in the case of positions/orientations disturbed by noise, analytical and fuzzified versions are presented. The orientation of a person is estimated by eye tracking, with the help of which the intersection area is calculated. Eye tracking leads to clusters of fixations that are condensed into cluster centers by fuzzy-time clustering to detect the intention and attention of humans.
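The crisp (non-fuzzy) velocity-obstacle test that the fuzzified version builds on can be written in a few lines: a candidate robot velocity is flagged if the relative velocity points into the collision cone subtended by the human agent, inflated by the combined radius. This is a generic textbook formulation with assumed names and radii, not the authors' fuzzified controller.

```python
# Minimal sketch of the crisp velocity-obstacle test; names and radii are
# illustrative assumptions, not the fuzzified method from the paper.
import numpy as np

def in_velocity_obstacle(p_robot, v_robot, p_human, v_human, r_sum):
    """True if v_robot eventually brings the robot within r_sum of the human."""
    p_rel = np.asarray(p_human) - np.asarray(p_robot)    # vector robot -> human
    v_rel = np.asarray(v_robot) - np.asarray(v_human)    # relative velocity
    dist = np.linalg.norm(p_rel)
    if dist <= r_sum:                                    # already in collision
        return True
    speed = np.linalg.norm(v_rel)
    if speed < 1e-9:                                     # (nearly) zero relative velocity
        return False
    cos_angle = np.dot(p_rel, v_rel) / (dist * speed)
    half_cone = np.arcsin(r_sum / dist)                  # half-angle of the collision cone
    return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= half_cone

# example: a head-on approach is flagged, a sideways velocity is not
print(in_velocity_obstacle([0, 0], [1.0, 0.0], [3, 0], [0, 0], r_sum=0.6))   # True
print(in_velocity_obstacle([0, 0], [0.0, 1.0], [3, 0], [0, 0], r_sum=0.6))   # False
```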

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Human-robot interaction, fuzzy control, obstacle avoidance, eye tracking
National Category
Robotics
Research subject
Human-Computer Interaction
Identifiers
urn:nbn:se:oru:diva-68021 (URN), 978-1-5090-6020-7 (ISBN)
Conference
FUZZ-IEEE 2018, Rio de Janeiro, Brazil, 8-13 July, 2018
Available from: 2018-07-23. Created: 2018-07-23. Last updated: 2018-09-04. Bibliographically approved.
Almqvist, H., Magnusson, M., Kucner, T. P. & Lilienthal, A. (2018). Learning to detect misaligned point clouds. Journal of Field Robotics, 35(5), 662-677
2018 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed). Published
Abstract [en]

Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible, because no matching algorithm available today provides a reliable means of detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.
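As a minimal illustration of the classification setup, the sketch below computes one naive geometric-consistency feature, a trimmed mean nearest-neighbour residual between the two clouds, and thresholds it. The article's best-performing classifiers use NDT-based consistency measures and a learned combination of methods instead; the names, trimming fraction, and threshold here are assumptions.

```python
# Naive geometric-consistency feature for alignment classification; an
# illustration of the setup only, not the NDT-based methods from the article.
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_residual(cloud_a, cloud_b, trim=0.9):
    """Mean of the smallest `trim` fraction of nearest-neighbour distances A -> B."""
    d, _ = cKDTree(cloud_b).query(cloud_a)
    d = np.sort(d)[: int(trim * len(d))]       # trim the largest residuals (non-overlap)
    return d.mean()

def is_misaligned(cloud_a, cloud_b, threshold=0.05):
    """Classify a pair as misaligned if the residual exceeds a (learned) threshold."""
    return mean_nn_residual(cloud_a, cloud_b) > threshold
```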

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
Keywords
perception, mapping, position estimation
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62985 (URN), 10.1002/rob.21768 (DOI), 000437836900002 (), 2-s2.0-85037622789 (Scopus ID)
Projects
ILIAD; ALLO
Funder
EU, Horizon 2020, 732737; Knowledge Foundation, 20110214
Available from: 2017-12-05. Created: 2017-12-05. Last updated: 2018-07-27. Bibliographically approved.
Wiedemann, T., Shutin, D., Hernandez Bennetts, V., Schaffernicht, E. & Lilienthal, A. (2017). Bayesian Gas Source Localization and Exploration with a Multi-Robot System Using Partial Differential Equation Based Modeling. In: 2017 ISOCS/IEEE International Symposium on Olfaction and Electronic Nose (ISOEN 2017): Proceedings. Paper presented at International Symposium on Olfaction and Electronic Nose (ISOEN 2017), Montreal, Canada, May 28-31, 2017 (pp. 122-124).
2017 (English). In: 2017 ISOCS/IEEE International Symposium on Olfaction and Electronic Nose (ISOEN 2017): Proceedings, 2017, p. 122-124. Conference paper, Published paper (Refereed)
Abstract [en]

Here we report on active water sampling devices for underwater chemical sensing robots. Crayfish generate jet-like water currents during food search by waving the flagella of their maxillipeds. The jets generated toward their sides induce an inflow from the surroundings to the jets. Odor sample collection from the surroundings to their olfactory organs is promoted by the generated inflow. Devices that model the jet discharge of crayfish have been developed to investigate the effectiveness of the active chemical sampling. Experimental results are presented to confirm that water samples are drawn to the chemical sensors from the surroundings more rapidly by using the axisymmetric flow field generated by the jet discharge than by a centrosymmetric flow field generated by simple water suction. Results are also presented to show that there is a tradeoff between the angular range of chemical sample collection and the sample collection time.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-60688 (URN), 9781509023936 (ISBN), 9781509023929 (ISBN)
Conference
International Symposium on Olfaction and Electronic Nose (ISOEN 2017), Montreal, Canada, May 28-31, 2017
Available from: 2017-09-08. Created: 2017-09-08. Last updated: 2018-08-06. Bibliographically approved.
Neumann, P. P., Kohlhoff, H., Hüllmann, D., Lilienthal, A. & Kluge, M. (2017). Bringing Mobile Robot Olfaction to the Next Dimension - UAV-based Remote Sensing of Gas Clouds and Source Localization. In: 2017 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2017 IEEE International Conference on Robotics and Automation (ICRA 2017), Singapore, May 29 - June 3, 2017 (pp. 3910-3916). Institute of Electrical and Electronics Engineers (IEEE)
2017 (English). In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 3910-3916. Conference paper, Published paper (Refereed)
Abstract [en]

This paper introduces a novel robotic platform for aerial remote gas sensing. Spectroscopic measurement methods for remote sensing of selected gases lend themselves to use on mini-copters, which offer a number of advantages for inspection and surveillance. No direct contact with the target gas is needed, and thus the influence of the aerial platform on the measured gas plume can be kept to a minimum. This makes it possible to overcome one of the major issues with gas-sensitive mini-copters. On the other hand, remote gas sensors, most prominently Tunable Diode Laser Absorption Spectroscopy (TDLAS) sensors, have been too bulky given the payload and energy restrictions of mini-copters. Here, we introduce and present the Unmanned Aerial Vehicle for Remote Gas Sensing (UAV-REGAS), which combines a novel lightweight TDLAS sensor with a 3-axis aerial stabilization gimbal for aiming on a versatile hexacopter. The proposed system can be deployed in scenarios that cannot be addressed by currently available robots and thus constitutes a significant step forward for the field of Mobile Robot Olfaction (MRO). It enables tomographic reconstruction of gas plumes and localization of gas sources. We also present first results showing the gas sensing and aiming capabilities under realistic conditions.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
IEEE International Conference on Robotics and Automation, ISSN 1050-4729
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64767 (URN), 10.1109/ICRA.2017.7989450 (DOI), 2-s2.0-85027982828 (Scopus ID), 978-1-5090-4633-1 (ISBN), 978-1-5090-4634-8 (ISBN)
Conference
2017 IEEE International Conference on Robotics and Automation (ICRA 2017), Singapore, May 29 - June 3, 2017
Available from: 2018-02-01. Created: 2018-02-01. Last updated: 2018-02-02. Bibliographically approved.
Canelhas, D. R., Schaffernicht, E., Stoyanov, T., Lilienthal, A. & Davison, A. J. (2017). Compressed Voxel-Based Mapping Using Unsupervised Learning. Robotics, 6(3), Article ID 15.
2017 (English). In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15. Article in journal (Refereed). Published
Abstract [en]

In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
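A minimal sketch of the PCA branch of the comparison, under assumed block size and code dimensionality: flattened local TSDF blocks are projected onto a learned low-dimensional basis and reconstructed lossily. The auto-encoder alternative evaluated in the paper is not shown, and all names here are illustrative.

```python
# Sketch of PCA-based compression of flattened TSDF blocks; block size and
# code length are assumptions, and the auto-encoder variant is omitted.
import numpy as np

def fit_pca_basis(blocks, n_components=32):
    """blocks: (N, D) matrix of flattened TSDF blocks (e.g. D = 8*8*8 = 512)."""
    mean = blocks.mean(axis=0)
    _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
    return mean, vt[:n_components]               # rows are principal directions

def compress(block, mean, basis):
    return basis @ (block - mean)                # low-dimensional code

def decompress(code, mean, basis):
    return basis.T @ code + mean                 # lossy reconstruction

blocks = np.random.randn(1000, 512).astype(np.float32)   # stand-in for real TSDF blocks
mean, basis = fit_pca_basis(blocks, n_components=32)
code = compress(blocks[0], mean, basis)
rec = decompress(code, mean, basis)
print(code.shape, np.linalg.norm(rec - blocks[0]))       # code size vs. reconstruction error
```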

Place, publisher, year, edition, pages
Basel, Switzerland: MDPI AG, 2017
Keywords
3D mapping, TSDF, compression, dictionary learning, auto-encoder, denoising
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-64420 (URN), 10.3390/robotics6030015 (DOI), 000419218300002 (), 2-s2.0-85030989493 (Scopus ID)
Note
Funding Agencies:
European Commission FP7-ICT-270350
H-ICT 732737
Available from: 2018-01-19. Created: 2018-01-19. Last updated: 2018-01-19. Bibliographically approved.
Lilienthal, A. & Schindler, M. (2017). Conducting Dual Portable Eye-Tracking in Mathematical Creativity Research. In: Kaur, B., Ho, W.K., Toh, T.L., & Choy, B.H. (Eds.), Proceedings of the 41st Conference of the International Group for the Psychology of Mathematics Education. Paper presented at the 41st Conference of the International Group for the Psychology of Mathematics Education, Singapore, July 17 – 22, 2017 (pp. 233-233). Singapore: PME, 1
2017 (English). In: Proceedings of the 41st Conference of the International Group for the Psychology of Mathematics Education / [ed] Kaur, B., Ho, W.K., Toh, T.L., & Choy, B.H., Singapore: PME, 2017, Vol. 1, p. 233-233. Conference paper, Published paper (Refereed)
Abstract [en]

Eye-tracking opens a window to the focus of attention of persons and promises to allow studying, e.g., creative processes “in vivo” (Nüssli, 2011). Most eye-tracking studies in mathematics education research focus on single students. However, following a Vygotskyan notion of learning and development where the individual and the social are dialectically interrelated, eye-tracking studies of collaborating persons appear beneficial for understanding students’ learning in their social facet. Dual eye-tracking, where two persons’ eye-movements are recorded and related to a joint coordinate system, has hardly been used in mathematics education research. Especially dual portable eye-tracking (DPET) with goggles has hardly been explored, due to its technical challenges compared to screen-based eye-tracking.

In our interdisciplinary research project between mathematics education and computer science, we conduct DPET for studying collective mathematical creativity (Levenson, 2011) in a process perspective. DPET offers certain advantages, including the possibility to carry out paper-and-pen tasks in rather natural settings. Our research interests are: conducting DPET (technical), investigating opportunities and limitations of DPET for studying students’ collective creativity (methodological), and studying students’ collective creative problem solving (empirical).

We carried out experiments with two pairs of university students wearing Pupil Pro eye-tracking goggles. The students were given 45 min to solve a geometry problem in as many ways as possible. For our analysis, we first programmed MATLAB code to synchronize data from both participants’ goggles, resulting in a video displaying both students’ eye-movements projected on the task sheet, the sound recorded by the goggles, and additional information, e.g. pupil dilation. With these videos we expect to get insights into how students’ attentions meet, whether students’ eye-movements follow one another or verbal inputs, etc. We expect insights into promotive aspects of students’ collaboration: e.g., whether pointing at the figure or intensive verbal communication promotes students’ joint attention (cf. Nüssli, 2011). Finally, we think that the expected insights can contribute to existing research on collective mathematical creativity, especially to the question of how to enhance students’ creative collaboration.
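The synchronization step mentioned in the abstract was done in MATLAB; as a rough illustration of what it involves, the hypothetical Python sketch below resamples two independently timestamped gaze streams onto a common timeline, so that both participants' gaze points can be overlaid on the shared task sheet. Field names and the output rate are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of synchronizing two gaze streams onto a common timeline;
# names, coordinate frames, and the resampling rate are assumptions.
import numpy as np

def synchronize(t_a, gaze_a, t_b, gaze_b, rate_hz=30.0):
    """t_*: increasing timestamps (s); gaze_*: (N, 2) gaze points on the task sheet."""
    t0 = max(t_a[0], t_b[0])                         # overlap of the two recordings
    t1 = min(t_a[-1], t_b[-1])
    t = np.arange(t0, t1, 1.0 / rate_hz)             # common timeline

    def resample(ts, g):
        return np.column_stack([np.interp(t, ts, g[:, 0]), np.interp(t, ts, g[:, 1])])

    return t, resample(t_a, gaze_a), resample(t_b, gaze_b)
```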

Place, publisher, year, edition, pages
Singapore: PME, 2017
National Category
Didactics
Research subject
Computer Engineering; Computer Science
Identifiers
urn:nbn:se:oru:diva-64763 (URN), 978-138-71-3608-7 (ISBN)
Conference
The 41st Conference of the International Group for the Psychology of Mathematics Education, Singapore, July 17 – 22, 2017
Available from: 2018-02-01. Created: 2018-02-01. Last updated: 2018-08-14. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326
