Örebro University Publications

Publications (10 of 313)
Sun, S., Mielle, M., Lilienthal, A. J. & Magnusson, M. (2024). 3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding. Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024. IEEE.
3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding
2024 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Neural implicit surface representations are currently receiving a lot of interest as a means to achieve high-fidelity surface reconstruction at a low memory cost, compared to traditional explicit representations. However, state-of-the-art methods still struggle with excessive memory usage and non-smooth surfaces. This is particularly problematic in large-scale applications with sparse inputs, as is common in robotics use cases. To address these issues, we first introduce a sparse structure, tri-quadtrees, which represents the environment using learnable features stored in three planar quadtree projections. Second, we concatenate the learnable features with a Fourier feature positional encoding. The combined features are then decoded into signed distance values through a small multi-layer perceptron. We demonstrate that this approach facilitates smoother reconstruction with a higher completion ratio and fewer holes. Compared to two recent baselines, one implicit and one explicit, our approach requires only 10%–50% as much memory, while achieving competitive quality. The code is released on https://github.com/ljjTYJR/3QFP.
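
To illustrate the decoding pipeline described in the abstract, here is a minimal sketch, assuming dense learnable feature planes in place of the paper's tri-quadtrees; the resolution, feature width, frequency count and MLP sizes are illustrative choices, not the authors' values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneFourierSDF(nn.Module):
    """Three planar feature grids + Fourier positional encoding -> small MLP -> SDF."""
    def __init__(self, res=128, feat_dim=8, n_freqs=6, hidden=64):
        super().__init__()
        # Learnable feature planes for the xy, xz and yz projections.
        self.planes = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, res, res)) for _ in range(3)]
        )
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs))   # Fourier frequencies
        in_dim = 3 * feat_dim + 3 * 2 * n_freqs                       # plane feats + sin/cos encoding
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                                      # signed distance value
        )

    def forward(self, xyz):                                            # xyz in [-1, 1]^3, shape (N, 3)
        feats = []
        for (i, j), plane in zip([(0, 1), (0, 2), (1, 2)], self.planes):
            uv = xyz[:, [i, j]].view(1, -1, 1, 2)                      # sample each 2D projection
            f = F.grid_sample(plane, uv, align_corners=True)           # (1, C, N, 1)
            feats.append(f.view(plane.shape[1], -1).t())               # (N, C)
        enc = (xyz.unsqueeze(-1) * self.freqs).flatten(1)              # Fourier feature encoding
        enc = torch.cat([torch.sin(enc), torch.cos(enc)], dim=-1)
        return self.mlp(torch.cat(feats + [enc], dim=-1))              # (N, 1) SDF values
```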

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-117117 (URN), 10.1109/ICRA57147.2024.10610338 (DOI), 2-s2.0-85202450420 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Funder
EU, Horizon 2020, 101017274
Available from: 2024-10-30. Created: 2024-10-30. Last updated: 2024-10-31. Bibliographically approved
Alhashimi, A., Adolfsson, D., Andreasson, H., Lilienthal, A. & Magnusson, M. (2024). BFAR: improving radar odometry estimation using a bounded false alarm rate detector. Autonomous Robots, 48(8), Article ID 29.
BFAR: improving radar odometry estimation using a bounded false alarm rate detector
2024 (English) In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 48, no. 8, article id 29. Article in journal (Refereed), Published
Abstract [en]

This work introduces a novel detector, bounded false-alarm rate (BFAR), for distinguishing true detections from noise in radar data, leading to improved accuracy in radar odometry estimation. Scanning frequency-modulated continuous wave (FMCW) radars can serve as valuable tools for localization and mapping under low visibility conditions. However, they tend to yield a higher level of noise than the more commonly employed lidars, thereby introducing additional challenges to the detection process. We propose a new radar target detector called BFAR, which applies an affine transformation to the estimated noise level, in contrast to the classical constant false-alarm rate (CFAR) detector. This transformation employs learned parameters that minimize the error in odometry estimation. Conceptually, BFAR can be viewed as an optimized blend of CFAR and fixed-level thresholding designed to minimize odometry estimation error. The strength of this approach lies in its simplicity: only a single parameter needs to be learned from a training dataset when the scale parameter of the affine transformation is kept fixed. Compared to ad hoc detectors, BFAR has the advantage of a specified upper bound on the false-alarm probability, and it handles noise better than CFAR. Repeatability tests show that BFAR yields highly repeatable detections with minimal redundancy. We have conducted simulations to compare the detection and false-alarm probabilities of BFAR with those of three baselines under non-homogeneous noise and varying target sizes. The results show that BFAR outperforms the other detectors. Moreover, we apply BFAR to the use case of radar odometry, adapting a recent odometry pipeline by replacing its original conservative filtering with BFAR. In this way, we reduce the translation/rotation odometry errors per 100 m from 1.3%/0.4° to 1.12%/0.38° and from 1.62%/0.57° to 1.21%/0.32°, improving the translation error by 14.2% and 25% on the Oxford and MulRan public datasets, respectively.
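
To make the affine-threshold idea concrete, below is a minimal 1D sketch, assuming a cell-averaging noise estimate; the window sizes and the scale a and offset b are illustrative placeholders, not the learned values from the paper.

```python
import numpy as np

def bfar_detect(power, guard=2, train=8, a=3.0, b=0.05):
    """Flag range bins whose power exceeds a * noise_estimate + b.

    With b = 0 this reduces to classical cell-averaging CFAR; the added fixed
    offset b is what gives the detector a bounded false-alarm behaviour."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        left = power[max(0, i - guard - train):max(0, i - guard)]
        right = power[i + guard + 1:i + guard + 1 + train]
        window = np.concatenate([left, right])        # training cells around the cell under test
        if window.size == 0:
            continue
        noise = window.mean()                         # estimated local noise level
        detections[i] = power[i] > a * noise + b      # affine threshold (BFAR)
    return detections
```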

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Radar, CFAR, Odometry, FMCW
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-117575 (URN), 10.1007/s10514-024-10176-2 (DOI), 001358908800001 (), 2-s2.0-85209565335 (Scopus ID)
Funder
Örebro University
Available from: 2024-12-05. Created: 2024-12-05. Last updated: 2024-12-05. Bibliographically approved
Palm, R. & Lilienthal, A. J. (2024). Crossing-Point Estimation in Human-Robot Navigation-Statistical Linearization versus Sigma-Point Transformation. Sensors, 24(11), Article ID 3303.
Crossing-Point Estimation in Human-Robot Navigation-Statistical Linearization versus Sigma-Point Transformation
2024 (English) In: Sensors, E-ISSN 1424-8220, Vol. 24, no. 11, article id 3303. Article in journal (Refereed), Published
Abstract [en]

Interactions between mobile robots and human operators in common areas require a high level of safety, especially in terms of trajectory planning, obstacle avoidance and mutual cooperation. In this context, the crossing points of planned trajectories, together with their uncertainty arising from model fluctuations, system noise and sensor noise, play a central role. This paper discusses the calculation of the expected areas of interaction during human-robot navigation with respect to fuzzy and noisy information. The expected crossing points of the possible trajectories depend nonlinearly on the positions and orientations of the robots and humans. The nonlinear transformation of a noisy system input, such as the directions of motion of humans and robots, to a system output, the expected area of intersection of their trajectories, is performed by two methods: statistical linearization and the sigma-point transformation. For both approaches, fuzzy approximations are presented, and the inverse problem is discussed, where the input distribution parameters are computed from the given output distribution parameters.
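
A minimal sketch of the sigma-point (unscented) transformation applied to a trajectory-crossing problem is shown below; the ray-intersection model and the weight settings are simplified stand-ins for the paper's formulation, not its exact equations.

```python
import numpy as np

def crossing_point(x):
    """Intersection of two 2D rays, given x = [px1, py1, th1, px2, py2, th2]."""
    p1, th1, p2, th2 = x[0:2], x[2], x[3:5], x[5]
    d1 = np.array([np.cos(th1), np.sin(th1)])
    d2 = np.array([np.cos(th2), np.sin(th2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)        # ray parameters at the intersection (fails if parallel)
    return p1 + t[0] * d1

def unscented_transform(f, mean, cov, alpha=0.1, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through f using 2n+1 sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])      # sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w[0] = lam / (n + lam)
    ys = np.array([f(s) for s in sigma])
    y_mean = w @ ys
    y_cov = (w[:, None] * (ys - y_mean)).T @ (ys - y_mean)
    return y_mean, y_cov                   # expected crossing point and its uncertainty
```

For the statistical-linearization alternative discussed in the paper, the same input mean and covariance would instead be propagated through a first-order linearization of crossing_point.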

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
Gaussian noise, human–robot interaction, sigma-point transformation, unscented Kalman filter
National Category
Control Engineering
Identifiers
urn:nbn:se:oru:diva-114306 (URN), 10.3390/s24113303 (DOI), 38894096 (PubMedID)
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2024-06-19. Created: 2024-06-19. Last updated: 2024-06-19. Bibliographically approved
Winkler, N. P., Neumann, P. P., Schaffernicht, E. & Lilienthal, A. J. (2024). Gas Distribution Mapping With Radius-Based, Bi-directional Graph Neural Networks (RABI-GNN). In: 2024 IEEE International Symposium on Olfaction and Electronic Nose (ISOEN). Paper presented at International Symposium on Olfaction and Electronic Nose (ISOEN 2024), Grapevine, TX, USA, May 12-15, 2024. IEEE.
Gas Distribution Mapping With Radius-Based, Bi-directional Graph Neural Networks (RABI-GNN)
2024 (English) In: 2024 IEEE International Symposium on Olfaction and Electronic Nose (ISOEN), IEEE, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Gas Distribution Mapping (GDM) is essential in monitoring hazardous environments, where uneven sampling and spatial sparsity of data present significant challenges. Traditional methods for GDM often fall short in accuracy and expressiveness. Modern learning-based approaches employing Convolutional Neural Networks (CNNs) require regular-sized input data, limiting their adaptability to the irregular and sparse datasets typically encountered in GDM. This study addresses these shortcomings by showcasing Graph Neural Networks (GNNs) for learning-based GDM on irregular and spatially sparse sensor data. Our Radius-Based, Bi-Directionally connected GNN (RABI-GNN) was trained on a synthetic gas distribution dataset, on which it outperforms our previous CNN-based model while overcoming its constraints. We demonstrate the flexibility of RABI-GNN by applying it to real-world data obtained in an industrial steel factory, highlighting promising opportunities for more accurate GDM models.
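
The graph construction step can be sketched as follows, assuming a plain SciPy radius query and a simple mean aggregation; this is only an illustration of a radius-based, bidirectional sensor graph, not the trained RABI-GNN architecture.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_radius_graph(positions, radius):
    """Connect every pair of measurement points within `radius`, in both directions."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(radius, output_type='ndarray')   # undirected pairs (i < j)
    edges = np.vstack([pairs, pairs[:, ::-1]])                # add reverse edges -> bidirectional
    return edges                                              # shape (E, 2): source, target

def mean_aggregate(edges, node_feats):
    """One message-passing step: average neighbour features into each node."""
    out = np.zeros_like(node_feats)
    counts = np.zeros(len(node_feats))
    np.add.at(out, edges[:, 1], node_feats[edges[:, 0]])
    np.add.at(counts, edges[:, 1], 1)
    return out / np.maximum(counts, 1)[:, None]
```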

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-115645 (URN), 10.1109/ISOEN61239.2024.10556309 (DOI), 001259381600051 (), 2-s2.0-85197434833 (Scopus ID), 9798350348668 (ISBN), 9798350348651 (ISBN)
Conference
International Symposium on Olfaction and Electronic Nose (ISOEN 2024), Grapevine, TX, USA, May 12-15, 2024
Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2024-08-27. Bibliographically approved
Sun, S., Mielle, M., Lilienthal, A. J. & Magnusson, M. (2024). High-Fidelity SLAM Using Gaussian Splatting with Rendering-Guided Densification and Regularized Optimization.
High-Fidelity SLAM Using Gaussian Splatting with Rendering-Guided Densification and Regularized Optimization
2024 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We propose a dense RGB-D SLAM system based on 3D Gaussian Splatting that provides metrically accurate pose tracking and visually realistic reconstruction. To this end, we first propose a Gaussian densification strategy based on the rendering loss to map unobserved areas and refine re-observed areas. Second, we introduce extra regularization parameters to alleviate the “forgetting” problem during continuous mapping, where parameters tend to overfit the latest frame, resulting in decreasing rendering quality for previous frames. Both mapping and tracking are performed with Gaussian parameters by minimizing the re-rendering loss in a differentiable way. Compared to recent neural and concurrently developed Gaussian splatting RGB-D SLAM baselines, our method achieves state-of-the-art results on the synthetic dataset Replica and competitive results on the real-world dataset TUM. The code is released on https://github.com/ljjTYJR/HF-SLAM.
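
One common way to realize the kind of regularized optimization mentioned above is a quadratic penalty anchoring the Gaussian parameters to their previously optimized values; the sketch below illustrates that idea under this assumption and is not necessarily the authors' exact formulation (the weight is illustrative).

```python
import torch

def anti_forgetting_loss(render_loss, params, anchors, weight=1e-2):
    """Re-rendering loss plus a penalty keeping Gaussian parameters close to the
    values they had after mapping earlier keyframes, to reduce 'forgetting'."""
    reg = sum(((p - a) ** 2).sum() for p, a in zip(params, anchors))
    return render_loss + weight * reg

# Usage sketch: params are the learnable Gaussian tensors (means, scales, colours, ...),
# anchors are detached snapshots refreshed after each keyframe:
#   anchors = [p.detach().clone() for p in params]
```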

National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-117115 (URN)
Funder
EU, Horizon 2020, 101017274
Note

Accepted by IROS 2024

Available from: 2024-10-30. Created: 2024-10-30. Last updated: 2024-10-31. Bibliographically approved
Fan, H., Schaffernicht, E. & Lilienthal, A. J. (2024). Identification of Gas Mixtures with Few Labels Using Graph Convolutional Networks. In: 2024 IEEE International Symposium on Olfaction and Electronic Nose (ISOEN). Paper presented at International Symposium on Olfaction and Electronic Nose (ISOEN 2024), Grapevine, TX, USA, May 12-15, 2024. IEEE.
Identification of Gas Mixtures with Few Labels Using Graph Convolutional Networks
2024 (English) In: 2024 IEEE International Symposium on Olfaction and Electronic Nose (ISOEN), IEEE, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

In real-world scenarios, gas sensor responses to mixtures of different compositions can be costly to determine a priori, posing difficulties in identifying the presence of target analytes. In this paper, we propose the use of graph convolutional networks (GCN) to handle gas mixtures with only a few labelled samples. We transform sensor responses into a graph structure using manifold learning and clustering, and then apply a GCN for semi-supervised node classification. Our approach does not require the extensive training data of gas mixtures that many competing approaches need, yet it outperforms classical semi-supervised learning methods and achieves a classification accuracy exceeding 88.5% and a Cohen's kappa score above 0.85 given only 5% labelled data for training. This result demonstrates the potential for realistic gas identification when varied mixtures are present.
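
A minimal sketch of the semi-supervised setup is given below, assuming a k-nearest-neighbour graph in place of the paper's manifold-learning and clustering construction; layer sizes and k are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_adjacency(x, k=10):
    """Symmetric, self-looped, degree-normalized adjacency from sensor response vectors."""
    d = torch.cdist(x, x)
    idx = d.topk(k + 1, largest=False).indices[:, 1:]          # k nearest neighbours (drop self)
    A = torch.zeros(len(x), len(x))
    A.scatter_(1, idx, 1.0)
    A = ((A + A.t()) > 0).float() + torch.eye(len(x))          # make edges bidirectional, add self-loops
    deg = A.sum(1)
    return A / torch.sqrt(deg[:, None] * deg[None, :])         # D^-1/2 (A + I) D^-1/2

class GCN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, n_classes)

    def forward(self, A, x):
        h = F.relu(A @ self.w1(x))       # neighbourhood aggregation + transform
        return A @ self.w2(h)

# Training uses a loss masked to the few labelled nodes (e.g. 5% of the data):
#   loss = F.cross_entropy(logits[labelled_mask], labels[labelled_mask])
```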

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
gas identification, gas mixture, electronic nose, graph convolutional networks, weakly supervised learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-115646 (URN), 10.1109/ISOEN61239.2024.10556166 (DOI), 001259381600033 (), 2-s2.0-85197389618 (Scopus ID), 9798350348668 (ISBN), 9798350348651 (ISBN)
Conference
International Symposium on Olfaction and Electronic Nose (ISOEN 2024), Grapevine, TX, USA, May 12-15, 2024
Funder
Swedish Energy Agency
Note

This work is supported by the project SP13 'Monitoring of airflow and airborne particles, to provide early warning of irrespirable atmospheric conditions' under the academic program Sustainable Underground Mining (SUM), jointly financed by LKAB and the Swedish Energy Agency.

Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2024-08-27. Bibliographically approved
Zhu, Y., Fan, H., Rudenko, A., Magnusson, M., Schaffernicht, E. & Lilienthal, A. (2024). LaCE-LHMP: Airflow Modelling-Inspired Long-Term Human Motion Prediction By Enhancing Laminar Characteristics in Human Flow. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 11281-11288). IEEE.
LaCE-LHMP: Airflow Modelling-Inspired Long-Term Human Motion Prediction By Enhancing Laminar Characteristics in Human Flow
2024 (English) In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 11281-11288. Conference paper, Published paper (Refereed)
Abstract [en]

Long-term human motion prediction (LHMP) is essential for safely operating autonomous robots and vehicles in populated environments. It is fundamental for various applications, including motion planning, tracking, human-robot interaction and safety monitoring. However, accurate prediction of human trajectories is challenging due to complex factors, including, for example, social norms and environmental conditions. The influence of such factors can be captured through Maps of Dynamics (MoDs), which encode spatial motion patterns learned from (possibly scattered and partial) past observations of motion in the environment and which can be used for data-efficient, interpretable motion prediction (MoD-LHMP). To address the limitations of prior work, especially regarding accuracy and sensitivity to anomalies in long-term prediction, we propose the Laminar Component Enhanced LHMP approach (LaCE-LHMP). Our approach is inspired by data-driven airflow modelling, which estimates laminar and turbulent flow components and uses predominantly the laminar components to make flow predictions. Based on the hypothesis that human trajectory patterns also manifest laminar flow (that represents predictable motion) and turbulent flow components (that reflect more unpredictable and arbitrary motion), LaCE-LHMP extracts the laminar patterns in human dynamics and uses them for human motion prediction. We demonstrate the superior prediction performance of LaCE-LHMP through benchmark comparisons with state-of-the-art LHMP methods, offering an unconventional perspective and a more intuitive understanding of human movement patterns.
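
The laminar/turbulent split can be illustrated with a simple Reynolds-style decomposition: per-cell mean velocities act as the laminar component used for prediction, while residuals are treated as turbulence and discarded. Below is a minimal sketch assuming a grid-based mean-flow map; the cell size, time step and horizon are illustrative, and it omits the paper's full airflow-inspired model.

```python
import numpy as np

def laminar_flow_map(positions, velocities, cell=1.0):
    """Average observed velocities per grid cell: the 'laminar' component.
    Per-observation residuals are the 'turbulent' part and are not used here."""
    keys = np.floor(positions / cell).astype(int)
    flow = {}
    for k, v in zip(map(tuple, keys), velocities):
        flow.setdefault(k, []).append(v)
    return {k: np.mean(vs, axis=0) for k, vs in flow.items()}

def predict(start, flow, cell=1.0, dt=0.5, steps=40):
    """Roll a trajectory forward by following the laminar flow field."""
    traj, p = [start.copy()], start.copy()
    for _ in range(steps):
        v = flow.get(tuple(np.floor(p / cell).astype(int)), np.zeros_like(p))
        p = p + dt * v
        traj.append(p.copy())
    return np.array(traj)
```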

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
Keywords
Human-Robot Interaction
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117873 (URN), 10.1109/ICRA57147.2024.10610717 (DOI), 2-s2.0-85202449603 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Projects
DARKO
Funder
EU, Horizon 2020, 101017274
Note

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101017274 (DARKO), and is also partially funded by the academic program Sustainable Underground Mining (SUM) project, jointly financed by LKAB and the Swedish Energy Agency.

Available from: 2024-12-18. Created: 2024-12-18. Last updated: 2024-12-19. Bibliographically approved
Baumanns, L., Pitta-Pantazi, D., Demosthenous, E., Lilienthal, A. J., Christou, C. & Schindler, M. (2024). Pattern-Recognition Processes of First-Grade Students: An Explorative Eye-Tracking Study. International Journal of Science and Mathematics Education, 22(8), 1663-1682
Pattern-Recognition Processes of First-Grade Students: An Explorative Eye-Tracking Study
2024 (English) In: International Journal of Science and Mathematics Education, ISSN 1571-0068, E-ISSN 1573-1774, Vol. 22, no. 8, p. 1663-1682. Article in journal (Refereed), Published
Abstract [en]

Recognizing patterns is an essential skill in early mathematics education. However, first graders often have difficulties with tasks such as extending patterns of the form ABCABC. Studies show that this pattern-recognition ability is a good predictor of later pre-algebraic skills and of mathematical achievement in general or, conversely, of the development of mathematical difficulties. To be able to foster children's pattern-recognition ability, it is crucial to investigate and understand their pattern-recognition processes early on. However, only a few studies have investigated the processes used to recognize patterns and how these processes are adapted to different patterns. These studies used external observations or relied on children's self-reports, yet young students often lack the ability to properly report their strategies. This paper presents the results of an empirical study using eye-tracking technology to investigate the pattern-recognition processes of 22 first-grade students. In particular, we investigated students with and without the risk of developing mathematical difficulties. The analyses of the students' eye movements reveal that the students used four different processes to recognize patterns, a finding that refines knowledge about pattern-recognition processes from previous research. In addition, we found that for patterns with different units of repeat (i.e., ABABAB versus ABCABCABC), the pattern-recognition processes used differed significantly for students at risk of developing mathematical difficulties, but not for students without such risk. Our study contributes to a better understanding of the pattern-recognition processes of first-grade students, laying the foundation for enhanced, targeted support, especially for students at risk of developing mathematical difficulties.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Pattern recognition, Eye tracking, Mathematical difficulties, First-grade students
National Category
Educational Sciences
Identifiers
urn:nbn:se:oru:diva-111446 (URN), 10.1007/s10763-024-10441-x (DOI), 001148710900002 (), 2-s2.0-85182989808 (Scopus ID)
Note

Open Access funding enabled and organized by Projekt DEAL. This publication has received funding from the Erasmus+ grant programme of the European Union under grant agreement No 2020-1-DE03-KA201-077597.

Available from: 2024-02-08. Created: 2024-02-08. Last updated: 2025-01-07. Bibliographically approved
Gupta, H., Kotlyar, O., Andreasson, H. & Lilienthal, A. J. (2024). Robust Object Detection in Challenging Weather Conditions. In: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Conference Proceedings. Paper presented at 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, HI, USA, January 3-8, 2024 (pp. 7508-7517). IEEE
Robust Object Detection in Challenging Weather Conditions
2024 (English) In: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Conference Proceedings, IEEE, 2024, p. 7508-7517. Conference paper, Published paper (Refereed)
Abstract [en]

Object detection is crucial in diverse autonomous systems like surveillance, autonomous driving, and driver assistance, ensuring safety by recognizing pedestrians, vehicles, traffic lights, and signs. However, adverse weather conditions such as snow, fog, and rain pose a challenge, reducing detection accuracy and risking accidents and damage. This clearly demonstrates the need for robust object detection solutions that work in all weather conditions. We employed three strategies to enhance deep learning-based object detection in adverse weather: training on real-world all-weather images, training on images with synthetic augmented weather noise, and integrating object detection with adverse-weather image denoising. The synthetic weather noise is generated using analytical methods, generative adversarial networks (GANs), and style-transfer networks. We compared the performance of these strategies by training object detection models on real-world all-weather images from the BDD100K dataset and, for assessment, employed unseen real-world adverse weather images. Adverse weather denoising methods were evaluated by denoising real-world adverse weather images, and the object detection results on the denoised and the original noisy images were compared. We found that the model trained on all-weather real-world images performed best, while the strategy of running object detection on denoised images performed worst.
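
For the synthetic weather-noise strategy, here is a minimal sketch of analytic augmentation, assuming a uniform haze blend and simple bright streaks; the parameters are illustrative and do not reproduce the paper's GAN- or style-transfer-based augmentations.

```python
import numpy as np

def add_fog(image, density=0.5, airlight=0.9):
    """Blend the image with a bright 'atmospheric light' to mimic fog/haze.
    image: float array in [0, 1], shape (H, W, 3)."""
    t = np.exp(-density)                      # uniform transmission (no depth map assumed)
    return image * t + airlight * (1.0 - t)

def add_rain_streaks(image, n_streaks=300, length=15, intensity=0.6, seed=None):
    """Overlay thin bright vertical streaks as a crude rain model."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(n_streaks):
        x, y = rng.integers(0, w), rng.integers(0, h - length)
        out[y:y + length, x] = np.clip(out[y:y + length, x] + intensity, 0.0, 1.0)
    return out
```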

Place, publisher, year, edition, pages
IEEE, 2024
Series
Proceedings (IEEE Workshop on Applications of Computer Vision), ISSN 2472-6737, E-ISSN 2642-9381
Keywords
Computer Vision, Object Detection, Adverse Weather
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-115243 (URN), 10.1109/WACV57701.2024.00735 (DOI), 9798350318937 (ISBN), 9798350318920 (ISBN)
Conference
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, HI, USA, January 3-8, 2024
Funder
EU, Horizon 2020, 858101
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2024-08-12. Bibliographically approved
Pitta-Pantazi, D., Demosthenous, E., Schindler, M., Lilienthal, A. J. & Christou, C. (2024). Structure sense in students' quantity comparison and repeating pattern extension tasks: an eye-tracking study with first graders. Educational Studies in Mathematics
Structure sense in students' quantity comparison and repeating pattern extension tasks: an eye-tracking study with first graders
2024 (English) In: Educational Studies in Mathematics, ISSN 0013-1954, E-ISSN 1573-0816. Article in journal (Refereed), Epub ahead of print
Abstract [en]

There is growing evidence that the ability to perceive structure is essential for students' mathematical development. Looking at students' structure sense in basic numerical and patterning tasks seems promising for understanding how these tasks set the foundation for the development of later mathematical skills. Previous studies have shown how students use structure sense in enumeration tasks. However, little is known about students' use of structure sense in other early mathematical tasks. The main aim of this study is to investigate the ways in which structure sense is manifested in first-grade students' work across two types of tasks: quantity comparison and repeating pattern extension. We investigated the strategies students use in these tasks and how they employ structure sense. We conducted an eye-tracking study with 21 first-grade students, which provided novel insights into commonalities among strategies for these types of tasks. We found that for both quantity comparison and repeating pattern extension tasks, strategies can be distinguished into those employing structure sense and serial strategies.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Eye tracking, Quantity comparison, Repeating pattern extension, Structure sense, Serial strategies
National Category
Educational Sciences
Identifiers
urn:nbn:se:oru:diva-111445 (URN), 10.1007/s10649-023-10290-5 (DOI), 001142887800001 ()
Available from: 2024-02-08. Created: 2024-02-08. Last updated: 2024-02-08. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326