Örebro University Publications

Publications (10 of 99)
Alhashimi, A., Adolfsson, D., Andreasson, H., Lilienthal, A. & Magnusson, M. (2024). BFAR: improving radar odometry estimation using a bounded false alarm rate detector. Autonomous Robots, 48(8), Article ID 29.
BFAR: improving radar odometry estimation using a bounded false alarm rate detector
2024 (English). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 48, no. 8, article id 29. Article in journal (Refereed). Published.
Abstract [en]

This work introduces a novel detector, bounded false-alarm rate (BFAR), for distinguishing true detections from noise in radar data, leading to improved accuracy in radar odometry estimation. Scanning frequency-modulated continuous wave (FMCW) radars can serve as valuable tools for localization and mapping under low-visibility conditions. However, they tend to yield a higher level of noise than the more commonly employed lidars, introducing additional challenges to the detection process. We propose a new radar target detector, BFAR, which applies an affine transformation to the estimated noise level, in contrast to the classical constant false-alarm rate (CFAR) detector. This transformation employs learned parameters that minimize the error in odometry estimation. Conceptually, BFAR can be viewed as an optimized blend of CFAR and fixed-level thresholding designed to minimize odometry estimation error. The strength of this approach lies in its simplicity: only a single parameter needs to be learned from a training dataset when the scale parameter of the affine transformation is kept fixed. Compared to ad-hoc detectors, BFAR has the advantage of a specified upper bound on the false-alarm probability, and it handles noise better than CFAR. Repeatability tests show that BFAR yields highly repeatable detections with minimal redundancy. We have conducted simulations to compare the detection and false-alarm probabilities of BFAR with those of three baselines under non-homogeneous noise and varying target sizes. The results show that BFAR outperforms the other detectors. Moreover, we apply BFAR to the use case of radar odometry, adapting a recent odometry pipeline by replacing its original conservative filtering with BFAR. In this way, we reduce the translation/rotation odometry errors per 100 m from 1.3%/0.4° to 1.12%/0.38° and from 1.62%/0.57° to 1.21%/0.32°, improving the translation error by 14.2% and 25% on the Oxford and MulRan public datasets, respectively.
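
As a rough illustration of the detector described above (a sketch under assumptions, not the authors' implementation), the following contrasts a cell-averaging CFAR threshold with BFAR's affine transformation of the noise estimate; the window sizes and the parameters a and b are illustrative placeholders.

```python
import numpy as np

def local_noise_estimate(power, i, guard=2, train=8):
    # Cell-averaging noise estimate from training cells around cell i,
    # excluding guard cells (classical CA-CFAR style).
    left = power[max(0, i - guard - train):max(0, i - guard)]
    right = power[i + guard + 1:i + guard + 1 + train]
    cells = np.concatenate([left, right])
    return cells.mean() if cells.size else np.inf

def cfar_detect(power, alpha, **kw):
    # Classical CFAR: threshold = alpha * estimated noise level.
    return [i for i in range(len(power))
            if power[i] > alpha * local_noise_estimate(power, i, **kw)]

def bfar_detect(power, a, b, **kw):
    # BFAR, per the abstract: threshold = a * noise + b, an affine
    # transformation of the noise estimate. a = alpha, b = 0 recovers
    # CFAR; a = 0 recovers fixed-level thresholding. With a kept fixed,
    # only b needs to be learned from training data.
    return [i for i in range(len(power))
            if power[i] > a * local_noise_estimate(power, i, **kw) + b]
```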

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Radar, CFAR, Odometry, FMCW
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-117575 (URN)
10.1007/s10514-024-10176-2 (DOI)
001358908800001 ()
2-s2.0-85209565335 (Scopus ID)
Funder
Örebro University
Available from: 2024-12-05. Created: 2024-12-05. Last updated: 2024-12-05. Bibliographically approved.
Bazzana, B., Andreasson, H. & Grisetti, G. (2024). How-to Augmented Lagrangian on Factor Graphs. IEEE Robotics and Automation Letters, 9(3), 2806-2813
How-to Augmented Lagrangian on Factor Graphs
2024 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 3, p. 2806-2813. Article in journal (Refereed). Published.
Abstract [en]

Factor graphs are a powerful graphical representation used to model many problems in robotics. They are widespread in the areas of Simultaneous Localization and Mapping (SLAM), computer vision, and localization. However, the physics of many real-world problems is better modeled through constraints, e.g., estimation in the presence of inconsistent measurements, or optimal control. Constraint handling is hard because the solution cannot be found by following the gradient-descent direction, as traditional factor graph solvers do. The core idea of our method is to encapsulate the Augmented Lagrangian (AL) method in factors that can be integrated straightforwardly into existing factor graph solvers. Besides being a tool to unify different robotics areas, the modularity of factor graphs makes it easy to combine multiple objectives and to exploit the problem structure for efficiency. We show the generality of our approach by addressing three applications arising from different areas: pose estimation, rotation synchronization, and Model Predictive Control (MPC) of a pseudo-omnidirectional platform. We implemented our approach using C++ and ROS. Application results show that it compares favorably against domain-specific approaches.
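
For readers unfamiliar with the Augmented Lagrangian method, a minimal sketch of the underlying update is given below, assuming an equality-constrained problem min f(x) s.t. c(x) = 0; the paper's contribution is to encapsulate this scheme in factor-graph factors, which is not shown here.

```python
import numpy as np

def augmented_lagrangian(f_grad, c, c_jac, x, lam, rho=1.0, outer=20):
    # Alternates (i) minimizing the AL objective
    # L(x) = f(x) + lam^T c(x) + (rho/2) ||c(x)||^2 over x and
    # (ii) dual ascent on the multipliers lam.
    for _ in range(outer):
        for _ in range(100):                     # inner: gradient steps
            g = f_grad(x) + c_jac(x).T @ (lam + rho * c(x))
            x = x - 0.01 * g
        lam = lam + rho * c(x)                   # outer: multiplier update
    return x, lam

# Toy usage: minimize ||x - (2, 0)||^2 subject to x[0] + x[1] = 1;
# the minimizer is (1.5, -0.5).
t = np.array([2.0, 0.0])
x, lam = augmented_lagrangian(
    f_grad=lambda x: 2 * (x - t),
    c=lambda x: np.array([x[0] + x[1] - 1.0]),
    c_jac=lambda x: np.array([[1.0, 1.0]]),
    x=np.zeros(2), lam=np.zeros(1))
```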

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Optimization, Robots, Computational modeling, Trajectory, Simultaneous localization and mapping, Synchronization, Optimal control, Localization, integrated planning and control, optimization and optimal control
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-112817 (URN)
10.1109/LRA.2024.3361282 (DOI)
001174297500013 ()
2-s2.0-85184334012 (Scopus ID)
Funder
Swedish Research Council Formas
Available from: 2024-04-03. Created: 2024-04-03. Last updated: 2024-04-03. Bibliographically approved.
Gupta, H., Kotlyar, O., Andreasson, H. & Lilienthal, A. J. (2024). Robust Object Detection in Challenging Weather Conditions. In: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Conference Proceedings. Paper presented at 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, HI, USA, January 3-8, 2024 (pp. 7508-7517). IEEE
Robust Object Detection in Challenging Weather Conditions
2024 (English). In: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Conference Proceedings, IEEE, 2024, p. 7508-7517. Conference paper, Published paper (Refereed).
Abstract [en]

Object detection is crucial in diverse autonomous systems like surveillance, autonomous driving, and driver assistance, ensuring safety by recognizing pedestrians, vehicles, traffic lights, and signs. However, adverse weather conditions such as snow, fog, and rain pose a challenge, affecting detection accuracy and risking accidents and damage. This clearly demonstrates the need for robust object detection solutions that work in all weather conditions. We employed three strategies to enhance deep learning-based object detection in adverse weather: training on real-world all-weather images, training on images with synthetic augmented weather noise, and integrating object detection with adverse-weather image denoising. The synthetic weather noise is generated using analytical methods, GANs, and style-transfer networks. We compared the performance of these strategies by training object detection models on real-world all-weather images from the BDD100K dataset and, for assessment, employed unseen real-world adverse-weather images. Adverse-weather denoising methods were evaluated by denoising real-world adverse-weather images, and the object detection results on the denoised and original noisy images were compared. We found that the model trained on all-weather real-world images performed best, while the strategy of running object detection on denoised images performed worst.
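
As an example of what "analytical" synthetic weather noise can look like (the paper does not specify its exact models; the scattering model below is an assumption for illustration), fog can be synthesized and used to augment the training set:

```python
import numpy as np

def add_synthetic_fog(image, depth, beta=1.5, airlight=0.9):
    # Atmospheric scattering model: I = J * t + A * (1 - t), with
    # transmission t = exp(-beta * depth). Larger beta = denser fog.
    # image: HxWx3 floats in [0, 1]; depth: HxW, arbitrary units.
    t = np.exp(-beta * depth)[..., None]
    return image * t + airlight * (1.0 - t)

# Augment a training image, then train the detector on the mix of
# clear and fogged images (the second strategy above).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
depth = np.tile(np.linspace(0.0, 2.0, 64), (64, 1))
foggy = add_synthetic_fog(img, depth)
```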

Place, publisher, year, edition, pages
IEEE, 2024
Series
Proceedings (IEEE Workshop on Applications of Computer Vision), ISSN 2472-6737, E-ISSN 2642-9381
Keywords
Computer Vision, Object Detection, Adverse Weather
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-115243 (URN)
10.1109/WACV57701.2024.00735 (DOI)
9798350318937 (ISBN)
9798350318920 (ISBN)
Conference
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, HI, USA, January 3-8, 2024
Funder
EU, Horizon 2020, 858101
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2024-08-12. Bibliographically approved.
Hilger, M., Kubelka, V., Adolfsson, D., Andreasson, H. & Lilienthal, A. (2024). Towards introspective loop closure in 4D radar SLAM. Paper presented at Radar in Robotics: Resilience from Signal to Navigation - Full-Day Workshop at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024.
Towards introspective loop closure in 4D radar SLAM
2024 (English). Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Imaging radar is an emerging sensor modality in the context of Simultaneous Localization and Mapping (SLAM), especially suitable for vision-obstructed environments. This article investigates the use of 4D imaging radars for SLAM and analyzes the challenges in robust loop closure. Previous work indicates that 4D radars, together with inertial measurements, offer ample information for accurate odometry estimation. However, the narrow field of view, limited resolution, and sparse and noisy measurements render loop closure a significantly more challenging problem. Our work builds on the previous work - TBV SLAM - which was proposed for robust loop closure with 360° spinning radars. This article highlights and addresses challenges inherited from a directional 4D radar, such as sparsity, noise, and a reduced field of view, and discusses why the common definition of a loop closure is unsuitable. By combining multiple quality measures for accurate loop closure detection adapted to 4D radar data, significant results in trajectory estimation are achieved; the absolute trajectory error is as low as 0.46 m over a distance of 1.8 km, with consistent operation over multiple environments.
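
The combination of quality measures might be sketched as below; the measure names, weights, and threshold are illustrative placeholders, not the quantities used in the paper.

```python
def verify_loop_candidate(measures, weights, threshold=0.5):
    # Accept a loop closure only if the weighted combination of
    # post-registration quality measures clears a threshold.
    # measures/weights: matching dicts, e.g. alignment quality,
    # scan overlap, consistency with accumulated odometry.
    total = sum(weights[k] for k in measures)
    score = sum(weights[k] * measures[k] for k in measures) / total
    return score > threshold, score

accepted, score = verify_loop_candidate(
    measures={"alignment": 0.8, "overlap": 0.6, "odom_consistency": 0.9},
    weights={"alignment": 1.0, "overlap": 0.5, "odom_consistency": 1.0})
```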

National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-114189 (URN)
Conference
Radar in Robotics: Resilience from Signal to Navigation - Full-Day Workshop at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Funder
EU, Horizon 2020, 858101
Available from: 2024-06-12. Created: 2024-06-12. Last updated: 2024-06-13. Bibliographically approved.
Liao, Q., Sun, D., Zhang, S., Loutfi, A. & Andreasson, H. (2023). Fuzzy Cluster-based Group-wise Point Set Registration with Quality Assessment. IEEE Transactions on Image Processing, 32, 550-564
Fuzzy Cluster-based Group-wise Point Set Registration with Quality Assessment
2023 (English). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 32, p. 550-564. Article in journal (Refereed). Published.
Abstract [en]

This article studies group-wise point set registration and makes the following contributions: "FuzzyGReg", a new fuzzy cluster-based method to register multiple point sets jointly, and "FuzzyQA", the associated quality assessment to check registration accuracy automatically. Given a group of point sets, FuzzyGReg creates a model of fuzzy clusters and treats all the point sets equally as elements of the fuzzy clusters. The group-wise registration is thereby turned into a fuzzy clustering problem. To solve this problem, FuzzyGReg applies a fuzzy clustering algorithm to identify the parameters of the fuzzy clusters while jointly transforming all the point sets to achieve an alignment. Next, based on the identified fuzzy clusters, FuzzyQA calculates the spatial properties of the transformed point sets and then checks the alignment accuracy by comparing the similarity degrees of the spatial properties of the point sets. When a local misalignment is detected, a local re-alignment is performed to improve accuracy. The proposed method is cost-efficient and convenient to implement. In addition, it provides reliable quality assessments in the absence of ground truth and user intervention. In the experiments, different point sets are used to test the proposed method and to make comparisons with state-of-the-art registration techniques. The experimental results demonstrate the effectiveness of our method. The code is available at https://gitsvn-nt.oru.se/qianfang.liao/FuzzyGRegWithQA
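
The fuzzy clustering at the core of FuzzyGReg can be pictured with a plain fuzzy c-means loop over the merged point sets, sketched below; FuzzyGReg additionally estimates a rigid transform per point set inside this loop (and FuzzyQA on top), which is omitted here.

```python
import numpy as np

def fuzzy_cmeans(points, n_clusters=8, m=2.0, iters=50, seed=0):
    # Standard fuzzy c-means: alternate membership and center updates.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))     # membership degrees
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
    return centers, u

# Toy usage on a "merged group" of random 3D points.
pts = np.random.default_rng(1).random((200, 3))
centers, memberships = fuzzy_cmeans(pts)
```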

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Quality assessment, Measurement, Three-dimensional displays, Registers, Probability distribution, Point cloud compression, Optimization, Group-wise registration, registration quality assessment, joint alignment, fuzzy clusters, 3D point sets
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-102755 (URN)
10.1109/TIP.2022.3231132 (DOI)
000908058200002 ()
Funder
Vinnova, 2019-05878; Swedish Research Council Formas, 2019-02264
Available from: 2022-12-16. Created: 2022-12-16. Last updated: 2023-04-03. Bibliographically approved.
Adolfsson, D., Magnusson, M., Alhashimi, A., Lilienthal, A. & Andreasson, H. (2023). Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments. IEEE Transactions on Robotics, 39(2), 1476-1495
Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments
2023 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no. 2, p. 1476-1495. Article in journal (Refereed). Published.
Abstract [en]

This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach, conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcome previous issues with sparsity and bias, and improve on our previous state of the art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
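
The registration objective sketched below illustrates the kind of cost CFEAR minimizes: distances from transformed points to nearby oriented surface points, projected onto the surface normal, under a robust loss. It is a simplified 2D point-to-line sketch with given correspondences, not the paper's actual pipeline.

```python
import numpy as np

def huber(r, delta=0.1):
    # Robust loss that mitigates outliers (one possible choice; the
    # article quantifies several loss functions).
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def point_to_surface_cost(points, surf_pts, surf_normals, pose):
    # pose = (theta, tx, ty) for a planar spinning-radar sweep;
    # correspondences (points[i] <-> surf_pts[i]) are assumed given.
    th, tx, ty = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    moved = points @ R.T + np.array([tx, ty])
    residuals = np.einsum('ij,ij->i', moved - surf_pts, surf_normals)
    return huber(residuals).sum()
```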

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Radar, Sensors, Spinning, Azimuth, Simultaneous localization and mapping, Estimation, Location awareness, Localization, radar odometry, range sensing, SLAM
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems); Robotics
Research subject
Computer and Systems Science; Computer Science
Identifiers
urn:nbn:se:oru:diva-103116 (URN)
10.1109/tro.2022.3221302 (DOI)
000912778500001 ()
2-s2.0-85144032264 (Scopus ID)
Available from: 2023-01-16. Created: 2023-01-16. Last updated: 2023-10-18.
Gupta, H., Lilienthal, A., Andreasson, H. & Kurtser, P. (2023). NDT-6D for color registration in agri-robotic applications. Journal of Field Robotics, 40(6), 1603-1619
NDT-6D for color registration in agri-robotic applications
2023 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 40, no. 6, p. 1603-1619. Article in journal (Refereed). Published.
Abstract [en]

Registration of point cloud data containing both depth and color information is critical for a variety of applications, including in-field robotic plant manipulation, crop growth modeling, and autonomous navigation. However, current state-of-the-art registration methods often fail in challenging agricultural field conditions due to factors such as occlusions, plant density, and variable illumination. To address these issues, we propose the NDT-6D registration method, a color-based variation of the Normal Distribution Transform (NDT) registration approach for point clouds. Our method computes correspondences between point clouds using both geometric and color information and minimizes the distance between these correspondences using only the three-dimensional (3D) geometric dimensions. We evaluate the method using the GRAPES3D dataset collected with a commercial-grade RGB-D sensor mounted on a mobile platform in a vineyard. Results show that registration methods that rely only on depth information fail to provide quality registration for the tested dataset. The proposed color-based variation outperforms state-of-the-art methods, with a root mean square error (RMSE) of 1.1-1.6 cm for NDT-6D compared with 1.1-2.3 cm for other color-information-based methods and 1.2-13.7 cm for non-color-information-based methods. The proposed method is shown to be robust against noise using the TUM RGBD dataset, by artificially adding the kind of noise present in outdoor scenarios. The relative pose error (RPE) increased by approximately 14% for our method compared to an increase of approximately 75% for the best-performing registration method. The obtained average accuracy suggests that the NDT-6D registration method can be used for in-field precision agriculture applications, for example, crop detection, size-based maturity estimation, and growth modeling.
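
The key idea, matching in a joint geometry-plus-color space while minimizing only geometric distances, can be sketched as below (point-to-point matching stands in for the paper's NDT cell distributions; the color weight w is a placeholder).

```python
import numpy as np
from scipy.spatial import cKDTree

def color_aware_correspondences(src_xyz, src_rgb, tgt_xyz, tgt_rgb, w=0.5):
    # Match in 6D (xyz + color scaled by w), but return residuals in
    # 3D only, so color guides the correspondence search without
    # entering the metric being minimized.
    src6 = np.hstack([src_xyz, w * src_rgb])
    tgt6 = np.hstack([tgt_xyz, w * tgt_rgb])
    _, idx = cKDTree(tgt6).query(src6)
    return idx, src_xyz - tgt_xyz[idx]
```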

Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
agricultural robotics, color pointcloud, in-field sensing, machine perception, RGB-D registration, stereo IR, vineyard
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-106131 (URN)
10.1002/rob.22194 (DOI)
000991774400001 ()
2-s2.0-85159844423 (Scopus ID)
Funder
EU, Horizon 2020
Available from: 2023-06-01. Created: 2023-06-01. Last updated: 2023-11-28. Bibliographically approved.
Gupta, H., Andreasson, H., Magnusson, M., Julier, S. & Lilienthal, A. J. (2023). Revisiting Distribution-Based Registration Methods. In: Marques, L.; Markovic, I. (Eds.), 2023 European Conference on Mobile Robots (ECMR). Paper presented at 11th European Conference on Mobile Robots (ECMR 2023), Coimbra, Portugal, September 4-7, 2023 (pp. 43-48). IEEE
Revisiting Distribution-Based Registration Methods
2023 (English). In: 2023 European Conference on Mobile Robots (ECMR) / [ed] Marques, L.; Markovic, I., IEEE, 2023, p. 43-48. Conference paper, Published paper (Refereed).
Abstract [en]

Normal Distribution Transformation (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always coincide with the ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first NDT modification improves likelihood estimates for matching distributions of small population sizes; the second reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements compared to the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions "heavily broadened likelihood NDT" (HBL-NDT) (34.7% success rate) and "overlapping grid cells NDT" (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented in this work. We found that HBL-NDT worked best in scenarios with easy initial-pose difficulty, making it suitable for consecutive point cloud registration in SLAM applications.
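
The covariance-inflation idea behind HBL-NDT can be sketched as a one-line change to the per-cell Gaussian likelihood; the exact inflation scheme in the paper may differ.

```python
import numpy as np

def ndt_cell_loglik(p, mean, cov, inflation=1.0):
    # Gaussian log-likelihood of point p under one NDT cell.
    # inflation = 1 gives classical NDT; inflation > 1 broadens the
    # likelihood tails (the "heavily broadened likelihood" idea),
    # softening discretization artifacts near cell borders.
    cov = inflation * cov
    diff = p - mean
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)
```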

Place, publisher, year, edition, pages
IEEE, 2023
Series
European Conference on Mobile Robots, ISSN 2639-7919, E-ISSN 2767-8733
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-109681 (URN)
10.1109/ECMR59166.2023.10256416 (DOI)
001082260500007 ()
2-s2.0-8517439971 (Scopus ID)
9798350307047 (ISBN)
9798350307054 (ISBN)
Conference
11th European Conference on Mobile Robots (ECMR 2023), Coimbra, Portugal, September 4-7, 2023
Funder
EU, Horizon 2020, 858101
Available from: 2023-11-15. Created: 2023-11-15. Last updated: 2023-11-15. Bibliographically approved.
Gupta, H., Andreasson, H., Lilienthal, A. J. & Kurtser, P. (2023). Robust Scan Registration for Navigation in Forest Environment Using Low-Resolution LiDAR Sensors. Sensors, 23(10), Article ID 4736.
Robust Scan Registration for Navigation in Forest Environment Using Low-Resolution LiDAR Sensors
2023 (English). In: Sensors, E-ISSN 1424-8220, Vol. 23, no. 10, article id 4736. Article in journal (Refereed). Published.
Abstract [en]

Automated forest machines are becoming important due to the complex and dangerous working conditions faced by human operators, which lead to a labor shortage. This study proposes a new method for robust SLAM and tree mapping using low-resolution LiDAR sensors in forestry conditions. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution LiDAR sensors (16- and 32-channel) or narrow-field-of-view solid-state LiDARs, without additional sensory modalities like GPS or IMU. We evaluate our approach on three datasets, including two private and one public dataset, and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared to current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms such as Fast Point Feature Histogram, with a reduction in RMSE of more than 3 m for the 16-channel LiDAR sensor. For solid-state LiDAR, the algorithm achieves a similar RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared to the current approach of using fixed radius-search parameters for pre-processing. Our automated tree trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.
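
As an illustration of the diameter-estimation step (a stand-in under assumptions, not the paper's actual algorithm), a trunk diameter can be recovered from a horizontal slice of trunk points with an algebraic circle fit:

```python
import numpy as np

def fit_trunk_circle(xy):
    # Kasa algebraic circle fit: solve the linear least-squares system
    # 2*cx*x + 2*cy*y + c = x^2 + y^2, then r^2 = c + cx^2 + cy^2.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    diameter = 2.0 * np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), diameter

# Toy usage: noisy points on a 0.4 m diameter trunk cross-section.
rng = np.random.default_rng(2)
ang = rng.uniform(0, 2 * np.pi, 50)
xy = 0.2 * np.c_[np.cos(ang), np.sin(ang)] + rng.normal(0, 0.005, (50, 2))
center, diameter = fit_trunk_circle(xy)
```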

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
tree segmentation, LiDAR mapping, forest inventory, SLAM, forestry robotics, scan registration
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:oru:diva-106315 (URN)
10.3390/s23104736 (DOI)
000997887900001 ()
37430655 (PubMedID)
2-s2.0-85160406537 (Scopus ID)
Funder
EU, Horizon 2020, 858101
Available from: 2023-06-19. Created: 2023-06-19. Last updated: 2023-07-12. Bibliographically approved.
Adolfsson, D., Karlsson, M., Kubelka, V., Magnusson, M. & Andreasson, H. (2023). TBV Radar SLAM - Trust but Verify Loop Candidates. IEEE Robotics and Automation Letters, 8(6), 3613-3620
TBV Radar SLAM - Trust but Verify Loop Candidates
2023 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 6, p. 3613-3620. Article in journal (Refereed). Published.
Abstract [en]

Robust SLAM in large-scale environments requires fault resilience and awareness at multiple stages, from sensing and odometry estimation to loop closure. In this work, we present TBV (Trust But Verify) Radar SLAM, a method for radar SLAM that introspectively verifies loop closure candidates. TBV Radar SLAM achieves a high correct-loop-retrieval rate by combining multiple place-recognition techniques: tightly coupled place similarity and odometry uncertainty search, creating loop descriptors from origin-shifted scans, and delaying loop selection until after verification. Robustness to false constraints is achieved by carefully verifying and selecting the most likely constraints from multiple candidates. Importantly, the verification and selection are carried out after registration, when additional sources of loop evidence can easily be computed. We integrate our loop retrieval and verification method with a robust odometry pipeline within a pose graph framework. In evaluations on public benchmarks, we found that TBV Radar SLAM achieves 65% lower error than the previous state of the art. We also show that it generalizes across environments without needing to change any parameters. We provide the open-source implementation at https://github.com/dan11003/tbv_slam_public
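
The "trust but verify" selection might be sketched as follows: register every candidate, compute post-registration loop evidence, and commit only the best-verified constraint. Function names and the threshold are illustrative placeholders.

```python
def select_verified_loop(candidates, register, evidence, min_evidence=0.7):
    # candidates: candidate loop pairs from place recognition;
    # register(c) -> relative pose estimate;
    # evidence(c, pose) -> scalar combining post-registration checks
    # (e.g. alignment quality, odometry consistency).
    best = None
    for c in candidates:
        pose = register(c)
        score = evidence(c, pose)
        if score >= min_evidence and (best is None or score > best[2]):
            best = (c, pose, score)
    return best  # None: verification failed, add no loop constraint
```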

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
SLAM, localization, radar, introspection
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-106249 (URN)
10.1109/LRA.2023.3268040 (DOI)
000981889200013 ()
2-s2.0-85153499426 (Scopus ID)
Funder
Vinnova, 2021-04714; 2019-05878
Available from: 2023-06-13. Created: 2023-06-13. Last updated: 2024-01-17. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-2953-1564