Örebro University Publications

Publications (10 of 103)
Forte, P., Gupta, H., Andreasson, H., Köckemann, U. & Lilienthal, A. J. (2025). On Robust Context-Aware Navigation for Autonomous Ground Vehicles. IEEE Robotics and Automation Letters, 10(2), 1449-1456
On Robust Context-Aware Navigation for Autonomous Ground Vehicles
2025 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no 2, p. 1449-1456. Article in journal (Refereed) Published
Abstract [en]

We propose a context-aware navigation framework designed to support the navigation of autonomous ground vehicles, including articulated ones. The proposed framework employs a behavior tree with novel nodes to manage the navigation tasks: planner and controller selection, path planning, path following, and recovery. It incorporates a weather detection system and configurable global path planning and controller strategy selectors implemented as behavior tree action nodes. These components are integrated into a sub-tree that supervises and manages the available options and parameters for global planners and control strategies by evaluating map and real-time sensor data. The proposed approach offers three key benefits: overcoming the limitations of single-planner strategies in challenging scenarios; ensuring efficient path planning by balancing optimization against computational effort; and achieving smoother navigation by reducing path curvature and improving drivability. The performance of the proposed framework is analyzed empirically and compared against state-of-the-art navigation systems with single path-planning strategies.
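
To make the selector idea above concrete, a minimal Python sketch of a planner-selection action node follows; it is not the authors' implementation, and names such as PlannerSelector, the weather rule, and the plan() interface are illustrative assumptions only.

class PlannerSelector:
    """Hypothetical behavior-tree action node that picks a global planner
    from the available options based on map and real-time sensor context."""

    def __init__(self, planners):
        # planners: dict mapping a strategy name to a planner object that
        # exposes plan(start, goal, grid_map) -> path or None
        self.planners = planners

    def tick(self, context):
        # Prefer a conservative, curvature-aware planner in adverse weather
        # or rough terrain; otherwise use a faster optimizing planner.
        if context.get("weather") in ("snow", "heavy_rain", "fog") or context.get("rough_terrain"):
            strategy = "conservative"
        else:
            strategy = "optimizing"
        path = self.planners[strategy].plan(context["start"], context["goal"], context["map"])
        return ("SUCCESS", path) if path is not None else ("FAILURE", None)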

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Autonomous Vehicle Navigation, Motion and Path Planning, Robotics and Automation in Construction
National Category
Robotics and automation; Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117948 (URN) 10.1109/LRA.2024.3520920 (DOI) 001389508500001 ()
Funder
EU, Horizon 2020, 858101
Available from: 2024-12-26 Created: 2024-12-26 Last updated: 2025-09-15. Bibliographically approved
Borngrund, C., Bodin, U., Andreasson, H. & Sandin, F. (2024). Automating the Short-Loading Cycle: Survey and Integration Framework. Applied Sciences, 14(11), 4674-4674
Automating the Short-Loading Cycle: Survey and Integration Framework
2024 (English) In: Applied Sciences, E-ISSN 2076-3417, Vol. 14, no 11, p. 4674-4674. Article in journal (Refereed) Published
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-118335 (URN) 10.3390/app14114674 (DOI) 001245643100001 () 2-s2.0-85195976956 (Scopus ID)
Available from: 2025-01-13 Created: 2025-01-13 Last updated: 2025-01-30. Bibliographically approved
Alhashimi, A., Adolfsson, D., Andreasson, H., Lilienthal, A. & Magnusson, M. (2024). BFAR: improving radar odometry estimation using a bounded false alarm rate detector. Autonomous Robots, 48(8), Article ID 29.
BFAR: improving radar odometry estimation using a bounded false alarm rate detector
2024 (English) In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 48, no 8, article id 29. Article in journal (Refereed) Published
Abstract [en]

This work introduces a novel detector, bounded false-alarm rate (BFAR), for distinguishing true detections from noise in radar data, leading to improved accuracy in radar odometry estimation. Scanning frequency-modulated continuous wave (FMCW) radars can serve as valuable tools for localization and mapping under low-visibility conditions. However, they tend to yield a higher level of noise than the more commonly employed lidars, thereby introducing additional challenges to the detection process. We propose a new radar target detector called BFAR, which uses an affine transformation of the estimated noise level compared to the classical constant false-alarm rate (CFAR) detector. This transformation employs learned parameters that minimize the error in odometry estimation. Conceptually, BFAR can be viewed as an optimized blend of CFAR and fixed-level thresholding designed to minimize odometry estimation error. The strength of this approach lies in its simplicity: only a single parameter needs to be learned from a training dataset when the affine transformation scale parameter is maintained. Compared to ad-hoc detectors, BFAR has the advantage of a specified upper bound for the false-alarm probability and better noise handling than CFAR. Repeatability tests show that BFAR yields highly repeatable detections with minimal redundancy. We have conducted simulations to compare the detection and false-alarm probabilities of BFAR with those of three baselines under non-homogeneous noise and varying target sizes. The results show that BFAR outperforms the other detectors. Moreover, we apply BFAR to the use case of radar odometry, adapting a recent odometry pipeline by replacing its original conservative filtering with BFAR. In this way, we reduce the translation/rotation odometry errors per 100 m from 1.3%/0.4° to 1.12%/0.38° and from 1.62%/0.57° to 1.21%/0.32°, improving the translation error by 14.2% and 25% on the Oxford and MulRan public datasets, respectively.
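
As a reading aid for the thresholding idea described above (not the paper's code, and with placeholder window sizes and parameter values), a small Python sketch contrasting a cell-averaging CFAR rule with a BFAR-style affine rule:

import numpy as np

def noise_level(power, i, guard=2, train=8):
    # Cell-averaging estimate of the local noise level around cell i (1-D sketch).
    left = power[max(0, i - guard - train):max(0, i - guard)]
    right = power[i + guard + 1:i + guard + 1 + train]
    cells = np.concatenate([left, right])
    return cells.mean() if cells.size else np.inf

def cfar_detections(power, scale=3.0):
    # Classical CA-CFAR: the threshold is a pure scaling of the noise estimate.
    return [i for i, p in enumerate(power) if p > scale * noise_level(power, i)]

def bfar_detections(power, scale=3.0, offset=5.0):
    # BFAR-style rule: an affine transform of the noise estimate. The scale and
    # offset would be learned to minimize odometry error; values here are placeholders.
    return [i for i, p in enumerate(power) if p > scale * noise_level(power, i) + offset]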

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Radar, CFAR, Odometry, FMCW
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-117575 (URN) 10.1007/s10514-024-10176-2 (DOI) 001358908800001 () 2-s2.0-85209565335 (Scopus ID)
Funder
Örebro University
Available from: 2024-12-05 Created: 2024-12-05 Last updated: 2025-02-07. Bibliographically approved
Bazzana, B., Andreasson, H. & Grisetti, G. (2024). How-to Augmented Lagrangian on Factor Graphs. IEEE Robotics and Automation Letters, 9(3), 2806-2813
How-to Augmented Lagrangian on Factor Graphs
2024 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no 3, p. 2806-2813. Article in journal (Refereed) Published
Abstract [en]

Factor graphs are a very powerful graphical representation used to model many problems in robotics. They are widely used in the areas of Simultaneous Localization and Mapping (SLAM), computer vision, and localization. However, the physics of many real-world problems is better modeled through constraints, e.g., estimation in the presence of inconsistent measurements, or optimal control. Constraint handling is hard because the solution cannot be found by following the gradient descent direction, as done by traditional factor graph solvers. The core idea of our method is to encapsulate the Augmented Lagrangian (AL) method in factors that can be integrated straightforwardly into existing factor graph solvers. Besides being a tool to unify different robotics areas, the modularity of factor graphs makes it easy to combine multiple objectives and to exploit the problem structure for efficiency. We show the generality of our approach by addressing three applications arising from different areas: pose estimation, rotation synchronization, and Model Predictive Control (MPC) of a pseudo-omnidirectional platform. We implemented our approach using C++ and ROS. Application results show that our approach compares favorably against domain-specific approaches.
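
For readers who want the underlying formula, this is the standard Augmented Lagrangian for an equality-constrained problem min_x f(x) subject to c(x) = 0, which is the quantity such constraint factors would encode (generic textbook form, not notation taken from the paper):

% Augmented Lagrangian and multiplier update (generic form)
\mathcal{L}_{\rho}(x, \lambda) = f(x) + \lambda^{\top} c(x) + \tfrac{\rho}{2}\,\lVert c(x) \rVert^{2},
\qquad
\lambda_{k+1} = \lambda_{k} + \rho\, c(x_{k+1}).

Each outer iteration minimizes the Augmented Lagrangian over x with the multipliers fixed, which is the kind of unconstrained nonlinear least-squares subproblem a factor graph solver can handle, and then updates the multipliers.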

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Optimization, Robots, Computational modeling, Trajectory, Simultaneous localization and mapping, Synchronization, Optimal control, Localization, integrated planning and control, optimization and optimal control
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-112817 (URN) 10.1109/LRA.2024.3361282 (DOI) 001174297500013 () 2-s2.0-85184334012 (Scopus ID)
Funder
Swedish Research Council Formas
Available from: 2024-04-03 Created: 2024-04-03 Last updated: 2025-02-07. Bibliographically approved
Gupta, H., Kotlyar, O., Andreasson, H. & Lilienthal, A. J. (2024). Robust Object Detection in Challenging Weather Conditions. In: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Conference Proceedings. Paper presented at 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, HI, USA, January 3-8, 2024 (pp. 7508-7517). IEEE
Robust Object Detection in Challenging Weather Conditions
2024 (English) In: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Conference Proceedings, IEEE, 2024, p. 7508-7517. Conference paper, Published paper (Refereed)
Abstract [en]

Object detection is crucial in diverse autonomous systems like surveillance, autonomous driving, and driver assistance, ensuring safety by recognizing pedestrians, vehicles, traffic lights, and signs. However, adverse weather conditions such as snow, fog, and rain pose a challenge, affecting detection accuracy and risking accidents and damage. This clearly demonstrates the need for robust object detection solutions that work in all weather conditions. We employed three strategies to enhance deep learning-based object detection in adverse weather: training on real-world all-weather images, training on images with synthetic augmented weather noise, and integrating object detection with adverse-weather image denoising. The synthetic weather noise is generated using analytical methods, GANs, and style-transfer networks. We compared the performance of these strategies by training object detection models on real-world all-weather images from the BDD100K dataset and, for assessment, employed unseen real-world adverse weather images. Adverse-weather denoising methods were evaluated by denoising real-world adverse weather images, and the object detection results on denoised and on the original noisy images were compared. We found that the model trained on all-weather real-world images performed best, while the strategy of performing object detection on denoised images performed worst.

Place, publisher, year, edition, pages
IEEE, 2024
Series
Proceedings (IEEE Workshop on Applications of Computer Vision), ISSN 2472-6737, E-ISSN 2642-9381
Keywords
Computer Vision, Object Detection, Adverse Weather
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-115243 (URN) 10.1109/WACV57701.2024.00735 (DOI) 001222964607064 () 9798350318937 (ISBN) 9798350318920 (ISBN)
Conference
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, HI, USA, January 3-8, 2024
Funder
EU, Horizon 2020, 858101
Available from: 2024-08-07 Created: 2024-08-07 Last updated: 2025-03-17. Bibliographically approved
Molina, S., Mannucci, A., Magnusson, M., Adolfsson, D., Andreasson, H., Hamad, M., . . . Lilienthal, A. J. (2024). The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots. IEEE robotics & automation magazine, 31(3), 48-59
The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots
2024 (English) In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 31, no 3, p. 48-59. Article in journal (Refereed) Published
Abstract [en]

Current intralogistics services require keeping up with e-commerce demands, reducing delivery times and waste, and increasing overall flexibility. As a consequence, the use of automated guided vehicles (AGVs) and, more recently, autonomous mobile robots (AMRs) for logistics operations is steadily increasing.

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Robots, Safety, Navigation, Mobile robots, Human-robot interaction, Hidden Markov models, Trajectory
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-108145 (URN) 10.1109/MRA.2023.3296983 (DOI) 001051249900001 () 2-s2.0-85167792783 (Scopus ID)
Funder
EU, Horizon 2020, 732737
Available from: 2023-09-14 Created: 2023-09-14 Last updated: 2025-02-07. Bibliographically approved
Hilger, M., Kubelka, V., Adolfsson, D., Andreasson, H. & Lilienthal, A. (2024). Towards introspective loop closure in 4D radar SLAM. Paper presented at Radar in Robotics: Resilience from Signal to Navigation - Full-Day Workshop at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024.
Towards introspective loop closure in 4D radar SLAM
2024 (English) Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Imaging radar is an emerging sensor modality in the context of Simultaneous Localization and Mapping (SLAM), especially suitable for vision-obstructed environments. This article investigates the use of 4D imaging radars for SLAM and analyzes the challenges in robust loop closure. Previous work indicates that 4D radars, together with inertial measurements, offer ample information for accurate odometry estimation. However, the low field of view, limited resolution, and sparse and noisy measurements render loop closure a significantly more challenging problem. Our work builds on the previous work TBV SLAM, which was proposed for robust loop closure with 360° spinning radars. This article highlights and addresses challenges inherited from a directional 4D radar, such as sparsity, noise, and reduced field of view, and discusses why the common definition of a loop closure is unsuitable. By combining multiple quality measures for accurate loop closure detection adapted to 4D radar data, significant results in trajectory estimation are achieved; the absolute trajectory error is as low as 0.46 m over a distance of 1.8 km, with consistent operation over multiple environments.
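
As a generic illustration of fusing several loop-closure quality measures into a single accept/reject decision (the actual measures, weights, and threshold of the paper are not reproduced here), a brief Python sketch:

def loop_closure_score(measures, weights):
    # measures: dict of normalized quality measures in [0, 1], e.g.
    # {"alignment": 0.9, "odometry_consistency": 0.8, "appearance": 0.7}.
    # weights: dict with the same keys; both are illustrative placeholders.
    return sum(weights[name] * value for name, value in measures.items())

def accept_loop_closure(measures, weights, threshold=0.7):
    # Accept the candidate only if the combined quality score is high enough.
    return loop_closure_score(measures, weights) >= threshold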

National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-114189 (URN)
Conference
Radar in Robotics: Resilience from Signal to Navigation - Full-Day Workshop at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Funder
EU, Horizon 2020, 858101
Available from: 2024-06-12 Created: 2024-06-12 Last updated: 2025-02-09. Bibliographically approved
Gupta, H., Kotlyar, O., Andreasson, H. & Lilienthal, A. J. (2024). Video WeAther RecoGnition (VARG): An Intensity-Labeled Video Weather Recognition Dataset. Journal of imaging, 10(11), Article ID 281.
Video WeAther RecoGnition (VARG): An Intensity-Labeled Video Weather Recognition Dataset
2024 (English) In: Journal of imaging, E-ISSN 2313-433X, Vol. 10, no 11, article id 281. Article in journal (Refereed) Published
Abstract [en]

Adverse weather (rain, snow, and fog) can negatively impact computer vision tasks by introducing noise in sensor data; therefore, it is essential to recognize weather conditions for building safe and robust autonomous systems in the agricultural and autonomous driving/drone sectors. The performance degradation in computer vision tasks due to adverse weather depends on the type of weather and the intensity, which influences the amount of noise in sensor data. However, existing weather recognition datasets often lack intensity labels, limiting their effectiveness. To address this limitation, we present VARG, a novel video-based weather recognition dataset with weather intensity labels. The dataset comprises a diverse set of short video sequences collected from various social media platforms and videos recorded by the authors, processed into usable clips, and categorized into three major weather categories, rain, fog, and snow, with three intensity classes: absent/no, moderate, and high. The dataset contains 6742 annotated clips from 1079 videos, with the training set containing 5159 clips and the test set containing 1583 clips. Two sets of annotations are provided for training, the first set to train the models as a multi-label weather intensity classifier and the second set to train the models as a multi-class classifier for three weather scenarios. This paper describes the dataset characteristics and presents an evaluation study using several deep learning-based video recognition approaches for weather intensity prediction.

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
Video classification, weather detection, weather intensity classification
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:oru:diva-117637 (URN) 10.3390/jimaging10110281 (DOI) 001365444700001 () 39590745 (PubMedID) 2-s2.0-85210322007 (Scopus ID)
Funder
EU, Horizon 2020, 858101
Available from: 2024-12-09 Created: 2024-12-09 Last updated: 2024-12-09. Bibliographically approved
Borngrund, C., Bodin, U., Sandin, F. & Andreasson, H. (2023). Autonomous Navigation of Wheel Loaders using Task Decomposition and Reinforcement Learning. In: 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE). Paper presented at the 19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, August 26-30, 2023. IEEE
Autonomous Navigation of Wheel Loaders using Task Decomposition and Reinforcement Learning
2023 (English) In: 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

The short-loading cycle is a repetitive task performed in high quantities, making it a good candidate for automation. Expert operators perform this task to maintain high productivity while minimizing the environmental impact of the energy used to propel the wheel loader. The need to balance productivity and environmental performance is essential for the sub-task of navigating the wheel loader between the pile of material and a dump truck receiving the material. This task is further complicated by behaviours of the wheel loader, such as wheel slip, which depend on tire-to-surface friction that is hard to model. Such uncertainties motivate the use of data-driven and adaptable approaches like reinforcement learning to automate navigation. In this paper, we examine the possibility of using reinforcement learning for the navigation sub-task. We focus on the process of developing a solution to the complete sub-task by decomposing it into two distinct steps and training two different agents to perform them separately. These steps are reversing from the pile and approaching the dump truck. The agents are trained in a simulation environment in which the wheel loader is modelled. Our results indicate that task decomposition can be helpful in performing the navigation compared to training a single agent for the entire sub-task. We present unsuccessful experiments using a single agent for the entire sub-task to illustrate the difficulties associated with such an approach. A video of the results is available at https://youtu.be/IZbgvHvSltI.
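
A schematic Python sketch of the two-agent decomposition described above (the environment interface, the stage-switch conditions, and the agent API are hypothetical, not taken from the paper):

def run_navigation_subtask(env, reverse_agent, approach_agent, max_steps=1000):
    # Stage 1: reverse away from the pile with the first trained policy.
    # Stage 2: approach the dump truck with the second trained policy.
    obs = env.reset()
    for stage, agent in (("reverse_from_pile", reverse_agent),
                         ("approach_dump_truck", approach_agent)):
        for _ in range(max_steps):
            obs, _reward, done, info = env.step(agent.act(obs))
            if info.get("stage_complete") == stage:
                break  # hand over to the next stage's agent
            if done:
                return False  # episode ended before the stage was completed
        else:
            return False  # stage timed out
    return True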

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Automation Science and Engineering, ISSN 2161-8070, E-ISSN 2161-8089
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-118579 (URN) 10.1109/CASE56687.2023.10260481 (DOI) 2-s2.0-85174394497 (Scopus ID) 9798350320695 (ISBN) 9798350320701 (ISBN)
Conference
19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, August 26-30, 2023
Funder
Vinnova, 2021-05035
Note

This research was conducted with support from Sweden’s Innovation Agency and the VALD project under grant agreement no. 2021-05035.

Available from: 2025-01-16 Created: 2025-01-16 Last updated: 2025-02-07. Bibliographically approved
Liao, Q., Sun, D., Zhang, S., Loutfi, A. & Andreasson, H. (2023). Fuzzy Cluster-based Group-wise Point Set Registration with Quality Assessment. IEEE Transactions on Image Processing, 32, 550-564
Fuzzy Cluster-based Group-wise Point Set Registration with Quality Assessment
2023 (English) In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 32, p. 550-564. Article in journal (Refereed) Published
Abstract [en]

This article studies group-wise point set registration and makes the following contributions: "FuzzyGReg", a new fuzzy cluster-based method to register multiple point sets jointly, and "FuzzyQA", the associated quality assessment to check registration accuracy automatically. Given a group of point sets, FuzzyGReg creates a model of fuzzy clusters and treats all the point sets equally as elements of the fuzzy clusters. The group-wise registration is thereby turned into a fuzzy clustering problem. To solve this problem, FuzzyGReg applies a fuzzy clustering algorithm to identify the parameters of the fuzzy clusters while jointly transforming all the point sets to achieve an alignment. Next, based on the identified fuzzy clusters, FuzzyQA calculates the spatial properties of the transformed point sets and then checks the alignment accuracy by comparing the similarity degrees of the spatial properties of the point sets. When a local misalignment is detected, a local re-alignment is performed to improve accuracy. The proposed method is cost-efficient and convenient to implement. In addition, it provides reliable quality assessments in the absence of ground truth and user intervention. In the experiments, different point sets are used to test the proposed method and to make comparisons with state-of-the-art registration techniques. The experimental results demonstrate the effectiveness of our method. The code is available at https://gitsvn-nt.oru.se/qianfang.liao/FuzzyGRegWithQA
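
For orientation, a group-wise fuzzy-clustering registration of this kind can be written as jointly minimizing a fuzzy c-means style objective over the memberships u, the cluster centres c, and one transform T_k per point set; the notation below is a generic sketch and is not taken from the paper:

% Generic fuzzy c-means style group-wise registration objective over K point sets
J(U, C, \{T_k\}) = \sum_{k=1}^{K} \sum_{j=1}^{N_k} \sum_{i=1}^{N_c} u_{kji}^{\,m}\, \lVert T_k(x_{kj}) - c_i \rVert^{2},
\qquad \text{subject to} \quad \sum_{i=1}^{N_c} u_{kji} = 1 \;\; \forall\, k, j.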

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Quality assessment, Measurement, Three-dimensional displays, Registers, Probability distribution, Point cloud compression, Optimization, Group-wise registration, registration quality assessment, joint alignment, fuzzy clusters, 3D point sets
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-102755 (URN) 10.1109/TIP.2022.3231132 (DOI) 000908058200002 ()
Funder
Vinnova, 2019-05878; Swedish Research Council Formas, 2019-02264
Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2025-02-07. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2953-1564
