Magnusson, Martin, Docent (ORCID iD: orcid.org/0000-0001-8658-2985)
Publications (10 of 87)
Sun, S., Mielle, M., Lilienthal, A. J. & Magnusson, M. (2024). 3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding. Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024. IEEE
3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding
2024 (English)Conference paper, Published paper (Refereed)
Abstract [en]

Neural implicit surface representations are currently receiving a lot of interest as a means to achieve high-fidelity surface reconstruction at a low memory cost, compared to traditional explicit representations. However, state-of-the-art methods still struggle with excessive memory usage and non-smooth surfaces. This is particularly problematic in large-scale applications with sparse inputs, as is common in robotics use cases. To address these issues, we first introduce a sparse structure, tri-quadtrees, which represents the environment using learnable features stored in three planar quadtree projections. Second, we concatenate the learnable features with a Fourier feature positional encoding. The combined features are then decoded into signed distance values through a small multi-layer perceptron. We demonstrate that this approach facilitates smoother reconstruction with a higher completion ratio and fewer holes. Compared to two recent baselines, one implicit and one explicit, our approach requires only 10%–50% as much memory, while achieving competitive quality. The code is released on https://github.com/ljjTYJR/3QFP.
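
As an illustration of the Fourier feature positional encoding mentioned in the abstract, the sketch below (plain NumPy, with a hypothetical number of frequency bands; not the authors' code) shows how a 3D query point could be lifted to sinusoidal features before being concatenated with the planar quadtree features and decoded by a small MLP.

    import numpy as np

    def fourier_features(p, num_bands=6):
        # Encode a 3D point as [sin(2^k * pi * p), cos(2^k * pi * p)] for
        # k = 0 .. num_bands-1; num_bands is an illustrative choice.
        p = np.asarray(p, dtype=float)                    # shape (3,)
        freqs = (2.0 ** np.arange(num_bands)) * np.pi     # shape (num_bands,)
        angles = p[:, None] * freqs[None, :]              # shape (3, num_bands)
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=1).ravel()

    # In a 3QFP-style pipeline, this encoding would be concatenated with the
    # learnable features interpolated from the three planar quadtrees and fed
    # to a small MLP that outputs a signed distance value.
    print(fourier_features([0.3, -1.2, 0.8]).shape)       # (36,) = 3 coords * 2 * 6 bands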

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-117117 (URN), 10.1109/ICRA57147.2024.10610338 (DOI), 001294576203025 (), 2-s2.0-85202450420 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Funder
EU, Horizon 2020, 101017274
Available from: 2024-10-30 Created: 2024-10-30 Last updated: 2025-02-09. Bibliographically approved
Heuer, L., Palmieri, L., Mannucci, A., Koenig, S. & Magnusson, M. (2024). Benchmarking Multi-Robot Coordination in Realistic, Unstructured Human-Shared Environments. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13-17 May, 2024 (pp. 14541-14547). Institute of Electrical and Electronics Engineers (IEEE)
Benchmarking Multi-Robot Coordination in Realistic, Unstructured Human-Shared Environments
2024 (English)In: 2024 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 14541-14547Conference paper, Published paper (Refereed)
Abstract [en]

Coordinating a fleet of robots in unstructured, human-shared environments is challenging. Human behavior is hard to predict, and its uncertainty impacts the performance of the robotic fleet. Various multi-robot planning and coordination algorithms have been proposed, ranging from Multi-Agent Path Finding (MAPF) methods to precedence-based algorithms. However, it is still unclear how human presence impacts different coordination strategies in both simulated environments and the real world. With the goal of studying and further improving multi-robot planning capabilities in those settings, we propose a method to develop and benchmark different multi-robot coordination algorithms in realistic, unstructured and human-shared environments. To this end, we introduce a multi-robot benchmark framework that is based on state-of-the-art open-source navigation and simulation frameworks and can use different types of robots, environments and human motion models. We show a possible application of the benchmark framework with two different environments and three centralized coordination methods (two MAPF algorithms and a loosely-coupled coordination method based on precedence constraints). We evaluate each environment for different human densities to investigate its impact on each coordination method. We also present preliminary results that show how informing each coordination method about human presence can help it find faster paths for the robots.
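
A minimal sketch of the kind of evaluation loop such a benchmark framework implies is given below; the environment names, coordination-method labels, metrics, and the run_trial stub are purely illustrative assumptions, not the framework's actual API.

    import itertools

    ENVIRONMENTS = ["env_a", "env_b"]                          # e.g. two simulated maps
    COORDINATORS = ["mapf_1", "mapf_2", "precedence_based"]    # three coordination methods
    HUMAN_DENSITIES = [0, 5, 10, 20]                           # simulated humans per run

    def run_trial(environment, coordinator, n_humans):
        # Stub: a real framework would launch the simulator, the navigation
        # stack and a human motion model here, and return metrics such as
        # makespan and success rate.
        return {"makespan_s": None, "success": None}

    results = [
        (env, coord, n, run_trial(env, coord, n))
        for env, coord, n in itertools.product(ENVIRONMENTS, COORDINATORS, HUMAN_DENSITIES)
    ]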

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Adversarial machine learning, Fleet operations, Human robot interaction, Industrial robots, Intelligent robots, Microrobots, Multi agent systems, Multipurpose robots, Nanorobotics, Nanorobots, Robot programming, Coordination methods, Human behaviors, Multi agent, Multi-robot coordination, Multirobots, Performance, Planning algorithms, Robot coordination, Robot planning, Uncertainty, Chatbots
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:oru:diva-118538 (URN), 10.1109/ICRA57147.2024.10611005 (DOI), 2-s2.0-85202452005 (Scopus ID), 9798350384574 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13-17 May, 2024
Funder
EU, Horizon 2020, 101017274
Note

Funding:

This work was partly supported by the EU Horizon 2020 research and innovation program under grant agreement No. 101017274 (DARKO) and NSF grant 1837779

Available from: 2025-01-15 Created: 2025-01-15 Last updated: 2025-01-15. Bibliographically approved
Alhashimi, A., Adolfsson, D., Andreasson, H., Lilienthal, A. & Magnusson, M. (2024). BFAR: improving radar odometry estimation using a bounded false alarm rate detector. Autonomous Robots, 48(8), Article ID 29.
BFAR: improving radar odometry estimation using a bounded false alarm rate detector
2024 (English)In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 48, no 8, article id 29Article in journal (Refereed) Published
Abstract [en]

This work introduces a novel detector, bounded false-alarm rate (BFAR), for distinguishing true detections from noise in radar data, leading to improved accuracy in radar odometry estimation. Scanning frequency-modulated continuous wave (FMCW) radars can serve as valuable tools for localization and mapping under low visibility conditions. However, they tend to yield a higher level of noise in comparison to the more commonly employed lidars, thereby introducing additional challenges to the detection process. We propose a new radar target detector called BFAR which, in contrast to the classical constant false-alarm rate (CFAR) detector, uses an affine transformation of the estimated noise level. This transformation employs learned parameters that minimize the error in odometry estimation. Conceptually, BFAR can be viewed as an optimized blend of CFAR and fixed-level thresholding designed to minimize odometry estimation error. The strength of this approach lies in its simplicity: only a single parameter needs to be learned from a training dataset when the scale parameter of the affine transformation is kept fixed. Compared to ad-hoc detectors, BFAR has the advantage of a specified upper bound on the false-alarm probability, and better noise handling than CFAR. Repeatability tests show that BFAR yields highly repeatable detections with minimal redundancy. We have conducted simulations to compare the detection and false-alarm probabilities of BFAR with those of three baselines under non-homogeneous noise and varying target sizes. The results show that BFAR outperforms the other detectors. Moreover, we apply BFAR to the use case of radar odometry, and adapt a recent odometry pipeline, replacing its original conservative filtering with BFAR. In this way, we reduce the translation/rotation odometry errors per 100 m from 1.3%/0.4° to 1.12%/0.38°, and from 1.62%/0.57° to 1.21%/0.32°, improving translation error by 14.2% and 25% on the Oxford and MulRan public datasets, respectively.
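
The core idea — replacing CFAR's scaled noise estimate with an affine transformation of it — can be illustrated with the toy NumPy sketch below. The window sizes and the parameters a and b are placeholders chosen for illustration; in BFAR they would be learned to minimise odometry error.

    import numpy as np

    def noise_estimate(power, guard=2, train=8):
        # Cell-averaging noise estimate Z for every cell of a 1D range profile.
        n = len(power)
        z = np.zeros(n)
        for i in range(n):
            lo, hi = max(0, i - guard - train), min(n, i + guard + train + 1)
            left = power[lo:max(lo, i - guard)]
            right = power[min(hi, i + guard + 1):hi]
            cells = np.concatenate([left, right])
            z[i] = cells.mean() if len(cells) else np.inf
        return z

    def cfar_detections(power, alpha=3.0):
        return power > alpha * noise_estimate(power)      # classical CFAR: T = alpha * Z

    def bfar_detections(power, a=3.0, b=0.05):
        return power > a * noise_estimate(power) + b      # BFAR: affine threshold T = a * Z + b

    profile = np.abs(np.random.randn(200)) ** 2
    print(cfar_detections(profile).sum(), bfar_detections(profile).sum())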

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Radar, CFAR, Odometry, FMCW
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-117575 (URN), 10.1007/s10514-024-10176-2 (DOI), 001358908800001 (), 2-s2.0-85209565335 (Scopus ID)
Funder
Örebro University
Available from: 2024-12-05 Created: 2024-12-05 Last updated: 2025-02-07. Bibliographically approved
Kubelka, V., Fritz, E. & Magnusson, M. (2024). Do we need scan-matching in radar odometry? In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 13710-13716). IEEE Robotics and Automation Society
Do we need scan-matching in radar odometry?
2024 (English)In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE Robotics and Automation Society, 2024, p. 13710-13716Conference paper, Published paper (Refereed)
Abstract [en]

There is a current increase in the development of "4D" Doppler-capable radar and lidar range sensors that produce 3D point clouds where all points also have information about the radial velocity relative to the sensor. 4D radars in particular are interesting for object perception and navigation in low-visibility conditions (dust, smoke) where lidars and cameras typically fail. With the advent of high-resolution Doppler-capable radars comes the possibility of estimating odometry from single point clouds, foregoing the need for scan registration, which is error-prone in feature-sparse field environments. We compare several odometry estimation methods, from direct integration of Doppler/IMU data and Kalman filter sensor fusion to 3D scan-to-scan and scan-to-map registration, on three datasets with data from two recent 4D radars and two IMUs. Surprisingly, our results show that odometry from Doppler and IMU data alone gives similar or better results than 3D point cloud registration. In our experiments, the position drift can be as low as 0.9% over 1.8 km and 4.5 km trajectories. This allows accurate estimation of 6-DOF ego-motion over long distances, also in feature-sparse mine environments. These results are useful not least for applications of navigation with resource-constrained robot platforms in feature-sparse and low-visibility conditions such as mining, construction, and search & rescue operations.
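
To make the "Doppler/IMU only" alternative concrete, a bare-bones dead-reckoning sketch is shown below: orientation is propagated from gyro rates and translation from the body-frame velocity estimated via Doppler. Variable names and the data layout are assumptions for illustration, not the paper's implementation (which also evaluates Kalman-filter fusion and scan registration).

    import numpy as np

    def so3_exp(phi):
        # Rodrigues' formula for the SO(3) exponential map.
        angle = np.linalg.norm(phi)
        if angle < 1e-12:
            return np.eye(3)
        k = phi / angle
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    def dead_reckon(doppler_velocities, gyro_rates, dt):
        # doppler_velocities: (N, 3) body-frame velocities estimated from Doppler,
        # gyro_rates: (N, 3) angular rates from the IMU, dt: sample period [s].
        R, p = np.eye(3), np.zeros(3)      # world-from-body rotation, position
        for v_body, w in zip(doppler_velocities, gyro_rates):
            R = R @ so3_exp(np.asarray(w) * dt)
            p = p + R @ np.asarray(v_body) * dt
        return R, p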

Place, publisher, year, edition, pages
IEEE Robotics and Automation Society, 2024
Keywords
Point cloud compression, Accuracy, Three-dimensional displays, Laser radar, Estimation, Radar, Radar imaging, 4D Radar, Radar Odometry, Mobile robot, Localization
National Category
Robotics and automation
Research subject
Computer Science; Electrical Engineering
Identifiers
urn:nbn:se:oru:diva-118183 (URN), 10.1109/ICRA57147.2024.10610666 (DOI), 2-s2.0-85202433241 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Projects
Sweden’s Innovation Agency under grant number 2021-04714 (Radarize)
Funder
Vinnova, 2021-04714
Available from: 2025-01-09 Created: 2025-01-09 Last updated: 2025-02-09. Bibliographically approved
Galeote-Luque, A., Kubelka, V., Magnusson, M., Ruiz-Sarmiento, J.-R. & Gonzalez-Jimenez, J. (2024). Doppler-only Single-scan 3D Vehicle Odometry. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 13703-13709). IEEE Robotics and Automation Society
Doppler-only Single-scan 3D Vehicle Odometry
2024 (English)In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE Robotics and Automation Society, 2024, p. 13703-13709Conference paper, Published paper (Refereed)
Abstract [en]

We present a novel 3D odometry method that recovers the full motion of a vehicle only from a Doppler-capable range sensor. It leverages the radial velocities measured from the scene, estimating the sensor’s velocity from a single scan. The vehicle’s 3D motion, defined by its linear and angular velocities, is calculated taking into consideration its kinematic model, which provides a constraint between the velocity measured at the sensor frame and the vehicle frame. Experiments carried out prove the viability of our single-sensor method compared to mounting an additional IMU. Our method provides a more reliable translation estimate of the sensor, compared to the errors linked to IMUs due to noise and biases. Its short-term accuracy and fast operation (~5 ms) make it a proper candidate to supply the initialization to more complex localization algorithms or mapping pipelines. Not only does it reduce the error of the mapper, it does so at a level of accuracy comparable to using an IMU, all without the need to mount and calibrate an extra sensor on the vehicle.
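
The central geometric relation — that, for a static scene, each radial velocity is the projection of the sensor's own velocity onto the line of sight — can be sketched as a least-squares problem (plain NumPy, illustrative only; the paper additionally applies the vehicle's kinematic model to recover the angular motion).

    import numpy as np

    def sensor_velocity_from_scan(points, radial_velocities):
        # points: (N, 3) target positions in the sensor frame,
        # radial_velocities: (N,) measured Doppler velocities.
        # For a static scene, v_r = -d . v_sensor with d the unit direction,
        # so v_sensor is the least-squares solution of (-d) v = v_r.
        d = points / np.linalg.norm(points, axis=1, keepdims=True)
        v_sensor, *_ = np.linalg.lstsq(-d, radial_velocities, rcond=None)
        return v_sensor

    # Example: a sensor moving at 1 m/s along x observes a static scene.
    pts = np.random.randn(100, 3) * 10
    v_true = np.array([1.0, 0.0, 0.0])
    v_r = -(pts / np.linalg.norm(pts, axis=1, keepdims=True)) @ v_true
    print(sensor_velocity_from_scan(pts, v_r))   # ~ [1, 0, 0]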

Place, publisher, year, edition, pages
IEEE Robotics and Automation Society, 2024
Keywords
Accuracy, Three-dimensional displays, Roads, Heuristic algorithms, Pipelines, Kinematics, Odometry, Localization, Range Sensing, Autonomous Vehicle Navigation, Range Odometry, Radar, Doppler
National Category
Robotics and automation
Research subject
Computer Science; Electrical Engineering
Identifiers
urn:nbn:se:oru:diva-118182 (URN), 10.1109/ICRA57147.2024.10611199 (DOI), 2-s2.0-85202437431 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Note

This work has been supported by the grant program PRE2018-085026 and the research project ARPEGGIO (PID2020-117057GB-I00), all funded by the Spanish Government.

Available from: 2025-01-09 Created: 2025-01-09 Last updated: 2025-02-09. Bibliographically approved
Sun, S., Mielle, M., Lilienthal, A. J. & Magnusson, M. (2024). High-Fidelity SLAM Using Gaussian Splatting with Rendering-Guided Densification and Regularized Optimization.
High-Fidelity SLAM Using Gaussian Splatting with Rendering-Guided Densification and Regularized Optimization
2024 (English)Conference paper, Published paper (Refereed)
Abstract [en]

We propose a dense RGBD SLAM system based on 3D Gaussian Splatting that provides metrically accurate pose tracking and visually realistic reconstruction. To this end, we first propose a Gaussian densification strategy based on the rendering loss to map unobserved areas and refine reobserved areas. Second, we introduce extra regularization parameters to alleviate the “forgetting” problem during continuous mapping, where parameters tend to overfit the latest frame and result in decreasing rendering quality for previous frames. Both mapping and tracking are performed with Gaussian parameters by minimizing re-rendering loss in a differentiable way. Compared to recent neural and concurrently developed Gaussian splatting RGBD SLAM baselines, our method achieves state-of-the-art results on the synthetic dataset Replica and competitive results on the real-world dataset TUM. The code is released on https://github.com/ljjTYJR/HF-SLAM.
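
A toy version of the regularization idea described above is sketched below (not the paper's exact objective): a photometric re-rendering loss plus a penalty that keeps the Gaussian parameters close to anchor values from earlier mapping, which is one simple way to counteract "forgetting"; the weight lam is an illustrative placeholder.

    import numpy as np

    def regularized_mapping_loss(rendered, target, params, anchor_params, lam=1e-2):
        # L1 photometric re-rendering term plus a quadratic penalty that
        # discourages the per-Gaussian parameters from drifting away from the
        # values estimated when earlier frames were mapped.
        render_loss = np.mean(np.abs(rendered - target))
        regularizer = lam * np.mean((params - anchor_params) ** 2)
        return render_loss + regularizer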

National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-117115 (URN)
Funder
EU, Horizon 2020, 101017274
Note

Accepted by IROS 2024

Available from: 2024-10-30 Created: 2024-10-30 Last updated: 2025-02-09. Bibliographically approved
Skog, M., Kotlyar, O., Kubelka, V. & Magnusson, M. (2024). Human Detection from 4D Radar Data in Low-Visibility Field Conditions. Paper presented at Radar in Robotics: Resilience from Signal to Navigation - Full-Day Workshop at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024.
Human Detection from 4D Radar Data in Low-Visibility Field Conditions
2024 (English)Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Autonomous driving technology is increasingly being used on public roads and in industrial settings such as mines. While it is essential to detect pedestrians, vehicles, or other obstacles, adverse field conditions negatively affect the performance of classical sensors such as cameras or lidars. Radar, on the other hand, is a promising modality that is less affected by, e.g., dust, smoke, water mist or fog. In particular, modern 4D imaging radars provide target responses across the range, vertical angle, horizontal angle and Doppler velocity dimensions. We propose TMVA4D, a CNN architecture that leverages this 4D radar modality for semantic segmentation. The CNN is trained to distinguish between the background and person classes based on a series of 2D projections of the 4D radar data that include the elevation, azimuth, range, and Doppler velocity dimensions. We also outline the process of compiling a novel dataset consisting of data collected in industrial settings with a car-mounted 4D radar and describe how the ground-truth labels were generated from reference thermal images. Using TMVA4D on this dataset, we achieve an mIoU score of 78.2% and an mDice score of 86.1%, evaluated on the two classes background and person.
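
For reference, the two reported metrics can be computed as below for the two classes (background, person); this is a generic definition of mIoU and mean Dice over integer label maps, not code from the paper.

    import numpy as np

    def miou_and_mdice(pred, gt, num_classes=2):
        # pred, gt: integer label maps of equal shape (0 = background, 1 = person).
        ious, dices = [], []
        for c in range(num_classes):
            p, g = pred == c, gt == c
            inter = np.logical_and(p, g).sum()
            union = np.logical_or(p, g).sum()
            if union == 0:
                continue                      # class absent in both maps; skip it
            ious.append(inter / union)
            dices.append(2.0 * inter / (p.sum() + g.sum()))
        return float(np.mean(ious)), float(np.mean(dices))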

Keywords
Automotive Radar, 4D Radar, Human Detection, Semantic Segmentation, Convolutional Neural Network, Deep Learning
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-118150 (URN)
Conference
Radar in Robotics: Resilience from Signal to Navigation - Full-Day Workshop at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Available from: 2025-01-09 Created: 2025-01-09 Last updated: 2025-02-09. Bibliographically approved
Schreiter, T., Rudenko, A., Magnusson, M. & Lilienthal, A. (2024). Human Gaze and Head Rotation during Navigation, Exploration and Object Manipulation in Shared Environments with Robots. In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN). Paper presented at 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), Pasadena, CA, USA, 26-30 Aug. 2024 (pp. 1258-1265). IEEE Computer Society
Human Gaze and Head Rotation during Navigation, Exploration and Object Manipulation in Shared Environments with Robots
2024 (English)In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), IEEE Computer Society, 2024, p. 1258-1265Conference paper, Published paper (Refereed)
Abstract [en]

The human gaze is an important cue to signal intention, attention, distraction, and the regions of interest in the immediate surroundings. Gaze tracking can transform how robots perceive, understand, and react to people, enabling new modes of robot control, interaction, and collaboration. In this paper, we use gaze tracking data from a rich dataset of human motion (THÖR-MAGNI) to investigate the coordination between gaze direction and head rotation of humans engaged in various indoor activities involving navigation, interaction with objects, and collaboration with a mobile robot. In particular, we study the spread and central bias of fixations in diverse activities and examine the correlation between gaze direction and head rotation. We introduce various human motion metrics to enhance the understanding of gaze behavior in dynamic interactions. Finally, we apply semantic object labeling to decompose the gaze distribution into activity-relevant regions.
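
One simple way to quantify the gaze-head coordination studied here is a correlation between the two yaw-angle time series, as in the sketch below (illustrative only; the paper uses richer motion metrics and accounts for activity context).

    import numpy as np

    def gaze_head_correlation(gaze_yaw, head_yaw):
        # Pearson correlation between gaze direction and head rotation,
        # given two time series of yaw angles in radians.
        return float(np.corrcoef(gaze_yaw, head_yaw)[0, 1])

    t = np.linspace(0, 10, 500)
    head = 0.5 * np.sin(0.5 * t)
    gaze = head + 0.1 * np.random.randn(t.size)   # gaze roughly follows the head
    print(gaze_head_correlation(gaze, head))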

Place, publisher, year, edition, pages
IEEE Computer Society, 2024
Keywords
Adversarial machine learning, Behavioral research, Human engineering, Human robot interaction, Industrial robots, Machine Perception, Microrobots, Motion tracking, Gaze direction, Gaze-tracking, Head rotation, Human motions, Indoor activities, Object manipulation, Region-of-interest, Regions of interest, Robots control, Tracking data, Mobile robots
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:oru:diva-118537 (URN), 10.1109/RO-MAN60168.2024.10731190 (DOI), 001348918600163 (), 2-s2.0-85206976290 (Scopus ID), 9798350375022 (ISBN)
Conference
2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), Pasadena, CA, USA, 26-30 Aug. 2024
Funder
EU, Horizon 2020, 101017274
Available from: 2025-01-15 Created: 2025-01-15 Last updated: 2025-01-20. Bibliographically approved
Zhu, Y., Fan, H., Rudenko, A., Magnusson, M., Schaffernicht, E. & Lilienthal, A. (2024). LaCE-LHMP: Airflow Modelling-Inspired Long-Term Human Motion Prediction By Enhancing Laminar Characteristics in Human Flow. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 11281-11288). IEEE
LaCE-LHMP: Airflow Modelling-Inspired Long-Term Human Motion Prediction By Enhancing Laminar Characteristics in Human Flow
2024 (English)In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 11281-11288Conference paper, Published paper (Refereed)
Abstract [en]

Long-term human motion prediction (LHMP) is essential for safely operating autonomous robots and vehicles in populated environments. It is fundamental for various applications, including motion planning, tracking, human-robot interaction and safety monitoring. However, accurate prediction of human trajectories is challenging due to complex factors, including, for example, social norms and environmental conditions. The influence of such factors can be captured through Maps of Dynamics (MoDs), which encode spatial motion patterns learned from (possibly scattered and partial) past observations of motion in the environment and which can be used for data-efficient, interpretable motion prediction (MoD-LHMP). To address the limitations of prior work, especially regarding accuracy and sensitivity to anomalies in long-term prediction, we propose the Laminar Component Enhanced LHMP approach (LaCE-LHMP). Our approach is inspired by data-driven airflow modelling, which estimates laminar and turbulent flow components and uses predominantly the laminar components to make flow predictions. Based on the hypothesis that human trajectory patterns also manifest laminar flow (that represents predictable motion) and turbulent flow components (that reflect more unpredictable and arbitrary motion), LaCE-LHMP extracts the laminar patterns in human dynamics and uses them for human motion prediction. We demonstrate the superior prediction performance of LaCE-LHMP through benchmark comparisons with state-of-the-art LHMP methods, offering an unconventional perspective and a more intuitive understanding of human movement patterns.
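
A toy illustration of the laminar/turbulent split that motivates LaCE-LHMP is sketched below: the per-cell average of observed velocity vectors plays the role of the laminar (predictable) component and the residual the turbulent one. The gridding scheme and the names are assumptions made here for illustration, not the paper's estimator.

    import numpy as np

    def laminar_and_turbulent(positions, velocities, cell_size=1.0):
        # positions, velocities: (N, 2) arrays of observed 2D human states.
        cells = np.floor(positions / cell_size).astype(int)
        laminar = {}
        for c in {tuple(row) for row in cells}:
            mask = np.all(cells == np.array(c), axis=1)
            laminar[c] = velocities[mask].mean(axis=0)     # laminar component per cell
        turbulent = velocities - np.array([laminar[tuple(row)] for row in cells])
        return laminar, turbulent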

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
Keywords
Human-Robot Interaction
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117873 (URN), 10.1109/ICRA57147.2024.10610717 (DOI), 2-s2.0-85202449603 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Projects
DARKO
Funder
EU, Horizon 2020, 101017274
Note

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101017274 (DARKO), and is also partially funded by the academic program Sustainable Underground Mining (SUM) project, jointly financed by LKAB and the Swedish Energy Agency.

Available from: 2024-12-18 Created: 2024-12-18 Last updated: 2024-12-19. Bibliographically approved
Yang, S.-M., Magnusson, M., Stork, J. A. & Stoyanov, T. (2024). Learning Extrinsic Dexterity with Parameterized Manipulation Primitives. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 5404-5410). IEEE
Learning Extrinsic Dexterity with Parameterized Manipulation Primitives
2024 (English)In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 5404-5410Conference paper, Published paper (Refereed)
Abstract [en]

Many practically relevant robot grasping problems feature a target object for which all grasps are occluded, e.g., by the environment. Single-shot grasp planning invariably fails in such scenarios. Instead, it is necessary to first manipulate the object into a configuration that affords a grasp. We solve this problem by learning a sequence of actions that utilize the environment to change the object’s pose. Concretely, we employ hierarchical reinforcement learning to combine a sequence of learned parameterized manipulation primitives. By learning the low-level manipulation policies, our approach can control the object’s state through exploiting interactions between the object, the gripper, and the environment. Designing such a complex behavior analytically would be infeasible under uncontrolled conditions, as an analytic approach requires accurate physical modeling of the interaction and contact dynamics. In contrast, we learn a hierarchical policy model that operates directly on depth perception data, without the need for object detection, pose estimation, or manual design of controllers. We evaluate our approach on picking box-shaped objects of various weights, shapes, and friction properties from a constrained table-top workspace. Our method transfers to a real robot and is able to successfully complete the object picking task in 98% of experimental trials.
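
The hierarchical structure described above can be sketched, very roughly, as a high-level policy that picks one of several parameterized primitives per step; the primitive names, parameters, and the random stand-in policy below are purely hypothetical and only illustrate the interface, not the learned controllers.

    import random

    PRIMITIVES = ["push_to_support", "pivot_against_edge", "grasp"]   # illustrative names

    def high_level_policy(depth_observation):
        # Stand-in for the learned policy: choose a primitive and its
        # continuous parameters (here a planar direction and a distance).
        primitive = random.choice(PRIMITIVES)
        params = {"direction_rad": random.uniform(-3.1416, 3.1416),
                  "distance_m": random.uniform(0.0, 0.10)}
        return primitive, params

    def run_episode(depth_observation, max_primitives=5):
        for _ in range(max_primitives):
            primitive, params = high_level_policy(depth_observation)
            # A learned low-level policy would execute the primitive here and
            # return a new observation; we just report the choice.
            print(primitive, params)

    run_episode(depth_observation=None)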

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-117863 (URN), 10.1109/ICRA57147.2024.10611431 (DOI), 001294576204026 (), 2-s2.0-85202434994 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Projects
DARKO
Funder
EU, Horizon 2020, 101017274; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

This work has received funding from the EU’s Horizon 2020 research and innovation programme under grant agreement No 101017274, and was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-12-18 Created: 2024-12-18 Last updated: 2025-02-04. Bibliographically approved