Örebro University Publications
Magnusson, Martin, Docent (ORCID iD: orcid.org/0000-0001-8658-2985)
Publications (10 of 74)
Schreiter, T., Morillo-Mendez, L., Chadalavada, R. T., Rudenko, A., Billing, E., Magnusson, M., . . . Lilienthal, A. J. (2023). Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023 (pp. 293-300). IEEE
Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings, IEEE, 2023, p. 293-300. Conference paper, Published paper (Refereed)
Abstract [en]

Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE RO-MAN, ISSN 1944-9445, E-ISSN 1944-9437
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-110873 (URN); 10.1109/RO-MAN57019.2023.10309629 (DOI); 001108678600042 (ISI); 9798350336702 (ISBN); 9798350336719 (ISBN)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2024-01-22 Created: 2024-01-22 Last updated: 2024-01-22. Bibliographically approved
Zhu, Y., Rudenko, A., Kucner, T., Palmieri, L., Arras, K., Lilienthal, A. & Magnusson, M. (2023). CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 01-05 October 2023, Detroit, MI, USA. Paper presented at 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Detroit, MI, USA, October 1-5, 2023 (pp. 3795-3802). IEEE
CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction
2023 (English). In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 01-05 October 2023, Detroit, MI, USA, IEEE, 2023, p. 3795-3802. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate the predictions, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit maps of dynamics (MoDs, a class of general representations of place-dependent spatial motion patterns, learned from prior observations) for long-term human motion prediction (LHMP). We present a new MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data-efficient, explainable, and insensitive to errors from an upstream tracking system. Our approach uses a CLiFF-map, a specific MoD trained with human motion data recorded in the same environment. We bias a constant-velocity prediction with samples from the CLiFF-map to generate multimodal trajectory predictions. On two public datasets we show that this algorithm outperforms the state of the art for predictions over very extended periods of time, achieving 45% more accurate predictions at a 50 s horizon compared to the baseline.
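
As a concrete illustration of the biasing step, the following is a minimal Python sketch of a CLiFF-LHMP-style rollout. The sample_cliff(x, y) callable, the blend factor beta, and the parameter values are assumptions made for this example, not the authors' interface; the published method samples from the CLiFF-map's mixture of velocity distributions.

```python
import numpy as np

def predict_trajectories(pos, vel, sample_cliff, horizon_s, dt=0.4, beta=0.2, k=20):
    """Sketch of a CLiFF-LHMP-style prediction rollout (assumed interface).

    pos, vel     -- np.ndarray(2,): current position and velocity
    sample_cliff -- callable (x, y) -> sampled velocity np.ndarray(2,),
                    or None where the map holds no motion data (assumption)
    beta         -- per-step blend toward the sampled motion pattern;
                    beta = 0 degenerates to plain constant-velocity prediction
    k            -- number of sampled trajectories (multimodal output)
    """
    trajectories = []
    for _ in range(k):
        p, v = pos.astype(float).copy(), vel.astype(float).copy()
        traj = [p.copy()]
        for _ in range(int(horizon_s / dt)):
            v_map = sample_cliff(p[0], p[1])
            if v_map is not None:
                # Bias the constant-velocity estimate toward the local
                # motion pattern sampled from the map of dynamics.
                v = (1.0 - beta) * v + beta * v_map
            p = p + v * dt
            traj.append(p.copy())
        trajectories.append(np.array(traj))
    return trajectories
```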

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-111183 (URN); 10.1109/IROS55552.2023.10342031 (DOI); 2-s2.0-85182524296 (Scopus ID); 9781665491914 (ISBN); 9781665491907 (ISBN)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Detroit, MI, USA, October 1-5, 2023
Projects
DARKO
Funder
EU, Horizon 2020, 101017274
Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2024-01-29. Bibliographically approved
Adolfsson, D., Magnusson, M., Alhashimi, A., Lilienthal, A. & Andreasson, H. (2023). Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments. IEEE Transactions on Robotics, 39(2), 1476-1495
Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments
2023 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no 2, p. 1476-1495. Article in journal (Refereed). Published
Abstract [en]

This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments (outdoors, from urban to woodland, and indoors in warehouses and mines) without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach, conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on our previous state of the art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
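
To make the registration idea tangible, here is an illustrative Python sketch (not the authors' implementation) of a one-to-many point-to-line cost over oriented surface points, with a Huber loss standing in for the paper's robust loss. The k-nearest-neighbor association and parameter values are assumptions.

```python
import numpy as np

def registration_cost(theta, t, src_pts, tgt_pts, tgt_normals, k=3, delta=0.1):
    """One-to-many point-to-line cost with a Huber robust loss (sketch).

    theta, t    -- 2D rotation angle and translation np.ndarray(2,)
    src_pts     -- (N, 2) oriented surface points from the current sweep
    tgt_pts     -- (M, 2) surface points from keyframes
    tgt_normals -- (M, 2) unit normals of the target points
    """
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    cost = 0.0
    for p in src_pts:
        q = R @ p + t                              # transformed source point
        dists = np.linalg.norm(tgt_pts - q, axis=1)
        for j in np.argsort(dists)[:k]:            # one-to-many association
            r = float(np.dot(tgt_normals[j], q - tgt_pts[j]))  # along-normal residual
            # Huber loss: quadratic near zero, linear in the tails,
            # mitigating the influence of outlier correspondences.
            cost += 0.5 * r**2 if abs(r) <= delta else delta * (abs(r) - 0.5 * delta)
    return cost  # minimize over (theta, t), e.g. with scipy.optimize
```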

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Radar, Sensors, Spinning, Azimuth, Simultaneous localization and mapping, Estimation, Location awareness, Localization, radar odometry, range sensing, SLAM
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems); Robotics
Research subject
Computer and Systems Science; Computer Science
Identifiers
urn:nbn:se:oru:diva-103116 (URN); 10.1109/tro.2022.3221302 (DOI); 000912778500001 (ISI); 2-s2.0-85144032264 (Scopus ID)
Available from: 2023-01-16 Created: 2023-01-16 Last updated: 2023-10-18
Gupta, H., Andreasson, H., Magnusson, M., Julier, S. & Lilienthal, A. J. (2023). Revisiting Distribution-Based Registration Methods. In: Marques, L.; Markovic, I. (Eds.), 2023 European Conference on Mobile Robots (ECMR). Paper presented at 11th European Conference on Mobile Robots (ECMR 2023), Coimbra, Portugal, September 4-7, 2023 (pp. 43-48). IEEE
Revisiting Distribution-Based Registration Methods
2023 (English). In: 2023 European Conference on Mobile Robots (ECMR) / [ed] Marques, L.; Markovic, I., IEEE, 2023, p. 43-48. Conference paper, Published paper (Refereed)
Abstract [en]

Normal Distributions Transform (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to model point clouds or maps as a spatial probability function describing the occupancy likelihood of the environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always coincide with the ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in depth. We evaluated three modifications (a Student-t likelihood function, an inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first modification improves likelihood estimates when matching distributions fitted to small numbers of points; the second reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third achieves continuity by creating the NDT representation with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau dataset evaluation protocol for our experiments and found significant improvements over the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions "heavily broadened likelihood NDT" (HBL-NDT, 34.7% success rate) and "overlapping grid cells NDT" (OGC-NDT, 33.5% success rate). However, we could not observe a consistent improvement of the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented in this work. We found that HBL-NDT worked best for scenarios with easy initial pose offsets, making it suitable for consecutive point cloud registration in SLAM applications.
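
The covariance-inflation idea behind HBL-NDT can be sketched in a few lines of Python. This is a schematic example under assumed interfaces (an ndt_cells dict mapping grid indices to fitted Gaussians); the exact inflation scheme and score normalization in the paper may differ.

```python
import numpy as np

def ndt_p2d_score(points, ndt_cells, cell_size, inflation=1.0):
    """Point-to-distribution NDT score with optional covariance inflation.

    ndt_cells -- dict: grid index tuple -> (mu, cov) Gaussian per cell
                 (assumed data layout, for illustration only)
    inflation -- 1.0 reproduces the classic score; values > 1 broaden the
                 likelihood tails, the idea behind the HBL-NDT variant
    """
    score = 0.0
    for x in points:
        key = tuple(np.floor(x / cell_size).astype(int))
        if key not in ndt_cells:
            continue
        mu, cov = ndt_cells[key]
        S = inflation * cov                 # heavier tails -> smoother cost
        d = x - mu
        score += np.exp(-0.5 * d @ np.linalg.solve(S, d))
    return score                            # maximize over candidate poses
```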

Place, publisher, year, edition, pages
IEEE, 2023
Series
European Conference on Mobile Robots, ISSN 2639-7919, E-ISSN 2767-8733
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-109681 (URN); 10.1109/ECMR59166.2023.10256416 (DOI); 001082260500007 (ISI); 2-s2.0-8517439971 (Scopus ID); 9798350307047 (ISBN); 9798350307054 (ISBN)
Conference
11th European Conference on Mobile Robots (ECMR 2023), Coimbra, Portugal, September 4-7, 2023
Funder
EU, Horizon 2020, 858101
Available from: 2023-11-15 Created: 2023-11-15 Last updated: 2023-11-15. Bibliographically approved
Kucner, T. P., Magnusson, M., Mghames, S., Palmieri, L., Verdoja, F., Swaminathan, C. S., . . . Lilienthal, A. J. (2023). Survey of maps of dynamics for mobile robots. The International Journal of Robotics Research, 42(11), 977-1006
Survey of maps of dynamics for mobile robots
2023 (English). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 42, no 11, p. 977-1006. Article in journal (Refereed). Published
Abstract [en]

Robotic mapping provides spatial information for autonomous agents. Depending on the tasks they seek to enable, the maps created range from simple 2D representations of the environment geometry to complex, multilayered semantic maps. This survey article is about maps of dynamics (MoDs), which store semantic information about typical motion patterns in a given environment. Some MoDs use trajectories as input, and some can be built from short, disconnected observations of motion. Robots can use MoDs, for example, for global motion planning, improved localization, or human motion prediction. Accounting for the increasing importance of maps of dynamics, we present a comprehensive survey that organizes the knowledge accumulated in the field and identifies promising directions for future work. Specifically, we introduce field-specific vocabulary, summarize existing work according to a novel taxonomy, and describe possible applications and open research problems. We conclude that the field is mature enough, and we expect that maps of dynamics will be increasingly used to improve robot performance in real-world use cases. At the same time, the field is still in a phase of rapid development where novel contributions could significantly impact this research area.

Place, publisher, year, edition, pages
Sage Publications, 2023
Keywords
mapping, maps of dynamics, localization and mapping, acceptability and trust, human-robot interaction, human-aware motion planning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-107930 (URN); 10.1177/02783649231190428 (DOI); 001042374800001 (ISI); 2-s2.0-85166946627 (Scopus ID)
Funder
EU, Horizon 2020, 101017274
Note
Funding agencies: Czech Ministry of Education (OP VVV CZ.02.1.01/0.0/0.0/16 019/0000765); Business Finland (9249/31/2021)
Available from: 2023-08-30 Created: 2023-08-30 Last updated: 2024-01-03. Bibliographically approved
Adolfsson, D., Karlsson, M., Kubelka, V., Magnusson, M. & Andreasson, H. (2023). TBV Radar SLAM - Trust but Verify Loop Candidates. IEEE Robotics and Automation Letters, 8(6), 3613-3620
TBV Radar SLAM - Trust but Verify Loop Candidates
2023 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no 6, p. 3613-3620. Article in journal (Refereed). Published
Abstract [en]

Robust SLAM in large-scale environments requires fault resilience and awareness at multiple stages, from sensing and odometry estimation to loop closure. In this work, we present TBV (Trust But Verify) Radar SLAM, a method for radar SLAM that introspectively verifies loop closure candidates. TBV Radar SLAM achieves a high correct-loop-retrieval rate by combining multiple place-recognition techniques: tightly coupled place similarity and odometry uncertainty search, creating loop descriptors from origin-shifted scans, and delaying loop selection until after verification. Robustness to false constraints is achieved by carefully verifying and selecting the most likely ones from multiple loop constraints. Importantly, the verification and selection are carried out after registration, when additional sources of loop evidence can easily be computed. We integrate our loop retrieval and verification method with a robust odometry pipeline within a pose graph framework. In evaluations on public benchmarks, we found that TBV Radar SLAM achieves 65% lower error than the previous state of the art. We also show that it generalizes across environments without needing to change any parameters. We provide the open-source implementation at https://github.com/dan11003/tbv_slam_public.
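
As a schematic of the "trust but verify" selection step, the Python sketch below combines several post-registration evidence scores into one verification score. The field names, the linear combination, and the threshold are illustrative assumptions, not the paper's exact model.

```python
def select_verified_loop(candidates, weights=(1.0, 1.0, 1.0), threshold=0.5):
    """Pick the most likely loop constraint, or None if all fail verification.

    candidates -- list of dicts with evidence computed after registration,
                  each value normalized to [0, 1] (assumed field names):
                  'similarity'       -- place-descriptor similarity
                  'alignment'        -- post-registration alignment quality
                  'odom_consistency' -- agreement with odometry uncertainty
    """
    best, best_score = None, threshold
    for cand in candidates:
        evidence = (cand["similarity"], cand["alignment"], cand["odom_consistency"])
        score = sum(w * e for w, e in zip(weights, evidence)) / sum(weights)
        if score > best_score:     # verify before trusting the candidate
            best, best_score = cand, score
    return best
```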

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
SLAM, localization, radar, introspection
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-106249 (URN); 10.1109/LRA.2023.3268040 (DOI); 000981889200013 (ISI); 2-s2.0-85153499426 (Scopus ID)
Funder
Vinnova, 2021-04714; Vinnova, 2019-05878
Available from: 2023-06-13 Created: 2023-06-13 Last updated: 2024-01-17. Bibliographically approved
Molina, S., Mannucci, A., Magnusson, M., Adolfsson, D., Andreasson, H., Hamad, M., . . . Lilienthal, A. J. (2023). The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots. IEEE Robotics & Automation Magazine
The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots
2023 (English). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X. Article in journal (Refereed). Epub ahead of print
Abstract [en]

Current intralogistics services require keeping up with e-commerce demands, reducing delivery times and waste, and increasing overall flexibility. As a consequence, the use of automated guided vehicles (AGVs) and, more recently, autonomous mobile robots (AMRs) for logistics operations is steadily increasing.

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Robots, Safety, Navigation, Mobile robots, Human-robot interaction, Hidden Markov models, Trajectory
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-108145 (URN); 10.1109/MRA.2023.3296983 (DOI); 001051249900001 (ISI)
Funder
EU, Horizon 2020, 732737
Available from: 2023-09-14 Created: 2023-09-14 Last updated: 2024-01-02. Bibliographically approved
Almeida, T., Rudenko, A., Schreiter, T., Zhu, Y., Gutiérrez Maestro, E., Morillo-Mendez, L., . . . Lilienthal, A. (2023). THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. Paper presented at IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023 (pp. 2200-2209).
THÖR-Magni: Comparative Analysis of Deep Learning Models for Role-Conditioned Human Motion Prediction
2023 (English). In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, p. 2200-2209. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous systems that need to operate in human environments and interact with users rely on understanding and anticipating human activity and motion. Among the many factors that influence human motion, semantic attributes, such as the roles and ongoing activities of the detected people, provide a powerful cue for their future motion, actions, and intentions. In this work we adapt several popular deep learning models for trajectory prediction to use labels corresponding to the roles of the people. To this end we use the novel THÖR-Magni dataset, which captures human activity in industrial settings and includes the relevant semantic labels for people who navigate complex environments, interact with objects and robots, and work alone and in groups. In qualitative and quantitative experiments we show that the role-conditioned LSTM, Transformer, GAN and VAE methods can effectively incorporate the semantic categories, better capture the underlying input distribution, and therefore produce more accurate motion predictions in terms of Top-K ADE/FDE and log-likelihood metrics.
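
To illustrate what role-conditioning means architecturally, here is a minimal PyTorch sketch of one of the compared model families: an LSTM encoder whose trajectory feature is concatenated with a learned role embedding before the prediction head. Layer sizes and the fusion-by-concatenation are assumptions for this example, not the exact architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class RoleConditionedLSTM(nn.Module):
    """Trajectory predictor conditioned on a semantic role label (sketch)."""

    def __init__(self, n_roles, role_dim=16, hidden=64, horizon=12):
        super().__init__()
        self.role_emb = nn.Embedding(n_roles, role_dim)        # role label -> vector
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden + role_dim, horizon * 2)  # fused feature -> future xy
        self.horizon = horizon

    def forward(self, obs_xy, role_id):
        # obs_xy: (B, T_obs, 2) observed positions; role_id: (B,) integer labels
        _, (h, _) = self.encoder(obs_xy)
        feat = torch.cat([h[-1], self.role_emb(role_id)], dim=-1)
        return self.head(feat).view(-1, self.horizon, 2)       # (B, horizon, 2)
```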

National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-109508 (URN)
Conference
IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Paris, France, October 2-6, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), NT4220; EU, Horizon 2020, 101017274 (DARKO)
Available from: 2023-10-31 Created: 2023-10-31 Last updated: 2023-11-01. Bibliographically approved
Swaminathan, C. S., Kucner, T. P., Magnusson, M., Palmieri, L., Molina, S., Mannucci, A., . . . Lilienthal, A. J. (2022). Benchmarking the utility of maps of dynamics for human-aware motion planning. Frontiers in Robotics and AI, 9, Article ID 916153.
Benchmarking the utility of maps of dynamics for human-aware motion planning
2022 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 916153. Article in journal (Refereed). Published
Abstract [en]

Robots operating with humans in highly dynamic environments need not only to react to moving persons and objects but also to anticipate and adhere to patterns of motion of dynamic agents in their environment. Currently, robotic systems use information about dynamics locally, through tracking and predicting motion within their direct perceptual range. This limits robots to reactive responses to observed motion and to short-term predictions in their immediate vicinity. In this paper, we explore how maps of dynamics (MoDs), which provide information about motion patterns outside the robot's direct perceptual range, can be used in motion planning to improve the behaviour of a robot in a dynamic environment. We formulate cost functions for four MoD representations that can be used in any optimizing motion planning framework. Further, to evaluate the performance gain of using MoDs in motion planning, we design objective metrics and introduce a simulation framework for rapid benchmarking. We find that planners that utilize MoDs waste less time waiting for pedestrians than planners that use geometric information alone. In particular, planners utilizing both intensity (the proportion of observations at a grid cell where a dynamic entity was detected) and direction information have better task execution efficiency.
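
A MoD-informed cost term of the kind the paper formulates could look roughly as follows. The lookups intensity_at and direction_at, the weights, and the exact penalty shape are assumptions for illustration; the paper defines separate cost functions for four MoD representations.

```python
import numpy as np

def mod_edge_cost(p, q, intensity_at, direction_at, w_int=1.0, w_dir=1.0):
    """MoD-informed cost for one edge (p -> q) of a candidate path (sketch).

    intensity_at -- callable: position -> proportion of observations at that
                    cell where a dynamic entity was detected (assumed lookup)
    direction_at -- callable: position -> unit vector of the dominant local
                    motion direction (assumed lookup)
    """
    step = np.asarray(q, float) - np.asarray(p, float)
    length = np.linalg.norm(step)
    if length == 0.0:
        return 0.0
    mid = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    rho = intensity_at(mid)                    # how busy this cell is
    flow = direction_at(mid)
    # 0 when moving with the dominant flow, 1 when moving against it.
    against = 0.5 * (1.0 - float(np.dot(step / length, flow)))
    return length * (1.0 + w_int * rho + w_dir * rho * against)
```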

Place, publisher, year, edition, pages
Frontiers Media S.A., 2022
Keywords
ATC, benchmarking, dynamic environments, human-aware motion planning, human-populated environments, maps of dynamics
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-102370 (URN); 10.3389/frobt.2022.916153 (DOI); 000885477300001 (ISI); 36405073 (PubMedID); 2-s2.0-85142125253 (Scopus ID)
Funder
European Commission, 101017274
Available from: 2022-11-24 Created: 2022-11-24 Last updated: 2022-12-20. Bibliographically approved
Adolfsson, D., Castellano-Quero, M., Magnusson, M., Lilienthal, A. J. & Andreasson, H. (2022). CorAl: Introspection for robust radar and lidar perception in diverse environments using differential entropy. Robotics and Autonomous Systems, 155, Article ID 104136.
CorAl: Introspection for robust radar and lidar perception in diverse environments using differential entropy
2022 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 155, article id 104136. Article in journal (Refereed). Published
Abstract [en]

Robust perception is an essential component to enable long-term operation of mobile robots. It depends on failure resilience through reliable sensor data and pre-processing, as well as failure awareness through introspection, for example, the ability to self-assess localization performance. This paper presents CorAl: a principled, intuitive, and generalizable method to measure the quality of alignment between pairs of point clouds, which learns to detect alignment errors in a self-supervised manner. CorAl compares the differential entropy in the point clouds separately with the entropy in their union to account for entropy inherent to the scene. By making use of dual entropy measurements, we obtain a quality metric that is highly sensitive to small alignment errors and still generalizes well to unseen environments. In this work, we extend our previous work on lidar-only CorAl to radar data by proposing a two-step filtering technique that produces high-quality point clouds from noisy radar scans. Thus, we target robust perception in two ways: by introducing a method that introspectively assesses alignment quality, and by applying it to an inherently robust sensor modality. We show that our filtering technique combined with CorAl can be applied to the problem of alignment classification, and that it detects small alignment errors in urban settings with up to 98% accuracy, and up to 96% when trained only in a different environment. Our lidar and radar experiments demonstrate that CorAl outperforms previous methods both on the ETH lidar benchmark, which includes several indoor and outdoor environments, and the large-scale Oxford and MulRan radar data sets for urban traffic scenarios. The results also demonstrate that CorAl generalizes very well across substantially different environments without the need of retraining.
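
The dual entropy measurement at the heart of CorAl can be sketched compactly: fit a local Gaussian around each point, average the per-point differential entropies, and compare the clouds' separate entropies with the entropy of their union. The radius and regularization values below are placeholders, and the paper adds further steps (e.g., the radar filtering and the learned classification) not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_entropy(points, radius=0.5):
    """Mean per-point differential entropy of local Gaussians (sketch).

    For each point, a Gaussian is fitted to its radius-neighborhood and its
    differential entropy 0.5 * ln((2*pi*e)^d * det(cov)) is accumulated.
    """
    tree = cKDTree(points)
    d = points.shape[1]
    entropies = []
    for idx in tree.query_ball_point(points, r=radius):
        if len(idx) <= d:                    # too few neighbors for a covariance
            continue
        cov = np.cov(points[idx].T) + 1e-9 * np.eye(d)   # regularized fit
        entropies.append(0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov)))
    return float(np.mean(entropies))

def coral_quality(cloud_a, cloud_b, radius=0.5):
    """Joint entropy minus mean separate entropy: small when well aligned,
    inflated by misalignment (illustration of the CorAl principle)."""
    joint = mean_entropy(np.vstack([cloud_a, cloud_b]), radius)
    separate = 0.5 * (mean_entropy(cloud_a, radius) + mean_entropy(cloud_b, radius))
    return joint - separate
```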

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Radar, Introspection, Localization
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-100756 (URN); 10.1016/j.robot.2022.104136 (DOI); 000833416900001 (ISI); 2-s2.0-85132693467 (Scopus ID)
Funder
Knowledge Foundation; European Commission, 101017274; Vinnova, 2019-05878
Available from: 2022-08-24 Created: 2022-08-24 Last updated: 2024-01-02. Bibliographically approved