Magnusson, Martin, Professor. ORCID iD: orcid.org/0000-0001-8658-2985
Publications (10 of 96)
Araiza-Illan, D., Baum, K., Beebee, H., Chatila, R., Moth-Lund Christensen, S., Coghlan, S., . . . Yang, Y. (2025). A Roadmap for Responsible Robotics: Promoting Human Agency and Collaborative Efforts. IEEE robotics & automation magazine
A Roadmap for Responsible Robotics: Promoting Human Agency and Collaborative Efforts
2025 (English). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X. Journal article (peer reviewed). Epub ahead of print
Abstract [en]

This document presents the outcomes of the Dagstuhl Seminar "Roadmap for Responsible Robotics," held in September 2023 at the Leibniz Center for Informatics, Schloss Dagstuhl, Germany. The seminar brought together researchers from the fields of robotics, computer science, social and cognitive sciences, and philosophy with the aim of charting a path toward improving responsibility in robotic systems. Through intensive interdisciplinary discussions centered on the various values at stake as robotics increasingly integrates into human life, the participants identified key priorities to guide future research and regulatory efforts. The resulting road map outlines actionable steps to ensure that robotic systems coevolve with human societies, promoting human agency and humane values rather than undermining them. Designed for diverse stakeholders-researchers, policy makers, industry leaders, practitioners, nongovernmental organizations (NGOs), and civil society groups-this road map provides a foundation for collaborative efforts toward responsible robotics.

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Robots, Ethics, Robot sensing systems, Service robots, Artificial intelligence, Law, Safety, Stakeholders, Automation, Philosophical considerations
HSV category
Identifiers
urn:nbn:se:oru:diva-125333 (URN), 10.1109/MRA.2025.3620148 (DOI), 001616324500001
Available from: 2025-12-01. Created: 2025-12-01. Last updated: 2025-12-01. Bibliographically checked.
Stracca, E., Rudenko, A., Palmieri, L., Salaris, P., Castri, L., Mazzi, N., . . . Lilienthal, A. J. (2025). DARKO-Nav: Hierarchical Risk and Context-Aware Robot Navigation in Complex Intralogistic Environments. In: Marco Huber; Alexander Verl; Werner Kraus (Ed.), European Robotics Forum 2025: Boosting the Synergies between Robotics and AI for a Stronger Europe. Paper presented at 16th European Robotics Forum-ERF-Annual, Stuttgart, Germany, March 25-27, 2025 (pp. 155-161). Springer, 36
DARKO-Nav: Hierarchical Risk and Context-Aware Robot Navigation in Complex Intralogistic Environments
2025 (English). In: European Robotics Forum 2025: Boosting the Synergies between Robotics and AI for a Stronger Europe / [ed] Marco Huber; Alexander Verl; Werner Kraus, Springer, 2025, Vol. 36, pp. 155-161. Conference paper, published paper (peer reviewed)
Abstract [en]

We propose a flexible hierarchical navigation stack for a mobile robot in complex dynamic environments. Addressing the growing need for reliable navigation in real-world scenarios, where dynamic agents and environmental uncertainties pose significant challenges, our solution decomposes this complexity into task planning, navigation, control, and safe velocity components. In contrast to prior art, our system incorporates diverse contextual information about the environment at every level, anticipates navigation risks, and proactively avoids collisions with dynamic agents.

Place, publisher, year, edition, pages
Springer, 2025
Series
Springer Proceedings in Advanced Robotics (SPAR), ISSN 2511-1256, E-ISSN 2511-1264; Vol. 36
Keywords
navigation in dynamic environments, risk-aware path planning, predictive collision avoidance, intralogistics
HSV category
Identifiers
urn:nbn:se:oru:diva-123553 (URN), 10.1007/978-3-031-89471-8_24 (DOI), 001553155000024, 9783031895746 (ISBN), 9783031894701 (ISBN), 9783031894718 (ISBN)
Conference
16th European Robotics Forum-ERF-Annual, Stuttgart, Germany, March 25-27, 2025
Research funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2025-09-10. Created: 2025-09-10. Last updated: 2025-09-10. Bibliographically checked.
Schreiter, T., Rüppel, J. V., Hazra, R., Rudenko, A., Magnusson, M. & Lilienthal, A. J. (2025). Evaluating Efficiency and Engagement in Scripted and LLM-Enhanced Human-Robot Interactions. In: 2025 20th ACM IEEE International Conference on Human Robot Interaction (HRI): . Paper presented at 20th International Conference on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025 (pp. 1608-1612). IEEE
Evaluating Efficiency and Engagement in Scripted and LLM-Enhanced Human-Robot Interactions
2025 (English). In: 2025 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 2025, pp. 1608-1612. Conference paper, published paper (peer reviewed)
Abstract [en]

To achieve natural and intuitive interaction with people, HRI frameworks combine a wide array of methods for human perception, intention communication, human-aware navigation and collaborative action. In practice, when encountering unpredictable behavior of people or unexpected states of the environment, these frameworks may lack the ability to dynamically recognize such states, adapt and recover to resume the interaction. Large Language Models (LLMs), owing to their advanced reasoning capabilities and context retention, present a promising solution for enhancing robot adaptability. This potential, however, may not directly translate to improved interaction metrics. This paper considers a representative interaction with an industrial robot involving approach, instruction, and object manipulation, implemented in two conditions: (1) fully scripted and (2) including LLM-enhanced responses. We use gaze tracking and questionnaires to measure the participants' task efficiency, engagement, and robot perception. The results indicate higher subjective ratings for the LLM condition, but objective metrics show that the scripted condition performs comparably, particularly in efficiency and focus during simple tasks. We also note that the scripted condition may have an edge over LLM-enhanced responses in terms of response latency and energy consumption, especially for trivial and repetitive interactions.

Place, publisher, year, edition, pages
IEEE, 2025
Series
ACM/IEEE International Conference on Human-Robot Interaction (HRI), ISSN 2167-2121, E-ISSN 2167-2148
Keywords
Human-Robot Interaction, AI-Enabled Robotics
HSV category
Identifiers
urn:nbn:se:oru:diva-124901 (URN), 10.1109/HRI61500.2025.10974124 (DOI), 001492540600219, 9798350378948 (ISBN), 9798350378931 (ISBN)
Conference
20th International Conference on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025
Research funder
EU, Horizon 2020, 101017274
Available from: 2025-11-11. Created: 2025-11-11. Last updated: 2025-11-11. Bibliographically checked.
Rudenko, A., Zhu, Y., Almeida, T. R., Schreiter, T., Castri, L., Belotto, N., . . . Lilienthal, A. J. (2025). Hierarchical System to Predict Human Motion and Intentions for Efficient and Safe Human-Robot Interaction in Industrial Environments. In: 1st German Robotics Conference: . Paper presented at 1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025.
Hierarchical System to Predict Human Motion and Intentions for Efficient and Safe Human-Robot Interaction in Industrial Environments
2025 (English). In: 1st German Robotics Conference, 2025. Conference paper, poster (peer reviewed)
Abstract [en]

In this paper we present a hierarchical motion and intent prediction system prototype, designed to efficiently operate in complex environments while safely handling risks arising from diverse and uncertain human motion and activities. Our system uses an array of advanced cues to describe human motion and activities, including generalized motion patterns, full-body poses, heterogeneous agent types and causal contextual factors that influence human behavior.

HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119603 (URN)
Conference
1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically checked.
Shih-Min, Y., Magnusson, M., Stork, J. A. & Stoyanov, T. (2025). KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies. In: : . Paper presented at Forty-second International Conference on Machine Learning (ICML 2025), Vancouver, Canada, July 13-19, 2025.
KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies
2025 (English). Conference paper, poster (peer reviewed)
Abstract [en]

Soft Actor-Critic (SAC) has achieved notable success in continuous control tasks but struggles in sparse reward settings, where infrequent rewards make efficient exploration challenging. While novelty-based exploration methods address this issue by encouraging the agent to explore novel states, they are not trivial to apply to SAC. In particular, managing the interaction between novelty-based exploration and SAC's stochastic policy can lead to inefficient exploration and redundant sample collection. In this paper, we propose KEA (Keeping Exploration Alive), which tackles the inefficiencies in balancing exploration strategies when combining SAC with novelty-based exploration. KEA integrates a novelty-augmented SAC with a standard SAC agent, proactively coordinated via a switching mechanism. This coordination allows the agent to maintain stochasticity in high-novelty regions, enhancing exploration efficiency and reducing repeated sample collection. We first analyze this potential issue in a 2D navigation task, and then evaluate KEA on the DeepSea hard-exploration benchmark as well as sparse reward control tasks from the DeepMind Control Suite. Compared to state-of-the-art novelty-based exploration baselines, our experiments show that KEA significantly improves learning efficiency and robustness in sparse reward setups.
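The switching mechanism described in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation; the class name `KEASketch`, `novelty_fn`, and `threshold` are all assumptions introduced here:

```python
class KEASketch:
    """Hypothetical sketch: route each action either to a standard SAC agent
    or to a novelty-augmented SAC agent, depending on the state's novelty."""

    def __init__(self, sac_agent, novelty_sac_agent, novelty_fn, threshold=0.5):
        self.sac = sac_agent                  # standard stochastic SAC policy
        self.novelty_sac = novelty_sac_agent  # SAC trained with a novelty bonus
        self.novelty_fn = novelty_fn          # e.g. an RND-style novelty score
        self.threshold = threshold

    def act(self, state):
        # In high-novelty regions, keep the stochastic base policy in charge
        # so exploration stays alive; otherwise follow the novelty-driven agent.
        if self.novelty_fn(state) > self.threshold:
            return self.sac.act(state)
        return self.novelty_sac.act(state)
```

The switch makes the coordination proactive: the agent decides which exploration strategy leads before acting, rather than blending their outputs.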

Keywords
Reinforcement Learning, Novelty-based Exploration, Soft Actor-Critic, Sparse reward
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-124954 (URN)
Conference
Forty-second International Conference on Machine Learning (ICML 2025), Vancouver, Canada, July 13-19, 2025
Projects
DARKO
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020
Note

This work has received funding from the EU's Horizon 2020 research and innovation programme under grant agreement No 101017274, and was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2025-11-12. Created: 2025-11-12. Last updated: 2025-11-13. Bibliographically checked.
Zhu, Y., Rudenko, A., Kucner, T. P., Lilienthal, A. J. & Magnusson, M. (2025). Long-Term Human Motion Prediction Using Spatio-Temporal Maps of Dynamics. IEEE Robotics and Automation Letters, 10(11), 12229-12236
Long-Term Human Motion Prediction Using Spatio-Temporal Maps of Dynamics
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 11, pp. 12229-12236. Journal article (peer reviewed). Published
Abstract [en]

Long-term human motion prediction (LHMP) is important for the safe and efficient operation of autonomous robots and vehicles in environments shared with humans. Accurate predictions are important for applications including motion planning, tracking, human-robot interaction, and safety monitoring. In this letter, we exploit Maps of Dynamics (MoDs), which encode spatial or spatio-temporal motion patterns as environment features, to achieve LHMP for horizons of up to 60 seconds. We propose an MoD-informed LHMP framework that supports various types of MoDs and includes a ranking method to output the most likely predicted trajectory, improving practical utility in robotics. Further, a time-conditioned MoD is introduced to capture motion patterns that vary across different times of day. We evaluate MoD-LHMP instantiated with three types of MoDs. Experiments on two real-world datasets show that the MoD-informed method outperforms learning-based ones, with up to 50% improvement in average displacement error, and that the time-conditioned variant achieves the highest accuracy overall.
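As a rough illustration of what "time-conditioned" means here, the toy sketch below keys per-cell motion statistics on a time-of-day bin, so morning and evening flows in the same cell are kept apart. It is a hypothetical sketch, not the letter's actual MoD model; the class name, binning scheme, and methods are all assumptions:

```python
import math
from collections import defaultdict

class TimeConditionedMoD:
    """Toy Map of Dynamics whose per-cell heading statistics are
    conditioned on a coarse time-of-day bin (e.g. 4 bins of 6 hours)."""

    def __init__(self, hours_per_bin=6):
        self.hours_per_bin = hours_per_bin
        # (cell, time_bin) -> list of observed headings in radians
        self.obs = defaultdict(list)

    def _bin(self, hour):
        return int(hour) // self.hours_per_bin

    def add_observation(self, cell, hour, heading):
        self.obs[(cell, self._bin(hour))].append(heading)

    def mean_heading(self, cell, hour):
        """Circular mean of headings seen in this cell at this time of day."""
        headings = self.obs.get((cell, self._bin(hour)))
        if not headings:
            return None  # no flow observed for this cell/time combination
        s = sum(math.sin(h) for h in headings)
        c = sum(math.cos(h) for h in headings)
        return math.atan2(s, c)
```

The circular mean is used so that headings near 359° and 1° average near 0° rather than 180°.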

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Trajectory, Dynamics, Hidden Markov models, Predictive models, Robots, Prediction algorithms, Accuracy, Vehicle dynamics, Tracking, Pedestrians, Human detection and tracking, human and humanoid motion analysis and synthesis, probability and statistical methods, human-aware motion planning
HSV category
Identifiers
urn:nbn:se:oru:diva-124979 (URN), 10.1109/LRA.2025.3619831 (DOI), 001598761800006
Research funder
EU, Horizon 2020, 101017274; EU, Horizon 2020, 101070596
Available from: 2025-11-13. Created: 2025-11-13. Last updated: 2025-11-13. Bibliographically checked.
Schreiter, T., Rudenko, A., Rüppel, J. V., Magnusson, M. & Lilienthal, A. J. (2025). Multimodal Interaction and Intention Communication for Industrial Robots. In: 1st German Robotics Conference: . Paper presented at 1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025.
Multimodal Interaction and Intention Communication for Industrial Robots
2025 (English). In: 1st German Robotics Conference, 2025. Conference paper, poster (peer reviewed)
Abstract [en]

Successful adoption of industrial robots will strongly depend on their ability to safely and efficiently operate in human environments, engage in natural communication, understand their users, and express intentions intuitively while avoiding unnecessary distractions. To achieve this advanced level of Human-Robot Interaction (HRI), robots need to acquire and incorporate knowledge of their users' tasks and environment and adopt multimodal communication approaches with expressive cues that combine speech, movement, gaze, and other modalities. This paper presents several methods to design, enhance, and evaluate expressive HRI systems for non-humanoid industrial robots. We present the concept of a small anthropomorphic robot communicating as a proxy for its non-humanoid host, such as a forklift. We developed a multimodal and LLM-enhanced communication framework for this robot and evaluated it in several lab experiments, using gaze tracking and motion capture to quantify how users perceive the robot and measure the task progress.

HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119604 (URN)
Conference
1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025
Research funder
EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically checked.
Mock, A., Magnusson, M. & Hertzberg, J. (2025). RadaRays: Real-Time Simulation of Rotating FMCW Radar for Mobile Robotics via Hardware-Accelerated Ray Tracing. IEEE Robotics and Automation Letters, 10(3), 2470-2477
RadaRays: Real-Time Simulation of Rotating FMCW Radar for Mobile Robotics via Hardware-Accelerated Ray Tracing
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 3, pp. 2470-2477. Journal article (peer reviewed). Published
Abstract [en]

RadaRays allows for the accurate modeling and simulation of rotating FMCW radar sensors in complex environments, including the simulation of reflection, refraction, and scattering of radar waves. Our software is able to handle large numbers of objects and materials in real-time, making it suitable for use in a variety of mobile robotics applications. We demonstrate the effectiveness of RadaRays through a series of experiments and show that it can more accurately reproduce the behavior of FMCW radar sensors in a variety of environments, compared to the ray casting-based lidar-like simulations that are commonly used in simulators for autonomous driving such as CARLA. Our experiments additionally serve as a valuable reference point for researchers to evaluate their own radar simulations. By using RadaRays, developers can significantly reduce the time and cost associated with prototyping and testing FMCW radar-based algorithms. We also provide a Gazebo plugin that makes our work accessible to the mobile robotics community.

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Radar, Radar imaging, Robots, Spaceborne radar, Ray tracing, Meteorological radar, Real-time systems, Radar scattering, Radar antennas, Radar cross-sections, Simulation and animation, range sensing, software tools for robot programming, SLAM, collision avoidance
HSV category
Identifiers
urn:nbn:se:oru:diva-119270 (URN), 10.1109/LRA.2025.3531689 (DOI), 001411912800005, 2-s2.0-85216089114 (Scopus ID)
Note

This work was supported in part by the Ministry of Science and Culture of Lower Saxony and in part by the VolkswagenStiftung through DFKI Niedersachsen (DFKI NI).

Available from: 2025-02-17. Created: 2025-02-17. Last updated: 2025-02-17. Bibliographically checked.
Swaminathan, C. S., Kucner, T. P., Lilienthal, A. J. & Magnusson, M. (2025). Sampling functions for global motion planning using Maps of Dynamics for mobile robots. Robotics and Autonomous Systems, 194, Article ID 105117.
Sampling functions for global motion planning using Maps of Dynamics for mobile robots
2025 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 194, article id 105117. Journal article (peer reviewed). Published
Abstract [en]

Motion planning for mobile robots in dynamic environments shared with people is challenging. In a typical hierarchical planning framework, robots use a global planner for overall path-finding and a local planner for immediate adjustments to the path. Without considering the typical patterns of motion of dynamic entities, the global planner might generate paths that lead the robot into highly congested areas, where it is forced to wait, replan or manoeuvre around dynamic obstacles. Maps of Dynamics (MoDs) are a way to mitigate these issues. MoDs represent patterns of motion exhibited by moving entities in the environment, using probabilistic models. The use of MoDs in cost functions for motion planning enables a robot to plan motions that consider the motion patterns encoded in the MoDs. In previous work, it has been shown that the use of MoDs in the cost function helps generate more efficient paths, i.e., paths that lead to the robot and pedestrians spending less time waiting for each other. It has also been shown that using MoDs in the sampling step of sampling-based motion planning is beneficial to a mobile robot, since it can result in reduced computation time by explicitly guiding the sampling process using information encoded in MoDs. However, existing work on the use of MoDs in the sampling process is limited. Correspondingly, an analysis of the performance of sampling heuristics for MoDs is also largely lacking. Since such an analysis is crucial to understand the effectiveness of MoDs in a practical setting, we ask the research question: can we obtain reasonably low-cost solutions, in a reasonable amount of time, using sampling-based motion planners that consider the flow of dynamic entities?
In this paper, we propose substantial improvements to two existing sampling heuristics: the Dijkstra-graph sampling (DGS), previously restricted to a specific type of MoD, is extended to use any MoD; and the intensity map (the normalized number of observations of dynamic entities in each grid cell) is utilized more effectively by using importance sampling instead of rejection sampling. We show that an ellipsoidal heuristic can also be used with MoDs. We experimentally validate several sampling heuristics on two different sampling-based motion planners and present a comprehensive evaluation (52,800 runs) of their performance on real-world data from densely populated environments. We conclude that reasonably low-cost solutions can be obtained quickly using a combination of the sampling heuristics within practically feasible time limits. Using the RRT* planner with our proposed MoD-aware, Dijkstra-graph-based heuristic yields approximately 5%, 10% and 12% higher success rates after 2, 4 and 8 s of planning, respectively, compared to the uniform-sampling baseline.
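The contrast between the two intensity-map sampling schemes mentioned in the abstract (importance sampling versus rejection sampling) can be illustrated with a minimal sketch. This is not the paper's code: the intensity map is modeled here as a plain dictionary, and both function names are assumptions:

```python
import random

def importance_sample(intensity, n, rng=random):
    """Draw n grid cells with probability proportional to intensity[cell].
    intensity: dict mapping (x, y) cell -> nonnegative observation weight."""
    cells = list(intensity)
    weights = [intensity[c] for c in cells]
    return rng.choices(cells, weights=weights, k=n)

def rejection_sample(intensity, n, rng=random):
    """Baseline scheme: propose cells uniformly at random and accept each
    proposal with probability intensity / max intensity."""
    cells = list(intensity)
    peak = max(intensity.values())
    out = []
    while len(out) < n:
        c = rng.choice(cells)
        if rng.random() < intensity[c] / peak:
            out.append(c)
    return out
```

Importance sampling produces exactly n draws with no wasted proposals, whereas rejection sampling discards many proposals when most cells have low intensity, which is one way to see why the switch can reduce computation time.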

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Motion planning, Path planning, Dynamic environments, Pedestrian, Maps of dynamics, Human motion patterns, Sampling-based motion planning
HSV category
Identifiers
urn:nbn:se:oru:diva-124397 (URN), 10.1016/j.robot.2025.105117 (DOI), 001582650000003
Research funder
EU, Horizon 2020, 101017274; EU, Horizon 2020, 101070596; Swedish National Infrastructure for Computing (SNIC); Swedish Research Council, 2022-06725; Swedish Research Council, 2018-05973
Note

This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 101017274 (DARKO) and 101070596 (euRobin). The computations/data handling were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Umeå University, partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.

Available from: 2025-10-14. Created: 2025-10-14. Last updated: 2025-10-14. Bibliographically checked.
Sun, S., Mielle, M., Lilienthal, A. J. & Magnusson, M. (2024). 3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding. In: : . Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024. IEEE
3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding
2024 (English). Conference paper, published paper (peer reviewed)
Abstract [en]

Neural implicit surface representations are currently receiving a lot of interest as a means to achieve high-fidelity surface reconstruction at a low memory cost, compared to traditional explicit representations. However, state-of-the-art methods still struggle with excessive memory usage and non-smooth surfaces. This is particularly problematic in large-scale applications with sparse inputs, as is common in robotics use cases. To address these issues, we first introduce a sparse structure, tri-quadtrees, which represents the environment using learnable features stored in three planar quadtree projections. Second, we concatenate the learnable features with a Fourier feature positional encoding. The combined features are then decoded into signed distance values through a small multi-layer perceptron. We demonstrate that this approach facilitates smoother reconstruction with a higher completion ratio and fewer holes. Compared to two recent baselines, one implicit and one explicit, our approach requires only 10%–50% as much memory, while achieving competitive quality. The code is released on https://github.com/ljjTYJR/3QFP.

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
HSV category
Identifiers
urn:nbn:se:oru:diva-117117 (URN), 10.1109/ICRA57147.2024.10610338 (DOI), 001294576203025, 2-s2.0-85202450420 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Research funder
EU, Horizon 2020, 101017274
Available from: 2024-10-30. Created: 2024-10-30. Last updated: 2025-02-09. Bibliographically checked.