Örebro University Publications (oru.se)
Magnusson, Martin, Professor. ORCID iD: orcid.org/0000-0001-8658-2985
Publications (10 of 98)
Araiza-Illan, D., Baum, K., Beebee, H., Chatila, R., Moth-Lund Christensen, S., Coghlan, S., . . . Yang, Y. (2025). A Roadmap for Responsible Robotics: Promoting Human Agency and Collaborative Efforts. IEEE robotics & automation magazine, 32(4), 12-24
A Roadmap for Responsible Robotics: Promoting Human Agency and Collaborative Efforts
2025 (English). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 32, no. 4, p. 12-24. Article in journal (Refereed). Published.
Abstract [en]

This document presents the outcomes of the Dagstuhl Seminar "Roadmap for Responsible Robotics," held in September 2023 at the Leibniz Center for Informatics, Schloss Dagstuhl, Germany. The seminar brought together researchers from the fields of robotics, computer science, social and cognitive sciences, and philosophy with the aim of charting a path toward improving responsibility in robotic systems. Through intensive interdisciplinary discussions centered on the various values at stake as robotics increasingly integrates into human life, the participants identified key priorities to guide future research and regulatory efforts. The resulting road map outlines actionable steps to ensure that robotic systems coevolve with human societies, promoting human agency and humane values rather than undermining them. Designed for diverse stakeholders (researchers, policy makers, industry leaders, practitioners, nongovernmental organizations (NGOs), and civil society groups), this road map provides a foundation for collaborative efforts toward responsible robotics.

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Robots, Ethics, Robot sensing systems, Service robots, Artificial intelligence, Law, Safety, Stakeholders, Automation, Philosophical considerations
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-125333 (URN)
10.1109/MRA.2025.3620148 (DOI)
001616324500001 ()
Available from: 2025-12-01. Created: 2025-12-01. Last updated: 2026-01-07. Bibliographically approved.
Stracca, E., Rudenko, A., Palmieri, L., Salaris, P., Castri, L., Mazzi, N., . . . Lilienthal, A. J. (2025). DARKO-Nav: Hierarchical Risk and Context-Aware Robot Navigation in Complex Intralogistic Environments. In: Marco Huber; Alexander Verl; Werner Kraus (Ed.), European Robotics Forum 2025: Boosting the Synergies between Robotics and AI for a Stronger Europe. Paper presented at 16th European Robotics Forum-ERF-Annual, Stuttgart, Germany, March 25-27, 2025 (pp. 155-161). Springer, 36
DARKO-Nav: Hierarchical Risk and Context-Aware Robot Navigation in Complex Intralogistic Environments
2025 (English). In: European Robotics Forum 2025: Boosting the Synergies between Robotics and AI for a Stronger Europe / [ed] Marco Huber; Alexander Verl; Werner Kraus, Springer, 2025, Vol. 36, p. 155-161. Conference paper, Published paper (Refereed).
Abstract [en]

We propose a flexible hierarchical navigation stack for a mobile robot in complex dynamic environments. Addressing the growing need for reliable navigation in real-world scenarios, where dynamic agents and environmental uncertainties pose significant challenges, our solution decomposes this complexity into task planning, navigation, control, and safe-velocity components. In contrast to prior art, our system incorporates diverse contextual information about the environment at every level, anticipates navigation risks, and proactively avoids collisions with dynamic agents.

Place, publisher, year, edition, pages
Springer, 2025
Series
Springer Proceedings in Advanced Robotics (SPAR), ISSN 2511-1256, E-ISSN 2511-1264 ; Vol. 36
Keywords
navigation in dynamic environments, risk-aware path planning, predictive collision avoidance, intralogistics
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-123553 (URN)
10.1007/978-3-031-89471-8_24 (DOI)
001553155000024 ()
9783031895746 (ISBN)
9783031894701 (ISBN)
9783031894718 (ISBN)
Conference
16th European Robotics Forum-ERF-Annual, Stuttgart, Germany, March 25-27, 2025
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2025-09-10. Created: 2025-09-10. Last updated: 2025-09-10. Bibliographically approved.
Schreiter, T., Rüppel, J. V., Hazra, R., Rudenko, A., Magnusson, M. & Lilienthal, A. J. (2025). Evaluating Efficiency and Engagement in Scripted and LLM-Enhanced Human-Robot Interactions. In: 2025 20th ACM IEEE International Conference on Human Robot Interaction (HRI): . Paper presented at 20th International Conference on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025 (pp. 1608-1612). IEEE
Evaluating Efficiency and Engagement in Scripted and LLM-Enhanced Human-Robot Interactions
2025 (English). In: 2025 20th ACM/IEEE International Conference on Human Robot Interaction (HRI), IEEE, 2025, p. 1608-1612. Conference paper, Published paper (Refereed).
Abstract [en]

To achieve natural and intuitive interaction with people, HRI frameworks combine a wide array of methods for human perception, intention communication, human-aware navigation, and collaborative action. In practice, when encountering unpredictable behavior of people or unexpected states of the environment, these frameworks may lack the ability to dynamically recognize such states, adapt, and recover to resume the interaction. Large Language Models (LLMs), owing to their advanced reasoning capabilities and context retention, present a promising solution for enhancing robot adaptability. This potential, however, may not directly translate to improved interaction metrics. This paper considers a representative interaction with an industrial robot involving approach, instruction, and object manipulation, implemented in two conditions: (1) fully scripted and (2) including LLM-enhanced responses. We use gaze tracking and questionnaires to measure the participants' task efficiency, engagement, and robot perception. The results indicate higher subjective ratings for the LLM condition, but objective metrics show that the scripted condition performs comparably, particularly in efficiency and focus during simple tasks. We also note that the scripted condition may have an edge over LLM-enhanced responses in terms of response latency and energy consumption, especially for trivial and repetitive interactions.

Place, publisher, year, edition, pages
IEEE, 2025
Series
ACM/IEEE International Conference on Human-Robot Interaction (HRI), ISSN 2167-2121, E-ISSN 2167-2148
Keywords
Human-Robot Interaction, AI-Enabled Robotics
National Category
Human Computer Interaction; Computer Sciences
Identifiers
urn:nbn:se:oru:diva-124901 (URN)
10.1109/HRI61500.2025.10974124 (DOI)
001492540600219 ()
9798350378948 (ISBN)
9798350378931 (ISBN)
Conference
20th International Conference on Human Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025
Funder
EU, Horizon 2020, 101017274
Available from: 2025-11-11. Created: 2025-11-11. Last updated: 2025-11-11. Bibliographically approved.
Zhu, Y., Rudenko, A., Palmieri, L., Heuer, L., Lilienthal, A. & Magnusson, M. (2025). Fast Online Learning of CLiFF-Maps in Changing Environments. In: Ott, C (Ed.), IEEE International Conference on Robotics and Automation: Proceedings. Paper presented at 2025 IEEE International Conference on Robotics and Automation (ICRA 2025), Atlanta, USA, May 19-23, 2025 (pp. 10424-10431). Institute of Electrical and Electronics Engineers Inc.
Fast Online Learning of CLiFF-Maps in Changing Environments
2025 (English). In: IEEE International Conference on Robotics and Automation: Proceedings / [ed] Ott, C, Institute of Electrical and Electronics Engineers Inc., 2025, p. 10424-10431. Conference paper, Published paper (Refereed).
Abstract [en]

Maps of dynamics are effective representations of motion patterns learned from prior observations, with recent research demonstrating their ability to enhance various downstream tasks such as human-aware robot navigation, long-term human motion prediction, and robot localization. Current advancements have primarily concentrated on methods for learning maps of human flow in environments where the flow is static, i.e., not assumed to change over time. In this paper we propose an online update method for the CLiFF-map (an advanced map-of-dynamics type that models motion patterns as velocity and orientation mixtures) to actively detect and adapt to changes in human flow. As new observations are collected, our goal is to update a CLiFF-map to integrate them effectively and accurately, while retaining relevant historic motion patterns. The proposed online update method maintains a probabilistic representation in each observed location, updating parameters by continuously tracking sufficient statistics. In experiments using both synthetic and real-world datasets, we show that our method maintains accurate representations of human motion dynamics, contributing to high performance in downstream flow-compliant planning tasks, while being orders of magnitude faster than comparable baselines.

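The sufficient-statistics idea in the abstract can be illustrated with a minimal, hypothetical sketch: a single running Gaussian over speeds at one grid location, updated with Welford's online algorithm so no observation history is stored. This is a one-component stand-in for the per-location velocity and orientation mixtures a CLiFF-map actually maintains; all names below are illustrative assumptions, not the authors' implementation.

```python
class OnlineVelocityStats:
    """Running sufficient statistics (count, mean, M2) for speeds observed
    at one grid location. A one-component stand-in for a CLiFF-map's
    per-location velocity mixtures; illustrative only."""

    def __init__(self):
        self.n = 0        # number of observations folded in so far
        self.mean = 0.0   # running mean speed
        self.m2 = 0.0     # running sum of squared deviations from the mean

    def update(self, speed):
        # Welford's update: incorporate one observation in O(1) time/memory.
        self.n += 1
        delta = speed - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (speed - self.mean)

    def variance(self):
        # Unbiased sample variance; undefined for fewer than 2 observations.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


stats = OnlineVelocityStats()
for s in [1.2, 1.4, 1.3, 1.5]:  # hypothetical pedestrian speeds in m/s
    stats.update(s)
print(round(stats.mean, 2))  # 1.35
```

Because only (n, mean, M2) are kept per location, each new observation costs constant time, which is consistent with the abstract's claim of being orders of magnitude faster than baselines that refit from stored data.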
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-126324 (URN)
10.1109/ICRA55743.2025.11127602 (DOI)
2-s2.0-105016527745 (Scopus ID)
9798331541392 (ISBN)
9798331541408 (ISBN)
Conference
2025 IEEE International Conference on Robotics and Automation (ICRA 2025), Atlanta, USA, May 19-23, 2025
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2026-01-15. Created: 2026-01-15. Last updated: 2026-01-20. Bibliographically approved.
Rüppel, J. V., Rudenko, A., Schreiter, T., Magnusson, M. & Lilienthal, A. (2025). Gaze-supported Large Language Model Framework for Bi-directional Human-Robot Interaction. In: 34th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN): . Paper presented at 34th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2025), Eindhoven, The Netherlands, August 25-29, 2025.
Gaze-supported Large Language Model Framework for Bi-directional Human-Robot Interaction
2025 (English). In: 34th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 2025. Conference paper, Published paper (Refereed).
Abstract [en]

The rapid development of Large Language Models (LLMs) creates an exciting potential for flexible, general knowledge-driven Human-Robot Interaction (HRI) systems for assistive robots. Existing HRI systems demonstrate great progress in interpreting and following user instructions, action generation, and robot task solving. On the other hand, bi-directional, multimodal, and context-aware support of the user in collaborative tasks remains an open challenge. In this paper, we present a gaze- and speech-informed interface to an assistive robot, which is able to perceive the working environment from multiple vision inputs and support the user in their dynamic tasks. Our system is designed to be modular and transferable, adapting to diverse tasks and robots, and it is real-time capable thanks to its language-based interaction state representation and fast on-board perception modules. Its development was supported by multiple public dissemination events, which contributed important considerations for improved robustness and user experience. Furthermore, in a lab study, we compare the performance and user ratings of our system with those of a traditional scripted HRI pipeline. Our findings indicate that an LLM-based approach enhances adaptability and marginally improves user engagement and task execution metrics, but may produce redundant output, while a scripted pipeline is well suited for more straightforward tasks.

Keywords
Multimodal Interaction and Conversational Skills, Cooperation and Collaboration in Human-Robot Teams, Non-verbal Cues and Expressiveness
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-126093 (URN)
Conference
34th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2025), Eindhoven, The Netherlands, August 25-29, 2025
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2026-01-11. Created: 2026-01-11. Last updated: 2026-01-12. Bibliographically approved.
Rudenko, A., Zhu, Y., Almeida, T. R., Schreiter, T., Castri, L., Belotto, N., . . . Lilienthal, A. J. (2025). Hierarchical System to Predict Human Motion and Intentions for Efficient and Safe Human-Robot Interaction in Industrial Environments. In: 1st German Robotics Conference: . Paper presented at 1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025.
Hierarchical System to Predict Human Motion and Intentions for Efficient and Safe Human-Robot Interaction in Industrial Environments
2025 (English). In: 1st German Robotics Conference, 2025. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

In this paper we present a hierarchical motion and intent prediction system prototype, designed to efficiently operate in complex environments while safely handling risks arising from diverse and uncertain human motion and activities. Our system uses an array of advanced cues to describe human motion and activities, including generalized motion patterns, full-body poses, heterogeneous agent types and causal contextual factors that influence human behavior.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119603 (URN)
Conference
1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically approved.
Shih-Min, Y., Magnusson, M., Stork, J. A. & Stoyanov, T. (2025). KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies. In: Proceedings of Machine Learning Research. Paper presented at Forty-second International Conference on Machine Learning (ICML 2025), Vancouver, Canada, July 13-19, 2025 (pp. 70927-70942). Vol. 267
KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies
2025 (English). In: Proceedings of Machine Learning Research, 2025, Vol. 267, p. 70927-70942. Conference paper, Published paper (Refereed).
Abstract [en]

Soft Actor-Critic (SAC) has achieved notable success in continuous control tasks but struggles in sparse-reward settings, where infrequent rewards make efficient exploration challenging. While novelty-based exploration methods address this issue by encouraging the agent to explore novel states, they are not trivial to apply to SAC. In particular, managing the interaction between novelty-based exploration and SAC's stochastic policy can lead to inefficient exploration and redundant sample collection. In this paper, we propose KEA (Keeping Exploration Alive), which tackles the inefficiencies in balancing exploration strategies when combining SAC with novelty-based exploration. KEA integrates a novelty-augmented SAC with a standard SAC agent, proactively coordinated via a switching mechanism. This coordination allows the agent to maintain stochasticity in high-novelty regions, enhancing exploration efficiency and reducing repeated sample collection. We first analyze this potential issue in a 2D navigation task, and then evaluate KEA on the DeepSea hard-exploration benchmark as well as sparse-reward control tasks from the DeepMind Control Suite. Compared to state-of-the-art novelty-based exploration baselines, our experiments show that KEA significantly improves learning efficiency and robustness in sparse-reward setups.

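The switching mechanism described in the abstract can be caricatured as a one-line coordinator. The scalar novelty score, the threshold, and the agent names below are illustrative assumptions; the paper coordinates full SAC agents, not strings.

```python
def select_agent(novelty_score, threshold=0.5):
    """Toy sketch of a KEA-style switching rule: in high-novelty regions,
    hand control to the standard SAC agent so its stochastic policy keeps
    exploration alive; elsewhere let the novelty-augmented agent drive.
    Scalar score, threshold, and names are illustrative assumptions."""
    return "standard_sac" if novelty_score >= threshold else "novelty_augmented_sac"


# One decision per region type: the coordinator picks which agent acts.
print(select_agent(0.9))  # standard_sac
print(select_agent(0.1))  # novelty_augmented_sac
```

The point of the sketch is only the control flow: a proactive, state-dependent hand-off between two exploration strategies rather than blending their objectives into one policy.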
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 267
Keywords
Reinforcement Learning, Novelty-based Exploration, Soft Actor-Critic, Sparse reward
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-124954 (URN)
2-s2.0-105023635702 (Scopus ID)
Conference
Forty-second International Conference on Machine Learning (ICML 2025), Vancouver, Canada, July 13-19, 2025
Projects
DARKO
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
EU, Horizon 2020
Note

This work has received funding from the EU’s Horizon 2020 research and innovation programme under grant agreement No 101017274, and was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2025-11-12. Created: 2025-11-12. Last updated: 2026-01-16. Bibliographically approved.
Zhu, Y., Rudenko, A., Kucner, T. P., Lilienthal, A. J. & Magnusson, M. (2025). Long-Term Human Motion Prediction Using Spatio-Temporal Maps of Dynamics. IEEE Robotics and Automation Letters, 10(11), 12229-12236
Long-Term Human Motion Prediction Using Spatio-Temporal Maps of Dynamics
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 11, p. 12229-12236. Article in journal (Refereed). Published.
Abstract [en]

Long-term human motion prediction (LHMP) is important for the safe and efficient operation of autonomous robots and vehicles in environments shared with humans. Accurate predictions are important for applications including motion planning, tracking, human-robot interaction, and safety monitoring. In this letter, we exploit Maps of Dynamics (MoDs), which encode spatial or spatio-temporal motion patterns as environment features, to achieve LHMP for horizons of up to 60 seconds. We propose an MoD-informed LHMP framework that supports various types of MoDs and includes a ranking method to output the most likely predicted trajectory, improving practical utility in robotics. Further, a time-conditioned MoD is introduced to capture motion patterns that vary across different times of day. We evaluate MoD-LHMP instantiated with three types of MoDs. Experiments on two real-world datasets show that the MoD-informed method outperforms learning-based ones, with up to 50% improvement in average displacement error, and that the time-conditioned variant achieves the highest accuracy overall.

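The headline metric, average displacement error (ADE), is the mean Euclidean distance between corresponding points of a predicted and a ground-truth trajectory. A minimal sketch (the (x, y) trajectory format and equal-length assumption are mine, not taken from the letter):

```python
import math

def average_displacement_error(pred, truth):
    """ADE: mean Euclidean distance between corresponding points of a
    predicted and a ground-truth trajectory. Assumes both are sequences
    of (x, y) points sampled at the same timestamps."""
    if len(pred) != len(truth) or not pred:
        raise ValueError("trajectories must be non-empty and equal length")
    return sum(math.dist(p, q) for p, q in zip(pred, truth)) / len(pred)


pred = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # hypothetical predicted positions
truth = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]  # hypothetical observed positions
print(average_displacement_error(pred, truth))  # 1.0
```

A "50% improvement" in this metric simply means the MoD-informed predictions lie, on average, half as far from the observed trajectories as the baseline's.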
Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Trajectory, Dynamics, Hidden Markov models, Predictive models, Robots, Prediction algorithms, Accuracy, Vehicle dynamics, Tracking, Pedestrians, Human detection and tracking, human and humanoid motion analysis and synthesis, probability and statistical methods, human-aware motion planning
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-124979 (URN)
10.1109/LRA.2025.3619831 (DOI)
001598761800006 ()
Funder
EU, Horizon 2020, 101017274
EU, Horizon 2020, 101070596
Available from: 2025-11-13. Created: 2025-11-13. Last updated: 2025-11-13. Bibliographically approved.
Schreiter, T., Rudenko, A., Rüppel, J. V., Magnusson, M. & Lilienthal, A. J. (2025). Multimodal Interaction and Intention Communication for Industrial Robots. In: 1st German Robotics Conference: . Paper presented at 1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025.
Multimodal Interaction and Intention Communication for Industrial Robots
2025 (English). In: 1st German Robotics Conference, 2025. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Successful adoption of industrial robots will strongly depend on their ability to safely and efficiently operate in human environments, engage in natural communication, understand their users, and express intentions intuitively while avoiding unnecessary distractions. To achieve this advanced level of Human-Robot Interaction (HRI), robots need to acquire and incorporate knowledge of their users' tasks and environment and adopt multimodal communication approaches with expressive cues that combine speech, movement, gaze, and other modalities. This paper presents several methods to design, enhance, and evaluate expressive HRI systems for non-humanoid industrial robots. We present the concept of a small anthropomorphic robot communicating as a proxy for its non-humanoid host, such as a forklift. We developed a multimodal and LLM-enhanced communication framework for this robot and evaluated it in several lab experiments, using gaze tracking and motion capture to quantify how users perceive the robot and to measure task progress.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119604 (URN)
Conference
1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025
Funder
EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically approved.
Mock, A., Magnusson, M. & Hertzberg, J. (2025). RadaRays: Real-Time Simulation of Rotating FMCW Radar for Mobile Robotics via Hardware-Accelerated Ray Tracing. IEEE Robotics and Automation Letters, 10(3), 2470-2477
RadaRays: Real-Time Simulation of Rotating FMCW Radar for Mobile Robotics via Hardware-Accelerated Ray Tracing
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 3, p. 2470-2477. Article in journal (Refereed). Published.
Abstract [en]

RadaRays allows for the accurate modeling and simulation of rotating FMCW radar sensors in complex environments, including the simulation of reflection, refraction, and scattering of radar waves. Our software is able to handle large numbers of objects and materials in real-time, making it suitable for use in a variety of mobile robotics applications. We demonstrate the effectiveness of RadaRays through a series of experiments and show that it can more accurately reproduce the behavior of FMCW radar sensors in a variety of environments, compared to the ray casting-based lidar-like simulations that are commonly used in simulators for autonomous driving such as CARLA. Our experiments additionally serve as a valuable reference point for researchers to evaluate their own radar simulations. By using RadaRays, developers can significantly reduce the time and cost associated with prototyping and testing FMCW radar-based algorithms. We also provide a Gazebo plugin that makes our work accessible to the mobile robotics community.

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Radar, Radar imaging, Robots, Spaceborne radar, Ray tracing, Meteorological radar, Real-time systems, Radar scattering, Radar antennas, Radar cross-sections, Simulation and animation, range sensing, software tools for robot programming, SLAM, collision avoidance
National Category
Robotics and automation; Computer Sciences
Identifiers
urn:nbn:se:oru:diva-119270 (URN)
10.1109/LRA.2025.3531689 (DOI)
001411912800005 ()
2-s2.0-85216089114 (Scopus ID)
Note

This work was supported in part by the Ministry of Science and Culture of Lower Saxony and in part by the VolkswagenStiftung through DFKI Niedersachsen (DFKI NI). 

Available from: 2025-02-17. Created: 2025-02-17. Last updated: 2025-02-17. Bibliographically approved.