Örebro University Publications
Publications (10 of 324)
Stracca, E., Rudenko, A., Palmieri, L., Salaris, P., Castri, L., Mazzi, N., . . . Lilienthal, A. J. (2025). DARKO-Nav: Hierarchical Risk and Context-Aware Robot Navigation in Complex Intralogistic Environments. In: Marco Huber; Alexander Verl; Werner Kraus (Ed.), European Robotics Forum 2025: Boosting the Synergies between Robotics and AI for a Stronger Europe. Paper presented at 16th European Robotics Forum-ERF-Annual, Stuttgart, Germany, March 25-27, 2025 (pp. 155-161). Springer, Vol. 36
DARKO-Nav: Hierarchical Risk and Context-Aware Robot Navigation in Complex Intralogistic Environments
2025 (English). In: European Robotics Forum 2025: Boosting the Synergies between Robotics and AI for a Stronger Europe / [ed] Marco Huber; Alexander Verl; Werner Kraus, Springer, 2025, Vol. 36, p. 155-161. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a flexible hierarchical navigation stack for a mobile robot in complex dynamic environments. Addressing the growing need for reliable navigation in real-world scenarios, where dynamic agents and environmental uncertainties pose significant challenges, our solution decomposes this complexity into task planning, navigation, control, and safe velocity components. In contrast to prior art, our system incorporates diverse contextual information about the environment at every level, anticipates navigation risks, and proactively avoids collisions with dynamic agents.

Place, publisher, year, edition, pages
Springer, 2025
Series
Springer Proceedings in Advanced Robotics (SPAR), ISSN 2511-1256, E-ISSN 2511-1264; Vol. 36
Keywords
navigation in dynamic environments, risk-aware path planning, predictive collision avoidance, intralogistics
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-123553 (URN); 10.1007/978-3-031-89471-8_24 (DOI); 001553155000024 (); 9783031895746 (ISBN); 9783031894701 (ISBN); 9783031894718 (ISBN)
Conference
16th European Robotics Forum-ERF-Annual, Stuttgart, Germany, March 25-27, 2025
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2025-09-10. Created: 2025-09-10. Last updated: 2025-09-10. Bibliographically approved.
Schindler, M., Simon, A. L., Baumanns, L. & Lilienthal, A. J. (2025). Eye-tracking research in mathematics and statistics education: recent developments and future trends. A systematic literature review. ZDM - the International Journal on Mathematics Education
Eye-tracking research in mathematics and statistics education: recent developments and future trends. A systematic literature review
2025 (English). In: ZDM - the International Journal on Mathematics Education, ISSN 1863-9690, E-ISSN 1863-9704. Article, review/survey (Refereed). Epub ahead of print
Abstract [en]

Eye tracking is gaining significance in mathematics education research at a tremendous speed. For the discipline to grow, it is essential to monitor, structure, and synthesize the research in this rapidly evolving field, which calls for a systematic literature review. However, no comprehensive and systematic review exists covering the research of the past five years. This is a profound gap considering the dynamics of the field, which is fueled by technological advancements in hardware and software and the increasing usability and availability of eye-tracking systems. The aim of this paper is to provide a comprehensive and systematic literature review of eye-tracking research in mathematics and statistics education published in the past five years. Using a systematic database search, we identified and reviewed 116 eye-tracking studies published between 2019 and the first quarter of 2024. We found that the studies addressed a wide range of topics in all relevant curriculum content areas as well as a multitude of phenomena, including teacher-student interaction and digital learning. Interestingly, the studies increasingly involved school students, partially in authentic classroom settings. We also found that the majority of the papers referred to a theoretical framework or made their assumptions about the (domain-specific) interpretation of eye movements explicit. As a further important trend, probably still in its infancy, we observed the use of AI techniques for data analysis, which allows for qualitative insights despite larger numbers of participants. Our paper provides an overview and detailed insights into trends, many of which have not been visible in earlier review studies.

Place, publisher, year, edition, pages
Springer, 2025
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-121389 (URN); 10.1007/s11858-025-01699-8 (DOI); 001494915600001 (); 2-s2.0-105006438599 (Scopus ID)
Note

Open Access funding enabled and organized by Projekt DEAL.

Available from: 2025-06-09. Created: 2025-06-09. Last updated: 2025-06-09. Bibliographically approved.
Rudenko, A., Zhu, Y., Almeida, T. R., Schreiter, T., Castri, L., Belotto, N., . . . Lilienthal, A. J. (2025). Hierarchical System to Predict Human Motion and Intentions for Efficient and Safe Human-Robot Interaction in Industrial Environments. In: 1st German Robotics Conference. Paper presented at 1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025.
Hierarchical System to Predict Human Motion and Intentions for Efficient and Safe Human-Robot Interaction in Industrial Environments
2025 (English). In: 1st German Robotics Conference, 2025. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

In this paper we present a hierarchical motion and intent prediction system prototype, designed to efficiently operate in complex environments while safely handling risks arising from diverse and uncertain human motion and activities. Our system uses an array of advanced cues to describe human motion and activities, including generalized motion patterns, full-body poses, heterogeneous agent types and causal contextual factors that influence human behavior.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119603 (URN)
Conference
1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically approved.
Schindler, M., Shvarts, A. & Lilienthal, A. J. (2025). Introduction to eye tracking in mathematics education: interpretation, potential, and challenges. Educational Studies in Mathematics
Introduction to eye tracking in mathematics education: interpretation, potential, and challenges
2025 (English). In: Educational Studies in Mathematics, ISSN 0013-1954, E-ISSN 1573-0816. Article in journal (Refereed). Epub ahead of print
Place, publisher, year, edition, pages
Springer, 2025
National Category
Educational Sciences
Identifiers
urn:nbn:se:oru:diva-119897 (URN); 10.1007/s10649-025-10393-1 (DOI); 001434847000001 ()
Note

Open Access funding enabled and organized by Projekt DEAL.

Available from: 2025-03-17. Created: 2025-03-17. Last updated: 2025-03-17. Bibliographically approved.
Schreiter, T., Rudenko, A., Rüppel, J. V., Magnusson, M. & Lilienthal, A. J. (2025). Multimodal Interaction and Intention Communication for Industrial Robots. In: 1st German Robotics Conference. Paper presented at 1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025.
Multimodal Interaction and Intention Communication for Industrial Robots
2025 (English). In: 1st German Robotics Conference, 2025. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Successful adoption of industrial robots will strongly depend on their ability to safely and efficiently operate in human environments, engage in natural communication, understand their users, and express intentions intuitively while avoiding unnecessary distractions. To achieve this advanced level of Human-Robot Interaction (HRI), robots need to acquire and incorporate knowledge of their users’ tasks and environment and adopt multimodal communication approaches with expressive cues that combine speech, movement, gaze, and other modalities. This paper presents several methods to design, enhance, and evaluate expressive HRI systems for non-humanoid industrial robots. We present the concept of a small anthropomorphic robot communicating as a proxy for its non-humanoid host, such as a forklift. We developed a multimodal and LLM-enhanced communication framework for this robot and evaluated it in several lab experiments, using gaze tracking and motion capture to quantify how users perceive the robot and to measure task progress.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119604 (URN)
Conference
1st German Robotics Conference, Nuremberg, Germany, March 13-15, 2025
Funder
EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically approved.
Wiedemann, T., Scheffler, M., Shutin, D. & Lilienthal, A. J. (2025). Physics-informed robotic airflow exploration and mapping with a swarm of mobile robots. The International Journal of Robotics Research
Physics-informed robotic airflow exploration and mapping with a swarm of mobile robots
2025 (English). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176. Article in journal (Refereed). Epub ahead of print
Abstract [en]

Airflow is the key transport mechanism for airborne substances like gas or particulate matter. It is of great interest in many applications ranging from evacuation planning to analyzing indoor ventilation systems. However, accurately determining a spatial map of the airflow is difficult and time-consuming since environmental parameters and boundary conditions are often unknown. This work introduces a novel adaptive sampling strategy for mobile robots. The strategy allows multiple mobile robots with anemometers to autonomously collect airflow measurements and generate a two-dimensional spatial map of the airflow field. Using a Domain-knowledge Assisted Exploration approach, the robots respond in real-time to the measurements already taken and determine the most informative locations online for further measurements. We incorporate the Navier-Stokes Partial Differential Equations to fuse the collected data with model assumptions. By casting the airflow model into a probabilistic framework, we can quantify uncertainties in the airflow field and develop an intelligent, uncertainty-driven exploration strategy inspired by optimal experimental design principles. This strategy combines an estimated uncertainty map with a rapidly exploring random tree path planner. Additionally, using the Navier-Stokes equations allows us to interpolate spatially between measurements in a physics-informed way, enabling us to construct a more accurate airflow map. We implemented and evaluated the proposed concept in simulations and experiments in a laboratory environment, where five mobile robots explore artificially generated airflow fields. The results indicate that our approach can correctly estimate the airflow and show that the proposed adaptive exploration strategy gathers information more efficiently than a predefined sampling pattern.
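The uncertainty-driven part of the exploration strategy described above can be sketched as a greedy criterion over an estimated uncertainty map. This is a simplified stand-in for the paper's optimal-experimental-design approach (which additionally plans paths with a rapidly exploring random tree); the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def next_measurement_cell(variance_map, visited):
    """Pick the next measurement location as the grid cell with the
    highest posterior variance that has not been visited yet -- a
    greedy stand-in for uncertainty-driven exploration."""
    masked = np.where(visited, -np.inf, variance_map)  # exclude visited cells
    return np.unravel_index(np.argmax(masked), variance_map.shape)
```

In the full system, each new anemometer measurement would update the probabilistic airflow model (constrained by the Navier-Stokes equations), shrinking the variance around the sampled cell before the next selection.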

Place, publisher, year, edition, pages
Sage Publications, 2025
Keywords
Robotic exploration, airflow mapping, Navier-Stokes equations, multi-robot system, uncertainty-driven exploration, physics-informed robotics
National Category
Robotics and Automation; Computer Sciences
Identifiers
urn:nbn:se:oru:diva-120909 (URN); 10.1177/02783649251329421 (DOI); 001468815100001 (); 2-s2.0-105002978350 (Scopus ID)
Funder
EU, Horizon Europe, 101093003
Note

This work was supported by the EU Project TEMA. The TEMA project has received funding from the European Commission under HORIZON EUROPE (HORIZON Research and Innovation Actions) under Grant Agreement 101093003 (HORIZON-CL4-2022-DATA-01-01).

Available from: 2025-05-06. Created: 2025-05-06. Last updated: 2025-05-06. Bibliographically approved.
Swaminathan, C. S., Kucner, T. P., Lilienthal, A. J. & Magnusson, M. (2025). Sampling functions for global motion planning using Maps of Dynamics for mobile robots. Robotics and Autonomous Systems, 194, Article ID 105117.
Sampling functions for global motion planning using Maps of Dynamics for mobile robots
2025 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 194, article id 105117. Article in journal (Refereed). Published
Abstract [en]

Motion planning for mobile robots in dynamic environments shared with people is challenging. In a typical hierarchical planning framework, robots use a global planner for overall path-finding and a local planner for immediate adjustments to the path. Without considering the typical patterns of motion of dynamic entities, the global planner might generate paths that lead the robot into highly congested areas, where it is forced to wait, replan or manoeuvre around dynamic obstacles. Maps of Dynamics (MoDs) are a way to mitigate these issues. MoDs represent patterns of motion exhibited by moving entities in the environment, using probabilistic models. The use of MoDs in cost functions for motion planning enables a robot to plan motions that consider the motion patterns encoded in the MoDs. In previous work, it has been shown that the use of MoDs in the cost function helps generate more efficient paths, i.e., paths that lead to the robot and pedestrians spending less time waiting for each other. It has also been shown that using MoDs in the sampling step of sampling-based motion planning is beneficial to a mobile robot since it can result in reduced computation time by explicitly guiding the sampling process using information encoded in MoDs. However, existing work on the use of MoDs in the sampling process is limited. Correspondingly, an analysis of the performance of sampling heuristics for MoDs is also largely lacking. Since such an analysis is crucial to understand the effectiveness of MoDs in a practical setting, we ask the research question: can we obtain reasonably low-cost solutions using sampling-based motion planners that consider the flow of dynamic entities, in a reasonable amount of time?
In this paper, we propose substantial improvements to two existing sampling heuristics: Dijkstra-graph sampling (DGS), previously restricted to a specific type of MoD, is extended to use any MoD; and the intensity map (normalized number of observations of dynamic entities in each grid cell) is utilized more effectively by using importance sampling instead of rejection sampling. We show that an ellipsoidal heuristic can also be used with MoDs. We experimentally validate several sampling heuristics on two different sampling-based motion planners and present a comprehensive evaluation (52,800 runs) of their performance on real-world data from densely populated environments. We conclude that reasonably low-cost solutions can be obtained quickly using a combination of the sampling heuristics within practically feasible time limits. Using the RRT* planner with our proposed MoD-aware, Dijkstra-graph-based heuristic yields approximately 5%, 10%, and 12% higher success rates after 2, 4, and 8 s of planning respectively, compared to the uniform-sampling baseline.
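The switch from rejection sampling to importance sampling over the intensity map can be illustrated with a small sketch. This is a hypothetical minimal version (the actual planners sample continuous robot states, not bare grid cells): instead of drawing cells uniformly and accepting them with probability proportional to intensity, every draw is taken directly from the normalized intensity distribution, so no samples are wasted.

```python
import numpy as np

def sample_from_intensity(intensity, num_samples, rng=None):
    """Importance sampling of planner states from a 2-D intensity map
    (normalized observation counts per grid cell): cells where dynamic
    entities were observed more often are sampled proportionally more,
    with no rejected draws."""
    if rng is None:
        rng = np.random.default_rng()
    probs = intensity.ravel() / intensity.sum()           # normalize to a distribution
    flat_idx = rng.choice(probs.size, size=num_samples, p=probs)
    rows, cols = np.unravel_index(flat_idx, intensity.shape)
    return np.stack([rows, cols], axis=1)                 # grid coordinates of samples
```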

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Motion planning, Path planning, Dynamic environments, Pedestrian, Maps of dynamics, Human motion patterns, Sampling-based motion planning
National Category
Artificial Intelligence; Computer Sciences
Identifiers
urn:nbn:se:oru:diva-124397 (URN); 10.1016/j.robot.2025.105117 (DOI); 001582650000003 ()
Funder
EU, Horizon 2020, 101017274; EU, Horizon 2020, 101070596; Swedish National Infrastructure for Computing (SNIC); Swedish Research Council, 2022-06725; Swedish Research Council, 2018-05973
Note

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 101017274 (DARKO) and 101070596 (euRobin). The computations/data handling were/was enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Umeå University partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.

Available from: 2025-10-14. Created: 2025-10-14. Last updated: 2025-10-14. Bibliographically approved.
Almeida, T., Schreiter, T., Rudenko, A., Palmieri, L., Stork, J. A. & Lilienthal, A. J. (2025). THÖR-MAGNI Act: Actions for Human Motion Modeling in Robot-Shared Industrial Spaces. In: 20th edition of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025.
THÖR-MAGNI Act: Actions for Human Motion Modeling in Robot-Shared Industrial Spaces
2025 (English). In: 20th edition of the ACM/IEEE International Conference on Human-Robot Interaction, 2025. Conference paper, Published paper (Refereed)
Abstract [en]

Accurate human activity and trajectory prediction are crucial for ensuring safe and reliable human-robot interactions in dynamic environments, such as industrial settings, with mobile robots. Datasets with fine-grained action labels for moving people in industrial environments with mobile robots are scarce, as most existing datasets focus on social navigation in public spaces. This paper introduces the THÖR-MAGNI Act dataset, a substantial extension of the THÖR-MAGNI dataset, which captures participant movements alongside robots in diverse semantic and spatial contexts. THÖR-MAGNI Act provides 8.3 hours of manually labeled participant actions derived from egocentric videos recorded via eye-tracking glasses. These actions, aligned with the provided THÖR-MAGNI motion cues, follow a long-tailed distribution with diversified acceleration, velocity, and navigation distance profiles. We demonstrate the utility of THÖR-MAGNI Act for two tasks: action-conditioned trajectory prediction and joint action and trajectory prediction. To address these tasks, we propose two efficient transformer-based models that outperform the baselines. These results underscore the potential of THÖR-MAGNI Act to develop predictive models for enhanced human-robot interaction in complex environments.

Keywords
human motion dataset, human motion modeling, human activity prediction
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-119601 (URN)
Conference
20th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically approved.
Sun, S., Mielle, M., Lilienthal, A. J. & Magnusson, M. (2024). 3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding. Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024. IEEE
3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding
2024 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Neural implicit surface representations are currently receiving a lot of interest as a means to achieve high-fidelity surface reconstruction at a low memory cost, compared to traditional explicit representations. However, state-of-the-art methods still struggle with excessive memory usage and non-smooth surfaces. This is particularly problematic in large-scale applications with sparse inputs, as is common in robotics use cases. To address these issues, we first introduce a sparse structure, tri-quadtrees, which represents the environment using learnable features stored in three planar quadtree projections. Secondly, we concatenate the learnable features with a Fourier feature positional encoding. The combined features are then decoded into signed distance values through a small multi-layer perceptron. We demonstrate that this approach facilitates smoother reconstruction with a higher completion ratio with fewer holes. Compared to two recent baselines, one implicit and one explicit, our approach requires only 10%–50% as much memory, while achieving competitive quality. The code is released on https://github.com/ljjTYJR/3QFP.
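The Fourier feature positional encoding mentioned above can be sketched as follows. This is a generic random-Fourier-features encoder under assumed hyperparameters (frequency count, Gaussian scale); the paper's exact frequencies, feature sizes, and quadtree lookup are not reproduced here:

```python
import numpy as np

def fourier_features(xyz, B):
    """Encode 3-D query points with a fixed frequency matrix B:
    take sin/cos of the projected coordinates, giving 2 * B.shape[1]
    features per point. In a 3QFP-style pipeline, such an encoding is
    concatenated with learnable (quadtree-interpolated) features before
    a small MLP decodes a signed distance value."""
    proj = 2.0 * np.pi * xyz @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Gaussian frequency matrix, sampled once and then held fixed
# (sigma = 1.0 is an assumed scale, not the paper's setting).
rng = np.random.default_rng(0)
B = rng.normal(0.0, 1.0, size=(3, 16))
```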

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-117117 (URN); 10.1109/ICRA57147.2024.10610338 (DOI); 001294576203025 (); 2-s2.0-85202450420 (Scopus ID); 9798350384574 (ISBN); 9798350384581 (ISBN)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Funder
EU, Horizon 2020, 101017274
Available from: 2024-10-30. Created: 2024-10-30. Last updated: 2025-02-09. Bibliographically approved.
Alhashimi, A., Adolfsson, D., Andreasson, H., Lilienthal, A. & Magnusson, M. (2024). BFAR: improving radar odometry estimation using a bounded false alarm rate detector. Autonomous Robots, 48(8), Article ID 29.
BFAR: improving radar odometry estimation using a bounded false alarm rate detector
2024 (English). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 48, no 8, article id 29. Article in journal (Refereed). Published
Abstract [en]

This work introduces a novel detector, bounded false-alarm rate (BFAR), for distinguishing true detections from noise in radar data, leading to improved accuracy in radar odometry estimation. Scanning frequency-modulated continuous wave (FMCW) radars can serve as valuable tools for localization and mapping under low visibility conditions. However, they tend to yield a higher level of noise in comparison to the more commonly employed lidars, thereby introducing additional challenges to the detection process. We propose a new radar target detector called BFAR which uses an affine transformation of the estimated noise level compared to the classical constant false-alarm rate (CFAR) detector. This transformation employs learned parameters that minimize the error in odometry estimation. Conceptually, BFAR can be viewed as an optimized blend of CFAR and fixed-level thresholding designed to minimize odometry estimation error. The strength of this approach lies in its simplicity. Only a single parameter needs to be learned from a training dataset when the affine transformation scale parameter is maintained. Compared to ad-hoc detectors, BFAR has the advantage of a specified upper bound for the false-alarm probability, and better noise handling than CFAR. Repeatability tests show that BFAR yields highly repeatable detections with minimal redundancy. We have conducted simulations to compare the detection and false-alarm probabilities of BFAR with those of three baselines in non-homogeneous noise and varying target sizes. The results show that BFAR outperforms the other detectors. Moreover, we apply BFAR to the use case of radar odometry, and adapt a recent odometry pipeline, replacing its original conservative filtering with BFAR. In this way, we reduce the translation/rotation odometry errors/100 m from 1.3%/0.4° to 1.12%/0.38°, and from 1.62%/0.57° to 1.21%/0.32°, improving translation error by 14.2% and 25% on the Oxford and MulRan public data sets, respectively.
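The affine-thresholding idea is simple enough to sketch. The following toy 1-D detector is an illustration with made-up window sizes and parameters, not the authors' radar pipeline (which learns the offset by minimizing odometry error); with offset = 0 it reduces to classical cell-averaging CFAR, and with scale = 0 to fixed-level thresholding:

```python
import numpy as np

def bfar_detect(power, num_train=8, num_guard=2, scale=1.5, offset=0.05):
    """Toy 1-D detector in the spirit of BFAR: threshold each cell by an
    affine transform (scale * noise_estimate + offset) of the
    cell-averaging noise level from surrounding training cells."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - num_guard - num_train)
        hi = min(n, i + num_guard + num_train + 1)
        # Training cells: the window around cell i, excluding guard cells.
        window = np.r_[power[lo:max(0, i - num_guard)],
                       power[min(n, i + num_guard + 1):hi]]
        if window.size == 0:
            continue
        noise = window.mean()                              # CA noise estimate
        detections[i] = power[i] > scale * noise + offset  # affine threshold
    return detections
```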

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Radar, CFAR, Odometry, FMCW
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-117575 (URN); 10.1007/s10514-024-10176-2 (DOI); 001358908800001 (); 2-s2.0-85209565335 (Scopus ID)
Funder
Örebro University
Available from: 2024-12-05. Created: 2024-12-05. Last updated: 2025-02-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326
