Örebro University Publications (oru.se)
Publications (10 of 298)
Baumanns, L., Pitta-Pantazi, D., Demosthenous, E., Lilienthal, A. J., Christou, C. & Schindler, M. (2024). Pattern-Recognition Processes of First-Grade Students: An Explorative Eye-Tracking Study. International Journal of Science and Mathematics Education
2024 (English). In: International Journal of Science and Mathematics Education, ISSN 1571-0068, E-ISSN 1573-1774. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Recognizing patterns is an essential skill in early mathematics education. However, first graders often have difficulties with tasks such as extending patterns of the form ABCABC. Studies show that this pattern-recognition ability is a good predictor of later pre-algebraic skills and overall mathematical achievement, and, conversely, of the development of mathematical difficulties. To be able to foster children's pattern-recognition ability, it is crucial to investigate and understand their pattern-recognition processes early on. However, only a few studies have investigated the processes used to recognize patterns and how these processes are adapted to different patterns. These studies used external observations or relied on children's self-reports, yet young students often lack the ability to properly report their strategies. This paper presents the results of an empirical study using eye-tracking technology to investigate the pattern-recognition processes of 22 first-grade students. In particular, we investigated students with and without the risk of developing mathematical difficulties. The analyses of the students' eye movements reveal that the students used four different processes to recognize patterns, a finding that refines knowledge about pattern-recognition processes from previous research. In addition, we found that for patterns with different units of repeat (i.e., ABABAB versus ABCABCABC), the pattern-recognition processes used differed significantly for students at risk of developing mathematical difficulties, but not for students without such risk. Our study contributes to a better understanding of the pattern-recognition processes of first-grade students, laying the foundation for enhanced, targeted support, especially for students at risk of developing mathematical difficulties.
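The notion of a "unit of repeat" used in the abstract can be made concrete with a short sketch. This is illustrative only: the study analyzes children's gaze behavior, not algorithms.

```python
def unit_of_repeat(pattern):
    """Return the shortest repeating unit of a sequence.

    E.g. the unit of 'ABABAB' is 'AB' and the unit of 'ABCABCABC' is
    'ABC'; extending a pattern means continuing to tile with this unit.
    """
    n = len(pattern)
    for size in range(1, n + 1):
        unit = pattern[:size]
        # A candidate unit must tile the whole (possibly truncated) sequence.
        if all(pattern[i] == unit[i % size] for i in range(n)):
            return unit
    return pattern
```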

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Pattern recognition, Eye tracking, Mathematical difficulties, First-grade students
National Category
Educational Sciences
Identifiers
urn:nbn:se:oru:diva-111446 (URN) 10.1007/s10763-024-10441-x (DOI) 001148710900002 (ISI)
Note

Open Access funding enabled and organized by Projekt DEAL. This publication has received funding from the Erasmus+ grant program of the European Union under grant agreement No. 2020-1-DE03-KA201-077597.

Available from: 2024-02-08 Created: 2024-02-08 Last updated: 2024-02-08. Bibliographically approved.
Pitta-Pantazi, D., Demosthenous, E., Schindler, M., Lilienthal, A. J. & Christou, C. (2024). Structure sense in students' quantity comparison and repeating pattern extension tasks: an eye-tracking study with first graders. Educational Studies in Mathematics
2024 (English). In: Educational Studies in Mathematics, ISSN 0013-1954, E-ISSN 1573-0816. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

There is growing evidence that the ability to perceive structure is essential for students' mathematical development. Looking at students' structure sense in basic numerical and patterning tasks seems promising for understanding how these tasks lay the foundation for later mathematical skills. Previous studies have shown how students use structure sense in enumeration tasks; however, little is known about its use in other early mathematical tasks. The main aim of this study is to investigate the ways in which structure sense is manifested in first-grade students' work across two types of tasks: quantity comparison and repeating pattern extension. We investigated the strategies students use in both task types and how they employ structure sense. We conducted an eye-tracking study with 21 first-grade students, which provided novel insights into commonalities among strategies for these types of tasks. We found that for both quantity comparison and repeating pattern extension tasks, strategies can be distinguished into those employing structure sense and serial strategies.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Eye tracking, Quantity comparison, Repeating pattern extension, Structure sense, Serial strategies
National Category
Educational Sciences
Identifiers
urn:nbn:se:oru:diva-111445 (URN) 10.1007/s10649-023-10290-5 (DOI) 001142887800001 (ISI)
Available from: 2024-02-08 Created: 2024-02-08 Last updated: 2024-02-08. Bibliographically approved.
Schreiter, T., Morillo-Mendez, L., Chadalavada, R. T., Rudenko, A., Billing, E., Magnusson, M., . . . Lilienthal, A. J. (2023). Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023 (pp. 293-300). IEEE
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings, IEEE, 2023, p. 293-300. Conference paper, Published paper (Refereed).
Abstract [en]

Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE RO-MAN, ISSN 1944-9445, E-ISSN 1944-9437
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-110873 (URN) 10.1109/RO-MAN57019.2023.10309629 (DOI) 001108678600042 (ISI) 9798350336702 (ISBN) 9798350336719 (ISBN)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2024-01-22 Created: 2024-01-22 Last updated: 2024-01-22. Bibliographically approved.
Zhu, Y., Rudenko, A., Kucner, T., Palmieri, L., Arras, K., Lilienthal, A. & Magnusson, M. (2023). CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 01-05 October 2023, Detroit, MI, USA. Paper presented at 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Detroit, MI, USA, October 1-5, 2023 (pp. 3795-3802). IEEE
2023 (English). In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 01-05 October 2023, Detroit, MI, USA, IEEE, 2023, p. 3795-3802. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate the predictions, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit maps of dynamics (MoDs, a class of general representations of place-dependent spatial motion patterns, learned from prior observations) for long-term human motion prediction (LHMP). We present a new MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data-efficient, explainable, and insensitive to errors from an upstream tracking system. Our approach uses the CLiFF-map, a specific MoD trained with human motion data recorded in the same environment. We bias a constant-velocity prediction with samples from the CLiFF-map to generate multimodal trajectory predictions. On two public datasets we show that this algorithm outperforms the state of the art for predictions over very extended periods of time, achieving 45% more accurate predictions at a horizon of 50 s compared to the baseline.
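The biasing step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the callback `mod_sample`, the blend weight `beta`, and the time step `dt` are assumptions of the sketch.

```python
def clifflhmp_predict(start_pos, start_vel, mod_sample, steps, dt=0.4, beta=0.5):
    """One sampled trajectory in the spirit of CLiFF-LHMP: a
    constant-velocity rollout whose velocity is biased at each step toward
    a velocity sampled from the local map of dynamics. `mod_sample(pos)`
    is a hypothetical callback returning a (vx, vy) sample from the
    CLiFF-map at `pos`, or None when off the map.
    """
    x, y = start_pos
    vx, vy = start_vel
    traj = [(x, y)]
    for _ in range(steps):
        s = mod_sample((x, y))
        if s is not None:
            # Blend the sampled map velocity into the current estimate.
            vx = (1 - beta) * vx + beta * s[0]
            vy = (1 - beta) * vy + beta * s[1]
        x, y = x + vx * dt, y + vy * dt
        traj.append((x, y))
    return traj
```

Drawing many such rollouts (one per CLiFF-map sample) yields the multimodal prediction the abstract mentions.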

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-111183 (URN) 10.1109/IROS55552.2023.10342031 (DOI) 2-s2.0-85182524296 (Scopus ID) 9781665491914 (ISBN) 9781665491907 (ISBN)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Detroit, MI, USA, October 1-5, 2023
Projects
DARKO
Funder
EU, Horizon 2020, 101017274
Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2024-01-29. Bibliographically approved.
Adolfsson, D., Magnusson, M., Alhashimi, A., Lilienthal, A. & Andreasson, H. (2023). Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments. IEEE Transactions on Robotics, 39(2), 1476-1495
2023 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no. 2, p. 1476-1495. Article in journal (Refereed). Published.
Abstract [en]

This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments (outdoors, from urban to woodland, and indoors in warehouses and mines) without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach, conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, improving on our previous state of the art by 38% and thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
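Two ingredients named in the abstract (a residual between oriented surface points and a robust loss that down-weights outliers) can be sketched generically. The paper's exact residual and loss may differ; Huber is used here only as a representative robust loss, in 2-D for brevity.

```python
def p2l_residual(p, q, n):
    """Signed distance from point p to the local surface at q with unit
    normal n, i.e. the distance to a 2-D 'oriented surface point'."""
    return (p[0] - q[0]) * n[0] + (p[1] - q[1]) * n[1]

def huber(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so gross
    outlier residuals contribute far less than under a squared loss."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)
```

A registration cost would then sum `huber(p2l_residual(...))` over the one-to-many correspondences and be minimized over the sensor pose.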

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Radar, Sensors, Spinning, Azimuth, Simultaneous localization and mapping, Estimation, Location awareness, Localization, radar odometry, range sensing, SLAM
National Category
Computer Sciences Computer Vision and Robotics (Autonomous Systems) Robotics
Research subject
Computer and Systems Science; Computer Science
Identifiers
urn:nbn:se:oru:diva-103116 (URN) 10.1109/tro.2022.3221302 (DOI) 000912778500001 (ISI) 2-s2.0-85144032264 (Scopus ID)
Available from: 2023-01-16 Created: 2023-01-16 Last updated: 2023-10-18
Gupta, H., Lilienthal, A., Andreasson, H. & Kurtser, P. (2023). NDT-6D for color registration in agri-robotic applications. Journal of Field Robotics, 40(6), 1603-1619
2023 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 40, no. 6, p. 1603-1619. Article in journal (Refereed). Published.
Abstract [en]

Registration of point cloud data containing both depth and color information is critical for a variety of applications, including in-field robotic plant manipulation, crop growth modeling, and autonomous navigation. However, current state-of-the-art registration methods often fail in challenging agricultural field conditions due to factors such as occlusions, plant density, and variable illumination. To address these issues, we propose the NDT-6D registration method, a color-based variant of the Normal Distribution Transform (NDT) registration approach for point clouds. Our method computes correspondences between point clouds using both geometric and color information and minimizes the distance between these correspondences using only the three-dimensional (3D) geometric dimensions. We evaluate the method using the GRAPES3D data set collected with a commercial-grade RGB-D sensor mounted on a mobile platform in a vineyard. Results show that registration methods relying only on depth information fail to provide quality registration for the tested data set. The proposed color-based variant outperforms state-of-the-art methods with a root mean square error (RMSE) of 1.1-1.6 cm for NDT-6D, compared with 1.1-2.3 cm for other color-information-based methods and 1.2-13.7 cm for non-color-information-based methods. The proposed method is shown to be robust against noise using the TUM RGB-D data set, by artificially adding noise present in an outdoor scenario: the relative pose error (RPE) increased by approximately 14% for our method, compared to an increase of approximately 75% for the best-performing registration method. The obtained average accuracy suggests that the NDT-6D registration method can be used for in-field precision agriculture applications, for example, crop detection, size-based maturity estimation, and growth modeling.
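The correspondence rule in the abstract (match in a joint geometry+color space, then minimize only the geometric distance) can be sketched as follows. The color weight `w_color` and the brute-force search are assumptions of this sketch, not details from the paper.

```python
def match_6d(src, dst, w_color=0.5):
    """Pair each source point with the destination point nearest in a
    joint geometry+color metric, but report the purely geometric 3-D
    distance, which is what registration would then minimize.
    Points are (x, y, z, r, g, b) tuples.
    """
    pairs = []
    for p in src:
        # Nearest neighbor under geometric distance plus weighted color distance.
        best = min(dst, key=lambda q: sum((p[i] - q[i]) ** 2 for i in range(3))
                   + w_color * sum((p[i] - q[i]) ** 2 for i in range(3, 6)))
        geo = sum((p[i] - best[i]) ** 2 for i in range(3)) ** 0.5
        pairs.append((p, best, geo))
    return pairs
```

Color thus disambiguates correspondences (e.g. leaf versus grape points at similar depths) without distorting the geometric objective being minimized.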

Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
agricultural robotics, color pointcloud, in-field sensing, machine perception, RGB-D registration, stereo IR, vineyard
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-106131 (URN) 10.1002/rob.22194 (DOI) 000991774400001 (ISI) 2-s2.0-85159844423 (Scopus ID)
Funder
EU, Horizon 2020
Available from: 2023-06-01 Created: 2023-06-01 Last updated: 2023-11-28. Bibliographically approved.
Gupta, H., Andreasson, H., Magnusson, M., Julier, S. & Lilienthal, A. J. (2023). Revisiting Distribution-Based Registration Methods. In: Marques, L.; Markovic, I. (Ed.), 2023 European Conference on Mobile Robots (ECMR). Paper presented at 11th European Conference on Mobile Robots (ECMR 2023), Coimbra, Portugal, September 4-7, 2023 (pp. 43-48). IEEE
2023 (English). In: 2023 European Conference on Mobile Robots (ECMR) / [ed] Marques, L.; Markovic, I., IEEE, 2023, p. 43-48. Conference paper, Published paper (Refereed).
Abstract [en]

Normal Distribution Transform (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate with ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first modification improves likelihood estimates for matching distributions with small population sizes; the second reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements over the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions "heavily broadened likelihood NDT" (HBL-NDT) (34.7% success rate) and "overlapping grid cells NDT" (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented. We found that HBL-NDT worked best in scenarios with easy initial pose offsets, making it suitable for consecutive point cloud registration in SLAM applications.
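The covariance-inflation idea behind the broadened-likelihood modification can be illustrated in one dimension. This is a toy sketch under assumed parameters, not the paper's formulation.

```python
import math

def ndt_cell_score(x, mean, var, inflate=1.0):
    """Unnormalized Gaussian score of point x under one NDT cell.
    With inflate > 1 the covariance is broadened, so points that fall a
    little outside the cell's distribution (e.g. due to grid
    discretization) are penalized less sharply.
    """
    return math.exp(-0.5 * (x - mean) ** 2 / (var * inflate))
```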

Place, publisher, year, edition, pages
IEEE, 2023
Series
European Conference on Mobile Robots, ISSN 2639-7919, E-ISSN 2767-8733
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-109681 (URN) 10.1109/ECMR59166.2023.10256416 (DOI) 001082260500007 (ISI) 2-s2.0-8517439971 (Scopus ID) 9798350307047 (ISBN) 9798350307054 (ISBN)
Conference
11th European Conference on Mobile Robots (ECMR 2023), Coimbra, Portugal, September 4-7, 2023
Funder
EU, Horizon 2020, 858101
Available from: 2023-11-15 Created: 2023-11-15 Last updated: 2023-11-15. Bibliographically approved.
Gupta, H., Andreasson, H., Lilienthal, A. J. & Kurtser, P. (2023). Robust Scan Registration for Navigation in Forest Environment Using Low-Resolution LiDAR Sensors. Sensors, 23(10), Article ID 4736.
2023 (English). In: Sensors, E-ISSN 1424-8220, Vol. 23, no. 10, article id 4736. Article in journal (Refereed). Published.
Abstract [en]

Automated forest machines are becoming important due to the complex and dangerous working conditions of human operators, which lead to a labor shortage. This study proposes a new method for robust SLAM and tree mapping in forestry conditions using low-resolution LiDAR sensors. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution LiDAR sensors (16 or 32 channels) or narrow-field-of-view solid-state LiDARs, without additional sensory modalities such as GPS or IMU. We evaluate our approach on three datasets, including two private and one public dataset, and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared to current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms such as Fast Point Feature Histogram, with a reduction in RMSE of more than 3 m for the 16-channel LiDAR sensor. For solid-state LiDAR, the algorithm achieves a comparable RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared to the current approach of using fixed radius-search parameters for pre-processing. Our automated tree-trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.
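As one concrete piece of such a pipeline, a trunk diameter can be estimated from a 2-D cross-section of LiDAR returns with an algebraic circle fit. This is a generic sketch (the Kasa fit); the paper's adaptive, heuristic estimation pipeline differs.

```python
import math

def fit_circle_diameter(points):
    """Estimate a trunk diameter from 2-D points on its cross-section.
    Algebraic (Kasa) circle fit: solve 2*a*x + 2*b*y + c = x**2 + y**2
    in the least-squares sense, then r = sqrt(c + a**2 + b**2).
    """
    # Build the 3x3 normal equations M u = v for u = (a, b, c).
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            v[r] -= f * v[col]
    u = [0.0] * 3
    for r in (2, 1, 0):
        u[r] = (v[r] - sum(M[r][j] * u[j] for j in range(r + 1, 3))) / M[r][r]
    a, b, c = u
    return 2 * math.sqrt(c + a * a + b * b)
```

The fit works even when the scanner sees only an arc of the trunk, which is why circle fitting is a common building block in LiDAR forest inventory.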

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
tree segmentation, LiDAR mapping, forest inventory, SLAM, forestry robotics, scan registration
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:oru:diva-106315 (URN) 10.3390/s23104736 (DOI) 000997887900001 (ISI) 37430655 (PubMedID) 2-s2.0-85160406537 (Scopus ID)
Funder
EU, Horizon 2020, 858101
Available from: 2023-06-19 Created: 2023-06-19 Last updated: 2023-07-12. Bibliographically approved.
Kucner, T. P., Magnusson, M., Mghames, S., Palmieri, L., Verdoja, F., Swaminathan, C. S., . . . Lilienthal, A. J. (2023). Survey of maps of dynamics for mobile robots. The International Journal of Robotics Research, 42(11), 977-1006
2023 (English). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 42, no. 11, p. 977-1006. Article in journal (Refereed). Published.
Abstract [en]

Robotic mapping provides spatial information for autonomous agents. Depending on the tasks they seek to enable, the maps created range from simple 2D representations of the environment geometry to complex, multilayered semantic maps. This survey article is about maps of dynamics (MoDs), which store semantic information about typical motion patterns in a given environment. Some MoDs use trajectories as input, and some can be built from short, disconnected observations of motion. Robots can use MoDs, for example, for global motion planning, improved localization, or human motion prediction. Accounting for the increasing importance of maps of dynamics, we present a comprehensive survey that organizes the knowledge accumulated in the field and identifies promising directions for future work. Specifically, we introduce field-specific vocabulary, summarize existing work according to a novel taxonomy, and describe possible applications and open research problems. We conclude that the field is mature enough for maps of dynamics to be increasingly used to improve robot performance in real-world use cases. At the same time, the field is still in a phase of rapid development in which novel contributions could significantly impact this research area.

Place, publisher, year, edition, pages
Sage Publications, 2023
Keywords
mapping, maps of dynamics, localization and mapping, acceptability and trust, human-robot interaction, human-aware motion planning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-107930 (URN) 10.1177/02783649231190428 (DOI) 001042374800001 (ISI) 2-s2.0-85166946627 (Scopus ID)
Funder
EU, Horizon 2020, 101017274
Note

Funding agencies: Czech Ministry of Education, OP VVV CZ.02.1.01/0.0/0.0/16 019/0000765; Business Finland, 9249/31/2021
Available from: 2023-08-30 Created: 2023-08-30 Last updated: 2024-01-03. Bibliographically approved.
Molina, S., Mannucci, A., Magnusson, M., Adolfsson, D., Andreasson, H., Hamad, M., . . . Lilienthal, A. J. (2023). The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots. IEEE Robotics & Automation Magazine
2023 (English). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Current intralogistics services require keeping up with e-commerce demands, reducing delivery times and waste, and increasing overall flexibility. As a consequence, the use of automated guided vehicles (AGVs) and, more recently, autonomous mobile robots (AMRs) for logistics operations is steadily increasing.

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Robots, Safety, Navigation, Mobile robots, Human-robot interaction, Hidden Markov models, Trajectory
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-108145 (URN) 10.1109/MRA.2023.3296983 (DOI) 001051249900001 (ISI)
Funder
EU, Horizon 2020, 732737
Available from: 2023-09-14 Created: 2023-09-14 Last updated: 2024-01-02. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0217-9326