
Örebro University Publications
Publications (10 of 16)
Banaee, H., Klügl, F., Novakazi, F. & Lowry, S. (2024). Intention Recognition and Communication for Human-Robot Collaboration. In: Ericson P., Khairova N., De Vos M. (Ed.), CEUR Workshop Proceedings. Paper presented at 3rd International Conference on Hybrid Human-Artificial Intelligence, HHAI-WS 2024, Malmö, 10-11 June 2024 (pp. 101-108). CEUR-WS, 3825
2024 (English). In: CEUR Workshop Proceedings / [ed] Ericson P., Khairova N., De Vos M., CEUR-WS, 2024, Vol. 3825, pp. 101-108. Conference paper, published paper (Refereed)
Abstract [en]

Human-robot collaboration follows rigid processes in order to ensure safe interaction. In case of deviations from the predetermined tasks, processes typically come to a halt. This position paper proposes a conceptual framework for intention recognition and communication, enabling a finer granularity in the understanding of intentions to facilitate more efficient and safe human-robot collaboration, especially in the event of deviations from expected behaviour.

Place, publisher, year, edition, pages
CEUR-WS, 2024
Keywords
human-robot collaboration, human-robot communication, intention granularity, Intention recognition, Social robots, Conceptual frameworks, High granularity, Position papers, Microrobots
HSV category
Identifiers
urn:nbn:se:oru:diva-118435 (URN); 2-s2.0-85210319239 (Scopus ID)
Conference
3rd International Conference on Hybrid Human-Artificial Intelligence, HHAI-WS 2024, Malmö, 10-11 June 2024
Available from: 2025-01-14. Created: 2025-01-14. Last updated: 2025-01-14. Bibliographically approved.
Norinder, U. & Lowry, S. (2023). Predicting Larch Casebearer damage with confidence using Yolo network models and conformal prediction. Remote Sensing Letters, 14(10), 1023-1035
2023 (English). In: Remote Sensing Letters, ISSN 2150-704X, E-ISSN 2150-7058, Vol. 14, no. 10, pp. 1023-1035. Journal article (Refereed). Published.
Abstract [en]

This investigation shows that successful forecasting models for monitoring forest health status with respect to Larch Casebearer damage can be derived by combining a confidence predictor framework (conformal prediction) with a deep learning architecture (Yolo v5). The confidence predictor framework can predict the disease types used to develop the model and also provide an indication of new, unseen types or degrees of disease. At the same time, the user of the models is provided with reliable predictions and a well-established applicability domain, i.e. a characterisation of where such reliable predictions can and cannot be expected. Furthermore, the framework gracefully handles class imbalances without explicit over- or under-sampling or category weighting, which may be of crucial importance for highly imbalanced datasets. The approach also indicates when insufficient information has been provided as input to the model for the level of accuracy (reliability) needed by the user to make subsequent decisions based on the model predictions.
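To make the conformal prediction idea concrete, here is a minimal label-conditional (Mondrian) sketch in the spirit of the framework the paper uses. This is not the authors' code, and the nonconformity scores in the example are invented for illustration:

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """p-value of a test example: fraction of calibration nonconformity
    scores at least as large as the test score (with the +1 correction)."""
    cal_scores = np.asarray(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

def prediction_set(cal_scores, test_scores, alpha):
    """Label-conditional conformal prediction set at significance alpha.

    cal_scores: dict mapping class -> calibration nonconformity scores.
    test_scores: dict mapping class -> test nonconformity score."""
    return {c for c in cal_scores
            if conformal_p_value(cal_scores[c], test_scores[c]) > alpha}

# Toy example: the test example conforms to "healthy" but not "damaged".
cal = {"healthy": [0.1, 0.2, 0.3, 0.2], "damaged": [0.8, 0.7, 0.9, 0.6]}
test = {"healthy": 0.15, "damaged": 0.95}
```

An empty prediction set at the chosen significance level is the mechanism by which such a framework can signal a new, unseen type or degree of disease.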

Place, publisher, year, edition, pages
Taylor & Francis, 2023
Keywords
Yolo network, Larch Casebearer moth, conformal prediction, forest health, tree damage
HSV category
Identifiers
urn:nbn:se:oru:diva-108845 (URN); 10.1080/2150704X.2023.2258460 (DOI); 001071044000001 (); 2-s2.0-85171885925 (Scopus ID)
Research funder
Swedish Research Council, 2018-03807
Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2024-01-16. Bibliographically approved.
Kurtser, P. & Lowry, S. (2023). RGB-D datasets for robotic perception in site-specific agricultural operations: A survey. Computers and Electronics in Agriculture, 212, Article ID 108035.
2023 (English). In: Computers and Electronics in Agriculture, ISSN 0168-1699, E-ISSN 1872-7107, Vol. 212, article id 108035. Research review (Refereed). Published.
Abstract [en]

Fusing color (RGB) images and range or depth (D) data in the form of RGB-D or multi-sensory setups is a relatively new but rapidly growing modality for many agricultural tasks. RGB-D data have the potential to provide valuable information for agricultural tasks that rely on perception, but collecting appropriate data and suitable ground-truth information can be challenging and labor-intensive, and high-quality publicly available datasets are rare. This paper presents a survey of the existing RGB-D datasets available for agricultural robotics and summarizes key trends and challenges in this research field. It evaluates the relative advantages of the commonly used sensors and how the hardware can affect the characteristics of the data collected. It also analyzes the role of RGB-D data in the most common vision-based machine learning tasks applied to agricultural robotic operations (visual recognition, object detection, and semantic segmentation), and compares and contrasts methods that utilize 2-D and 3-D perceptual data.
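As a concrete illustration of the RGB-D modality the survey covers, a registered depth image can be back-projected into a colored point cloud with the standard pinhole model. The intrinsics (fx, fy, cx, cy) and the toy image below are assumptions for the sketch, not values from any surveyed dataset:

```python
import numpy as np

def depth_to_pointcloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image into a colored 3-D point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3)
    valid = pts[:, 2] > 0          # drop pixels with missing depth
    return pts[valid], cols[valid]

# Toy 1x2 image: left pixel has no depth reading, right pixel is 2 m away.
depth = np.array([[0.0, 2.0]])
rgb = np.array([[[255, 0, 0], [0, 255, 0]]])
pts, cols = depth_to_pointcloud(depth, rgb, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Handling invalid (zero) depth readings explicitly, as above, matters in practice because stereo and time-of-flight sensors routinely produce holes in the depth map.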

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
3D perception, Color point clouds, Datasets, Computer vision, Agricultural robotics
HSV category
Identifiers
urn:nbn:se:oru:diva-108413 (URN); 10.1016/j.compag.2023.108035 (DOI); 001059437100001 (); 2-s2.0-85172469543 (Scopus ID)
Available from: 2023-09-26. Created: 2023-09-26. Last updated: 2023-12-08. Bibliographically approved.
Andreasson, H., Larsson, J. & Lowry, S. (2022). A Local Planner for Accurate Positioning for a Multiple Steer-and-Drive Unit Vehicle Using Non-Linear Optimization. Sensors, 22(7), Article ID 2588.
2022 (English). In: Sensors, E-ISSN 1424-8220, Vol. 22, no. 7, article id 2588. Journal article (Refereed). Published.
Abstract [en]

This paper presents a local planning approach targeted at pseudo-omnidirectional vehicles: that is, vehicles that can drive sideways and rotate on the spot. This local planner, MSDU, is based on optimal control and formulates a non-linear optimization problem that exploits the omni-motion capabilities of the vehicle to drive it to the goal in a smooth and efficient manner while avoiding obstacles and singularities. MSDU is designed for a real mobile manipulation platform where one key requirement is the capability to drive in narrow and confined areas. The real-world evaluations show that MSDU planned paths that were smoother and more accurate than those of a comparable local planner, Timed Elastic Band (TEB), with a mean (translational, angular) error for MSDU of (0.0028 m, 0.0010 rad) compared to (0.0033 m, 0.0038 rad) for TEB. MSDU also generated paths that were consistently shorter than TEB's, with a mean (translational, angular) distance traveled of (0.6026 m, 1.6130 rad) for MSDU compared to (0.7346 m, 3.7598 rad) for TEB.
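The optimization-based planning idea can be caricatured as follows. This is a heavily simplified sketch, not MSDU itself: the cost terms (path length plus a soft clearance penalty), the weights, and the use of SciPy's L-BFGS-B solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def plan(start, goal, obstacle, n=6, clearance=0.5):
    """Toy local planner: optimize intermediate waypoints to minimize path
    length plus a soft penalty for vertices inside an obstacle's clearance."""
    start, goal, obstacle = map(np.asarray, (start, goal, obstacle))

    def cost(flat):
        pts = np.vstack([start, flat.reshape(-1, 2), goal])
        length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
        d = np.linalg.norm(pts - obstacle, axis=1)
        penalty = np.sum(np.maximum(0.0, clearance - d) ** 2)
        return length + 100.0 * penalty

    x0 = np.linspace(start, goal, n + 2)[1:-1].ravel()  # straight-line init
    res = minimize(cost, x0, method="L-BFGS-B")
    return np.vstack([start, res.x.reshape(-1, 2), goal])

path = plan((0.0, 0.0), (4.0, 0.0), (2.0, 0.1))
```

A real planner like MSDU additionally encodes the vehicle's steer-and-drive kinematics and singularity avoidance as constraints; the sketch only shows the generic "shape a path by minimizing a cost" structure.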

Place, publisher, year, edition, pages
MDPI, 2022
Keywords
local planning, optimal control, obstacle avoidance
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-98510 (URN); 10.3390/s22072588 (DOI); 000781087300001 (); 35408204 (PubMedID); 2-s2.0-85127034496 (Scopus ID)
Research funder
Swedish Research Council Formas, 2019-02264
Available from: 2022-04-07. Created: 2022-04-07. Last updated: 2025-02-09. Bibliographically approved.
Kucner, T. P., Luperto, M., Lowry, S., Magnusson, M. & Lilienthal, A. (2021). Robust Frequency-Based Structure Extraction. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2021), Xi'an, China, May 30 - June 5, 2021 (pp. 1715-1721). IEEE
2021 (English). In: 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 1715-1721. Conference paper, published paper (Refereed)
Abstract [en]

State-of-the-art mapping algorithms can produce high-quality maps. However, they are still vulnerable to clutter and outliers, which can degrade map quality and in consequence hinder both the performance of a robot and further map processing for semantic understanding of the environment. This paper presents ROSE, a method for building-level structure detection in robotic maps. ROSE exploits the fact that indoor environments usually contain walls and straight-line elements along a limited set of orientations, so metric maps often have a set of dominant directions. ROSE extracts these directions and uses this information to segment the map into structure and clutter by filtering the map in the frequency domain (an approach substantially underutilised in mapping applications). Removing the clutter in this way makes wall detection (e.g. using the Hough transform) more robust. Our experiments demonstrate that (1) applying ROSE for decluttering can substantially improve structural feature retrieval (e.g., walls) in cluttered environments, (2) ROSE can successfully distinguish between clutter and structure in the map even with a substantial amount of noise, and (3) ROSE can numerically assess the amount of structure in the map.
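A minimal sketch of the frequency-domain intuition behind ROSE (not the published implementation): on a synthetic occupancy grid, the dominant structural direction appears as a peak in the 2-D FFT magnitude spectrum.

```python
import numpy as np

def dominant_direction(grid):
    """Return the orientation (in [0, pi)) of the dominant spatial frequency
    of a binary occupancy grid, i.e. the direction normal to the walls,
    taken from the peak of the FFT magnitude spectrum (DC term removed)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(grid)))
    cy, cx = np.array(spec.shape) // 2
    spec[cy, cx] = 0.0                      # ignore the DC component
    py, px = np.unravel_index(np.argmax(spec), spec.shape)
    return np.arctan2(py - cy, px - cx) % np.pi

# Synthetic maps: vertical walls every 4 cells, and horizontal walls likewise.
grid_v = np.zeros((32, 32)); grid_v[:, ::4] = 1.0
grid_h = np.zeros((32, 32)); grid_h[::4, :] = 1.0
```

Filtering the spectrum around such dominant-direction peaks and inverse-transforming is the decluttering step; this sketch stops at direction extraction.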

Place, publisher, year, edition, pages
IEEE, 2021
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
Keywords
Mapping, semantic understanding, indoor environments
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-97000 (URN); 10.1109/ICRA48506.2021.9561381 (DOI); 000765738801089 (); 2-s2.0-85118997794 (Scopus ID); 9781728190778 (ISBN); 9781728190785 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2021), Xi'an, China, May 30 - June 5, 2021
Projects
ILIAD
Research funder
EU, Horizon 2020, 732737
Available from: 2022-01-31. Created: 2022-01-31. Last updated: 2025-02-09. Bibliographically approved.
Adolfsson, D., Lowry, S., Magnusson, M., Lilienthal, A. J. & Andreasson, H. (2019). A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality. In: 2019 European Conference on Mobile Robots (ECMR). Paper presented at European Conference on Mobile Robots (ECMR), Prague, Czech Republic, September 4-6, 2019. IEEE
2019 (English). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper, published paper (Refereed)
Abstract [en]

This paper targets high-precision robot localization. We address a general problem of voxel-based map representations: the expressiveness of the map is fundamentally limited by its resolution, since integrating measurements taken from different perspectives introduces imprecision and thus reduces localization accuracy. We propose SuPer maps, which contain one Submap per Perspective, each representing a particular view of the environment. For localization, a robot then selects the submap that best explains the environment from its perspective. We propose SuPer mapping as an offline refinement step between initial SLAM and deploying autonomous robots for navigation. We evaluate the proposed method on simulated and real-world data representing an important industrial use case with high accuracy requirements in a repetitive environment. Our results demonstrate significantly improved localization accuracy: up to 46% better than localization in global maps, and up to 25% better than alternative submapping approaches.
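The submap-selection step ("the submap that best explains the environment from its perspective") can be sketched with a toy overlap score; the scoring rule and radius below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def best_submap(scan, submaps, radius=0.2):
    """Select the submap that best explains the current scan: score each
    submap by the fraction of scan points with a map point within `radius`."""
    def score(submap):
        # Brute-force nearest neighbours; fine at sketch scale.
        d = np.linalg.norm(scan[:, None, :] - submap[None, :, :], axis=-1)
        return float(np.mean(d.min(axis=1) <= radius))
    scores = [score(m) for m in submaps]
    return int(np.argmax(scores)), scores

# Toy 2-D scan and two candidate submaps: one matching, one offset by 1 m.
scan = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
submaps = [scan.copy(), scan + np.array([0.0, 1.0])]
idx, scores = best_submap(scan, submaps)
```

In a real system the score would come from the scan-matching likelihood of the SLAM backend rather than a fixed-radius overlap count.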

Place, publisher, year, edition, pages
IEEE, 2019
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79739 (URN); 10.1109/ECMR.2019.8870941 (DOI); 000558081900037 (); 2-s2.0-85074443858 (Scopus ID); 978-1-7281-3605-9 (ISBN)
Conference
European Conference on Mobile Robots (ECMR), Prague, Czech Republic, September 4-6, 2019
Research funder
EU, Horizon 2020, 732737; Knowledge Foundation
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2024-01-02. Bibliographically approved.
Lowry, S. (2019). Similarity criteria: evaluating perceptual change for visual localization. In: 2019 European Conference on Mobile Robots (ECMR). Paper presented at ECMR 2019: 9th European Conference on Mobile Robots, Prague, Czech Republic, September 4-6, 2019. IEEE, Article ID 8870962.
2019 (English). In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019, article id 8870962. Conference paper, published paper (Refereed)
Abstract [en]

Visual localization systems may operate in environments that exhibit considerable perceptual change. This paper proposes a method of evaluating the degree of appearance change using similarity criteria based on comparing the subspaces spanned by the principal components of the observed image descriptors. We propose two criteria: θmin measures the minimum angle between the subspaces, and Stotal measures the total similarity between them. These criteria are introspective: they evaluate the performance of the image descriptor using nothing more than the image descriptor itself. Furthermore, we demonstrate that these similarity criteria reflect the ability of the image descriptor to perform visual localization successfully, thus allowing a measure of quality control on the localization output.
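One plausible reading of the subspace comparison, sketched with NumPy: take the top-k principal components of each descriptor set and compute the principal angles between the two subspaces. The paper's exact definitions of θmin and Stotal may differ; k, the aggregate score, and the toy data are assumptions.

```python
import numpy as np

def subspace_similarity(desc_a, desc_b, k=2):
    """Compare two descriptor sets via the principal angles between the
    subspaces spanned by their top-k principal components."""
    def principal_basis(x, k):
        x = x - x.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        return vt[:k].T                    # (dim, k) orthonormal basis

    qa, qb = principal_basis(desc_a, k), principal_basis(desc_b, k)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.linalg.svd(qa.T @ qb, compute_uv=False)
    theta = np.arccos(np.clip(cosines, -1.0, 1.0))
    return theta.min(), float(np.sum(np.cos(theta) ** 2))

# Identical descriptor sets should be maximally similar.
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 5))
theta_min, s_total = subspace_similarity(a, a.copy())
```

The introspective appeal is visible in the sketch: nothing beyond the descriptors themselves enters the computation.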

Place, publisher, year, edition, pages
IEEE, 2019
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79686 (URN); 10.1109/ECMR.2019.8870962 (DOI); 000558081900057 (); 2-s2.0-85074423644 (Scopus ID); 978-1-7281-3605-9 (ISBN)
Conference
ECMR 2019: 9th European Conference on Mobile Robots, Prague, Czech Republic, September 4-6, 2019
Research funder
Swedish Research Council, 2018-03807
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2020-09-16. Bibliographically approved.
Adolfsson, D., Lowry, S. & Andreasson, H. (2018). Improving Localisation Accuracy using Submaps in warehouses. Paper presented at IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018.
2018 (English). Conference paper, oral presentation with published abstract (Other academic)
Abstract [en]

This paper presents a method for localisation in hybrid metric-topological maps built using only local information, that is, only measurements captured by the robot when it was in a nearby location. The motivation is that observations are typically range- and viewpoint-dependent, and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects a submap based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving localisation and increasing performance by up to 40%.

HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71844 (URN)
Conference
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018
Projects
ILIAD
Available from: 2019-01-28. Created: 2019-01-28. Last updated: 2025-02-09. Bibliographically approved.
Lowry, S. & Andreasson, H. (2018). Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments. IEEE Robotics and Automation Letters, 3(2), 957-964
2018 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 3, no. 2, pp. 957-964. Journal article (Refereed). Published.
Abstract [en]

This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
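A minimal sketch of the VLAD-aggregation-plus-binarisation idea (generic, not the paper's exact pipeline; the centroids and descriptors below are toy values):

```python
import numpy as np

def vlad(descriptors, centroids):
    """Aggregate local descriptors into a VLAD vector: sum the residuals to
    each descriptor's nearest centroid, then L2-normalise."""
    d = np.linalg.norm(descriptors[:, None] - centroids[None], axis=-1)
    assign = d.argmin(axis=1)
    v = np.zeros_like(centroids)
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centroids[c]
    v = v.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def binarise(v):
    """Compact binary signature: keep only the sign of each component."""
    return (v > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

# Toy vocabulary of 2 centroids and 2 local descriptors in 2-D.
cents = np.array([[0.0, 0.0], [10.0, 10.0]])
descs = np.array([[1.0, 1.0], [9.0, 9.0]])
sig = binarise(vlad(descs, cents))
```

Comparing such signatures reduces place matching to cheap Hamming-distance tests, which is where the memory and speed savings described in the abstract come from.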

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visual-based navigation, recognition, localization
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64652 (URN); 10.1109/LRA.2018.2793308 (DOI); 000424646100015 (); 2-s2.0-85063309880 (Scopus ID)
Note
Funding agency: Semantic Robots Research Profile - Swedish Knowledge Foundation
Available from: 2018-01-30. Created: 2018-01-30. Last updated: 2025-02-07. Bibliographically approved.
Lowry, S. & Andreasson, H. (2018). LOGOS: Local geometric support for high-outlier spatial verification. Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 7262-7269). IEEE Computer Society
2018 (English). Conference paper, published paper (Refereed)
Abstract [en]

This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.
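A generic sketch of scale-and-orientation consistency voting, the family of cues LOGOS draws on. This Hough-style binning is illustrative, not the published algorithm, and the bin widths and toy matches are arbitrary:

```python
import numpy as np

def consistency_vote(matches, angle_bin=0.26, scale_bin=0.5):
    """Vote matches into (orientation-difference, log-scale-ratio) bins;
    the dominant bin gives the geometrically consistent (likely inlier) set.
    Each match is (angle_a, scale_a, angle_b, scale_b) for a feature pair."""
    votes = {}
    for i, (aa, sa, ab, sb) in enumerate(matches):
        da = (ab - aa) % (2 * np.pi)           # relative rotation
        ds = np.log2(sb / sa)                  # relative (log) scale
        key = (int(da // angle_bin), int(ds // scale_bin))
        votes.setdefault(key, []).append(i)
    return max(votes.values(), key=len)

# Three matches agree on a ~0.1 rad rotation at unchanged scale;
# the last two are outliers with inconsistent rotation and scale.
matches = [(0.0, 1.0, 0.1, 1.0), (0.5, 1.0, 0.6, 1.0), (1.0, 2.0, 1.1, 2.0),
           (0.0, 1.0, 2.0, 4.0), (0.3, 1.0, 5.0, 0.5)]
inliers = consistency_vote(matches)
```

Unlike RANSAC, such voting needs no repeated model fitting, which is why neighbourhood-geometry cues stay cheap even when most matches are outliers.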

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
HSV category
Identifiers
urn:nbn:se:oru:diva-68446 (URN); 000446394505077 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Note
Funding agency: Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS)
Available from: 2018-08-13. Created: 2018-08-13. Last updated: 2018-10-22. Bibliographically approved.
Organisations
Identifiers
ORCID iD: orcid.org/0000-0003-3788-499X