Örebro University Publications
Publications (10 of 15)
Norinder, U. & Lowry, S. (2023). Predicting Larch Casebearer damage with confidence using Yolo network models and conformal prediction. Remote Sensing Letters, 14(10), 1023-1035
Predicting Larch Casebearer damage with confidence using Yolo network models and conformal prediction
2023 (English) In: Remote Sensing Letters, ISSN 2150-704X, E-ISSN 2150-7058, Vol. 14, no 10, p. 1023-1035. Article in journal (Refereed), Published
Abstract [en]

This investigation shows that successful forecasting models for monitoring forest health status with respect to Larch Casebearer damage can be derived by combining a confidence predictor framework (conformal prediction) with a deep learning architecture (Yolo v5). A confidence predictor framework can predict the types of disease used to develop the model and can also indicate new, unseen types or degrees of disease. At the same time, the user of the models is provided with reliable predictions and a well-established applicability domain describing where such reliable predictions can and cannot be expected. Furthermore, the framework gracefully handles class imbalances without explicit over- or under-sampling or category weighting, which may be of crucial importance for highly imbalanced datasets. The present approach also indicates when insufficient information has been provided as input to the model at the level of accuracy (reliability) needed by the user to make subsequent decisions based on the model predictions.
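The confidence-predictor idea can be illustrated with a minimal split-conformal sketch for a generic classifier (this is not the paper's Yolo v5 pipeline; the score format and numbers are illustrative assumptions):

```python
import numpy as np

def conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split-conformal prediction sets for a generic classifier.

    cal_scores:  (n, k) class scores for n calibration images
    cal_labels:  (n,) true class indices for the calibration images
    test_scores: (m, k) class scores for m new images
    Returns one prediction set per test image; an empty set flags an
    input unlike anything seen during calibration.
    """
    n = len(cal_labels)
    # Nonconformity = 1 - score assigned to the true class.
    nc = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Conservative (1 - alpha) quantile of the calibration nonconformity.
    rank = min(n - 1, int(np.ceil((n + 1) * (1.0 - alpha))) - 1)
    q = np.sort(nc)[rank]
    # A class enters the set when its nonconformity is within the quantile.
    return [set(np.where(1.0 - s <= q)[0]) for s in test_scores]
```

An ambiguous input then yields an empty (or multi-class) set rather than a confident wrong label, which is the "indication of new, unseen types" behaviour described above.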

Place, publisher, year, edition, pages
Taylor & Francis, 2023
Keywords
Yolo network, Larch Casebearer moth, conformal prediction, forest health, tree damage
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-108845 (URN)
10.1080/2150704X.2023.2258460 (DOI)
001071044000001 ()
2-s2.0-85171885925 (Scopus ID)
Funder
Swedish Research Council, 2018-03807
Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2024-01-16. Bibliographically approved
Kurtser, P. & Lowry, S. (2023). RGB-D datasets for robotic perception in site-specific agricultural operations: A survey. Computers and Electronics in Agriculture, 212, Article ID 108035.
RGB-D datasets for robotic perception in site-specific agricultural operations: A survey
2023 (English) In: Computers and Electronics in Agriculture, ISSN 0168-1699, E-ISSN 1872-7107, Vol. 212, article id 108035. Article, review/survey (Refereed), Published
Abstract [en]

Fusing color (RGB) images and range or depth (D) data in the form of RGB-D or multi-sensory setups is a relatively new but rapidly growing modality for many agricultural tasks. RGB-D data have the potential to provide valuable information for many agricultural tasks that rely on perception, but collecting appropriate data and suitable ground truth information can be challenging and labor-intensive, and high-quality publicly available datasets are rare. This paper presents a survey of the existing RGB-D datasets available for agricultural robotics and summarizes key trends and challenges in this research field. It evaluates the relative advantages of the commonly used sensors and how the hardware can affect the characteristics of the data collected. It also analyzes the role of RGB-D data in the most common vision-based machine learning tasks applied to agricultural robotic operations (visual recognition, object detection, and semantic segmentation), and compares and contrasts methods that utilize 2-D and 3-D perceptual data.
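As background on the RGB-D modality the survey covers: a depth image plus pinhole intrinsics is all that is needed to back-project to a 3-D point cloud, which an aligned RGB image can then color. A minimal sketch of the standard construction (the intrinsic values below are illustrative, not tied to any sensor in the survey):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to an (H, W, 3) point cloud
    using the pinhole camera model: x = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```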

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
3D perception, Color point clouds, Datasets, Computer vision, Agricultural robotics
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-108413 (URN)
10.1016/j.compag.2023.108035 (DOI)
001059437100001 ()
2-s2.0-85172469543 (Scopus ID)
Available from: 2023-09-26. Created: 2023-09-26. Last updated: 2023-12-08. Bibliographically approved
Andreasson, H., Larsson, J. & Lowry, S. (2022). A Local Planner for Accurate Positioning for a Multiple Steer-and-Drive Unit Vehicle Using Non-Linear Optimization. Sensors, 22(7), Article ID 2588.
A Local Planner for Accurate Positioning for a Multiple Steer-and-Drive Unit Vehicle Using Non-Linear Optimization
2022 (English) In: Sensors, E-ISSN 1424-8220, Vol. 22, no 7, article id 2588. Article in journal (Refereed), Published
Abstract [en]

This paper presents a local planning approach targeted at pseudo-omnidirectional vehicles: that is, vehicles that can drive sideways and rotate on the spot. This local planner, MSDU, is based on optimal control and formulates a non-linear optimization problem that exploits the omni-motion capabilities of the vehicle to drive it to the goal in a smooth and efficient manner while avoiding obstacles and singularities. MSDU is designed for a real mobile manipulation platform where one key requirement is the capability to drive in narrow and confined areas. The real-world evaluations show that MSDU planned paths that were smoother and more accurate than those of a comparable local path planner, Timed Elastic Band (TEB), with a mean (translational, angular) error for MSDU of (0.0028 m, 0.0010 rad) compared to (0.0033 m, 0.0038 rad) for TEB. MSDU also generated paths that were consistently shorter than TEB's, with a mean (translational, angular) distance traveled of (0.6026 m, 1.6130 rad) for MSDU compared to (0.7346 m, 3.7598 rad) for TEB.
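Optimization-based local planning in general (not the MSDU formulation itself, which handles steer-and-drive kinematics, obstacles, and singularities) can be illustrated by a toy gradient-descent smoother that minimizes the sum of squared segment lengths with fixed endpoints:

```python
import numpy as np

def smooth_path(path, iters=500, step=0.2):
    """Toy optimization-based path smoother: gradient descent on
    E = sum_i ||p[i+1] - p[i]||^2 with the endpoints held fixed.
    The minimizer places the interior points on the straight line
    between the endpoints."""
    p = path.astype(float).copy()
    for _ in range(iters):
        # dE/dp_i = 2 * (2*p_i - p_{i-1} - p_{i+1}) for interior points
        grad = 2.0 * (2.0 * p[1:-1] - p[:-2] - p[2:])
        p[1:-1] -= step * grad
    return p
```

A real planner replaces this quadratic cost with the vehicle's kinematic constraints and obstacle terms, which is what makes the problem non-linear.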

Place, publisher, year, edition, pages
MDPI, 2022
Keywords
local planning, optimal control, obstacle avoidance
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-98510 (URN)
10.3390/s22072588 (DOI)
000781087300001 ()
35408204 (PubMedID)
2-s2.0-85127034496 (Scopus ID)
Funder
Swedish Research Council Formas, 2019-02264
Available from: 2022-04-07. Created: 2022-04-07. Last updated: 2022-04-20. Bibliographically approved
Kucner, T. P., Luperto, M., Lowry, S., Magnusson, M. & Lilienthal, A. (2021). Robust Frequency-Based Structure Extraction. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2021), Xi'an, China, May 30 - June 5, 2021 (pp. 1715-1721). IEEE
Robust Frequency-Based Structure Extraction
2021 (English) In: 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, p. 1715-1721. Conference paper, Published paper (Refereed)
Abstract [en]

State-of-the-art mapping algorithms can produce high-quality maps. However, they are still vulnerable to clutter and outliers, which can degrade map quality and consequently hinder both the performance of a robot and further map processing for semantic understanding of the environment. This paper presents ROSE, a method for building-level structure detection in robotic maps. ROSE exploits the fact that indoor environments usually contain walls and straight-line elements along a limited set of orientations, so metric maps often have a set of dominant directions. ROSE extracts these directions and uses this information to segment the map into structure and clutter by filtering the map in the frequency domain (an approach substantially underutilised in mapping applications). Removing the clutter in this way makes wall detection (e.g. using the Hough transform) more robust. Our experiments demonstrate that (1) applying ROSE for decluttering can substantially improve structural feature retrieval (e.g. walls) in cluttered environments, (2) ROSE can successfully distinguish between clutter and structure in the map even with a substantial amount of noise, and (3) ROSE can numerically assess the amount of structure in the map.
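The frequency-domain intuition can be sketched as follows (a simplified spectral-thresholding illustration of the general idea, not the actual ROSE algorithm; the keep-ratio rule is an assumption). Straight walls concentrate spectral energy along a few directions, while scattered clutter spreads it thinly:

```python
import numpy as np

def declutter(grid, keep_ratio=0.5):
    """Keep only the strongest Fourier components of an occupancy grid.

    Wall-like structure survives the magnitude threshold; isolated
    clutter cells, whose energy is spread over the whole spectrum,
    are suppressed.
    """
    spec = np.fft.fft2(grid.astype(float))
    mag = np.abs(spec)
    spec[mag < keep_ratio * mag.max()] = 0.0   # drop weak components
    return np.real(np.fft.ifft2(spec))
```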

Place, publisher, year, edition, pages
IEEE, 2021
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
Keywords
Mapping, semantic understanding, indoor environments
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-97000 (URN)
10.1109/ICRA48506.2021.9561381 (DOI)
000765738801089 ()
2-s2.0-85118997794 (Scopus ID)
9781728190778 (ISBN)
9781728190785 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2021), Xi'an, China, May 30 - June 5, 2021
Projects
ILIAD
Funder
EU, Horizon 2020, 732737
Available from: 2022-01-31. Created: 2022-01-31. Last updated: 2022-04-25. Bibliographically approved
Adolfsson, D., Lowry, S., Magnusson, M., Lilienthal, A. J. & Andreasson, H. (2019). A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality. In: 2019 European Conference on Mobile Robots (ECMR). Paper presented at European Conference on Mobile Robotics (ECMR), Prague, Czech Republic, September 4-6, 2019. IEEE
A Submap per Perspective: Selecting Subsets for SuPer Mapping that Afford Superior Localization Quality
2019 (English) In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

This paper targets high-precision robot localization. We address a general problem for voxel-based map representations: the expressiveness of the map is fundamentally limited by its resolution, since integrating measurements taken from different perspectives introduces imprecisions and thus reduces localization accuracy. We propose SuPer maps, which contain one Submap per Perspective representing a particular view of the environment. For localization, a robot then selects the submap that best explains the environment from its perspective. We propose SuPer mapping as an offline refinement step between initial SLAM and deploying autonomous robots for navigation. We evaluate the proposed method on simulated and real-world data representing an important use case: an industrial scenario with high accuracy requirements in a repetitive environment. Our results demonstrate significantly improved localization accuracy, up to 46% better compared to localization in global maps, and up to 25% better compared to alternative submapping approaches.
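The "select the submap that best explains the observation" step can be sketched with a toy nearest-cell overlap score (the paper's actual map representation and fitness measure differ; the 0.5 m radius here is an illustrative assumption):

```python
import numpy as np

def select_submap(submaps, scan):
    """Pick the index of the submap whose points best overlap the
    current scan (toy score: fraction of scan points within 0.5 m
    of some submap point)."""
    def score(submap):
        # pairwise distances: (n_scan, n_submap)
        d = np.linalg.norm(scan[:, None, :] - submap[None, :, :], axis=-1)
        return float(np.mean(d.min(axis=1) < 0.5))
    return int(np.argmax([score(m) for m in submaps]))
```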

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79739 (URN)
10.1109/ECMR.2019.8870941 (DOI)
000558081900037 ()
2-s2.0-85074443858 (Scopus ID)
978-1-7281-3605-9 (ISBN)
Conference
European Conference on Mobile Robotics (ECMR), Prague, Czech Republic, September 4-6, 2019
Funder
EU, Horizon 2020, 732737; Knowledge Foundation
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2024-01-02. Bibliographically approved
Lowry, S. (2019). Similarity criteria: evaluating perceptual change for visual localization. In: 2019 European Conference on Mobile Robots (ECMR). Paper presented at ECMR 2019: 9th European Conference on Mobile Robots, Prague, Czech Republic, September 4-6, 2019. IEEE, Article ID 8870962.
Similarity criteria: evaluating perceptual change for visual localization
2019 (English) In: 2019 European Conference on Mobile Robots (ECMR), IEEE, 2019, article id 8870962. Conference paper, Published paper (Refereed)
Abstract [en]

Visual localization systems may operate in environments that exhibit considerable perceptual change. This paper proposes a method of evaluating the degree of appearance change using similarity criteria based on comparing the subspaces spanned by the principal components of the observed image descriptors. We propose two criteria: θmin measures the minimum angle between the subspaces and Stotal measures the total similarity between the subspaces. These criteria are introspective: they evaluate the performance of the image descriptor using nothing more than the image descriptor itself. Furthermore, we demonstrate that these similarity criteria reflect the ability of the image descriptor to perform visual localization successfully, thus allowing a measure of quality control on the localization output.
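The subspace comparison can be sketched with principal angles (the standard construction: cosines of the principal angles between two orthonormal bases are the singular values of Qa.T @ Qb; the exact definitions of θmin and Stotal in the paper may differ, and treating Stotal as the sum of squared cosines is an assumption here):

```python
import numpy as np

def subspace_similarity(desc_a, desc_b, k=2):
    """Compare the top-k principal-component subspaces of two sets of
    image descriptors, each given as an (n_images, dim) matrix.
    Returns (theta_min, s_total): the smallest principal angle and the
    sum of squared principal-angle cosines."""
    def principal_subspace(X, k):
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return vt[:k].T                      # (dim, k) orthonormal basis

    qa = principal_subspace(desc_a, k)
    qb = principal_subspace(desc_b, k)
    cosines = np.linalg.svd(qa.T @ qb, compute_uv=False)
    theta_min = np.arccos(np.clip(cosines.max(), -1.0, 1.0))
    s_total = float(np.sum(cosines ** 2))
    return theta_min, s_total
```

Identical descriptor sets give θmin = 0 and Stotal = k; descriptor sets with no shared appearance structure give θmin = π/2 and Stotal = 0.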

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-79686 (URN)
10.1109/ECMR.2019.8870962 (DOI)
000558081900057 ()
2-s2.0-85074423644 (Scopus ID)
978-1-7281-3605-9 (ISBN)
Conference
ECMR 2019: 9th European Conference on Mobile Robots, Prague, Czech Republic, September 4-6, 2019
Funder
Swedish Research Council, 2018-03807
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2020-09-16. Bibliographically approved
Adolfsson, D., Lowry, S. & Andreasson, H. (2018). Improving Localisation Accuracy using Submaps in warehouses. Paper presented at IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018.
Improving Localisation Accuracy using Submaps in warehouses
2018 (English) Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

This paper presents a method for localisation in hybrid metric-topological maps built using only local information; that is, only measurements that were captured by the robot when it was in a nearby location. The motivation is that observations are typically range- and viewpoint-dependent, and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects submaps based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving localisation and increasing performance by up to 40%.

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-71844 (URN)
Conference
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Workshop on Robotics for Logistics in Warehouses and Environments Shared with Humans, Madrid, Spain, October 5, 2018
Projects
Iliad
Available from: 2019-01-28. Created: 2019-01-28. Last updated: 2024-01-02. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments. IEEE Robotics and Automation Letters, 3(2), 957-964
Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments
2018 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 3, no 2, p. 957-964. Article in journal (Refereed), Published
Abstract [en]

This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
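The compact-signature idea in general can be sketched with sign-of-random-projection hashing, which likewise reduces a high-dimensional descriptor to a fixed bit budget matched by Hamming distance (this is an illustration of binary compression, not the paper's VLAD-based pipeline; sizes below are illustrative):

```python
import numpy as np

def binary_signature(desc, projection):
    """Hash a high-dimensional descriptor to a bit vector: one bit per
    random projection, set when the projected value is positive."""
    return (projection @ desc > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return int(np.count_nonzero(a != b))
```

With 256 projections the signature is 256 bits (32 bytes) per image, and matching reduces to counting differing bits.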

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visual-based navigation, recognition, localization
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64652 (URN)
10.1109/LRA.2018.2793308 (DOI)
000424646100015 ()
2-s2.0-85063309880 (Scopus ID)
Note
Funding Agency: Semantic Robots Research Profile - Swedish Knowledge Foundation
Available from: 2018-01-30. Created: 2018-01-30. Last updated: 2024-01-17. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). LOGOS: Local geometric support for high-outlier spatial verification. Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 7262-7269). IEEE Computer Society
LOGOS: Local geometric support for high-outlier spatial verification
2018 (English) Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.
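The use of orientation information for inlier detection can be sketched with a simple consistency vote (a simplified stand-in for LOGOS's local geometric support: real LOGOS also uses scale and local feature neighbourhoods; the bin count and tolerance here are assumptions):

```python
import numpy as np

def orientation_consistent_inliers(angles_a, angles_b, tol=0.2):
    """Mark feature matches whose orientation change agrees with the
    dominant orientation change across all matches.

    angles_a, angles_b: keypoint orientations (radians) of matched
    features in the two images. Returns a boolean inlier mask.
    """
    # Per-match orientation change, wrapped to (-pi, pi].
    delta = np.angle(np.exp(1j * (angles_b - angles_a)))
    # Dominant change via a coarse histogram vote.
    hist, edges = np.histogram(delta, bins=36, range=(-np.pi, np.pi))
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    # Inliers lie within tol of the dominant change (wrapped difference).
    return np.abs(np.angle(np.exp(1j * (delta - mode)))) < tol
```

Unlike RANSAC, a vote of this kind needs no random sampling, which is what makes support-based schemes attractive when most matches are outliers.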

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68446 (URN)
000446394505077 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Note
Funding Agency: Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS)
Available from: 2018-08-13. Created: 2018-08-13. Last updated: 2018-10-22. Bibliographically approved
Lowry, S. & Milford, M. (2016). Supervised and Unsupervised Linear Learning Techniques for Visual Place Recognition in Changing Environments. IEEE Transactions on Robotics, 32(3), 600-613
Supervised and Unsupervised Linear Learning Techniques for Visual Place Recognition in Changing Environments
2016 (English) In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no 3, p. 600-613. Article in journal (Refereed), Published
Abstract [en]

This paper investigates the application of linear learning techniques to the place recognition problem. We present two learning methods, a supervised change prediction technique based on linear regression and an unsupervised change removal technique based on principal component analysis, and investigate how the performance of each is affected by the choice of training data. We show that the change prediction technique presented here succeeds only if it is provided with appropriate and adequate training data, which can be challenging for a mobile robotic system operating in an uncontrolled environment. In contrast, change removal can improve place recognition performance even when trained with as few as 100 samples. This paper shows that change removal can be combined with a number of different image descriptors and can improve performance across a range of different appearance conditions.
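The unsupervised change-removal step can be sketched as generic PCA removal (which components capture condition-dependent appearance, and how many to remove, is the paper's empirical question; the count below is illustrative):

```python
import numpy as np

def change_removal(train_desc, test_desc, n_remove=2):
    """Subtract the top principal components of a set of training
    descriptors (assumed to capture condition-dependent appearance,
    e.g. day vs. night) from the test descriptors before matching.

    train_desc, test_desc: (n_images, dim) descriptor matrices.
    """
    mean = train_desc.mean(axis=0)
    _, _, vt = np.linalg.svd(train_desc - mean, full_matrices=False)
    remove = vt[:n_remove]                       # directions to discard
    centered = test_desc - mean
    # Project out the discarded directions.
    return centered - (centered @ remove.T) @ remove
```

After this step the cleaned descriptors carry no component along the removed directions, so matching is driven by the remaining (ideally place-specific) variation.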

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Changing environments, learning about change, linear regression, principal component analysis (PCA), visual place recognition
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-50431 (URN)
10.1109/TRO.2016.2545711 (DOI)
000378528900009 ()
2-s2.0-84968764258 (Scopus ID)
Note
Funding Agencies: Australian Research Council FT140101229; Microsoft Research Faculty Fellowship
Available from: 2016-05-26. Created: 2016-05-26. Last updated: 2018-01-10. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-3788-499X