Örebro University Publications (oru.se)

1 - 10 of 10
  • 1.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, Örebro, Sweden; Computer Engineering Department, University of Baghdad, Baghdad, Iraq.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments (2023). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no. 2, pp. 1476-1495. Article in journal (Refereed)
    Abstract [en]

    This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach, conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias and improves on our previous state of the art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.

    Download full text (pdf)
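
    A concrete, purely illustrative sketch of the registration step described above (not the authors' CFEAR implementation): a robust point-to-plane style cost over matched oriented surface points in 2-D, with a Huber-type weighting standing in for the robust loss the paper evaluates. The function names, the 2-D simplification, and all parameter values are assumptions made for this sketch.

        import numpy as np

        def huber_weight(r, k=0.1):
            # IRLS weight of the Huber loss: 1 in the quadratic region, k/|r| outside
            a = np.abs(r)
            return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

        def robust_p2l_cost(pose, src_pts, tgt_pts, tgt_normals, k=0.1):
            # pose = (x, y, theta): rigid transform applied to the current sweep.
            # src_pts/tgt_pts are (N, 2) matched points, one row per correspondence,
            # so a target point may appear in several rows (one-to-many matching);
            # tgt_normals are (N, 2) unit normals of the target surface points.
            x, y, th = pose
            R = np.array([[np.cos(th), -np.sin(th)],
                          [np.sin(th),  np.cos(th)]])
            moved = src_pts @ R.T + np.array([x, y])
            r = np.einsum('ij,ij->i', moved - tgt_pts, tgt_normals)  # signed distances
            w = huber_weight(r, k)
            return np.sum(w * r ** 2)

    In practice such a cost would be minimized iteratively over the pose; the paper additionally handles motion compensation within a sweep and registration against a history of keyframes, which this sketch omits.
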
  • 2.
    Frese, Udo
    et al.
    University of Bremen.
    Larsson, Per
    NamaTec AB.
    Duckett, Tom
    Örebro University, Department of Technology.
    A multilevel relaxation algorithm for simultaneous localisation and mapping (2005). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 21, no. 2, pp. 196-207. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the problem of simultaneous localisation and mapping (SLAM) by a mobile robot. An incremental SLAM algorithm is introduced that is derived from multigrid methods used for solving partial differential equations. The approach improves on the performance of previous relaxation methods for robot mapping because it optimizes the map at multiple levels of resolution. The resulting algorithm has an update time that is linear in the number of estimated features for typical indoor environments, even when closing very large loops, and offers advantages in handling non-linearities compared to other SLAM algorithms. Experimental comparisons with alternative algorithms using two well-known data sets and mapping results on a real robot are also presented.
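
    For intuition about the relaxation primitive behind the algorithm, here is a minimal sketch (not the paper's implementation) of Gauss-Seidel sweeps on a linearized system A x = b, the building block that multigrid methods apply at each resolution level. The coarse-grid correction that gives the multilevel algorithm its loop-closing efficiency is omitted here.

        import numpy as np

        def gauss_seidel(A, b, x, sweeps=10):
            # Plain Gauss-Seidel relaxation: update each unknown in turn from the
            # current values of the others. Assumes a nonzero diagonal, as holds
            # for the information matrices arising in graph-based SLAM.
            n = len(b)
            for _ in range(sweeps):
                for i in range(n):
                    s = A[i] @ x - A[i, i] * x[i]
                    x[i] = (b[i] - s) / A[i, i]
            return x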

  • 3.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Li, Miao
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bekiroglu, Yasemin
    Department of Mechanical Engineering, School of Engineering, University of Birmingham, Birmingham, UK.
    Pokorny, Florian T.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Billard, Aude
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Kragic, Danica
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space: A unified framework for grasp planning and in-hand grasp adaptation (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 4, pp. 960-972. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile, and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage, and external disturbances. For this purpose we introduce the Hierarchical Fingertip Space as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.
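
    The grasp force adaptation mentioned in the abstract is done through impedance control; as a generic illustration (not the paper's controller, and with made-up gains), a Cartesian impedance law computes a fingertip force from a virtual spring-damper:

        import numpy as np

        def impedance_force(x, x_d, v, v_d, K, D):
            # Virtual spring-damper between actual (x, v) and desired (x_d, v_d)
            # fingertip states; raising K stiffens the grasp.
            return K @ (x_d - x) + D @ (v_d - v)

        # Illustrative values only: stiffer along the gravity axis, e.g. to
        # resist an increase in object weight.
        K = np.diag([200.0, 200.0, 400.0])   # stiffness, N/m
        D = np.diag([20.0, 20.0, 40.0])      # damping, N*s/m
        f = impedance_force(np.zeros(3), np.array([0.0, 0.0, 0.002]),
                            np.zeros(3), np.zeros(3), K, D)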

  • 4.
    Liao, Qianfang
    et al.
    Örebro University, School of Science and Technology.
    Sun, Da
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    FuzzyPSReg: Strategies of Fuzzy Cluster-based Point Set Registration (2022). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 38, no. 4, pp. 2632-2651. Article in journal (Refereed)
    Abstract [en]

    This paper studies fuzzy cluster-based point set registration (FuzzyPSReg). First, we propose a new metric based on Gustafson-Kessel (GK) fuzzy clustering to measure the alignment of two point clouds. Unlike the metric based on fuzzy c-means (FCM) clustering in our previous work, the GK-based metric includes orientation properties of the point clouds, thereby providing more information for registration. We then develop the registration quality assessment of the GK-based metric, which is more sensitive to small misalignments than that of the FCM-based metric. Next, by effectively combining the two metrics, we design two FuzzyPSReg strategies with global optimization: (i) FuzzyPSReg-SS, which extends our previous work and aligns two similar-sized point clouds with greatly improved efficiency; and (ii) FuzzyPSReg-O2S, which aligns two point clouds with a relatively large difference in size and can be used to estimate the pose of an object in a scene. In the experiments, we test the proposed method on different point clouds and compare it with state-of-the-art registration approaches. The results demonstrate the advantages and effectiveness of our method.

    Download full text (pdf)
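
    To make the cluster-based alignment idea concrete, here is a minimal sketch of an FCM-style metric: the fuzzy c-means objective of a (transformed) point cloud evaluated against fixed cluster centers of the reference cloud, so that a lower value indicates better alignment. The GK-based metric proposed in the paper additionally models per-cluster orientation through covariance matrices, which this simplified sketch omits; names and defaults are illustrative.

        import numpy as np

        def fcm_alignment_metric(pts, centers, m=2.0, eps=1e-12):
            # Squared distances from each point to each reference cluster center
            d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps
            # Closed-form FCM memberships for fixed centers
            u = d2 ** (-1.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)
            # FCM objective: membership-weighted sum of squared distances
            return float((u ** m * d2).sum())
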
  • 5.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Milford, Michael
    Queensland University of Technology, Brisbane, Australia.
    Supervised and Unsupervised Linear Learning Techniques for Visual Place Recognition in Changing Environments (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 3, pp. 600-613. Article in journal (Refereed)
    Abstract [en]

    This paper investigates the application of linear learning techniques to the place recognition problem. We present two learning methods, a supervised change prediction technique based on linear regression and an unsupervised change removal technique based on principal component analysis, and investigate how the performance of each is affected by the choice of training data. We show that the change prediction technique presented here succeeds only if it is provided with appropriate and adequate training data, which can be challenging for a mobile robotic system operating in an uncontrolled environment. In contrast, change removal can improve place recognition performance even when trained with as few as 100 samples. This paper shows that change removal can be combined with a number of different image descriptors and can improve performance across a range of different appearance conditions.
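
    A minimal sketch of the unsupervised change removal technique: fit PCA on a matrix of image descriptors and project out the top principal components, on the assumption that the strongest directions of variation capture appearance change (lighting, weather) rather than place identity. The descriptor type and the number of removed components are placeholders; the paper studies how such choices affect performance.

        import numpy as np

        def change_removal(descriptors, n_remove=10):
            # descriptors: (num_images, dim) matrix, one descriptor per image
            X = descriptors - descriptors.mean(axis=0)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = components
            top = Vt[:n_remove]
            return X - (X @ top.T) @ top  # descriptors with top components removed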

  • 6.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Sünderhauf, Niko
    The Australian Centre for Robotic Vision, School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia.
    Newman, Paul
    The Mobile Robotics Group, Department of Engineering Science, University of Oxford, Oxford, U.K.
    Leonard, John
    The Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, USA.
    Cox, David
    The Department of Molecular and Cellular Biology, the School of Engineering and Applied Science, and the Center for Brain Science, Harvard University, Cambridge, USA.
    Corke, Peter
    The Australian Centre for Robotic Vision, School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia.
    Milford, Michael
    The Australian Centre for Robotic Vision, School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia.
    Visual Place Recognition: A Survey (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 1, pp. 1-19. Article in journal (Refereed)
    Abstract [en]

    Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines - particularly recognition in computer vision and animal navigation in neuroscience - have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition - the role of place recognition in the animal kingdom, how a "place" is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description.

  • 7.
    Mannucci, Anna
    et al.
    Örebro University, School of Science and Technology. Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    Caporale, Danilo
    Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    Pallottino, Lucia
    Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    On Null Space-Based Inverse Kinematics Techniques for Fleet Management: Toward Time-Varying Task Activation (2021). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 37, no. 1, pp. 257-274. Article in journal (Refereed)
    Abstract [en]

    Multirobot fleets play an important role in industrial logistics, surveillance, and exploration applications. A wide literature exists on the topic, covering both reactive (i.e., collision avoidance) and deliberative (i.e., motion planning) techniques. In this work, null space-based inverse kinematics (NSB-IK) methods are applied to the problem of fleet management. Several NSB-IK approaches from the literature are reviewed and compared with a reverse priority approach, which originated in manipulator control and is here applied to the considered problem for the first time. All NSB-IK approaches are described in a unified formalism, which makes it possible (i) to encode the properties of each controller into a set of seven main key features, and (ii) to study possible new control laws through a suitable choice of these parameters. Furthermore, motivated by the envisioned application scenario, we tackle the problem of task-switching activation. Leveraging the iCAT TPC technique of Simetti and Casalino (2016), we propose a method to obtain continuity of the control in the face of task and subtask activation or deactivation, by defining suitable damped projection operators. The proposed approaches are evaluated both formally and in simulation, and their performance is compared with standard methods in a specific multivehicle management case study.
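
    For readers unfamiliar with NSB-IK, the following sketch shows the classic two-task null-space composition with damped pseudoinverses; it illustrates the general technique, not the article's specific control laws or its time-varying task activation. In a fleet setting the primary task might, for example, encode collision avoidance and the secondary task formation keeping.

        import numpy as np

        def nsb_two_task(J1, dx1, J2, dx2, damping=1e-2):
            # q_dot = J1+ dx1 + (I - J1+ J1) J2+ dx2: the secondary task is
            # projected into the null space of the primary task so it cannot
            # disturb it. Damped pseudoinverses keep velocities bounded near
            # singularities.
            def dpinv(J):
                JJt = J @ J.T
                return J.T @ np.linalg.inv(JJt + damping ** 2 * np.eye(JJt.shape[0]))
            J1p = dpinv(J1)
            N1 = np.eye(J1.shape[1]) - J1p @ J1   # null-space projector of task 1
            return J1p @ dx1 + N1 @ dpinv(J2) @ dx2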

  • 8.
    Mannucci, Anna
    et al.
    Örebro University, School of Science and Technology. Research Center “E. Piaggio,” University of Pisa, Italy; Dipartimento di Ingegneria dell’Informazione, University of Pisa, Pisa, Italy.
    Pallottino, Lucia
    Research Center “E. Piaggio,” University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell’Informazione, University of Pisa, Pisa, Italy.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    On Provably Safe and Live Multirobot Coordination with Online Goal Posting (2021). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 37, no. 6, pp. 1973-1991. Article in journal (Refereed)
    Abstract [en]

    A standing challenge in multirobot systems is to realize safe and efficient motion planning and coordination methods that are capable of accounting for uncertainties and contingencies. The challenge is rendered harder by the fact that robots may be heterogeneous and that their plans may be posted asynchronously. Most existing approaches require constraints on the infrastructure or unrealistic assumptions on robot models. In this article, we propose a centralized, loosely coupled supervisory controller that overcomes these limitations. The approach responds to newly posed constraints and uncertainties during trajectory execution, ensuring at all times that planned robot trajectories remain kinodynamically feasible, that the fleet is in a safe state, and that there are no deadlocks or livelocks. This is achieved without the need for hand-coded rules, fixed robot priorities, or environment modification. We formally state all relevant properties of robot behavior in the most general terms possible, without assuming particular robot models or environments, and provide both formal and empirical proof that the proposed fleet control algorithms guarantee safety and liveness.
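
    As an illustration of the liveness property at stake (not the article's method, which guarantees deadlock-freedom by construction), a deadlock can be characterized as a cycle in a "wait-for" graph among robots:

        def has_deadlock(wait_for):
            # wait_for maps each robot to the set of robots it currently yields to;
            # a cycle in this graph means a deadlock. Standard DFS cycle detection.
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {r: WHITE for r in wait_for}

            def dfs(r):
                color[r] = GRAY
                for s in wait_for.get(r, ()):
                    if color.get(s) == GRAY:
                        return True  # back edge: cycle found
                    if color.get(s, WHITE) == WHITE and dfs(s):
                        return True
                color[r] = BLACK
                return False

            return any(dfs(r) for r in wait_for if color[r] == WHITE)

        # Example: r1 yields to r2 and r2 yields to r1 -> deadlock
        assert has_deadlock({'r1': {'r2'}, 'r2': {'r1'}})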

  • 9.
    Sun, Da
    et al.
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Single Master Bimanual Teleoperation System with Efficient Regulation (2020). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 36, no. 4, pp. 1022-1037. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a new single master bimanual teleoperation (SMBT) system with an efficient position, orientation, and force regulation strategy. Unlike many existing studies that solely support motion synchronization, the first contribution of this work is a solution for orientation regulation when several slave robots have differing motions; in other words, the orientation of a dual-arm robot is self-regulated. The second contribution allows a master with fewer degrees of freedom to control slaves with more degrees of freedom while the orientation of the slaves is self-regulated. The system further offers a novel force regulation scheme that enables the slave robots to maintain a smooth and balanced robot-environment interaction with proper force directions. Finally, the proposed approach provides adequate force feedback about the environment to the operator and assists the operator in identifying the slaves' different motion situations, ensuring that the forces from the slaves do not disrupt the operator's perception of the environment. To validate the proposed system, experiments are conducted on a platform consisting of two 7-degree-of-freedom (DoF) slave robots and one 3-DoF master haptic device. The experiments demonstrate good results in terms of position, orientation, and force regulation.

    Download full text (pdf)
  • 10.
    Sun, Da
    et al.
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Type-2 Fuzzy Model-based Movement Primitives for Imitation Learning (2022). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 38, no. 4, pp. 2462-2480. Article in journal (Refereed)
    Abstract [en]

    Imitation learning is an important direction in the area of robot skill learning. It provides a user-friendly and straightforward solution for transferring human demonstrations to robots. In this article, we integrate fuzzy theory into imitation learning to develop a novel method called Type-2 Fuzzy Model-based Movement Primitives (T2FMP). In this method, a group of data-driven type-2 fuzzy models describes the input-output relationships of the demonstrations. Based on the fuzzy models, T2FMP can efficiently reproduce the trajectory without high computational costs or cumbersome parameter settings. It also handles variation across demonstrations well and is robust to noise. In addition, we develop extensions that endow T2FMP with trajectory modulation and superposition to achieve real-time trajectory adaptation to various scenarios. Going beyond existing imitation learning methods, we further extend T2FMP to regulate the trajectory to avoid collisions in environments that are unstructured, non-convex, and sensed with noisy outliers. Several experiments are performed to validate the effectiveness of our method.

    Download full text (pdf)
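
    To convey the flavor of fuzzy model-based trajectory reproduction, here is a deliberately simplified type-1 TSK-style sketch that blends local linear models using Gaussian rule activations along a phase variable. T2FMP itself uses data-driven type-2 fuzzy models (memberships with uncertainty bounds) learned from demonstrations, so this stand-in only shows the blending idea; every parameter below is a placeholder.

        import numpy as np

        def fuzzy_primitive(t, centers, widths, lin_params):
            # Gaussian rule activations along the phase variable t in [0, 1]
            w = np.exp(-0.5 * ((t - centers) / widths) ** 2)
            w /= w.sum()
            # Each rule contributes a local linear model a*t + b
            local = lin_params[:, 0] * t + lin_params[:, 1]
            return float(w @ local)  # normalized weighted blend

        # Reproduce a 1-D trajectory by sweeping the phase variable
        centers = np.linspace(0.0, 1.0, 5)
        widths = np.full(5, 0.15)
        lin_params = np.random.default_rng(0).normal(size=(5, 2))  # learned in practice
        traj = [fuzzy_primitive(t, centers, widths, lin_params)
                for t in np.linspace(0.0, 1.0, 100)]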