Publications (10 of 51)
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373
2019 (English) In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed) Published
Abstract [en]

This paper first develops a novel force observer using Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Then, using the proposed force observer, a new bilateral teleoperation system is proposed that allows the slave industrial robot to be more compliant to the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms that rely heavily on knowing exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy with the support of machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a Universal Robots UR10 arm, a haptic device and an RGB-D sensor.
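The moving-horizon idea at the core of the observer can be illustrated in miniature: estimate an external quantity by minimizing the squared measurement error over a sliding window of recent samples. The sketch below is a deliberately simplified scalar version (a constant force with Gaussian noise, no T2FNN model, no robot dynamics); all names and values are illustrative, not the paper's implementation.

```python
import numpy as np

def mhe_force_estimate(residuals, horizon=10):
    """Estimate an external force from noisy torque residuals by minimizing
    the squared error over a sliding horizon window; for this scalar
    constant-force model the least-squares minimizer is the window mean."""
    residuals = np.asarray(residuals, dtype=float)
    estimates = []
    for k in range(len(residuals)):
        window = residuals[max(0, k - horizon + 1):k + 1]
        estimates.append(window.mean())
    return np.array(estimates)

rng = np.random.default_rng(0)
true_force = 5.0
measurements = true_force + rng.normal(0.0, 0.5, size=200)  # noisy observations
est = mhe_force_estimate(measurements, horizon=20)
print(est[-1])  # close to the true 5.0
```

The horizon length trades responsiveness against noise rejection, which is the same tension the learned observer in the paper addresses with a far richer model.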

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN)
10.1016/j.automatica.2019.04.033 (DOI)
000473380000041 ()
2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23 Created: 2019-05-23 Last updated: 2019-07-24. Bibliographically approved
Della Corte, B., Andreasson, H., Stoyanov, T. & Grisetti, G. (2019). Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation. IEEE Robotics and Automation Letters, 4(2), 902-909
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 902-909. Article in journal (Refereed) Published
Abstract [en]

The ability to maintain and continuously update the geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In combination with that, our pipeline monitors the evolution of the parameters during long-term operation to detect possible changes in the parameter set. Experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Calibration and Identification
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-72756 (URN)
10.1109/LRA.2019.2892992 (DOI)
000458182100012 ()
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS) 

Available from: 2019-02-25 Created: 2019-02-25 Last updated: 2019-02-25. Bibliographically approved
Canelhas, D. R., Stoyanov, T. & Lilienthal, A. J. (2018). A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018 (pp. 6337-6343). IEEE Computer Society
2018 (English) In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 6337-6343. Conference paper, Published paper (Refereed)
Abstract [en]

Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature we often find that authors use either trilinear interpolation or nearest neighbors, and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular, we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as may be expected, with voxels containing as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.
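The two endpoints the survey compares can be made concrete. Trilinear interpolation fetches all 8 corner voxels and blends them with weights from the fractional coordinates, while nearest-neighbour lookup fetches a single voxel. The sketch below is a generic illustration on a synthetic grid, not the paper's TSDF implementation; the grid contents and query point are invented for the example.

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a scalar voxel grid at continuous point p
    (fetches all 8 corner samples)."""
    x, y, z = p
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (1 - fx, fx)[dx] * (1 - fy, fy)[dy] * (1 - fz, fz)[dz]
                value += w * grid[x0 + dx, y0 + dy, z0 + dz]
    return value

def nearest(grid, p):
    """Nearest-neighbour lookup (fetches a single sample)."""
    i, j, k = (int(round(v)) for v in p)
    return grid[i, j, k]

# A grid holding f(x,y,z) = x + 2y + 3z: trilinear interpolation reproduces
# a linear field exactly, nearest-neighbour does not.
idx = np.indices((4, 4, 4)).astype(float)
grid = idx[0] + 2 * idx[1] + 3 * idx[2]
print(trilinear(grid, (1.5, 2.25, 0.75)))   # 8.25, the exact field value
print(nearest(grid, (1.5, 2.25, 0.75)))
```

The intermediate options studied in the paper sit between these extremes, fetching 2 to 7 of the corner samples per query.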

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Keywords
Voxels, Compression, Interpolation, TSDF, Visual Odometry
National Category
Robotics; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-67850 (URN)
000446394504116 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018
Projects
H2020 ILIAD, H2020 Roblog
Funder
EU, Horizon 2020, 732737
Available from: 2018-07-11 Created: 2018-07-11 Last updated: 2018-10-22. Bibliographically approved
Stoyanov, T., Krug, R., Kiselev, A., Sun, D. & Loutfi, A. (2018). Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 6640-6645). IEEE Press
2018 (English) In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645. Conference paper, Published paper (Refereed)
Abstract [en]

This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint limit and obstacle avoidance or automatic gripper alignment. Thereby, some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; the second is a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed approach of assistive control was found to result in higher grasp success rates and shorter trajectories than achieved through manual control, without incurring additional cognitive load on the operator.
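The flavour of one assistive task, joint-limit avoidance blended with the operator's command, can be sketched with a simple repulsive-velocity term. This is a much cruder mechanism than the paper's hierarchical SoT controller and is offered only as an illustration; the joint values, margin, and gain are invented for the example.

```python
import numpy as np

def assisted_command(q, dq_cmd, q_min, q_max, margin=0.1, gain=5.0):
    """Blend the operator's joint-velocity command with a joint-limit
    avoidance term: within `margin` of a limit, a repulsive velocity is
    added that counteracts motion toward that limit."""
    lower_gap = q - q_min
    upper_gap = q_max - q
    push = gain * (np.maximum(0.0, margin - lower_gap)
                   - np.maximum(0.0, margin - upper_gap))
    return np.asarray(dq_cmd, float) + push

q = np.array([0.05, 1.0])      # joint 1 is 0.05 rad from its lower limit
cmd = np.array([-0.2, 0.3])    # operator pushes joint 1 toward that limit
out = assisted_command(q, cmd, q_min=np.zeros(2), q_max=np.full(2, 2.0))
print(out)  # joint 1's command is counteracted; joint 2's passes through
```

In the SoT formulation the avoidance task instead sits at a higher priority level and is satisfied exactly, with the operator command resolved in its null space.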

Place, publisher, year, edition, pages
IEEE Press, 2018
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71310 (URN)
10.1109/IROS.2018.8594457 (DOI)
000458872706014 ()
978-1-5386-8094-0 (ISBN)
978-1-5386-8095-7 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Funder
Knowledge Foundation, Swedish Foundation for Strategic Research
Available from: 2019-01-09 Created: 2019-01-09 Last updated: 2019-03-13. Bibliographically approved
Lundell, J., Krug, R., Schaffernicht, E., Stoyanov, T. & Kyrki, V. (2018). Safe-To-Explore State Spaces: Ensuring Safe Exploration in Policy Search with Hierarchical Task Optimization. In: Asfour, T (Ed.), IEEE-RAS Conference on Humanoid Robots. Paper presented at IEEE-RAS 18th Conference on Humanoid Robots (Humanoids 2018), Beijing, China, November 6-9, 2018 (pp. 132-138). IEEE
2018 (English) In: IEEE-RAS Conference on Humanoid Robots / [ed] Asfour, T., IEEE, 2018, p. 132-138. Conference paper, Published paper (Refereed)
Abstract [en]

Policy search reinforcement learning allows robots to acquire skills by themselves. However, the learning procedure is inherently unsafe, as the robot has no a priori way to predict the consequences of the exploratory actions it takes. Therefore, exploration can lead to collisions with the potential to harm the robot and/or the environment. In this work we address the safety aspect by constraining the exploration to happen in safe-to-explore state spaces. These are formed by decomposing target skills (e.g., grasping) into higher-ranked sub-tasks (e.g., collision avoidance, joint limit avoidance) and lower-ranked movement tasks (e.g., reaching). Sub-tasks are defined as concurrent controllers (policies) in different operational spaces, together with associated Jacobians representing their joint-space mapping. Safety is ensured by only learning policies corresponding to lower-ranked sub-tasks in the redundant null space of higher-ranked ones. As a side benefit, learning in sub-manifolds of the state space also facilitates sample efficiency. Reaching skills performed in simulation and grasping skills performed on a real robot validate the usefulness of the proposed approach.
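The null-space mechanism that makes exploration safe can be shown in its textbook form: a high-priority task with Jacobian J1 is satisfied exactly, and any lower-priority (here, exploratory) motion is projected through the null-space projector of J1 so it cannot disturb the high-priority task. The Jacobian and velocities below are invented toy values, not the paper's tasks.

```python
import numpy as np

def prioritized_velocity(J1, dx1, dq_explore):
    """Two-level task prioritization: satisfy the high-priority task
    dx1 = J1 @ dq exactly, and realize the exploratory joint motion
    dq_explore only in the null space of J1."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector of J1
    return J1_pinv @ dx1 + N1 @ dq_explore

J1 = np.array([[1.0, 0.0, 0.0]])      # high-priority task uses joint 1 only
dx1 = np.array([0.2])                 # required high-priority task velocity
explore = np.array([0.5, -0.3, 0.1])  # lower-priority exploratory motion
dq = prioritized_velocity(J1, dx1, explore)
print(dq)       # joint 1 follows the task; joints 2-3 explore freely
print(J1 @ dq)  # the high-priority task is met exactly
```

Whatever the exploratory policy proposes, the projected command always satisfies the higher-ranked constraint, which is precisely why learning confined to that null space is safe.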

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE-RAS International Conference on Humanoid Robots, ISSN 2164-0572
Keywords
Sensorimotor learning, Grasping and Manipulation, Concept and strategy learning
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71311 (URN)
000458689700019 ()
Conference
IEEE-RAS 18th Conference on Humanoid Robots (Humanoids 2018), Beijing, China, November 6-9, 2018
Funder
Swedish Foundation for Strategic Research
Note

Funding Agency:

Academy of Finland  314180

Available from: 2019-01-09 Created: 2019-01-09 Last updated: 2019-03-01. Bibliographically approved
Canelhas, D. R., Schaffernicht, E., Stoyanov, T., Lilienthal, A. & Davison, A. J. (2017). Compressed Voxel-Based Mapping Using Unsupervised Learning. Robotics, 6(3), Article ID 15.
2017 (English) In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15. Article in journal (Refereed) Published
Abstract [en]

In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
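The PCA branch of the comparison can be sketched generically: flattened voxel blocks are projected onto the top principal directions, so each block is stored as a handful of coefficients and reconstructed by projecting back. The synthetic "patches" below are invented for the example (real TSDF blocks would replace them); the sketch also shows the denoising side effect the abstract mentions, since noise outside the retained subspace is discarded.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for 500 flattened 4x4x4 TSDF blocks that lie near a
# 5-dimensional subspace, plus a little high-frequency noise.
basis_true = rng.normal(size=(64, 5))
patches = rng.normal(size=(500, 5)) @ basis_true.T \
    + 0.01 * rng.normal(size=(500, 64))

# PCA: the top-k principal directions act as the compression dictionary.
mean = patches.mean(axis=0)
U, S, Vt = np.linalg.svd(patches - mean, full_matrices=False)
k = 5
codes = (patches - mean) @ Vt[:k].T   # compress: 64 numbers -> 5 per block
recon = codes @ Vt[:k] + mean         # decompress

err = np.linalg.norm(recon - patches) / np.linalg.norm(patches)
print(err)  # small: the 5-D codes capture almost all of the variance
```

The auto-encoder variant in the paper replaces the linear projection with a learned nonlinear encoder/decoder pair, at a higher training cost but with more expressive codes.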

Place, publisher, year, edition, pages
Basel, Switzerland: MDPI AG, 2017
Keywords
3D mapping, TSDF, compression, dictionary learning, auto-encoder, denoising
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-64420 (URN)
10.3390/robotics6030015 (DOI)
000419218300002 ()
2-s2.0-85030989493 (Scopus ID)
Note

Funding Agencies:

European Commission  FP7-ICT-270350 

H-ICT  732737 

Available from: 2018-01-19 Created: 2018-01-19 Last updated: 2018-01-19. Bibliographically approved
Andreasson, H., Adolfsson, D., Stoyanov, T., Magnusson, M. & Lilienthal, A. (2017). Incorporating Ego-motion Uncertainty Estimates in Range Data Registration. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 1389-1395). Institute of Electrical and Electronics Engineers (IEEE)
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper, Published paper (Refereed)
Abstract [en]

Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
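The benefit of putting the ego-motion estimate into the objective, rather than using it only as an initial guess, can be seen in a one-dimensional caricature: with two Gaussian cost terms, the minimizer is the inverse-variance weighted mean, so an uncertain scan-matching result gets pulled toward a confident odometry prior. The numbers below are invented to mimic a feature-poor corridor; the paper's actual objective operates on full poses and registration residuals.

```python
import numpy as np

def fused_translation(icp_est, icp_var, odom_est, odom_var):
    """Minimize the sum of two squared Mahalanobis terms, one from the
    scan matcher and one from the ego-motion prior; for Gaussian terms
    the minimizer is the inverse-variance weighted mean."""
    w_icp, w_odom = 1.0 / icp_var, 1.0 / odom_var
    return (w_icp * icp_est + w_odom * odom_est) / (w_icp + w_odom)

# Along the axis of a self-similar corridor the scan matcher is nearly
# uninformative (large variance), so the fused estimate follows odometry.
fused = fused_translation(icp_est=0.0, icp_var=4.0, odom_est=1.0, odom_var=0.25)
print(fused)  # close to the odometry estimate of 1.0
```

When the environment is feature-rich the registration variance shrinks and the same formula lets the scan matcher dominate instead.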

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62803 (URN)
10.1109/IROS.2017.8202318 (DOI)
000426978201108 ()
2-s2.0-85041958720 (Scopus ID)
978-1-5386-2682-5 (ISBN)
978-1-5386-2683-2 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
Semantic Robots, ILIAD
Funder
Knowledge Foundation; EU, Horizon 2020, 732737
Available from: 2017-11-24 Created: 2017-11-24 Last updated: 2018-04-09. Bibliographically approved
Ahtiainen, J., Stoyanov, T. & Saarinen, J. (2017). Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments. Journal of Field Robotics, 34(3), 600-621
2017 (English) In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no 3, p. 600-621. Article in journal (Refereed) Published
Abstract [en]

Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.
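The geometric intuition behind the per-cell features can be illustrated with the covariance of the points falling in one cell: on a drivable surface the points are nearly planar (one eigenvalue much smaller than the others), while in vegetation they scatter in all three dimensions. The sketch below uses a generic planarity measure on synthetic point sets; the paper's NDT-TM cells additionally carry permeability and reflectivity statistics, and the final decision is made by a trained SVM rather than a single feature.

```python
import numpy as np

def planarity(points):
    """NDT-style geometric feature for one map cell: with sorted covariance
    eigenvalues e0 <= e1 <= e2, (e1 - e0) / e2 is near 1 for a thin planar
    patch and small for a volumetric scatter of points."""
    cov = np.cov(points.T)
    e0, e1, e2 = np.sort(np.linalg.eigvalsh(cov))
    return (e1 - e0) / max(e2, 1e-12)

rng = np.random.default_rng(2)
# Flat ground patch: x, y spread out, z almost constant.
ground = np.c_[rng.uniform(0, 1, 200), rng.uniform(0, 1, 200),
               0.005 * rng.normal(size=200)]
# Sparse vegetation: points scattered through the whole cell volume.
bush = rng.uniform(0, 1, size=(200, 3))
print(planarity(ground) > planarity(bush))  # True
```

Purely geometric methods stop at features like this one, which is why they misclassify permeable foliage; the extra LIDAR statistics are what let the classifier tell a bush from a wall.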

Place, publisher, year, edition, pages
John Wiley & Sons, 2017
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-53368 (URN)
10.1002/rob.21657 (DOI)
000400272700008 ()
2-s2.0-84971413791 (Scopus ID)
Note

Funding Agencies:

Finnish Society of Automation  

Finnish Funding Agency for Technology and Innovation (TEKES)  

Forum for Intelligent Machines (FIMA)  

Energy and Life Cycle Cost Efficient Machines (EFFIMA) research program 

Available from: 2016-11-02 Created: 2016-11-02 Last updated: 2018-01-13. Bibliographically approved
Canelhas, D. R., Stoyanov, T. & Lilienthal, A. J. (2016). From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs. IEEE Robotics and Automation Letters, 1(2), 1148-1155
2016 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no 2, p. 1148-1155. Article in journal (Refereed) Published
Abstract [en]

With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.
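The 3D Harris extension evaluated here has a compact core: accumulate the structure tensor M from local gradients and score the corner response from its determinant and trace. The version below uses one common generalization of the 2D response, R = det(M) - k·trace(M)³, on hand-made gradient sets; it is a schematic illustration, not the paper's TSDF-based detector, and the gradient sets are invented.

```python
import numpy as np

def harris_response_3d(gradients, k=0.04):
    """3D Harris-style corner response from per-voxel gradients in a
    neighbourhood: M is the 3x3 structure tensor (sum of gradient outer
    products); R = det(M) - k * trace(M)**3 is one common 3D analogue
    of the 2D det - k * trace**2 score."""
    M = sum(np.outer(g, g) for g in gradients)
    return np.linalg.det(M) - k * np.trace(M) ** 3

# Gradients varying in all three directions (corner-like region) score
# higher than gradients confined to one direction (planar surface).
corner = [np.array(v, float) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)] * 5]
plane = [np.array(v, float) for v in [(1, 0, 0)] * 15]
print(harris_response_3d(corner) > harris_response_3d(plane))  # True
```

The paper's comparison of gradient-computation methods matters precisely because M is built entirely from those gradients, so a poor finite-difference scheme on a discrete TSDF degrades the response directly.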

Place, publisher, year, edition, pages
Piscataway, USA: Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Mapping, recognition
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-53369 (URN)
10.1109/LRA.2016.2523555 (DOI)
000413726900073 ()
2-s2.0-84992291892 (Scopus ID)
Available from: 2016-11-02 Created: 2016-11-02 Last updated: 2018-03-09. Bibliographically approved
Stoyanov, T., Krug, R., Muthusamy, R. & Kyrki, V. (2016). Grasp Envelopes: Extracting Constraints on Gripper Postures from Online Reconstructed 3D Models. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Daejeon, Korea, October 9-14, 2016 (pp. 885-892). New York: Institute of Electrical and Electronics Engineers (IEEE)
2016 (English) In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), New York: Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 885-892. Conference paper, Published paper (Refereed)
Abstract [en]

Grasping systems that build upon meticulously planned hand postures rely on precise knowledge of object geometry, mass and frictional properties - assumptions which are often violated in practice. In this work, we propose an alternative solution to the problem of grasp acquisition in simple autonomous pick-and-place scenarios, by utilizing the concept of grasp envelopes: sets of constraints on gripper postures. We propose a fast method for extracting grasp envelopes for objects that fit within a known shape category, placed in an unknown environment. Our approach is based on grasp envelope primitives, which encode knowledge of human grasping strategies. We use environment models, reconstructed from noisy sensor observations, to refine the grasp envelope primitives and extract bounded envelopes of collision-free gripper postures. We evaluate the envelope extraction procedure both in a stand-alone fashion and as an integrated component of an autonomous picking system.
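The contrast with a single planned hand posture is that an envelope admits any gripper pose satisfying a set of constraints. A minimal membership check of that kind is sketched below; the two constraints (distance to the object and alignment with its axis) and all thresholds are invented stand-ins, since the paper's envelopes are refined from reconstructed 3D models and also encode collision-freeness.

```python
import numpy as np

def in_grasp_envelope(pos, approach_axis, obj_center, obj_axis,
                      max_dist=0.05, max_angle_deg=15.0):
    """Check whether a gripper pose satisfies an envelope of constraints:
    close enough to the object and roughly aligned with its axis."""
    dist_ok = np.linalg.norm(np.asarray(pos) - obj_center) <= max_dist
    cosang = abs(np.dot(approach_axis, obj_axis)
                 / (np.linalg.norm(approach_axis) * np.linalg.norm(obj_axis)))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return bool(dist_ok and angle <= max_angle_deg)

obj_c = np.array([0.0, 0.0, 0.0])   # hypothetical object center
obj_a = np.array([0.0, 0.0, 1.0])   # hypothetical object axis
print(in_grasp_envelope([0.0, 0.02, 0.0], [0.05, 0.0, 1.0], obj_c, obj_a))  # True
print(in_grasp_envelope([0.2, 0.0, 0.0], [1.0, 0.0, 0.0], obj_c, obj_a))   # False
```

Because the feasible set is a region rather than a point, small errors in perceived geometry leave many admissible postures, which is the robustness argument behind the envelope idea.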

Place, publisher, year, edition, pages
New York: Institute of Electrical and Electronics Engineers (IEEE), 2016
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-53372 (URN)
10.1109/IROS.2016.7759155 (DOI)
000391921701009 ()
978-1-5090-3762-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Daejeon, Korea, October 9-14, 2016
Available from: 2016-11-02 Created: 2016-11-02 Last updated: 2018-07-17. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-6013-4874
