Publications (10 of 54)
Sun, D., Kiselev, A., Liao, Q., Stoyanov, T. & Loutfi, A. (2020). A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement. IEEE Transactions on Human-Machine Systems, 50(1), 55-67
2020 (English). In: IEEE Transactions on Human-Machine Systems, ISSN 2168-2305, Vol. 50, no 1, p. 55-67. Article in journal (Refereed), Published
Abstract [en]

Virtual Reality (VR) is regarded as a useful tool for teleoperation systems, as it provides operators with immersive visual feedback on the robot and the environment. However, without any haptic feedback or physical constructions, VR-based teleoperation systems normally have poor maneuverability and may cause operational faults in some fine movements. In this paper, we employ Mixed Reality (MR), which combines real and virtual worlds, to develop a novel teleoperation system. A new system design and new control algorithms are proposed. For the system design, an MR interface is developed based on a virtual environment augmented with real-time data from the task space, with the goal of enhancing the operator's visual perception. To allow the operator to be freely decoupled from the control loop and to offload the operator's burden, a new interaction proxy is proposed to control the robot. For the control algorithms, two control modes are introduced to improve long-distance movements and fine movements of the MR-based teleoperation. In addition, a set of fuzzy-logic-based methods is proposed to regulate the position, velocity, and force of the robot in order to enhance the system's maneuverability and handle potential operational faults. Barrier Lyapunov Function (BLF) and back-stepping methods are leveraged to design the control laws and simultaneously guarantee system stability under state constraints. Experiments conducted using a 6-Degree-of-Freedom (DoF) robotic arm prove the feasibility of the system.
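The barrier Lyapunov function idea mentioned in the abstract can be illustrated with a minimal sketch (a hypothetical scalar example, not the paper's controller): a log-type BLF stays finite only while the tracking error remains inside its bound, so any control law that keeps the BLF bounded along trajectories automatically enforces the state constraint.

```python
import math

def blf(e: float, kb: float) -> float:
    """Log-type barrier Lyapunov function V = 0.5 * ln(kb^2 / (kb^2 - e^2)).

    V is zero at e = 0, finite while |e| < kb, and grows without bound
    as |e| approaches kb, so bounding V keeps the error inside the barrier.
    """
    assert abs(e) < kb, "error must start inside the barrier"
    return 0.5 * math.log(kb ** 2 / (kb ** 2 - e ** 2))

print(blf(0.0, 1.0))                   # 0.0 at the origin
print(blf(0.5, 1.0) < blf(0.9, 1.0))   # True: V rises towards the bound
```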

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Force control, motion regulation, telerobotics, virtual reality
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-77829 (URN); 10.1109/THMS.2019.2960676 (DOI); 000508380700005 (); 2-s2.0-85077905008 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2019-11-11. Created: 2019-11-11. Last updated: 2020-03-10. Bibliographically approved.
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373
2019 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed), Published
Abstract [en]

This paper first develops a novel force observer using Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Then, using the proposed force observer, a new bilateral teleoperation system is proposed that allows the slave industrial robot to be more compliant with the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms, which rely heavily on exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy with the support of machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a universal robot (UR10), a haptic device, and an RGB-D sensor.
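The moving-horizon flavour of the observer can be sketched in miniature (a hypothetical constant-force measurement model solved by sliding-window least squares; the paper's T2FNN-based formulation is far richer): at each step, the estimate is fit only to the most recent window of samples, which filters zero-mean disturbance noise while still tracking slow changes.

```python
from collections import deque

def mhe_force_estimate(measurements, horizon=5):
    """Sliding-window least-squares estimate of an external force.

    For a constant-force model, the least-squares solution over the
    window is simply the window mean; the horizon bounds how much
    history influences the current estimate."""
    window = deque(maxlen=horizon)
    estimates = []
    for z in measurements:
        window.append(z)
        estimates.append(sum(window) / len(window))
    return estimates

noisy = [1.9, 2.1, 2.0, 1.8, 2.2, 2.0]  # noisy readings around a true force of 2.0
print(mhe_force_estimate(noisy)[-1])     # ≈ 2.02, mean of the last 5 samples
```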

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN); 10.1016/j.automatica.2019.04.033 (DOI); 000473380000041 (); 2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23. Created: 2019-05-23. Last updated: 2019-11-13. Bibliographically approved.
Hoang, D.-C., Stoyanov, T. & Lilienthal, A. J. (2019). Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks for Warehouse Robots. In: 2019 European Conference on Mobile Robots, ECMR 2019: Proceedings. Paper presented at 2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4-6 Sept, 2019. IEEE, Article ID 152970.
2019 (English). In: 2019 European Conference on Mobile Robots, ECMR 2019: Proceedings, IEEE, 2019, article id 152970. Conference paper, Published paper (Refereed)
Abstract [en]

We present a system for accurate 3D instance-aware semantic reconstruction and 6D pose estimation using an RGB-D camera. Our framework couples convolutional neural networks (CNNs) with a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, to achieve both high-quality semantic reconstruction and robust 6D pose estimation for relevant objects. The method presented in this paper extends a high-quality instance-aware semantic 3D mapping system from previous work [1] by adding a 6D object pose estimator. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach performs pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system is capable of producing high-quality object-aware semantic reconstructions of room-sized environments, as well as accurately detecting objects and their 6D poses. The developed method has been verified through experimental validation on the YCB-Video dataset and a newly collected warehouse object dataset. Experimental results confirm that the proposed system improves over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-78295 (URN); 10.1109/ECMR.2019.8870927 (DOI); 2-s2.0-85074398548 (Scopus ID); 978-1-7281-3605-9 (ISBN)
Conference
2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4-6 Sept, 2019
Available from: 2019-11-29. Created: 2019-11-29. Last updated: 2020-02-05. Bibliographically approved.
Gabellieri, C., Palleschi, A., Mannucci, A., Pierallini, M., Stefanini, E., Catalano, M. G., . . . Pallottino, L. (2019). Towards an Autonomous Unwrapping System for Intralogistics. IEEE Robotics and Automation Letters, 4(4), 4603-4610
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 4, p. 4603-4610. Article in journal (Refereed), Published
Abstract [en]

Warehouse logistics is a rapidly growing market for robots. However, one key procedure that has not received much attention is the unwrapping of pallets to prepare them for object picking. To prevent the goods from falling and to protect them, pallets are normally wrapped in plastic when they enter the warehouse. Currently, unwrapping is mainly performed by human operators, due to the complexity of its planning and control phases. Autonomous solutions exist, but they are usually designed for specific situations, require a large footprint, and offer low flexibility. In this work, we propose a novel integrated robotic solution for autonomous plastic film removal relying on an impedance-controlled robot. The main contribution is twofold: on one side, we discuss a strategy for planning the Cartesian impedance and trajectory needed to execute the cut without damaging the goods; on the other, we present a cutting device designed for this purpose. The proposed solution is highly versatile and requires only a reduced footprint, thanks to the adopted technologies and the integration with a mobile base. Experimental results are shown to validate the proposed approach.

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Pallets, Wrapping, Robots, Plastics, Task analysis, Impedance, Surface impedance, Logistics, compliance and impedance control, industrial robots, automatic unwrapping
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78007 (URN); 10.1109/LRA.2019.2934710 (DOI); 000494827600026 ()
Funder
EU, Horizon 2020, 732737
Note

Funding Agency: Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR)

Available from: 2019-11-22. Created: 2019-11-22. Last updated: 2020-03-10. Bibliographically approved.
Della Corte, B., Andreasson, H., Stoyanov, T. & Grisetti, G. (2019). Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation. IEEE Robotics and Automation Letters, 4(2), 902-909
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 902-909. Article in journal (Refereed), Published
Abstract [en]

The ability to maintain and continuously update the geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform's kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In combination with that, our pipeline monitors the evolution of the parameters in long-term operation to detect possible changes in the parameter set. Experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.
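The time-delay part of such a calibration can be illustrated with a standard cross-correlation sketch (hypothetical 1-D signals; the paper estimates delays jointly with the kinematic and extrinsic parameters rather than in isolation): the lag that maximises the correlation between two sensor streams is the estimated delay.

```python
import numpy as np

def estimate_delay(ref, sig, max_lag=50):
    """Estimate the delay (in samples) of `sig` relative to `ref`
    by maximising the cross-correlation over candidate integer lags."""
    n = len(ref)
    def score(lag):
        if lag >= 0:
            return float(np.dot(ref[:n - lag], sig[lag:]))
        return float(np.dot(ref[-lag:], sig[:lag]))
    return max(range(-max_lag, max_lag + 1), key=score)

t = np.arange(400)
ref = np.sin(0.1 * t)
sig = np.roll(ref, 7)            # delayed copy: sig[k] = ref[k - 7]
print(estimate_delay(ref, sig))  # 7
```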

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Calibration and Identification
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-72756 (URN); 10.1109/LRA.2019.2892992 (DOI); 000458182100012 ()
Note

Funding Agency: Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS)

Available from: 2019-02-25. Created: 2019-02-25. Last updated: 2019-02-25. Bibliographically approved.
Canelhas, D. R., Stoyanov, T. & Lilienthal, A. J. (2018). A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018 (pp. 6337-6343). IEEE Computer Society
2018 (English). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 6337-6343. Conference paper, Published paper (Refereed)
Abstract [en]

Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature we often find that authors use either trilinear interpolation or nearest neighbors and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular, we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as might be expected, with voxels containing as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.
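As a reference point, the commonly used 8-sample trilinear scheme discussed in the abstract can be sketched as follows (a generic implementation on a toy grid, not the paper's optimized variants): the value at a continuous point is the partial-volume-weighted blend of its 8 surrounding voxels.

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a 3D voxel grid at continuous point p,
    blending the 8 surrounding voxel samples by their partial volumes."""
    x0, y0, z0 = (int(np.floor(c)) for c in p)
    fx, fy, fz = p[0] - x0, p[1] - y0, p[2] - z0
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (1 - fx, fx)[dx] * (1 - fy, fy)[dy] * (1 - fz, fz)[dz]
                v += w * grid[x0 + dx, y0 + dy, z0 + dz]
    return v

# A linear field is reproduced exactly by trilinear interpolation.
xs, ys, zs = np.meshgrid(np.arange(4), np.arange(4), np.arange(4), indexing="ij")
grid = xs + 2 * ys + 3 * zs
print(trilinear(grid, (1.5, 0.25, 2.0)))  # 8.0 = 1.5 + 2*0.25 + 3*2.0
```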

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Keywords
Voxels, Compression, Interpolation, TSDF, Visual Odometry
National Category
Robotics; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-67850 (URN); 000446394504116 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018
Projects
H2020 ILIAD; H2020 Roblog
Funder
EU, Horizon 2020, 732737
Available from: 2018-07-11. Created: 2018-07-11. Last updated: 2018-10-22. Bibliographically approved.
Stoyanov, T., Krug, R., Kiselev, A., Sun, D. & Loutfi, A. (2018). Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 6640-6645). IEEE Press
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645. Conference paper, Published paper (Refereed)
Abstract [en]

This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint limit and obstacle avoidance or automatic gripper alignment. Thereby, some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are twofold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; the second is a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed assistive control approach was found to result in higher grasp success rates and shorter trajectories than manual control, without incurring additional cognitive load on the operator.
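The hierarchy that keeps assistive tasks from being overridden by operator commands can be sketched with the classic two-level null-space projection (a generic toy example; the paper's SoT controller handles many tasks and constraints): the high-priority task is satisfied exactly, and the low-priority task acts only in its null space.

```python
import numpy as np

def sot_velocity(J1, dx1, J2, dx2):
    """Two-level stack-of-tasks resolution: execute task 1 exactly and
    task 2 only in the null space of task 1, so a lower-priority
    command can never disturb the higher-priority task."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1      # null-space projector of task 1
    dq = J1p @ dx1                            # satisfy task 1
    dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)  # task 2 in null space
    return dq

# Toy 3-joint example with orthogonal 1-D tasks (hypothetical Jacobians).
J1 = np.array([[1.0, 0.0, 0.0]])   # high-priority task (e.g. obstacle avoidance)
J2 = np.array([[0.0, 1.0, 0.0]])   # low-priority operator command
dq = sot_velocity(J1, np.array([0.2]), J2, np.array([0.5]))
print(np.round(dq, 3))             # task 1 still achieves its commanded 0.2
```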

Place, publisher, year, edition, pages
IEEE Press, 2018
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71310 (URN); 10.1109/IROS.2018.8594457 (DOI); 000458872706014 (); 978-1-5386-8094-0 (ISBN); 978-1-5386-8095-7 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Funder
Knowledge Foundation; Swedish Foundation for Strategic Research
Available from: 2019-01-09. Created: 2019-01-09. Last updated: 2019-03-13. Bibliographically approved.
Lundell, J., Krug, R., Schaffernicht, E., Stoyanov, T. & Kyrki, V. (2018). Safe-To-Explore State Spaces: Ensuring Safe Exploration in Policy Search with Hierarchical Task Optimization. In: Asfour, T (Ed.), IEEE-RAS Conference on Humanoid Robots. Paper presented at IEEE-RAS 18th Conference on Humanoid Robots (Humanoids 2018), Beijing, China, November 6-9, 2018 (pp. 132-138). IEEE
2018 (English). In: IEEE-RAS Conference on Humanoid Robots / [ed] Asfour, T, IEEE, 2018, p. 132-138. Conference paper, Published paper (Refereed)
Abstract [en]

Policy search reinforcement learning allows robots to acquire skills by themselves. However, the learning procedure is inherently unsafe, as the robot has no a priori way to predict the consequences of the exploratory actions it takes. Exploration can therefore lead to collisions with the potential to harm the robot and/or the environment. In this work we address the safety aspect by constraining exploration to happen in safe-to-explore state spaces. These are formed by decomposing target skills (e.g., grasping) into higher-ranked sub-tasks (e.g., collision avoidance, joint limit avoidance) and lower-ranked movement tasks (e.g., reaching). Sub-tasks are defined as concurrent controllers (policies) in different operational spaces, together with associated Jacobians representing their joint-space mapping. Safety is ensured by only learning policies corresponding to lower-ranked sub-tasks in the redundant null space of higher-ranked ones. As a side benefit, learning in sub-manifolds of the state space also improves sample efficiency. Reaching skills performed in simulation and grasping skills performed on a real robot validate the usefulness of the proposed approach.

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE-RAS International Conference on Humanoid Robots, ISSN 2164-0572
Keywords
Sensorimotor learning, Grasping and Manipulation, Concept and strategy learning
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71311 (URN); 000458689700019 ()
Conference
IEEE-RAS 18th Conference on Humanoid Robots (Humanoids 2018), Beijing, China, November 6-9, 2018
Funder
Swedish Foundation for Strategic Research
Note

Funding Agency: Academy of Finland 314180

Available from: 2019-01-09. Created: 2019-01-09. Last updated: 2019-03-01. Bibliographically approved.
Canelhas, D. R., Schaffernicht, E., Stoyanov, T., Lilienthal, A. & Davison, A. J. (2017). Compressed Voxel-Based Mapping Using Unsupervised Learning. Robotics, 6(3), Article ID 15.
2017 (English). In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15. Article in journal (Refereed), Published
Abstract [en]

To deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. We demonstrate that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
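The PCA branch of the comparison can be sketched as follows (a generic patch codec on synthetic data, not the paper's TSDF pipeline): flattened patches are projected onto a low-dimensional basis of principal directions, and decompression is the inverse projection.

```python
import numpy as np

def pca_codec(patches, k):
    """Fit a k-dimensional PCA basis to flattened map patches and
    return encode/decode functions for lossy compression."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    basis = Vt[:k]                      # top-k principal directions
    encode = lambda x: (x - mean) @ basis.T
    decode = lambda c: c @ basis + mean
    return encode, decode

rng = np.random.default_rng(0)
# Synthetic patches that truly live in a 2D subspace of R^16.
coeffs = rng.normal(size=(100, 2))
directions = rng.normal(size=(2, 16))
patches = coeffs @ directions
encode, decode = pca_codec(patches, k=2)
err = np.abs(decode(encode(patches)) - patches).max()
print(err < 1e-9)  # True: 2 components suffice for rank-2 data
```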

Place, publisher, year, edition, pages
Basel, Switzerland: MDPI AG, 2017
Keywords
3D mapping, TSDF, compression, dictionary learning, auto-encoder, denoising
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-64420 (URN); 10.3390/robotics6030015 (DOI); 000419218300002 (); 2-s2.0-85030989493 (Scopus ID)
Note

Funding Agencies: European Commission FP7-ICT-270350; H-ICT 732737

Available from: 2018-01-19. Created: 2018-01-19. Last updated: 2018-01-19. Bibliographically approved.
Andreasson, H., Adolfsson, D., Stoyanov, T., Magnusson, M. & Lilienthal, A. (2017). Incorporating Ego-motion Uncertainty Estimates in Range Data Registration. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 1389-1395). Institute of Electrical and Electronics Engineers (IEEE)
2017 (English). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper, Published paper (Refereed)
Abstract [en]

Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
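The idea of folding an ego-motion prior into the registration objective can be sketched in one dimension (a hypothetical toy problem with a closed-form optimum; the paper works with full range-data registration): the estimate becomes a precision-weighted blend of the point-match evidence and the odometry estimate, so a tight odometry covariance dominates in feature-poor scenes.

```python
import numpy as np

def registration_with_prior(src, dst, t_odo, var_odo):
    """1-D translation registration with an ego-motion prior:
    minimise sum_i (src_i + t - dst_i)^2 + (t - t_odo)^2 / var_odo.
    The closed-form optimum weights the point evidence against the
    odometry prior by their relative precisions."""
    n = len(src)
    t_points = np.mean(dst - src)       # pure registration answer
    w = n / (n + 1.0 / var_odo)         # trust placed in the points
    return w * t_points + (1 - w) * t_odo

src = np.array([0.0, 1.0, 2.0, 3.0])
dst = src + 1.0                         # true shift: 1.0
# A loose prior defers to the points; a tight prior pulls towards odometry.
print(registration_with_prior(src, dst, t_odo=1.2, var_odo=1e6))   # ≈ 1.0
print(registration_with_prior(src, dst, t_odo=1.2, var_odo=1e-6))  # ≈ 1.2
```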

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62803 (URN); 10.1109/IROS.2017.8202318 (DOI); 000426978201108 (); 2-s2.0-85041958720 (Scopus ID); 978-1-5386-2682-5 (ISBN); 978-1-5386-2683-2 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
Semantic Robots; ILIAD
Funder
Knowledge Foundation; EU, Horizon 2020, 732737
Available from: 2017-11-24. Created: 2017-11-24. Last updated: 2018-04-09. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-6013-4874
