Örebro University Publications (oru.se)

1 - 50 of 79
  • 1.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
    Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments (2017). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no. 3, pp. 600-621. Article in journal (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.

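    The abstract above describes augmenting NDT occupancy cells with permeability and reflectivity statistics and training a support-vector machine on the resulting features. Below is a minimal, hypothetical Python sketch of that idea; the feature set, helper names, and SVM settings are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def cell_features(points, intensities, n_hits, n_passthroughs):
            """Assemble one feature vector for a map cell from its LIDAR returns."""
            cov = np.cov(points.T)                              # 3x3 covariance of the cell's points
            evals = np.sort(np.linalg.eigvalsh(cov))            # NDT-style shape statistics
            roughness = evals[0] / max(evals[2], 1e-9)
            permeability = n_passthroughs / max(n_hits + n_passthroughs, 1)  # fraction of rays passing through
            reflectivity = float(np.mean(intensities))          # mean return intensity
            return np.concatenate([evals, [roughness, permeability, reflectivity]])

        def train_traversability_classifier(X, y):
            """X: per-cell feature vectors; y: 1 = traversable, 0 = non-drivable."""
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
            clf.fit(X, y)
            return clf
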
  • 2.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improving Point-Cloud Accuracy from a Moving Platform in Field Operations (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2013, pp. 733-738. Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for improving the quality of distorted 3D point clouds made from a vehicle equipped with a laser scanner moving over uneven terrain. Existing methods that use 3D point-cloud data (for tasks such as mapping, localisation, and object detection) typically assume that each point cloud is accurate. For autonomous robots moving in rough terrain, it is often the case that the vehicle moves a substantial amount during the acquisition of one point cloud, in which case the data will be distorted. The method proposed in this paper is capable of increasing the accuracy of 3D point clouds, without assuming any specific features of the environment (such as planar walls), without resorting to a "stop-scan-go" approach, and without relying on specialised and expensive hardware. Each new point cloud is matched to the previous using normal-distribution-transform (NDT) registration, after which a mini-loop closure is performed with a local, per-scan, graph-based SLAM method. The proposed method increases the accuracy of both the measured platform trajectory and the point cloud. The method is validated on both real-world and simulated data.

  • 3.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Adolfsson, Daniel
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Incorporating Ego-motion Uncertainty Estimates in Range Data Registration (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 1389-1395. Conference paper (Refereed)
    Abstract [en]

    Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.

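    The abstract above describes folding an ego-motion estimate and its uncertainty into the registration objective. A hedged sketch of that idea follows; the 6-DOF pose parameterisation and function names are assumptions (angle wrap-around is ignored for brevity), not the paper's formulation.

        import numpy as np

        def augmented_objective(pose, data_term, odom_pose, odom_cov):
            """pose, odom_pose: 6-vectors (translation + Euler angles); odom_cov: 6x6 covariance."""
            r = pose - odom_pose                         # residual w.r.t. the odometry estimate
            prior = r @ np.linalg.solve(odom_cov, r)     # Mahalanobis distance to the ego-motion prior
            return data_term(pose) + prior               # scan-matching score plus uncertainty-weighted prior
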
  • 4.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar Nikolaev
    INRIA - Grenoble, Meylan, France.
    Driankov, Dimiter
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saarinen, Jari Pekka
    Örebro University, School of Science and Technology. Aalto University, Espoo, Finland.
    Sherikov, Aleksander
    Centre de recherche Grenoble Rhône-Alpes, Grenoble, France.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Autonomous transport vehicles: where we are and what is missing (2015). In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 22, no. 1, pp. 64-75. Article in journal (Refereed)
    Abstract [en]

    In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.

  • 5.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Drive the Drive: From Discrete Motion Plans to Smooth Drivable Trajectories (2014). In: Robotics, E-ISSN 2218-6581, Vol. 3, no. 4, pp. 400-416. Article in journal (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper is a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. The proposed approach is evaluated in several industrially relevant scenarios and found to be both fast (less than 2 s per vehicle trajectory) and accurate (end-point pose errors below 0.01 m in translation and 0.005 radians in orientation).

    Download full text (pdf)
    fulltext
  • 6.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology. SCANIA AB, Södertälje, Sweden.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Fast, continuous state path smoothing to improve navigation accuracy (2015). In: IEEE International Conference on Robotics and Automation (ICRA), 2015, IEEE Computer Society, 2015, pp. 662-669. Conference paper (Refereed)
    Abstract [en]

    Autonomous navigation in real-world industrial environments is a challenging task in many respects. One of the key open challenges is fast planning and execution of trajectories to reach arbitrary target positions and orientations with high accuracy and precision, while taking into account non-holonomic vehicle constraints. In recent years, lattice-based motion planners have been successfully used to generate kinematically and kinodynamically feasible motions for non-holonomic vehicles. However, the discretized nature of these algorithms induces discontinuities in both state and control space of the obtained trajectories, resulting in a mismatch between the achieved and the target end pose of the vehicle. As end-pose accuracy is critical for the successful loading and unloading of cargo in typical industrial applications, automatically planned paths have not been widely adopted in commercial AGV systems. The main contribution of this paper addresses this shortcoming by introducing a path smoothing approach, which builds on the output of a lattice-based motion planner to generate smooth drivable trajectories for non-holonomic industrial vehicles. In real-world tests presented in this paper we demonstrate that the proposed approach is fast enough for online use (it computes trajectories faster than they can be driven) and highly accurate. In 100 repetitions we achieve mean end-point pose errors below 0.01 meters in translation and 0.002 radians in orientation. Even the maximum errors are very small: only 0.02 meters in translation and 0.008 radians in orientation.

  • 7.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Real time registration of RGB-D data using local visual features and 3D-NDT registration (2012). In: Proc. of International Conference on Robotics and Automation (ICRA) Workshop on Semantic Perception, Mapping and Exploration (SPME), IEEE, 2012. Conference paper (Refereed)
    Abstract [en]

    Recent increased popularity of RGB-D capable sensors in robotics has resulted in a surge of related RGB-D registration methods. This paper presents several RGB-D registration algorithms based on combinations of local visual features and geometric registration. Fast and accurate transformation refinement is obtained by using a recently proposed geometric registration algorithm, based on the Three-Dimensional Normal Distributions Transform (3D-NDT). Results obtained on standard data sets have demonstrated mean translational errors on the order of 1 cm and rotational errors below 1 degree, at frame processing rates of about 15 Hz.

  • 8.
    Bennetts, Victor Hernandez
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Robot Assisted Gas Tomography - Localizing Methane Leaks in Outdoor Environments (2014). In: 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2014, pp. 6362-6367. Conference paper (Refereed)
    Abstract [en]

    In this paper we present an inspection robot to produce gas distribution maps and localize gas sources in large outdoor environments. The robot is equipped with a 3D laser range finder and a remote gas sensor that returns integral concentration measurements. We apply principles of tomography to create a spatial gas distribution model from integral gas concentration measurements. The gas distribution algorithm is framed as a convex optimization problem and it models the mean distribution and the fluctuations of gases. This is important since gas dispersion is not a static phenomenon and, furthermore, areas of high fluctuation can be correlated with the location of an emitting source. We use a compact surface representation created from the measurements of the 3D laser range finder with a state-of-the-art mapping algorithm to get a very accurate localization and estimation of the path of the laser beams. In addition, a conic model for the beam of the remote gas sensor is introduced. We observe a substantial improvement in gas source localization capabilities over the previous state of the art in our evaluation carried out in an open field environment.

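    The abstract above frames gas distribution modelling from integral concentration measurements as a convex optimization problem. A minimal sketch of that kind of tomographic reconstruction is given below; the use of cvxpy, the regulariser, and the function name are assumptions for illustration, not the authors' implementation.

        import cvxpy as cp

        def reconstruct_gas_map(W, y, lam=0.1):
            """W: (n_beams, n_cells) per-cell path lengths from ray tracing; y: integral concentrations."""
            x = cp.Variable(W.shape[1], nonneg=True)         # gas concentrations are non-negative
            data_term = cp.sum_squares(W @ x - y)            # fit the integral beam measurements y ~ W x
            smoothness = lam * cp.sum_squares(cp.diff(x))    # illustrative spatial regulariser
            cp.Problem(cp.Minimize(data_term + smoothness)).solve()
            return x.value
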
  • 9. Birk, Andreas
    et al.
    Poppinga, Jann
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Nevatia, Yashodhan
    Planetary Exploration in USARSim: A Case Study including Real World Data from Mars (2009). In: RoboCup 2008: Robot Soccer World Cup XII / [ed] Luca Iocchi, Hitoshi Matsubara, Alfredo Weitzenfeld, Changjiu Zhou, Springer Berlin Heidelberg, 2009, pp. 463-472. Conference paper (Refereed)
    Abstract [en]

    Intelligent mobile robots are increasingly used in unstructured domains; one particularly challenging example is planetary exploration. The preparation of such missions is highly non-trivial, especially as it is difficult to carry out realistic experiments without very sophisticated infrastructure. In this paper, we argue that the Unified System for Automation and Robot Simulation (USARSim) offers interesting opportunities for research on planetary exploration by mobile robots. Using the example of work on terrain classification, it is shown how synthetic as well as real-world data from Mars can be used to test an algorithm's performance in USARSim. Concretely, experiments with an algorithm for the detection of negotiable ground on a planetary surface are presented. It is shown that the approach performs fast and robustly on planetary surfaces.

  • 10. Birk, Andreas
    et al.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Nevatia, Yashodhan
    Ambrus, Rares
    Poppinga, Jan
    Pathak, Kaustubh
    Terrain Classification for Autonomous Robot Mobility: from Safety, Security Rescue Robotics to Planetary Exploration (2008). Conference paper (Refereed)
  • 11.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
    Compressed Voxel-Based Mapping Using Unsupervised Learning (2017). In: Robotics, E-ISSN 2218-6581, Vol. 6, no. 3, article id 15. Article in journal (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as to the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets due to the rejection of high-frequency noise content.

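    The abstract above compares PCA-derived low-dimensional bases with auto-encoders for compressing TSDF map blocks. The sketch below illustrates only the PCA variant; block size, basis dimensionality, and function names are assumptions, not the paper's code.

        import numpy as np

        def fit_basis(blocks, n_components=32):
            """blocks: (N, voxels_per_block) matrix of flattened training TSDF blocks."""
            mean = blocks.mean(axis=0)
            _, _, Vt = np.linalg.svd(blocks - mean, full_matrices=False)
            return mean, Vt[:n_components]               # PCA basis via a thin SVD

        def compress(block, mean, basis):
            return basis @ (block - mean)                # low-dimensional code to store

        def decompress(code, mean, basis):
            return mean + basis.T @ code                 # lossy reconstruction of the block
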
  • 12.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no. 2, pp. 1148-1155. Article in journal (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.

    Download full text (pdf)
    fulltext
  • 13.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improved local shape feature stability through dense model tracking (2013). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 3203-3209. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a method to effectively remove noise from depth images obtained with a commodity structured light sensor. The proposed approach fuses data into a consistent frame of reference over time, thus utilizing prior depth measurements and viewpoint information in the noise removal process. The effectiveness of the approach is compared to two state-of-the-art single-frame denoising methods in the context of feature descriptor matching and keypoint detection stability. To make more general statements about the effect of noise removal in these applications, we extend a method for evaluating local image gradient feature descriptors to the domain of 3D shape descriptors. We perform a comparative study of three classes of such descriptors: Normal Aligned Radial Features, Fast Point Feature Histograms, and Depth Kernel Descriptors; and evaluate their performance on a real-world industrial application data set. We demonstrate that noise removal enabled by the dense map representation results in major improvements in matching across all classes of descriptors, as well as having a substantial positive impact on keypoint detection reliability.

  • 14.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SDF tracker: a parallel algorithm for on-line pose estimation and scene reconstruction from depth images (2013). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 3671-3676. Conference paper (Refereed)
    Abstract [en]

    Ego-motion estimation and environment mapping are two recurring problems in the field of robotics. In this work we propose a simple on-line method for tracking the pose of a depth camera in six degrees of freedom and simultaneously maintaining an updated 3D map, represented as a truncated signed distance function. The distance function representation implicitly encodes surfaces in 3D space and is used directly to define a cost function for accurate registration of new data. The proposed algorithm is highly parallel and achieves good accuracy compared to state-of-the-art methods. It is suitable for reconstructing single household items, workspace environments, and small rooms at near real-time rates, making it practical for use on modern CPU hardware.

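    The abstract above states that the signed distance field itself defines the registration cost for new depth data. The sketch below illustrates that idea under assumptions: tsdf_lookup is a hypothetical interpolated field query and the Euler-angle parameterisation is a simplification, so this is not the SDF-tracker source.

        import numpy as np
        from scipy.optimize import least_squares

        def rotation_from_euler(rx, ry, rz):
            """Rotation matrix from XYZ Euler angles."""
            cx, sx = np.cos(rx), np.sin(rx)
            cy, sy = np.cos(ry), np.sin(ry)
            cz, sz = np.cos(rz), np.sin(rz)
            Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def register_to_tsdf(points, tsdf_lookup, x0=None):
            """points: Nx3 depth points; tsdf_lookup(P) -> interpolated signed distances for Nx3 P."""
            x0 = np.zeros(6) if x0 is None else x0
            def residuals(x):
                transformed = points @ rotation_from_euler(*x[:3]).T + x[3:]
                return tsdf_lookup(transformed)          # zero exactly on the mapped surface
            return least_squares(residuals, x0).x        # optimised 6-DOF pose (3 angles, 3 translations)
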
  • 15.
    Canelhas, Daniel Ricão
    et al.
    Univrses AB, Strängnäs, Sweden.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry (2018). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 6337-6343. Conference paper (Refereed)
    Abstract [en]

    Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature we often find that authors either use trilinear interpolation or nearest neighbors, and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as may be expected, with voxels containing as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.

    Download full text (pdf)
    A Survey of Voxel Interpolation Methods and an Evaluation of Their Impact on Volumetric Map-Based Visual Odometry
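
    As a reference for the comparison described above, the sketch below shows the standard 8-sample trilinear interpolation of a voxel TSDF; a regular grid and voxel-unit query coordinates are assumed, and this is not the paper's implementation.

        import numpy as np

        def trilinear(tsdf, p):
            """tsdf: 3D array of distance values; p: query point in voxel coordinates (in bounds)."""
            i, j, k = np.floor(p).astype(int)                # lower corner of the enclosing cell
            fx, fy, fz = p - np.array([i, j, k])             # fractional offsets inside the cell
            c = tsdf[i:i + 2, j:j + 2, k:k + 2].astype(float)  # the 8 surrounding samples
            c = c[0] * (1 - fx) + c[1] * fx                  # collapse along x ...
            c = c[0] * (1 - fy) + c[1] * fy                  # ... then y ...
            return c[0] * (1 - fz) + c[1] * fz               # ... then z
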
  • 16. Carpin, Stefano
    et al.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Nevatia, Yashodhan
    Lewis, M.
    Wang, J.
    Quantitative Assessments of USARSim Accuracy (2006). Conference paper (Refereed)
  • 17.
    Charusta, Krzysztof
    et al.
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar
    Örebro University, School of Science and Technology.
    Iliev, Boyko
    Örebro University, School of Science and Technology.
    Generation of independent contact regions on objects reconstructed from noisy real-world range data (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2012, pp. 1338-1344. Conference paper (Refereed)
    Abstract [en]

    The synthesis and evaluation of multi-fingered grasps on complex objects is a challenging problem that has received much attention in the robotics community. Although several promising approaches have been developed, applications to real-world systems are limited to simple objects or gripper configurations. The paradigm of Independent Contact Regions (ICRs) has been proposed as a way to increase the tolerance to grasp positioning errors. This concept is well established, though only on precise geometric object models. This work is concerned with the application of the ICR paradigm to models reconstructed from real-world range data. We propose a method for increasing the robustness of grasp synthesis on uncertain geometric models. The sensitivity of the ICR algorithm to noisy data is evaluated and a filtering approach is proposed to improve the quality of the final result.

  • 18.
    Della Corte, Bartolomeo
    et al.
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation (2019). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 4, no. 2, pp. 902-909. Article in journal (Refereed)
    Abstract [en]

    The ability to maintain and continuously update geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In combination with that, our pipeline observes the evolution of the parameters in long-term operation to detect possible changes in the parameter set. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.

  • 19.
    Dominguez, David Caceres
    et al.
    Örebro University, School of Science and Technology.
    Iannotta, Marco
    Örebro University, School of Science and Technology.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    A Stack-of-Tasks Approach Combined With Behavior Trees: A New Framework for Robot Control (2022). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 7, no. 4, pp. 12110-12117. Article in journal (Refereed)
    Abstract [en]

    Stack-of-Tasks (SoT) control allows a robot to simultaneously fulfill a number of prioritized goals formulated in terms of (in)equality constraints in error space. Since this approach solves a sequence of Quadratic Programs (QP) at each time-step, without taking into account any temporal state evolution, it is suitable for dealing with local disturbances. However, its limitation lies in the handling of situations that require non-quadratic objectives to achieve a specific goal, as well as situations where countering the control disturbance would require a locally suboptimal action. Recent works address this shortcoming by exploiting Finite State Machines (FSMs) to compose the tasks in such a way that the robot does not get stuck in local minima. Nevertheless, the intrinsic trade-off between reactivity and modularity that characterizes FSMs makes them impractical for defining reactive behaviors in dynamic environments. In this letter, we combine the SoT control strategy with Behavior Trees (BTs), a task switching structure that addresses some of the limitations of the FSMs in terms of reactivity, modularity and re-usability. Experimental results on a Franka Emika Panda 7-DOF manipulator show the robustness of our framework, that allows the robot to benefit from the reactivity of both SoT and BTs.

  • 20.
    Ferri, Gabriele
    et al.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mondini, Alessio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Manzi, Alessandro
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mazzolai, Barbara
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Laschi, Cecilia
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mattoli, Virgilio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Reggente, Matteo
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Lettere, Marco
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Dario, Paolo
    Scuola Superiore Sant'Anna, Pisa, Italy.
    DustCart, a Mobile Robot for Urban Environments: Experiments of Pollution Monitoring and Mapping during Autonomous Navigation in Urban Scenarios (2010). In: Proceedings of ICRA Workshop on Networked and Mobile Robot Olfaction in Natural, Dynamic Environments, 2010. Conference paper (Refereed)
    Abstract [en]

    In the framework of the DustBot European project, aimed at developing a new multi-robot system for urban hygiene management, we have developed a two-wheeled robot: DustCart. DustCart aims at providing a solution to door-to-door garbage collection: the robot, called by a user, navigates autonomously to his/her house, collects the garbage from the user, and discharges it in a dedicated area. An additional feature of DustCart is the capability to monitor air pollution by means of an on-board Air Monitoring Module (AMM). The AMM integrates sensors to monitor several atmospheric pollutants, such as carbon monoxide (CO), particulate matter (PM10), nitrogen dioxide (NO2) and ozone (O3), plus temperature (T) and relative humidity (rHu). An Ambient Intelligence platform (AmI) manages the robots' operations through a wireless connection. The AmI is able to collect measurements taken by different robots and to process them to create a pollution distribution map. In this paper we describe the DustCart robot system, focusing on the AMM and on the process of creating the pollutant distribution maps. We report results of experiments with one DustCart robot moving in urban scenarios and producing gas distribution maps using the Kernel DM+V algorithm. These experiments can be considered one of the first attempts to use robots as mobile monitoring devices that can complement traditional fixed stations.

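    The abstract above mentions producing gas distribution maps with the Kernel DM+V algorithm. The sketch below shows only a kernel-weighted mean map in that spirit; the grid layout, kernel width, and function name are illustrative assumptions rather than the full algorithm.

        import numpy as np

        def kernel_mean_map(positions, concentrations, grid_x, grid_y, sigma=1.0):
            """positions: Nx2 measurement locations; returns a weighted-mean concentration grid."""
            gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
            cells = np.stack([gx.ravel(), gy.ravel()], axis=1)   # grid cell centres as Mx2
            d2 = ((cells[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
            w = np.exp(-0.5 * d2 / sigma**2)                     # Gaussian kernel weight per reading
            mean = (w @ concentrations) / np.maximum(w.sum(axis=1), 1e-9)
            return mean.reshape(len(grid_x), len(grid_y))
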
  • 21.
    Gabellieri, Chiara
    et al.
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Palleschi, Alessandro
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Mannucci, Anna
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Pierallini, Michele
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Stefanini, Elisa
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Catalano, Manuel G.
    Istituto Italiano di Tecnologia, Genova GE, Italy.
    Caporale, Danilo
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Settimi, Alessandro
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Garabini, Manolo
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Pallottino, Lucia
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Towards an Autonomous Unwrapping System for Intralogistics (2019). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 4, no. 4, pp. 4603-4610. Article in journal (Refereed)
    Abstract [en]

    Warehouse logistics is a rapidly growing market for robots. However, one key procedure that has not received much attention is the unwrapping of pallets to prepare them for object picking. In fact, to prevent the goods from falling and to protect them, pallets are normally wrapped in plastic when they enter the warehouse. Currently, unwrapping is mainly performed by human operators, due to the complexity of its planning and control phases. Autonomous solutions exist, but they are usually designed for specific situations, require a large footprint, and are characterized by low flexibility. In this work, we propose a novel integrated robotic solution for autonomous plastic film removal relying on an impedance-controlled robot. The main contribution is twofold: on one side, a strategy to plan Cartesian impedance and trajectory to execute the cut without damaging the goods is discussed; on the other side, we present a cutting device that we designed for this purpose. The proposed solution offers high versatility and requires only a reduced footprint, due to the adopted technologies and the integration with a mobile base. Experimental results are shown to validate the proposed approach.

    Download full text (pdf)
    Towards an Autonomous Unwrapping System for Intralogistics
  • 22.
    Güler, Püren
    et al.
    Örebro University, Örebro, Sweden.
    Stork, Johannes A.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Visual state estimation in unseen environments through domain adaptation and metric learning (2022). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 833173. Article in journal (Refereed)
    Abstract [en]

    In robotics, deep learning models are used in many visual perception applications, including the tracking, detection and pose estimation of robotic manipulators. State-of-the-art methods, however, are conditioned on the availability of annotated training data, which may in practice be costly or even impossible to collect. Domain augmentation is one popular method to improve generalization to out-of-domain data by extending the training data set with predefined sources of variation, unrelated to the primary task. While this typically results in better performance on the target domain, it is not always clear that the trained models are capable of accurately separating the signals relevant to solving the task (e.g., appearance of an object of interest) from those associated with differences between the domains (e.g., lighting conditions). In this work we propose to improve the generalization capabilities of models trained with domain augmentation by formulating a secondary structured metric-space learning objective. We concentrate on one particularly challenging domain transfer task, visual state estimation for an articulated underground mining machine, and demonstrate the benefits of imposing structure on the encoding space. Our results indicate that the proposed method has the potential to transfer feature embeddings learned on the source domain, through a suitably designed augmentation procedure, to an unseen target domain.

  • 23.
    Hernandez Bennetts, Victor
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Trincavelli, Marco
    Örebro University, School of Science and Technology.
    Robot assisted gas tomography: an alternative approach for the detection of fugitive methane emissions (2014). In: Workshop on Robot Monitoring, 2014. Conference paper (Refereed)
    Abstract [en]

    Methane (CH4) based combustibles, such as Natural Gas (NG) and BioGas (BG), are considered bridge fuels towards a decarbonized global energy system. NG emits less CO2 during combustion than other fossil fuels and BG can be produced from organic waste. However, at BG production sites, leaks are common and CH4 can escape through fissures in pipes and insulation layers. While regulations require BG producers to issue monthly CH4 emission reports, measurements are sparsely collected, only at a few predefined locations. Due to the high global warming potential of CH4, efficient leakage detection systems are critical. We present a robotics approach to localize CH4 leaks. In Robot assisted Gas Tomography (RGT), a mobile robot is equipped with remote gas sensors to create gas distribution maps, which can be used to infer the location of emitting sources. Spectroscopy-based remote gas sensors report integral concentrations, which means that the measurements are spatially unresolved, providing information on neither the gas distribution over the optical path nor the length of the beam. Thus, RGT fuses different sensing modalities, such as range sensors for robot localization and ray tracing, in order to infer plausible gas distribution models that explain the acquired integral concentration measurements.

  • 24.
    Hoang, Dinh-Cuong
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks (2020). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 133, article id 103632. Article in journal (Refereed)
    Abstract [en]

    We present an approach for recognizing objects present in a scene and estimating their full pose by means of an accurate 3D instance-aware semantic reconstruction. Our framework couples convolutional neural networks (CNNs) and a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion [1], to achieve both high-quality semantic reconstruction as well as robust 6D pose estimation for relevant objects. We leverage the pipeline of ElasticFusion as a backbone and propose a joint geometric and photometric error function with per-pixel adaptive weights. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach explores performing pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system is capable of producing high-quality instance-aware semantic reconstructions of room-sized environments, as well as accurately detecting objects and their 6D poses. The developed method has been verified through extensive experiments on different datasets. Experimental results confirmed that the proposed system achieves improvements over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.

    Download full text (pdf)
    Object-RPE: Dense 3D reconstruction and pose estimation with convolutional neural networks
  • 25.
    Hoang, Dinh-Cuong
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Panoptic 3D Mapping and Object Pose Estimation Using Adaptively Weighted Semantic Information (2020). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no. 2, pp. 1962-1969. Article in journal (Refereed)
    Abstract [en]

    We present a system capable of reconstructing highly detailed object-level models and estimating the 6D pose of objects by means of an RGB-D camera. In this work, we integrate deep-learning-based semantic segmentation, instance segmentation, and 6D object pose estimation into a state of the art RGB-D mapping system. We leverage the pipeline of ElasticFusion as a backbone and propose modifications of the registration cost function to make full use of the semantic class labels in the process. The proposed objective function features tunable weights for the depth, appearance, and semantic information channels, which are learned from data. A fast semantic segmentation and registration weight prediction convolutional neural network (Fast-RGBD-SSWP) suited to efficient computation is introduced. In addition, our approach explores performing 6D object pose estimation from multiple viewpoints supported by the high-quality reconstruction system. The developed method has been verified through experimental validation on the YCB-Video dataset and a dataset of warehouse objects. Our results confirm that the proposed system performs favorably in terms of surface reconstruction, segmentation quality, and accurate object pose estimation in comparison to other state-of-the-art systems. Our code and video are available at https://sites.google.com/view/panoptic-mope.

  • 26.
    Hoang, Dinh-Cuong
    et al.
    CT Department, FPT University, Hanoi, Vietnam.
    Stork, Johannes A.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology. Department of Computing and Software, McMaster University, Hamilton ON, Canada.
    Voting and Attention-Based Pose Relation Learning for Object Pose Estimation From 3D Point Clouds (2022). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 7, no. 4, pp. 8980-8987. Article in journal (Refereed)
    Abstract [en]

    Estimating the 6DOF pose of objects is an important function in many applications, such as robot manipulation or augmented reality. However, accurate and fast pose estimation from 3D point clouds is challenging, because of the complexity of object shapes, measurement noise, and presence of occlusions. We address this challenging task using an end-to-end learning approach for object pose estimation given a raw point cloud input. Our architecture pools geometric features together using a self-attention mechanism and adopts a deep Hough voting scheme for pose proposal generation. To build robustness to occlusion, the proposed network generates candidates by casting votes and accumulating evidence for object locations. Specifically, our model learns higher-level features by leveraging the dependency of object parts and object instances, thereby boosting the performance of object pose estimation. Our experiments show that our method outperforms state-of-the-art approaches in public benchmarks including the Sileane dataset [35] and the Fraunhofer IPA dataset [36]. We also deploy our proposed method to a real robot pick-and-place based on the estimated pose.

  • 27.
    Hoang, Dinh-Cuong
    et al.
    Örebro University, School of Science and Technology. ICT Department, FPT University, Hanoi, Vietnam.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Context-Aware Grasp Generation in Cluttered Scenes (2022). In: 2022 International Conference on Robotics and Automation (ICRA), IEEE, 2022, pp. 1492-1498. Conference paper (Refereed)
    Abstract [en]

    Conventional methods for autonomous grasping rely on a pre-computed database of known objects to synthesize grasps, which is not possible for novel objects. On the other hand, recently proposed deep learning-based approaches have demonstrated the ability to generalize grasps to unknown objects. However, grasp generation still remains a challenging problem, especially in cluttered environments under partial occlusion. In this work, we propose an end-to-end deep learning approach for generating 6-DOF collision-free grasps given a 3D scene point cloud. To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. Our experimental results confirm that the proposed system performs favorably in terms of predicting object grasps in cluttered environments in comparison to current state-of-the-art methods.

    Download full text (pdf)
    Context-Aware Grasp Generation in Cluttered Scenes
  • 28.
    Hoang, Dinh-Cuong
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks for Warehouse Robots (2019). In: 2019 European Conference on Mobile Robots, ECMR 2019: Proceedings, IEEE, 2019, article id 152970. Conference paper (Refereed)
    Abstract [en]

    We present a system for accurate 3D instance-aware semantic reconstruction and 6D pose estimation, using an RGB-D camera. Our framework couples convolutional neural networks (CNNs) and a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, to achieve both high-quality semantic reconstruction as well as robust 6D pose estimation for relevant objects. The method presented in this paper extends a high-quality instance-aware semantic 3D mapping system from previous work [1] by adding a 6D object pose estimator. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach explores performing pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system is capable of producing high-quality object-aware semantic reconstructions of room-sized environments, as well as accurately detecting objects and their 6D poses. The developed method has been verified through experimental validation on the YCB-Video dataset and a newly collected warehouse object dataset. Experimental results confirmed that the proposed system achieves improvements over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.

    Download full text (pdf)
    Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks for Warehouse Robots
  • 29.
    Iannotta, Marco
    et al.
    Örebro University, School of Science and Technology.
    Dominguez, David Caceres
    Örebro University, School of Science and Technology.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Heterogeneous Full-body Control of a Mobile Manipulator with Behavior Trees (2022). In: IROS 2022 Workshop on Mobile Manipulation and Embodied Intelligence (MOMA): Challenges and Opportunities, 2022. Conference paper (Refereed)
    Abstract [en]

    Integrating the heterogeneous controllers of a complex mechanical system, such as a mobile manipulator, within the same structure and in a modular way is still challenging. In this work we extend our framework based on Behavior Trees for the control of a redundant mechanical system to the problem of commanding more complex systems that involve multiple low-level controllers. This allows the integrated systems to achieve non-trivial goals that require coordination among the sub-systems.

    Download full text (pdf)
    Heterogeneous Full-body Control of a Mobile Manipulator with Behavior Trees
  • 30.
    Ivan, Jean-Paul A.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Stork, Johannes A.
    Örebro University, School of Science and Technology.
    Online Distance Field Priors for Gaussian Process Implicit Surfaces (2022). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 7, no. 4, pp. 8996-9003. Article in journal (Refereed)
    Abstract [en]

    Gaussian process (GP) implicit surface models provide environment and object representations which elegantly address noise and uncertainty while remaining sufficiently flexible to capture complex geometry. However, GP models quickly become intractable as the size of the observation set grows, a trait which is difficult to reconcile with the rate at which modern range sensors produce data. Furthermore, naive applications of GPs to implicit surface models allocate model resources uniformly, thus using precious resources to capture simple geometry. In contrast to prior work addressing these challenges through model sparsification, spatial partitioning, or ad-hoc filtering, we propose introducing model bias online through the GP's mean function. We achieve more accurate distance fields using smaller models by creating a distance field prior from features which are easy to extract and have analytic distance fields. In particular, we demonstrate this approach using linear features. We show that the proposed distance field halves model size in a 2D mapping task using data from a SICK S300 sensor. When applied to a single 3D scene from the TUM RGB-D SLAM dataset, we achieve a fivefold reduction in model size. Our proposed prior results in more accurate GP implicit surfaces, while allowing existing models to function in larger environments or with larger spatial partitions due to reduced model size.

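    The abstract above describes using the analytic distance field of an extracted line feature as the GP's mean function, so the GP only has to model residual geometry. The 2D sketch below is an assumption-laden illustration of that structure (segment feature, RBF kernel, plain GP regression), not the paper's implementation.

        import numpy as np

        def segment_distance(P, a, b):
            """Unsigned distances from 2D points P (Nx2) to the segment a-b (the prior feature)."""
            ab = b - a
            t = np.clip(((P - a) @ ab) / (ab @ ab), 0.0, 1.0)
            return np.linalg.norm(P - (a + t[:, None] * ab), axis=1)

        def gp_posterior_mean(X_train, y_train, X_query, prior_mean, length=0.5, noise=1e-2):
            """Standard GP regression with a non-zero prior mean; the RBF kernel is an illustrative choice."""
            def k(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-0.5 * d2 / length**2)
            K = k(X_train, X_train) + noise * np.eye(len(X_train))
            resid = y_train - prior_mean(X_train)        # the GP models deviation from the prior field
            return prior_mean(X_query) + k(X_query, X_train) @ np.linalg.solve(K, resid)
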
  • 31.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Bonilla, Manuel
    Faculty of Engineering, Interdepart. Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy.
    Tincani, Vinicio
    Faculty of Engineering, Interdepart. Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy.
    Vaskevicius, Narunas
    Faculty of Engineering, Interdepart. Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy.
    Fantoni, Gualtiero
    Faculty of Engineering, Interdepart. Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy.
    Birk, Andreas
    Faculty of Engineering, Interdepart. Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Bicchi, Antonio
    Faculty of Engineering, Interdepart. Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy.
    Improving Grasp Robustness via In-Hand Manipulation with Active Surfaces (2014). In: Workshop on Autonomous Grasping and Manipulation: An Open Challenge, 2014. Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 32.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Bonilla, Manuel
    Interdepart. Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    Tincani, Vinicio
    Interdepart. Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    Vaskevicius, Narunas
    Robotics Group, School of Engineering and Science, Jacobs University Bremen, Bremen, Germany.
    Fantoni, Gualtiero
    Interdepart. Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    Birk, Andreas
    Robotics Group, School of Engineering and Science, Jacobs University Bremen, Bremen, Germany.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bicchi, Antonio
    Interdepart. Research Center “E. Piaggio”, University of Pisa, Pisa, Italy.
    Velvet fingers: grasp planning and execution for an underactuated gripper with active surfaces (2014). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2014, pp. 3669-3675. Conference paper (Refereed)
    Abstract [en]

    In this work we tackle the problem of planning grasps for an underactuated gripper which enable it to retrieve target objects from a cluttered environment. Furthermore, we investigate how additional manipulation capabilities of the gripping device, provided by active surfaces on the inside of the fingers, can lead to performance improvements in the grasp execution process. To this end, we employ a simple strategy in which the target object is ‘pulled in’ towards the palm during grasping, which results in firm enveloping grasps. We show the effectiveness of the suggested methods by means of experiments conducted in a real-world scenario.

  • 33.
    Krug, Robert
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Grasp Envelopes for Constraint-based Robot Motion Planning and Control (2015). In: Robotics: Science and Systems Conference: Workshop on Bridging the Gap between Data-driven and Analytical Physics-based Grasping and Manipulation, 2015. Conference paper (Refereed)
    Abstract [en]

    We suggest a grasp representation in the form of a set of enveloping spatial constraints. Our representation transforms the grasp synthesis problem (i.e., the question of where to position the grasping device) from finding a suitable discrete manipulator wrist pose to finding a suitable pose manifold. The corresponding motion planning and execution problem is also relaxed: instead of transitioning the wrist to a discrete pose, it is enough to move it anywhere within the grasp envelope, which allows kinematic redundancy to be exploited.
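    The "set of enveloping spatial constraints" can be pictured as a simple membership test on candidate wrist poses. The sketch below uses a hypothetical envelope made of an axis-aligned position box and a cone constraint on the approach axis; these particular constraint types are assumptions by the editor, not taken from the paper.

        import numpy as np

        def in_grasp_envelope(position, approach_axis,
                              box_min, box_max,
                              desired_approach, max_angle_rad):
            # True if the wrist pose satisfies all envelope constraints:
            # position inside an axis-aligned box, approach axis inside a cone.
            inside_box = np.all(position >= box_min) and np.all(position <= box_max)
            cos_angle = float(np.dot(approach_axis, desired_approach))
            inside_cone = cos_angle >= np.cos(max_angle_rad)
            return inside_box and inside_cone

        # Any pose passing the check is an acceptable grasp pose: the planner
        # may stop as soon as the wrist enters this manifold instead of
        # driving it to one discrete target pose.
        print(in_grasp_envelope(np.array([0.1, 0.0, 0.25]),
                                np.array([0.0, 0.0, -1.0]),
                                np.array([-0.2, -0.2, 0.2]), np.array([0.2, 0.2, 0.4]),
                                np.array([0.0, 0.0, -1.0]), np.deg2rad(20)))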

  • 34.
    Krug, Robert
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Tincani, Vinicio
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mosberger, Rafael
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fantoni, Gualtiero
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Bicchi, Antonio
    Interdepart. Research Center “E. Piaggio”; University of Pisa, Pisa, Italy.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    On Using Optimization-based Control instead of Path-Planning for Robot Grasp Motion Generation (2015). In: IEEE International Conference on Robotics and Automation (ICRA) - Workshop on Robotic Hands, Grasping, and Manipulation, 2015. Conference paper (Refereed)
  • 35.
    Krug, Robert
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mosberger, Rafael
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Next Step in Robot Commissioning: Autonomous Picking and Palletizing (2016). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no. 1, pp. 546-553. Article in journal (Refereed)
    Abstract [en]

    So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on the fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.
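    The redundancy in gripper pose placement mentioned above can be pictured as interval constraints on individual pose coordinates: a reactive controller only needs to correct coordinates that violate their interval. The sketch below is a simplified, position-only stand-in chosen by the editor, not the prioritized controller from the letter.

        import numpy as np

        def reactive_step(position, lower, upper, gain=1.0, dt=0.05):
            # One control step that pushes only the violated coordinates back
            # towards their allowed interval; coordinates already inside the
            # interval are left free (this is where redundancy is exploited).
            below = np.minimum(position - lower, 0.0)   # negative where under the lower bound
            above = np.maximum(position - upper, 0.0)   # positive where over the upper bound
            error = below + above                       # zero inside the interval
            return position + dt * (-gain * error)

        # Hypothetical target region for the gripper position (assumption).
        lower, upper = np.array([0.3, -0.1, 0.2]), np.array([0.5, 0.1, 0.3])
        p = np.array([0.8, 0.0, 0.1])
        for _ in range(100):
            p = reactive_step(p, lower, upper)
        print(p)   # converges into the box; the y coordinate is left untouched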

  • 36.
    Lundell, Jens
    et al.
    Intelligent Robotics Group, Aalto University, Helsinki, Finland.
    Krug, Robert
    Royal Institute of Technology, Stockholm, Sweden.
    Schaffernicht, Erik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kyrki, Ville
    Intelligent Robotics Group, Aalto University, Helsinki, Finland.
    Safe-To-Explore State Spaces: Ensuring Safe Exploration in Policy Search with Hierarchical Task Optimization (2018). In: IEEE-RAS Conference on Humanoid Robots / [ed] Asfour, T., IEEE, 2018, pp. 132-138. Conference paper (Refereed)
    Abstract [en]

    Policy search reinforcement learning allows robots to acquire skills by themselves. However, the learning procedure is inherently unsafe, as the robot has no a priori way to predict the consequences of the exploratory actions it takes. Therefore, exploration can lead to collisions with the potential to harm the robot and/or the environment. In this work we address the safety aspect by constraining exploration to safe-to-explore state spaces. These are formed by decomposing target skills (e.g., grasping) into higher ranked sub-tasks (e.g., collision avoidance, joint limit avoidance) and lower ranked movement tasks (e.g., reaching). Sub-tasks are defined as concurrent controllers (policies) in different operational spaces together with associated Jacobians representing their joint-space mapping. Safety is ensured by only learning policies corresponding to lower ranked sub-tasks in the redundant null space of higher ranked ones. As a side benefit, learning in sub-manifolds of the state space also facilitates sample efficiency. Reaching skills performed in simulation and grasping skills performed on a real robot validate the usefulness of the proposed approach.
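    The safety mechanism, learning only in the redundant null space of higher ranked tasks, can be sketched as projecting exploratory joint-velocity noise through the null-space projector of the high-priority task Jacobian. The Jacobian, dimensions, and scaling below are invented for illustration.

        import numpy as np

        def nullspace_projector(J):
            # P = I - pinv(J) J projects joint velocities into the null space of task J.
            return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

        rng = np.random.default_rng(0)
        J_high = rng.standard_normal((2, 7))   # hypothetical high-priority task Jacobian (e.g. collision avoidance)
        P = nullspace_projector(J_high)

        dq_high = np.linalg.pinv(J_high) @ np.array([0.0, 0.0])   # high-priority task command
        dq_explore = 0.1 * rng.standard_normal(7)                 # raw exploratory action from the policy
        dq = dq_high + P @ dq_explore                             # exploration cannot disturb the high-priority task

        # True: the projected exploration has no effect in the high-priority task space.
        print(np.allclose(J_high @ (P @ dq_explore), 0.0))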

  • 37.
    Magnusson, Martin
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Vaskevicius, Narunas
    Deptartment of EECS, Jacobs University, Bremen, Germany.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pathak, Kaustubh
    Deptartment of EECS, Jacobs University, Bremen, Germany.
    Birk, Andreas
    Deptartment of EECS, Jacobs University, Bremen, Germany.
    Beyond points: Evaluating recent 3D scan-matching algorithms (2015). In: 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2015, Vol. 2015 June, pp. 3631-3637. Conference paper (Refereed)
    Abstract [en]

    Given that 3D scan matching is such a central part of the perception pipeline for robots, thorough and large-scale investigations of scan matching performance are still surprisingly few. A crucial part of the scientific method is to perform experiments that can be replicated by other researchers in order to compare different results. In light of this fact, this paper presents a thorough comparison of 3D scan registration algorithms using a recently published benchmark protocol which makes use of a publicly available challenging data set that covers a wide range of environments. In particular, we evaluate two types of recent 3D registration algorithms: one local and one global. Both approaches take local surface structure into account, rather than matching individual points. After well over 100 000 individual tests, we conclude that algorithms using the normal distributions transform (NDT) provide accurate results compared to a modern implementation of the iterative closest point (ICP) method, when faced with scan data that has little overlap and weak geometric structure. We also demonstrate that the minimally uncertain maximum consensus (MUMC) algorithm provides accurate results in structured environments without needing an initial guess, and that it provides useful measures to detect whether it has succeeded or not. We also propose two amendments to the experimental protocol, in order to provide more valuable results in future implementations.
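    The comparison above ultimately reduces to measuring how far each estimated rigid transform is from ground truth. A minimal version of the usual translational and rotational error measures is sketched below; these are generic formulas, not necessarily the exact metrics of the benchmark protocol.

        import numpy as np

        def registration_errors(T_est, T_gt):
            # Translational (m) and rotational (rad) error between two 4x4 poses.
            T_err = np.linalg.inv(T_gt) @ T_est
            t_err = np.linalg.norm(T_err[:3, 3])
            # Rotation angle of the residual rotation matrix.
            cos_angle = (np.trace(T_err[:3, :3]) - 1.0) / 2.0
            r_err = np.arccos(np.clip(cos_angle, -1.0, 1.0))
            return t_err, r_err

        # Toy example: estimate off by 5 cm and about 5 degrees around z.
        theta = np.deg2rad(5)
        T_gt, T_est = np.eye(4), np.eye(4)
        T_est[:3, :3] = np.array([[np.cos(theta), -np.sin(theta), 0],
                                  [np.sin(theta),  np.cos(theta), 0],
                                  [0, 0, 1]])
        T_est[:3, 3] = [0.05, 0.0, 0.0]
        print(registration_errors(T_est, T_gt))   # approx. (0.05, 0.087)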

  • 38.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Application Based 3D Sensor Evaluation: A Case Study in 3D Object Pose Estimation for Automated Unloading of Containers (2013). In: Proceedings of the European Conference on Mobile Robots (ECMR), IEEE conference proceedings, 2013, pp. 313-318. Conference paper (Other academic)
    Abstract [en]

    A fundamental task in the design process of a complex system that requires 3D visual perception is the choice of suitable 3D range sensors. Identifying the utility of 3D range sensors in an industrial application solely based on an evaluation of their distance accuracy and the noise level may lead to an inappropriate selection. To assess the actual effect on the performance of the system as a whole requires a more involved analysis. In this paper, we examine the problem of selecting a set of 3D range sensors when designing autonomous systems for specific industrial applications in a holistic manner. As an instance of this problem we present a case study with an experimental evaluation of the utility of four 3D range sensors for object pose estimation in the process of automation of unloading containers.

  • 39.
    Nevatia, Yashodhan
    et al.
    Univ Bremen, Dept EECS, Robot Lab, D-28725 Bremen, Germany.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Rathnam, Ravi
    Univ Bremen, Dept EECS, Robot Lab, D-28725 Bremen, Germany.
    Pfingsthorn, Max
    Markov, Stefan
    Ambrus, Rares
    Birk, Andreas
    Augmented Autonomy: Improving human-robot team performance in Urban Search and Rescue (2008). In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, vols 1-3, conference proceedings, New York: IEEE Robotics and Automation Society, 2008, pp. 2103-2108. Conference paper (Refereed)
    Abstract [en]

    Exploration of unknown environments remains one of the fundamental problems of mobile robotics. It is also a prime example of a task that can benefit significantly from multi-robot teams. We present an integrated system for semi-autonomous cooperative exploration, augmented by an intuitive user interface for efficient human supervision and control. In this preliminary study we demonstrate the effectiveness of the system as a whole and the intuitive interface in particular. Congruent with previous findings, results confirm that having a human in the loop improves task performance, especially with larger numbers of robots. Specific to our interface, we find that even untrained operators can efficiently manage a decently sized team of robots.

  • 40.
    Pfingsthorn, Max
    et al.
    Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany.
    Nevatia, Yashodhan
    Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Rathnam, Ravi
    Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany.
    Markov, Stefan
    Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany.
    Birk, Andreas
    Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany.
    Towards Cooperative and Decentralized Mapping in the Jacobs Virtual Rescue Team (2009). In: RoboCup 2008: Robot Soccer World Cup XII, Vol. 5399 / [ed] Iocchi, Luca; Matsubara, Hitoshi; Weitzenfeld, Alfredo; Zhou, Changjiu, Springer Berlin / Heidelberg, 2009, Vol. 5399, pp. 225-234. Chapter in book (Other academic)
    Abstract [en]

    The task of mapping and exploring an unknown environment remains one of the fundamental problems of mobile robotics. It is a task that can intuitively benefit significantly from a multi-robot approach. In this paper, we describe the design of the multi-robot mapping system used by the Jacobs Virtual Rescue team. The team competed in the 2007 World Cup and won second place. It is shown how the recently proposed pose graph map representation not only facilitates map merging but also allows map updates to be transmitted efficiently.
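    A pose graph map is essentially a set of robot poses with attached sensor data and relative-pose constraints between them, which is why merging maps and transmitting updates is cheap: only new nodes and edges need to be exchanged. The tiny data structure below is a generic illustration by the editor, not the representation used by the team.

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            node_id: int
            pose: tuple          # (x, y, theta) estimate
            scan: object = None  # attached range scan / local map patch

        @dataclass
        class Edge:
            from_id: int
            to_id: int
            relative_pose: tuple  # measured transform between the two nodes

        @dataclass
        class PoseGraph:
            nodes: dict = field(default_factory=dict)
            edges: list = field(default_factory=list)

            def add_node(self, node):
                self.nodes[node.node_id] = node

            def add_edge(self, edge):
                self.edges.append(edge)

            def delta_since(self, known_ids):
                # Incremental update: only the nodes a teammate has not seen yet,
                # plus the edges that attach them.
                new_nodes = [n for i, n in self.nodes.items() if i not in known_ids]
                new_edges = [e for e in self.edges if e.to_id not in known_ids]
                return new_nodes, new_edges

        g = PoseGraph()
        g.add_node(Node(0, (0.0, 0.0, 0.0)))
        g.add_node(Node(1, (1.0, 0.0, 0.0)))
        g.add_edge(Edge(0, 1, (1.0, 0.0, 0.0)))
        print(g.delta_since({0}))   # only node 1 and its edge need to be transmitted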

  • 41.
    Rietz, Finn
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik. Department of Informatics, University of Hamburg, Hamburg, Germany.
    Magg, Sven
    Hamburger Informatik Technologie-Center, Universität Hamburg, Hamburg, Germany.
    Heintz, Fredrik
    Department of Computer and Information Science, Linköping University, Linköping, Sweden.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Wermter, Stefan
    Department of Informatics, University of Hamburg, Hamburg, Germany.
    Stork, Johannes A
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Hierarchical goals contextualize local reward decomposition explanations (2023). In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 35, no. 23, pp. 16693-16704. Article in journal (Refereed)
    Abstract [en]

    One-step reinforcement learning explanation methods account for individual actions but fail to consider the agent's future behavior, which can make their interpretation ambiguous. We propose to address this limitation by providing hierarchical goals as context for one-step explanations. By considering the current hierarchical goal as a context, one-step explanations can be interpreted with higher certainty, as the agent's future behavior is more predictable. We combine reward decomposition with hierarchical reinforcement learning into a novel explainable reinforcement learning framework, which yields more interpretable, goal-contextualized one-step explanations. With a qualitative analysis of one-step reward decomposition explanations, we first show that their interpretability is indeed limited in scenarios with multiple, different optimal policies, a characteristic shared by other one-step explanation methods. Then, we show that our framework retains high interpretability in such cases, as the hierarchical goal can be considered as context for the explanation. To the best of our knowledge, our work is the first to investigate hierarchical goals not as an explanation directly but as additional context for one-step reinforcement learning explanations.
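    A reward decomposition explanation reports how much each reward component's Q-value contributed to the chosen action; the point of the paper is that this report should be read relative to the currently active hierarchical goal. The sketch below fakes the Q-values with a lookup table to show only the bookkeeping, not any learning; goals, actions, and numbers are invented.

        # Per-goal, per-component Q-values for two actions in some state.
        q_components = {
            "reach_door": {"left":  {"progress": 0.8, "collision": -0.1},
                           "right": {"progress": 0.2, "collision": -0.0}},
            "pick_item":  {"left":  {"progress": 0.1, "collision": -0.1},
                           "right": {"progress": 0.9, "collision": -0.3}},
        }

        def explain(goal, q_components):
            # One-step explanation contextualized by the active hierarchical goal.
            q_goal = q_components[goal]
            totals = {a: sum(c.values()) for a, c in q_goal.items()}
            action = max(totals, key=totals.get)
            return action, q_goal[action]

        for goal in q_components:
            action, decomposition = explain(goal, q_components)
            print(f"goal={goal}: action={action}, reward components={decomposition}")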

  • 42.
    Rietz, Finn
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schaffernicht, Erik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stork, Johannes Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Towards Task-Prioritized Policy Composition (2022). Conference paper (Refereed)
    Abstract [en]

    Combining learned policies in a prioritized, ordered manner is desirable because it allows for modular design and facilitates data reuse through knowledge transfer. In control theory, prioritized composition is realized by null-space control, where low-priority control actions are projected into the null-space of high-priority control actions. Such a method is currently unavailable for Reinforcement Learning. We propose a novel, task-prioritized composition framework for Reinforcement Learning, which involves a novel concept: the indifferent space of Reinforcement Learning policies. Our framework has the potential to facilitate knowledge transfer and modular design while greatly increasing data efficiency and data reuse for Reinforcement Learning agents. Further, our approach can ensure high-priority constraint satisfaction, which makes it promising for learning in safety-critical domains like robotics. Unlike null-space control, our approach allows learning globally optimal policies for the compound task by online learning in the indifferent space of higher-level policies after initial compound policy construction.
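    The composition idea can be pictured as follows: the high-priority policy defines a set of actions it is (nearly) indifferent between, and the low-priority policy may only choose within that set. The threshold-based construction below is an editorial simplification, not the framework proposed in the paper; the Q-values are invented.

        import numpy as np

        def compose(q_high, q_low, epsilon=0.05):
            # Pick the action that maximizes the low-priority value among those
            # within epsilon of the high-priority optimum (the "indifference set"
            # of the high-priority policy).
            indifferent = np.flatnonzero(q_high >= q_high.max() - epsilon)
            return indifferent[np.argmax(q_low[indifferent])]

        q_high = np.array([1.00, 0.98, 0.40, 0.99])  # e.g. collision-avoidance values
        q_low = np.array([0.10, 0.90, 0.95, 0.20])   # e.g. task-progress values
        print(compose(q_high, q_low))                # -> 1: safe and good for the task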

  • 43.
    Saarinen, Jari
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Ala-Luhtala, Juha
    Aalto University of Technology, Aalto, Finland.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Normal distributions transform occupancy maps: application to large-scale online 3D mapping (2013). In: IEEE International Conference on Robotics and Automation, New York: IEEE conference proceedings, 2013, pp. 2233-2238. Conference paper (Refereed)
    Abstract [en]

    Autonomous vehicles operating in real-world industrial environments have to overcome numerous challenges, chief among which is the creation and maintenance of consistent 3D world models. This paper proposes to address the challenges of online real-world mapping by building upon previous work on compact spatial representation and formulating a novel 3D mapping approach: the Normal Distributions Transform Occupancy Map (NDT-OM). The presented algorithm enables accurate real-time 3D mapping in large-scale dynamic environments by employing a recursive update strategy. In addition, the proposed approach can seamlessly provide maps at multiple resolutions, allowing for fast utilization in high-level functions such as localization or path planning. Compared to previous approaches that use the NDT representation, the proposed NDT-OM introduces an exact and efficient recursive update formulation and models the full occupancy of the map.
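    At the heart of the NDT-OM is a per-cell Gaussian that can be updated recursively from the count, mean and scatter of newly observed points, without storing old points. A generic incremental Gaussian update of that kind is sketched below; it omits the occupancy handling and is not the exact NDT-OM formulation.

        import numpy as np

        def update_cell(n_old, mu_old, cov_old, points_new):
            # Fold a new batch of points into a cell's Gaussian recursively.
            m = len(points_new)
            mu_new = points_new.mean(axis=0)
            cov_new = np.cov(points_new.T, bias=True)
            n = n_old + m
            mu = (n_old * mu_old + m * mu_new) / n
            # Combine scatter matrices, then correct for the shift of the means.
            cov = (n_old * (cov_old + np.outer(mu_old - mu, mu_old - mu)) +
                   m * (cov_new + np.outer(mu_new - mu, mu_new - mu))) / n
            return n, mu, cov

        rng = np.random.default_rng(1)
        batch1 = rng.standard_normal((50, 3))
        batch2 = rng.standard_normal((80, 3)) + 1.0
        n, mu, cov = update_cell(0, np.zeros(3), np.zeros((3, 3)), batch1)
        n, mu, cov = update_cell(n, mu, cov, batch2)
        # Matches the statistics computed from all points at once.
        print(np.allclose(mu, np.vstack([batch1, batch2]).mean(axis=0)))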

  • 44.
    Saarinen, Jari
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    3D normal distributions transform occupancy maps: an efficient representation for mapping in dynamic environments (2013). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 32, no. 14, pp. 1627-1644. Article in journal (Refereed)
    Abstract [en]

    In order to enable long-term operation of autonomous vehicles in industrial environments, numerous challenges need to be addressed. A basic requirement for many applications is the creation and maintenance of consistent 3D world models. This article proposes a novel 3D spatial representation for online real-world mapping, building upon two known representations: normal distributions transform (NDT) maps and occupancy grid maps. The proposed normal distributions transform occupancy map (NDT-OM) combines the advantages of both representations: the compactness of NDT maps and the robustness of occupancy maps. One key contribution in this article is that we formulate exact recursive updates for NDT-OMs. We show that the recursive update equations provide natural support for multi-resolution maps. Next, we describe a modification of the recursive update equations that allows adaptation in dynamic environments. As a second key contribution, we introduce NDT-OMs and formulate the occupancy update equations that allow consistent maps to be built in dynamic environments. The update of the occupancy values is based on an efficient probabilistic sensor model that is specially formulated for NDT-OMs. In several experiments, with a total of 17 hours of data from a milk factory, we demonstrate that NDT-OMs enable real-time performance in large-scale, long-term industrial setups.
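    The occupancy side of the NDT-OM can be illustrated separately from the Gaussian statistics: each cell keeps a log-odds value that increases when a beam ends in the cell and decreases when a beam passes through it. The increments, clamping bounds and ray model below are placeholders chosen by the editor, not the sensor model derived in the article.

        import numpy as np

        L_HIT, L_MISS = 0.85, -0.4   # placeholder log-odds increments
        L_MIN, L_MAX = -2.0, 3.5     # clamping keeps cells able to change state later

        def update_occupancy(log_odds, hit_cell, traversed_cells):
            # One range measurement: the endpoint cell becomes more occupied,
            # cells the beam passed through become more free.
            log_odds[hit_cell] = np.clip(log_odds.get(hit_cell, 0.0) + L_HIT, L_MIN, L_MAX)
            for c in traversed_cells:
                log_odds[c] = np.clip(log_odds.get(c, 0.0) + L_MISS, L_MIN, L_MAX)
            return log_odds

        def occupancy_probability(l):
            return 1.0 - 1.0 / (1.0 + np.exp(l))

        cells = update_occupancy({}, hit_cell=(4, 2, 0),
                                 traversed_cells=[(1, 2, 0), (2, 2, 0), (3, 2, 0)])
        print({c: round(occupancy_probability(l), 2) for c, l in cells.items()})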

  • 45.
    Saarinen, Jari
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Normal distributions transform Monte Carlo localization (NDT-MCL) (2013). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 382-389. Conference paper (Refereed)
  • 46.
    Saarinen, Jari
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fast 3D mapping in highly dynamic environments using normal distributions transform occupancy maps (2013). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 4694-4701. Conference paper (Refereed)
  • 47.
    Stork, Johannes Andreas
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Ensemble of Sparse Gaussian Process Experts for Implicit Surface Mapping with Streaming Data (2020). In: IEEE International Conference on Robotics and Automation, IEEE, 2020, pp. 10758-10764, article id 9196620. Conference paper (Refereed)
    Abstract [en]

    Creating maps is an essential task in robotics and provides the basis for effective planning and navigation. In this paper, we learn a compact and continuous implicit surface map of an environment from a stream of range data with known poses. For this, we create and incrementally adjust an ensemble of approximate Gaussian process (GP) experts which are each responsible for a different part of the map. Instead of inserting all arriving data into the GP models, we greedily trade off between model complexity and prediction error. Our algorithm therefore uses less resources on areas with few geometric features and more where the environment is rich in variety. We evaluate our approach on synthetic and real-world data sets and analyze sensitivity to parameters and measurement noise. The results show that we can learn compact and accurate implicit surface models under different conditions, with a performance …
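    The greedy trade-off described above can be reduced to a simple rule: predict the incoming sample with the current expert and only insert it if the predictive error exceeds a tolerance. The toy expert below uses exact inference with an RBF kernel and is only meant to show that selection loop, not the sparse approximation from the paper; all names and constants are the editor's assumptions.

        import numpy as np

        class TinyGPExpert:
            def __init__(self, ell=0.5, noise=1e-2):
                self.ell, self.noise = ell, noise
                self.X, self.y = [], []

            def _k(self, A, B):
                d2 = ((np.asarray(A)[:, None, :] - np.asarray(B)[None, :, :]) ** 2).sum(-1)
                return np.exp(-0.5 * d2 / self.ell**2)

            def predict(self, x):
                if not self.X:
                    return 0.0
                K = self._k(self.X, self.X) + self.noise * np.eye(len(self.X))
                return float(self._k([x], self.X) @ np.linalg.solve(K, np.array(self.y)))

            def maybe_insert(self, x, y, tol=0.05):
                # Greedy streaming rule: only keep points the model cannot yet explain.
                if abs(self.predict(x) - y) > tol:
                    self.X.append(np.asarray(x, dtype=float))
                    self.y.append(float(y))
                    return True
                return False

        expert = TinyGPExpert()
        stream = [((0.0, 0.0), 0.1), ((0.01, 0.0), 0.11), ((1.0, 1.0), 0.8)]
        print([expert.maybe_insert(x, y) for x, y in stream])   # the redundant second sample is skipped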

  • 48.
    Stoyanov, Todor Dimitrov
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Reliable autonomous navigation in semi-structured environments using the three-dimensional normal distributions transform (3D-NDT) (2012). Doctoral thesis, comprehensive summary (Other academic)
  • 49.
    Stoyanov, Todor
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Krug, Robert
    Robotics, Learning and Perception lab, Royal Institute of Technology, Stockholm, Sweden.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Sun, Da
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, pp. 6640-6645. Conference paper (Refereed)
    Abstract [en]

    This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint limit and obstacle avoidance or automatic gripper alignment. Thereby some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; and the second is a user study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed approach of assistive control was found to result in higher grasp success rates and shorter trajectories than achieved through manual control, without incurring additional cognitive load to the operator.

  • 50.
    Stoyanov, Todor
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Krug, Robert
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Muthusamy, Rajkumar
    Aalto University, Esbo, Finland.
    Kyrki, Ville
    Aalto University, Esbo, Finland.
    Grasp Envelopes: Extracting Constraints on Gripper Postures from Online Reconstructed 3D Models (2016). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), New York: Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 885-892. Conference paper (Refereed)
    Abstract [en]

    Grasping systems that build upon meticulously planned hand postures rely on precise knowledge of object geometry, mass and frictional properties, assumptions which are often violated in practice. In this work, we propose an alternative solution to the problem of grasp acquisition in simple autonomous pick and place scenarios, by utilizing the concept of grasp envelopes: sets of constraints on gripper postures. We propose a fast method for extracting grasp envelopes for objects that fit within a known shape category, placed in an unknown environment. Our approach is based on grasp envelope primitives, which encode knowledge of human grasping strategies. We use environment models, reconstructed from noisy sensor observations, to refine the grasp envelope primitives and extract bounded envelopes of collision-free gripper postures. Finally, we evaluate the envelope extraction procedure both in a stand-alone fashion and as an integrated component of an autonomous picking system.
