1 - 28 of 28
  • 1.
    Antonova, Rika
    et al.
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kokic, Mia
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation (2018). In: Proceedings of Machine Learning Research: Conference on Robot Learning 2018, PMLR, 2018, Vol. 87, pp. 641-650. Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
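
    The Bernoulli alternation described above can be read as drawing, at every optimization step, which kernel the Gaussian-process surrogate uses. Below is a minimal sketch of that control flow under toy assumptions: a 1-D stand-in objective, a plain RBF kernel as the uninformed choice, and a placeholder informed_kernel standing in for the simulation-trained one. It is not the authors' implementation.

```python
# Hedged sketch: alternating between an "informed" kernel and an uninformed RBF
# kernel inside a Bayesian-optimization loop. Objective, kernels and parameters
# are illustrative assumptions, not the paper's code.
import numpy as np

def rbf_kernel(A, B, ell=0.3):
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(d**2, axis=-1) / ell**2)

def informed_kernel(A, B):
    # Placeholder for a kernel shaped by simulated grasp scores (assumption).
    return rbf_kernel(A, B, ell=0.1)

def gp_posterior(kernel, X, y, Xq, noise=1e-4):
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = kernel(Xq, Xq).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def objective(x):                             # stand-in for a real-robot grasp outcome
    return float(np.sin(3 * x[0]) * np.exp(-x[0]))

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(3, 1))
y = np.array([objective(x) for x in X])
Xq = np.linspace(0, 2, 200)[:, None]
p_informed = 0.5                              # Bernoulli alternation probability
for _ in range(20):
    kernel = informed_kernel if rng.random() < p_informed else rbf_kernel
    mu, var = gp_posterior(kernel, X, y, Xq)
    ucb = mu + 2.0 * np.sqrt(var)             # simple UCB acquisition
    x_next = Xq[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
print("best value found:", y.max())
```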

  • 2.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2018). Manuscript (preprint) (Other academic)
  • 3.
    Bekiroglu, Yasemin
    et al.
    School of Mechanical Engineering, University of Birmingham, Birmingham, UK.
    Damianou, Andreas
    Department of Computer Science, University of Sheffield, Sheffield, UK.
    Detry, Renaud
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Stork, Johannes Andreas
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Kragic, Danica
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Ek, Carl Henrik
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Probabilistic consolidation of grasp experience (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, pp. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 4. Hang, Kaiyu
    et al.
    Li, Miao
    Stork, Johannes Andreas
    Bekiroglu, Yasemin
    Billard, Aude
    Kragic, Danica
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps (2014). Conference paper (Other academic)
  • 5.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Li, Miao
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bekiroglu, Yasemin
    Department of Mechanical Engineering, School of Engineering, University of Birmingham, Birmingham, UK.
    Pokorny, Florian T.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Billard, Aude
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Kragic, Danica
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space: A unified framework for grasp planning and in-hand grasp adaptation (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 4, pp. 960-972. Journal article (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile, and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage, and external disturbances. For this purpose we introduce the Hierarchical Fingertip Space as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

  • 6.
    Hang, Kaiyu
    et al.
    Department of Mechanical Engineering and Material Science, Yale University, New Haven, CT, USA.
    Lyu, Ximin
    Hong Kong University of Science and Technology, Hong Kong, China.
    Song, Haoran
    Hong Kong University of Science and Technology, Hong Kong, China.
    Stork, Johannes Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik. RPL, KTH Royal Institute of Technology, Stockholm, Sweden.
    Dollar, Aaron
    Department of Mechanical Engineering and Material Science, Yale University, New Haven, CT, USA.
    Kragic, Danica
    RPL, KTH Royal Institute of Technology, Stockholm, Sweden.
    Zhang, Fu
    The University of Hong Kong, Hong Kong, China.
    Perching and resting: A paradigm for UAV maneuvering with modularized landing gears (2019). In: Science Robotics, E-ISSN 2470-9476, Vol. 4, no. 28, article id eaau6637. Journal article (Refereed)
    Abstract [en]

    Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many perching strategies neglect the UAV's mission such that saving battery power interrupts the mission. We suggest enabling UAVs with the capability of making and stabilizing contacts with the environment, which will allow the UAV to consume less energy while retaining its altitude, in addition to the perching capability that has been proposed before. This new capability is termed "resting." For this, we propose a modularized and actuated landing gear framework that allows stabilizing the UAV on a wide range of different structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and edges or corners of buildings. We show that the design is effective in reducing power consumption, promotes increased pose stability, and preserves large vision ranges while perching or resting at heights. In addition, we discuss the potential applications facilitated by our design, as well as the potential issues to be addressed for deployment in practice.

  • 7.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space for multi-fingered precision grasping (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Press, 2014, pp. 1641-1648. Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose a concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to the object point cloud data and it establishes a basis for the grasp search space. We propose a model for a hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand considering also noisy and incomplete point cloud data.

  • 8.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Combinatorial optimization for hierarchical contact-level grasping (2014). In: 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, pp. 381-388. Conference paper (Refereed)
    Abstract [en]

    We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.

  • 9.
    Hang, Kaiyu
    et al.
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pollard, Nancy S.
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    Kragic, Danica
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    A Framework For Optimal Grasp Contact Planning (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, pp. 704-711. Journal article (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping for which we define domain specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and resultant effective branching factor.
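
    One way to picture the search space described above: a state is a set of candidate contacts, the start state contains all of them, and each successor removes one contact until a concrete grasp remains. The sketch below shows only that search skeleton with a toy quality surrogate, a zero (trivially admissible) heuristic, and made-up contact points; it is a reading of the formulation, not the paper's algorithm or heuristics.

```python
# Hedged sketch: best-first search over shrinking contact sets. Quality, cost
# and heuristic are toy assumptions standing in for the paper's definitions.
import heapq, itertools
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(12, 3))           # candidate contact points (toy)
K = 3                                       # number of contacts in the final grasp
counter = itertools.count()                 # tie-breaker for the priority queue

def quality(contacts):
    # Toy surrogate for a grasp quality function: spread of the chosen contacts.
    P = points[list(contacts)]
    return float(np.linalg.norm(P - P.mean(axis=0), axis=1).sum())

def step_cost(parent, child):
    # Cost of removing one contact: quality lost by the removal (clamped at 0).
    return max(0.0, quality(parent) - quality(child))

def heuristic(state):
    return 0.0                              # trivially admissible placeholder

start = frozenset(range(len(points)))
frontier = [(heuristic(start), next(counter), 0.0, start)]
best_g = {start: 0.0}
while frontier:
    f, _, g, state = heapq.heappop(frontier)
    if len(state) == K:                     # a concrete K-contact grasp reached
        print("grasp contacts:", sorted(state), "quality:", round(quality(state), 3))
        break
    for c in state:
        child = state - {c}
        g2 = g + step_cost(state, child)
        if g2 < best_g.get(child, float("inf")):
            best_g[child] = g2
            heapq.heappush(frontier, (g2 + heuristic(child), next(counter), g2, child))
```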

  • 10.
    Haustein, Joshua A.
    et al.
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Arnekvist, Isac
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hang, Kaiyu
    GRAB Lab, Yale University, New Haven, USA.
    Kragic, Danica
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Learning Manipulation States and Actions for Efficient Non-prehensile Rearrangement Planning (2019). Manuscript (preprint) (Other academic)
  • 11.
    Kokic, Mia
    et al.
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Haustein, Joshua A.
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Affordance detection for task-specific grasping using deep learning (2017). In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), IEEE conference proceedings, 2017, pp. 91-98. Conference paper (Refereed)
    Abstract [en]

    In this paper we utilize the notion of affordances to model relations between task, object, and grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.

  • 12.
    Luber, Matthias
    et al.
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    Stork, Johannes Andreas
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    Tipaldi, Gian Diego
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    Arras, Kai O.
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    People tracking with human motion predictions from social forces (2010). In: 2010 IEEE International Conference on Robotics and Automation, Proceedings, IEEE conference proceedings, 2010, pp. 464-469. Conference paper (Refereed)
    Abstract [en]

    For many tasks in populated environments, robots need to keep track of current and future motion states of people. Most approaches to people tracking make weak assumptions on human motion such as constant velocity or acceleration. But even over a short period, human behavior is more complex and influenced by factors such as the intended goal, other people, objects in the environment, and social rules. This motivates the use of more sophisticated motion models for people tracking especially since humans frequently undergo lengthy occlusion events. In this paper, we consider computational models developed in the cognitive and social science communities that describe individual and collective pedestrian dynamics for tasks such as crowd behavior analysis. In particular, we integrate a model based on a social force concept into a multi-hypothesis target tracker. We show how the refined motion predictions translate into more informed probability distributions over hypotheses and finally into a more robust tracking behavior and better occlusion handling. In experiments in indoor and outdoor environments with data from a laser range finder, the social force model leads to more accurate tracking with up to two times fewer data association errors.
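
    A rough sketch of the kind of social-force prediction step that can replace a constant-velocity motion model inside a tracker is given below. The driving and repulsion terms follow the generic social force model; the parameter values and the toy scenario are assumptions, not the paper's calibration or tracker integration.

```python
# Hedged sketch of one social-force prediction step for a single pedestrian.
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v_desired=1.3, tau=0.5, A=2.0, B=0.3):
    # Driving force toward the goal at the preferred walking speed.
    e_goal = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    f_drive = (v_desired * e_goal - vel) / tau
    # Exponential repulsion from other pedestrians (circular specification).
    f_rep = np.zeros(2)
    for q in others:
        d = pos - q
        dist = np.linalg.norm(d) + 1e-9
        f_rep += A * np.exp(-dist / B) * (d / dist)
    acc = f_drive + f_rep
    vel_new = vel + dt * acc
    return pos + dt * vel_new, vel_new

pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
goal = np.array([5.0, 0.0])
others = [np.array([1.0, 0.2])]
for _ in range(5):                          # predict 0.5 s ahead, e.g. across an occlusion
    pos, vel = social_force_step(pos, vel, goal, others)
print("predicted position after 0.5 s:", pos)
```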

  • 13.
    Marzinotto, Alejandro
    et al.
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Rope through Loop Insertion for Robotic Knotting: A Virtual Magnetic Field Formulation (2016). Report (Other academic)
    Abstract [en]

    Inserting an end of a rope through a loop is a common and important action that is required for creating most types of knots. To perform this action, we need to pass the end of the rope through an area that is enclosed by another segment of rope. As with all knotting actions, the robot must exercise control over a semi-compliant and flexible body whose complex 3D shape is difficult to perceive and follow. Additionally, the target loop often deforms during the insertion. We address this problem by defining a virtual magnetic field through the loop's interior and use the Biot-Savart law to guide the robotic manipulator that holds the end of the rope. This approach directly defines, for any manipulator position, a motion vector that results in a path that passes through the loop. The motion vector is directly derived from the position of the loop and changes as soon as it moves or deforms. In simulation, we test the insertion action against dynamic loop deformation of different intensity. We also combine insertion with grasp and release actions, coordinated by a hybrid control system, to tie knots in simulation and with a NAO robot.
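
    The guidance idea above can be sketched numerically: discretize the loop into segments, sum Biot-Savart contributions at the gripper position, and step along the normalized field so the path threads the loop. The circular loop, step size, and start pose below are toy assumptions.

```python
# Hedged sketch: motion direction from a "virtual magnetic field" computed with
# the Biot-Savart law over a discretized loop.
import numpy as np

def biot_savart_direction(loop_pts, x):
    """Field direction at x induced by a unit 'current' along the closed loop."""
    B = np.zeros(3)
    n = len(loop_pts)
    for i in range(n):
        a, b = loop_pts[i], loop_pts[(i + 1) % n]
        dl = b - a                            # segment vector
        r = x - 0.5 * (a + b)                 # from segment midpoint to query point
        B += np.cross(dl, r) / (np.linalg.norm(r) ** 3 + 1e-9)
    return B / (np.linalg.norm(B) + 1e-12)

# Toy loop: unit circle in the x-y plane; the gripper starts below the loop plane
# and is guided up through the loop by following the field.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
loop = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
x = np.array([0.0, 0.0, -1.0])
for _ in range(40):
    x = x + 0.05 * biot_savart_direction(loop, x)
print("gripper position after following the field:", x)
```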

  • 14.
    Marzinotto, Alejandro
    et al.
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Dimarogonas, Dimos V.
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Cooperative grasping through topological object representation (2014). In: 2014 IEEE-RAS International Conference on Humanoid Robots, IEEE, 2014, pp. 685-692. Conference paper (Refereed)
    Abstract [en]

    We present a cooperative grasping approach based on a topological representation of objects. Using point cloud data we extract loops on objects suitable for generating entanglement. We use the Gauss Linking Integral to derive controllers for multi-agent systems that generate hooking grasps on such loops while minimizing the entanglement between robots. The approach copes well with noisy point cloud data, and it is computationally simple and robust. We demonstrate the method for performing object grasping and transportation, through a hooking maneuver, with two coordinated NAO robots.
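
    For reference, a discrete midpoint approximation of the Gauss Linking Integral between two closed polygonal curves, the quantity used above to reason about entanglement, can be written as below. The two linked circles are a toy example; the controllers derived from this measure in the paper are not reproduced here.

```python
# Hedged sketch: midpoint-rule approximation of the Gauss linking integral
# between two closed polygonal curves.
import numpy as np

def gauss_linking_integral(curve1, curve2):
    def segments(curve):
        nxt = np.roll(curve, -1, axis=0)
        return 0.5 * (curve + nxt), nxt - curve     # midpoints, edge vectors
    m1, d1 = segments(curve1)
    m2, d2 = segments(curve2)
    total = 0.0
    for p, dp in zip(m1, d1):
        r = p - m2                                   # vectors from curve2 midpoints to p
        norm = np.linalg.norm(r, axis=1) ** 3 + 1e-12
        total += np.sum(np.einsum('ij,ij->i', np.cross(dp, d2), r) / norm)
    return total / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
# Two circles arranged like chain links: the linking number should be close to +/-1.
c1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
c2 = np.stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print("approximate linking number:", round(gauss_linking_integral(c1, c2), 3))
```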

  • 15.
    Mitsioni, Ioanna
    et al.
    Division of Robotics, Perception and Learning (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Karayiannidis, Yiannis
    Division of Systems and Control, Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden.
    Stork, Johannes Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kragic, Danica
    Division of Robotics, Perception and Learning (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Data-Driven Model Predictive Control for Food-Cutting. Manuscript (preprint) (Other academic)
    Abstract [en]

    Modelling of contact-rich tasks is challenging and cannot be entirely solved using classical control approaches due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food-cutting, purely learning-based methods such as Reinforcement Learning require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We extend previous work that was limited to torque-controlled robots by incorporating Force/Torque sensor measurements and formulate the control problem so that it can be applied to the more common velocity-controlled robots. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally, we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.
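
    Structurally, the controller described above rolls a learned dynamics model forward over a short horizon and picks the best candidate action sequence at every step. The sketch below shows only that receding-horizon loop with random-shooting optimization; learned_dynamics is a hand-written stand-in for the trained recurrent network, and the cost, force limit, and horizons are illustrative assumptions.

```python
# Hedged sketch of a sampling-based MPC loop around a learned dynamics model.
import numpy as np

rng = np.random.default_rng(0)

def learned_dynamics(state, action):
    # Placeholder for the recurrent model mapping (state, velocity command) to
    # the next state, e.g. knife position and measured cutting force.
    pos, force = state
    pos_next = pos + 0.01 * action
    force_next = 0.9 * force + 0.5 * abs(action)     # toy force response
    return np.array([pos_next, force_next])

def mpc_action(state, horizon=10, n_samples=256, target_pos=1.0, force_limit=2.0):
    best_cost, best_first = np.inf, 0.0
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=horizon)   # candidate velocity commands
        s, cost = state.copy(), 0.0
        for a in actions:
            s = learned_dynamics(s, a)
            cost += (s[0] - target_pos) ** 2 + 10.0 * max(0.0, s[1] - force_limit) ** 2
        if cost < best_cost:
            best_cost, best_first = cost, actions[0]
    return best_first                                    # apply only the first action

state = np.array([0.0, 0.0])                 # [position, force]
for _ in range(20):                          # receding-horizon loop
    a = mpc_action(state)
    state = learned_dynamics(state, a)       # stands in for executing on the robot
print("final state [position, force]:", state)
```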

  • 16.
    Pokorny, Florian T.
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Grasping Objects with Holes: A Topological Approach (2013). In: 2013 IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2013, pp. 1100-1107. Conference paper (Refereed)
    Abstract [en]

    This work proposes a topologically inspired approach for generating robot grasps on objects with 'holes'. Starting from a noisy point-cloud, we generate a simplicial representation of an object of interest and use a recently developed method for approximating shortest homology generators to identify graspable loops. To control the movement of the robot hand, a topologically motivated coordinate system is used in order to wrap the hand around such loops. Finally, another concept from topology - namely the Gauss linking integral - is adapted to serve as evidence for secure caging grasps after a grasp has been executed. We evaluate our approach in simulation on a Barrett hand using several target objects of different sizes and shapes and present an initial experiment with real sensor data.

  • 17.
    Stork, Johannes Andreas
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Ek, Carl Henrik
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bekiroglu, Yasemin
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Learning Predictive State Representation for In-Hand Manipulation (2015). In: 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2015, pp. 3207-3214. Conference paper (Refereed)
    Abstract [en]

    We study the use of Predictive State Representation (PSR) for modeling of an in-hand manipulation task through interaction with the environment. We extend the original PSR model to a new domain of in-hand manipulation and address the problem of partial observability by introducing new kernel-based features that integrate both actions and observations. The model is learned directly from haptic data and is used to plan sequences of actions that rotate the object in the hand to a specific configuration by pushing it against a table. Further, we analyze the model's belief states using additional visual data and enable planning of action sequences when the observations are ambiguous. We show that the learned representation is geometrically meaningful by embedding labeled action-observation traces. Suitability for planning is demonstrated by a post-grasp manipulation example that changes the object state to multiple specified target configurations.

  • 18.
    Stork, Johannes Andreas
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Ek, Carl Henrik
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Learning Predictive State Representations for planning (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2015, pp. 3427-3434. Conference paper (Refereed)
    Abstract [en]

    Predictive State Representations (PSRs) allow modeling of dynamical systems directly in observables and without relying on latent variable representations. A problem that arises from learning PSRs is that it is often hard to attribute semantic meaning to the learned representation. This makes generalization and planning in PSRs challenging. In this paper, we extend PSRs and introduce the notion of PSRs that include prior information (P-PSRs) to learn representations which are suitable for planning and interpretation. By learning a low-dimensional embedding of test features we map belief points with similar semantics to the same region of a subspace. This facilitates better generalization for planning and semantic interpretation of the learned representation. Specifically, we show how to overcome the training sample bias and introduce feature selection such that the resulting representation emphasizes observables related to the planning task. We show that our P-PSRs result in qualitatively meaningful representations and present quantitative results that indicate improved suitability for planning.

  • 19.
    Stork, Johannes Andreas
    et al.
    Royal Institute of Technology, Stockholm, Sweden.
    Ek, Carl Henrik
    Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Royal Institute of Technology, Stockholm, Sweden.
    Learning Predictive State Representations for planning (2015). Conference paper (Other academic)
  • 20.
    Stork, Johannes Andreas
    et al.
    Centre for Autonomous Systems, Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Centre for Autonomous Systems, Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Centre for Autonomous Systems, Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    A topology-based object representation for clasping, latching and hooking (2013). In: 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE conference proceedings, 2013, pp. 138-145. Conference paper (Refereed)
    Abstract [en]

    We present a loop-based topological object representation for objects with holes. The representation is used to model object parts suitable for grasping, e.g. handles, and it incorporates local volume information about these. Furthermore, we present a grasp synthesis framework that utilizes this representation for synthesizing caging grasps that are robust under measurement noise. The approach is complementary to a local contact-based force-closure analysis as it depends on global topological features of the object. We perform an extensive evaluation with four robotic hands on synthetic data. Additionally, we provide real world experiments using a Kinect sensor on two robotic platforms: a Schunk dexterous hand attached to a Kuka robot arm as well as a Nao humanoid robot. In the case of the Nao platform, we provide initial experiments showing that our approach can be used to plan whole arm hooking as well as caging grasps involving only one hand.

  • 21.
    Stork, Johannes Andreas
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Integrated motion and clasp planning with virtual linking (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Press, 2013, pp. 3007-3014. Conference paper (Refereed)
    Abstract [en]

    In this work, we address the problem of simultaneous clasp and motion planning on unknown objects with holes. Clasping an object enables a rich set of activities such as dragging, toting, pulling and hauling which can be applied to both soft and rigid objects. To this end, we define a virtual linking measure which characterizes the spatial relation between the robot hand and object. The measure utilizes a set of closed curves arising from an approximately shortest basis of the object's first homology group. We define task spaces to perform collision-free motion planning with respect to multiple prioritized objectives using a sampling-based planning method. The approach is tested in simulation using different robot hands and various real-world objects.

  • 22.
    Stork, Johannes Andreas
    et al.
    Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Royal Institute of Technology, Stockholm, Sweden.
    Towards Postural Synergies for Caging Grasps (2013). Conference paper (Other academic)
  • 23. Stork, Johannes Andreas
    et al.
    Silva, Jens
    Spinello, Luciano
    Arras, Kai O.
    Audio-Based Human Activity Recognition with Robots (2011). Conference paper (Other academic)
  • 24.
    Stork, Johannes Andreas
    et al.
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany.
    Spinello, Luciano
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany.
    Silva, Jens
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany.
    Arras, Kai O.
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany.
    Audio-Based Human Activity Recognition Using Non-Markovian Ensemble Voting (2012). In: 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, IEEE conference proceedings, 2012, pp. 509-514. Conference paper (Refereed)
    Abstract [en]

    Human activity recognition is a key component for socially enabled robots to effectively and naturally interact with humans. In this paper we exploit the fact that many human activities produce characteristic sounds from which a robot can infer the corresponding actions. We propose a novel recognition approach called Non-Markovian Ensemble Voting (NEV) able to classify multiple human activities in an online fashion without the need for silence detection or audio stream segmentation. Moreover, the method can deal with activities that are extended over undefined periods in time. In a series of experiments in real reverberant environments, we are able to robustly recognize 22 different sounds that correspond to a number of human activities in a bathroom and kitchen context. Our method outperforms several established classification techniques.

  • 25.
    Thippur, Akshaya
    et al.
    RPL (CVAP), KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    RPL (CVAP), KTH Royal Institute of Technology, Stockholm, Sweden.
    Jensfelt, Patric
    RPL (CVAP), KTH Royal Institute of Technology, Stockholm, Sweden.
    Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments (2017). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE conference proceedings, 2017, pp. 1317-1324. Conference paper (Refereed)
    Abstract [en]

    Autonomous scene understanding by object classification today crucially depends on the accuracy of appearance-based robotic perception. However, this is prone to difficulties in object detection arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial-context-based system which infers object classes utilising solely structural information captured from the scenes to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also caters to on-the-fly modification of learned knowledge, improving performance with practice. IFRC are aligned with human expression of 3D space, thereby facilitating easy HRI and hence simpler supervised learning. We tested our spatial-context-based system and conclude that it can capture spatio-structural information for joint object classification, not only acting as a vision aid but sometimes even performing on par with appearance-based robotic vision.

  • 26.
    Yuan, Weihao
    et al.
    Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
    Hang, Kaiyu
    Mechanical Engineering and Material Science, Yale University, New Haven CT, USA.
    Kragic, Danica
    Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Wang, Michael Y.
    Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
    Stork, Johannes Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer (2019). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, pp. 119-134. Journal article (Refereed)
    Abstract [en]

    Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected using a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires in the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulation and a real setting using a Baxter robot to show that the proposed approach can effectively improve the training process in simulation, as well as efficiently adapt the learned policy to the real-world application, even when the camera pose is different from simulation. Additionally, we show that the learned system not only can provide adaptive behavior to handle unforeseen events during execution, such as distraction objects, sudden changes in positions of the objects, and obstacles, but also can deal with obstacle shapes that were not present in the training process.

  • 27.
    Yuan, Weihao
    et al.
    Hong Kong University of Science and Technology, Hong Kong, China.
    Hang, Kaiyu
    Department of Mechanical Engineering and Material Science, Yale University, New Haven, Connecticut, USA.
    Song, Haoran
    Hong Kong University of Science and Technology, Hong Kong, China.
    Kragic, Danica
    Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Wang, Michael Yu
    Hong Kong University of Science and Technology, Hong Kong, China.
    Stork, Johannes Andreas
    Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Reinforcement Learning in Topology-based Representation for Human Body Movement with Whole Arm Manipulation (2018). Manuscript (preprint) (Other academic)
  • 28.
    Yuan, Weihao
    et al.
    HKUST Robotics Institute, Hong Kong University of Science and Technology, Hong Kong.
    Stork, Johannes Andreas
    Robotics, Perception and Learning Lab, Centre for Autonomous Systems, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Wang, Michael Y.
    Hang, Kaiyu
    HKUST Robotics Institute, Hong Kong University of Science and Technology, Hong Kong.
    Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2018, pp. 270-277. Conference paper (Refereed)
    Abstract [en]

    Rearranging objects on a tabletop surface by means of nonprehensile manipulation is a task which requires skillful interaction with the physical world. Usually, this is achieved by precisely modeling physical properties of the objects, robot, and the environment for explicit planning. In contrast, as explicitly modeling the physical environment is not always feasible and involves various uncertainties, we learn a nonprehensile rearrangement strategy with deep reinforcement learning based on only visual feedback. For this, we model the task with rewards and train a deep Q-network. Our potential-field-based heuristic exploration strategy reduces the number of collisions that lead to suboptimal outcomes, and we actively balance the training set to avoid bias towards poor examples. Our training process leads to quicker learning and better performance on the task as compared to uniform exploration and standard experience replay. We demonstrate empirical evidence from simulation that our method leads to a success rate of 85%, show that our system can cope with sudden changes of the environment, and compare our performance with human-level performance.
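
    Two of the ingredients named above, heuristic exploration and a balanced training set, can be sketched in isolation from the full deep Q-network as below. The grid-world actions, potential-field parameters, and buffer handling are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: (1) exploration biased by a potential field instead of a
# uniform random action, and (2) a replay buffer balancing successes/failures.
import random
from collections import deque
import numpy as np

ACTIONS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])   # push directions (toy)

def potential_field_action(pos, goal, obstacles, k_rep=1.0):
    grad = (goal - pos).astype(float)                      # attraction to the goal
    for o in obstacles:
        d = pos - o
        dist = np.linalg.norm(d) + 1e-6
        grad += k_rep * d / dist**3                        # repulsion from obstacles
    return int(np.argmax(ACTIONS @ grad))                  # best-aligned discrete action

def select_action(q_values, pos, goal, obstacles, eps=0.2):
    if random.random() < eps:
        return potential_field_action(pos, goal, obstacles)   # heuristic exploration
    return int(np.argmax(q_values))                            # greedy w.r.t. the Q-network

# Balanced replay: keep successes and failures apart, sample half from each.
success_buf, failure_buf = deque(maxlen=5000), deque(maxlen=5000)

def sample_batch(n=32):
    half = n // 2
    batch = random.sample(list(success_buf), min(half, len(success_buf)))
    batch += random.sample(list(failure_buf), min(n - len(batch), len(failure_buf)))
    return batch

# toy usage
goal, obstacles = np.array([5, 5]), [np.array([2, 2])]
pos, q_values = np.array([0, 0]), np.zeros(len(ACTIONS))
print("chosen action:", ACTIONS[select_action(q_values, pos, goal, obstacles)])
```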
