2022 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 7, no. 3, p. 8391-8398
Article in journal, Letter (Refereed) Published
Abstract [en]
Contact-rich manipulation remains a hard problem in robotics, as it requires interaction with unstructured environments. Reinforcement Learning (RL) is one potential solution to such problems, having been successfully demonstrated on complex continuous control tasks. Nevertheless, current state-of-the-art methods require policy training in simulation, followed by domain transfer, to prevent undesired behavior, even for simple skills involving contact. In this paper, we address the problem of learning contact-rich manipulation policies by extending an existing skill-based RL framework with a variable impedance action space. Our method leverages a small set of suboptimal demonstration trajectories and learns not only from position information but, crucially, also from impedance-space information. We evaluate our method on a number of peg-in-hole task variants with a Franka Panda arm and demonstrate that RL policies with variable impedance actions in Cartesian space can be deployed directly on the real robot, without resorting to learning in simulation.
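To illustrate what a variable impedance action space means in practice, the sketch below (Python, not the authors' code; all names and gain bounds are hypothetical assumptions) shows a policy action that carries both a Cartesian pose offset and stiffness gains, mapped to joint torques via the standard Cartesian impedance control law tau = J^T (K dx - D x_dot) with critical damping derived from the commanded stiffness.

import numpy as np

def impedance_torques(jacobian, x_err, x_dot, stiffness):
    """Map a Cartesian error and commanded stiffness to joint torques."""
    K = np.diag(stiffness)                   # 6x6 stiffness matrix
    D = np.diag(2.0 * np.sqrt(stiffness))    # critically damped gains
    wrench = K @ x_err - D @ x_dot           # desired Cartesian wrench
    return jacobian.T @ wrench               # project into joint space

def apply_action(state, action):
    """Interpret one policy action as [6-DoF pose offset, 6 stiffness gains]."""
    delta_pose = np.asarray(action[:6])
    # Clipping bounds are illustrative; they keep the commanded gains in a
    # range that is safe to run directly on hardware.
    stiffness = np.clip(np.asarray(action[6:]), 50.0, 800.0)
    # With the desired pose set to current pose + delta_pose, the tracking
    # error at the instant the action is issued is simply delta_pose.
    return impedance_torques(state["jacobian"], delta_pose,
                             state["twist"], stiffness)

Because the policy chooses the stiffness gains itself, it can command compliant behavior during contact (e.g. while searching for the hole) and stiff behavior during free-space motion, which is the property that makes direct training on the real robot feasible.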
Place, publisher, year, edition, pages
IEEE Press, 2022
Keywords
Machine learning for robot control, reinforcement learning, variable impedance control
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-100386 (URN) 10.1109/LRA.2022.3187276 (DOI) 000838455200009 (ISI) 2-s2.0-85133737407 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation
Available from: 2022-08-01 Created: 2022-08-01 Last updated: 2024-01-17