End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer
Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
Mechanical Engineering and Material Science, Yale University, New Haven CT, USA.
Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
2019 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, p. 119-134. Article in journal (Refereed). Published.
Abstract [en]

Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected with a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires on the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method both in simulation and in a real setting using a Baxter robot to show that the proposed approach can effectively improve the training process in simulation, as well as efficiently adapt the learned policy to the real-world application, even when the camera pose differs from that used in simulation. Additionally, we show that the learned system can not only provide adaptive behavior to handle unforeseen events during execution, such as distractor objects, sudden changes in the positions of the objects, and obstacles, but can also deal with obstacle shapes that were not present during training.
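The two-stage pipeline described in the abstract (reinforcement learning in simulation, then supervised adaptation of the policy on roughly 70 real-world episodes) can be sketched as follows. This is a minimal illustration only: the linear policy, the 8-dimensional features, the 2-dimensional push actions, and all function names are assumptions for the sketch, not the paper's actual deep network or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def finetune_policy(W, frames, actions, lr=0.1, epochs=200):
    """Supervised adaptation step: gradient descent on the mean-squared
    error between the policy's output and the demonstrated actions."""
    for _ in range(epochs):
        pred = frames @ W                        # policy output per frame
        grad = frames.T @ (pred - actions) / len(frames)
        W = W - lr * grad
    return W

# Stand-ins: "sim-pretrained" policy weights and a 70-episode real-world
# dataset (each observation reduced to an 8-dim feature vector, actions
# treated as 2-dim pushes).
W_sim = rng.normal(size=(8, 2))
frames = rng.normal(size=(70, 8))
W_real_true = rng.normal(size=(8, 2))            # unknown real-world mapping
actions = frames @ W_real_true                   # supervised push examples

W_adapted = finetune_policy(W_sim, frames, actions)
err_before = np.mean((frames @ W_sim - actions) ** 2)
err_after = np.mean((frames @ W_adapted - actions) ** 2)
```

In the paper the pretrained policy is a deep network trained with reinforcement learning on raw pixels; the linear model here only stands in for the idea that a small supervised real-world dataset suffices to adapt a policy that was mostly trained in simulation.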

Place, publisher, year, edition, pages
Elsevier, 2019. Vol. 119, p. 119-134
Keywords [en]
Nonprehensile rearrangement, Deep reinforcement learning, Transfer learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-76176
DOI: 10.1016/j.robot.2019.06.007
ISI: 000482250400009
Scopus ID: 2-s2.0-85068467713
OAI: oai:DiVA.org:oru-76176
DiVA, id: diva2:1349893
Funder
Knut and Alice Wallenberg Foundation, 2014.0011, 2017.0426
Swedish Foundation for Strategic Research, GMT14-0082
Note

Funding Agencies:
  • HKUST SSTSP project RoMRO FP802
  • HKUST IGN project 16EG09
  • HKUST PGS Fund of Office of Vice-President (Research & Graduate Studies)
  • Örebro University Vice-chancellor's Fellowship Development Program

Available from: 2019-09-10. Created: 2019-09-10. Last updated: 2019-09-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Stork, Johannes Andreas

Organisation
School of Science and Technology