VPE: Variational Policy Embedding for Transfer Reinforcement Learning
Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
Stork, Johannes Andreas: Örebro University, School of Science and Technology; Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden. ORCID iD: 0000-0003-3958-6179
2019 (English). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A.; Althoefer, K.; Arai, F.; Arrichiello, F.; Caputo, B.; Castellanos, J.; Hauser, K.; Isler, V.; Kim, J.; Liu, H.; Oh, P.; Santos, V.; Scaramuzza, D.; Ude, A.; Voyles, R.; Yamane, K.; Okamura, A. IEEE, 2019, p. 36-42. Conference paper, Published paper (Refereed).
Abstract [en]

Reinforcement learning methods are capable of solving complex problems, but the resulting policies might perform poorly in environments that differ even slightly from those they were trained in. In robotics especially, training and deployment conditions often vary, and data collection is expensive, making retraining undesirable. Training in simulation keeps training times feasible but suffers from a reality gap when the resulting policies are applied in real-world settings. This raises the need for efficient adaptation of policies to new environments.

We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that the Q-functions are generated by some low-dimensional latent variable, so that, given such a Q-function, we can find a master policy that adapts to different values of this latent variable. Our method learns both the generative mapping and an approximate posterior over the latent variables, enabling policies for new tasks to be identified by searching only the latent space rather than the space of all policies. The low-dimensional latent space and the master policy found by our method enable quick adaptation to new environments. We demonstrate the method on a pendulum swing-up task in simulation and on simulation-to-real transfer of a pushing task.
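
The record contains no code, but the adaptation step the abstract describes can be sketched. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: a master policy and a Q-function are both conditioned on a low-dimensional task latent z, and adaptation to a new environment optimizes only z while the networks stay frozen. All names, dimensions, architectures, and the adaptation objective below are assumptions.

import torch
import torch.nn as nn

# Stand-in dimensions; the paper's pendulum and pushing setups are not
# specified in this record.
STATE_DIM, ACTION_DIM, LATENT_DIM = 3, 1, 2

class MasterPolicy(nn.Module):
    """Master policy a = pi(s, z), conditioned on the task latent z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

class LatentQ(nn.Module):
    """Q-function generated by the latent variable: Q(s, a; z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action, z):
        return self.net(torch.cat([state, action, z], dim=-1))

# Adaptation to a new task: networks stay frozen; only the
# low-dimensional latent z is optimized for the new environment.
policy, q_fn = MasterPolicy(), LatentQ()
z = torch.zeros(1, LATENT_DIM, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=1e-2)

states = torch.randn(32, STATE_DIM)  # placeholder for new-task states
for _ in range(100):
    zb = z.expand(states.shape[0], -1)       # broadcast z over the batch
    actions = policy(states, zb)
    loss = -q_fn(states, actions, zb).mean() # search for z that maximizes Q
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the paper, the generative mapping and the approximate posterior over the latent variables are learned jointly in a variational framework; the sketch shows only the latent-space search that makes adaptation cheap compared with retraining a full policy.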

Place, publisher, year, edition, pages
IEEE, 2019, p. 36-42.
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729, E-ISSN 2577-087X
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-78530
DOI: 10.1109/ICRA.2019.8793556
ISI: 000494942300006
Scopus ID: 2-s2.0-85071508761
ISBN: 978-1-5386-6026-3 (print)
ISBN: 978-1-5386-6027-0 (electronic)
OAI: oai:DiVA.org:oru-78530
DiVA id: diva2:1376822
Conference
International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019
Note
Funding agency: Swedish Foundation for Strategic Research (SSF), through the project Factories of the Future (FACT).

Available from: 2019-12-10. Created: 2019-12-10. Last updated: 2019-12-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
