VPE: Variational Policy Embedding for Transfer Reinforcement Learning
Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
Örebro University, School of Science and Technology. Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden. ORCID iD: 0000-0003-3958-6179
2019 (English). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V; Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, IEEE, 2019, pp. 36-42. Conference paper, published paper (refereed)
Abstract [en]

Reinforcement learning methods are capable of solving complex problems, but the resulting policies can perform poorly in environments that differ even slightly from the training environment. In robotics especially, training and deployment conditions often vary, and data collection is expensive, making retraining undesirable. Training in simulation keeps training times feasible, but suffers from a reality gap when the policies are applied in real-world settings. This raises the need for efficient adaptation of policies to new environments.

We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that adapts to different values of this latent variable. Our method learns both the generative mapping and an approximate posterior over the latent variables, so that policies for new tasks can be identified by searching only the latent space rather than the space of all policies. The low-dimensional latent space and the master policy found by our method enable policies to adapt quickly to new environments. We demonstrate the method on a pendulum swing-up task in simulation and on simulation-to-real transfer for a pushing task.
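
The sketch below is not from the paper; it only illustrates the general pattern the abstract describes: a "master" policy conditioned on a low-dimensional latent variable z, an amortized Gaussian posterior over z, and adaptation to a new task by optimizing z alone while the policy weights stay frozen. All class names, dimensions, and the surrogate return used here are assumptions for illustration.

# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 3, 1, 2   # assumed toy dimensions


class MasterPolicy(nn.Module):
    """pi(a | s, z): one policy shared across the task family, conditioned on z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))


class LatentPosterior(nn.Module):
    """q(z | task features): approximate Gaussian posterior over the task embedding."""
    def __init__(self, feature_dim):
        super().__init__()
        self.net = nn.Linear(feature_dim, 2 * LATENT_DIM)

    def forward(self, task_features):
        mu, logvar = self.net(task_features).chunk(2, dim=-1)
        return mu, logvar


def adapt_to_new_task(policy, score_fn, steps=200, lr=0.1):
    """Search only the latent space: optimize z to maximize a differentiable
    surrogate score of the policy on the new task; policy weights stay fixed."""
    z = torch.zeros(LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -score_fn(policy, z)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()


if __name__ == "__main__":
    policy = MasterPolicy()
    target = torch.tensor([0.5, -0.3])
    # Dummy differentiable stand-in for the return of rollouts on the new task.
    surrogate = lambda pi, z: -((z - target) ** 2).sum()
    print("recovered latent:", adapt_to_new_task(policy, surrogate))

In the paper itself the latent variable is identified through the learned generative mapping and approximate posterior; the gradient search above is only a stand-in for that identification step.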

Place, publisher, year, edition, pages
IEEE, 2019, pp. 36-42
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729, E-ISSN 2577-087X
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-78530
DOI: 10.1109/ICRA.2019.8793556
ISI: 000494942300006
Scopus ID: 2-s2.0-85071508761
ISBN: 978-1-5386-6026-3 (print)
ISBN: 978-1-5386-6027-0 (digital)
OAI: oai:DiVA.org:oru-78530
DiVA, id: diva2:1376822
Conference
International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019
Note

Funding Agency:

Swedish Foundation for Strategic Research (SSF) through the project Factories of the Future (FACT)

Available from: 2019-12-10. Created: 2019-12-10. Last updated: 2019-12-10. Bibliographically approved.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Person records

Stork, Johannes Andreas
