VPE: Variational Policy Embedding for Transfer Reinforcement Learning
Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
Örebro universitet, Institutionen för naturvetenskap och teknik. Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden. ORCID iD: 0000-0003-3958-6179
2019 (English). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V; Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, IEEE, 2019, pp. 36-42. Conference paper, published paper (Refereed)
Abstract [en]

Reinforcement Learning methods are capable of solving complex problems, but the resulting policies can perform poorly in environments that differ even slightly from the training environment. In robotics especially, training and deployment conditions often vary, and data collection is expensive, making retraining undesirable. Training in simulation keeps training times feasible but suffers from a reality gap when the policy is deployed in real-world settings. This raises the need for efficient adaptation of policies acting in new environments.

We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that adapts given different values of this latent variable. Our method learns both the generative mapping and an approximate posterior over the latent variables, enabling identification of policies for new tasks by searching only in the latent space rather than the space of all policies. The low-dimensional space and the master policy found by our method enable policies to adapt quickly to new environments. We demonstrate the method on a pendulum swing-up task in simulation and on simulation-to-real transfer for a pushing task.
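The latent-space search the abstract describes can be sketched in a few lines. The sketch below is illustrative only: `evaluate_return` is a hypothetical stand-in for rolling out the master policy conditioned on a latent value `z` in the new environment, and the search uses a simple cross-entropy method rather than the paper's variational posterior.

```python
import numpy as np

def adapt_latent(evaluate_return, latent_dim=2, iters=20, pop=64, n_elite=8, seed=0):
    """Search the low-dimensional latent space for the z that maximizes
    episodic return of a master policy pi(a | s, z), via a basic
    cross-entropy method: sample candidates, keep the elites, refit."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(latent_dim), np.ones(latent_dim)
    for _ in range(iters):
        zs = rng.normal(mu, sigma, size=(pop, latent_dim))
        returns = np.array([evaluate_return(z) for z in zs])
        elites = zs[np.argsort(returns)[-n_elite:]]          # best candidates
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu

# Toy stand-in: pretend the new task corresponds to latent value [0.5, -0.3],
# and return is highest when z matches it.
true_z = np.array([0.5, -0.3])
z_star = adapt_latent(lambda z: -np.sum((z - true_z) ** 2))
```

Because the search runs in a space of a few dimensions rather than over all policy parameters, a handful of rollouts per iteration is enough to identify the new task.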

Place, publisher, year, edition, pages
IEEE, 2019. pp. 36-42
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729, E-ISSN 2577-087X
HSV category
Identifiers
URN: urn:nbn:se:oru:diva-78530
DOI: 10.1109/ICRA.2019.8793556
ISI: 000494942300006
Scopus ID: 2-s2.0-85071508761
ISBN: 978-1-5386-6026-3 (print)
ISBN: 978-1-5386-6027-0 (digital)
OAI: oai:DiVA.org:oru-78530
DiVA, id: diva2:1376822
Conference
International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019
Note

Funding Agency:

Swedish Foundation for Strategic Research (SSF) through the project Factories of the Future (FACT)

Available from: 2019-12-10 Created: 2019-12-10 Last updated: 2019-12-10 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
Scopus

Person records BETA

Stork, Johannes Andreas
