Q-RAN: a constructive reinforcement learning approach for robot behavior learning
Örebro University, Department of Technology. (AASS)
Örebro University, Department of Technology. (AASS). ORCID iD: 0000-0003-0217-9326
Department of Physics, System Engineering and Signal Theory, University of Alicante, Alicante, Spain.
Örebro University, Department of Technology. (Learning Systems Lab)
2006 (English). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York, NY, USA: IEEE, 2006, p. 2656-2662, article id 4058792. Conference paper, published paper (refereed).
Abstract [en]

This paper presents a learning system that uses Q-learning with a resource allocating network (RAN) for behavior learning in mobile robotics. The RAN is used as a function approximator, and Q-learning is used to learn the control policy in an off-policy fashion, which enables learning to be bootstrapped by a prior-knowledge controller and thus speeds up the reinforcement learning. Our approach is verified on a PeopleBot robot executing a visual-servoing-based docking behavior in which the robot is required to reach a goal pose. Further experiments show that the RAN can also be used for supervised learning prior to reinforcement learning in a layered architecture, further improving the performance of the docking behavior.
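
The abstract describes the method only at a high level; as a rough illustrative sketch (not the authors' implementation), the Python code below combines Q-learning with a small resource-allocating RBF network per discrete action, trained off-policy from a hand-coded prior-knowledge controller on a toy 1-D docking task. The growth thresholds, learning rates, the toy task, and all names are assumptions made for this example.

```python
import numpy as np

class RAN:
    """Resource-allocating RBF network (Platt-style): a new hidden unit is
    added when the input is far from all existing centres AND the prediction
    error is large; otherwise existing parameters get a small gradient step."""
    def __init__(self, in_dim, dist_thresh=0.3, err_thresh=0.05, width=0.3, lr=0.05):
        self.dist_thresh, self.err_thresh = dist_thresh, err_thresh
        self.width, self.lr = width, lr
        self.centres = np.zeros((0, in_dim))   # RBF centres
        self.weights = np.zeros(0)             # output weights
        self.bias = 0.0

    def _phi(self, x):
        if len(self.weights) == 0:
            return np.zeros(0)
        d2 = np.sum((self.centres - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        return float(self._phi(x) @ self.weights + self.bias)

    def update(self, x, target):
        err = target - self.predict(x)
        far = (len(self.weights) == 0 or
               np.min(np.linalg.norm(self.centres - x, axis=1)) > self.dist_thresh)
        if far and abs(err) > self.err_thresh:
            # constructive step: allocate a new unit centred on this input
            self.centres = np.vstack([self.centres, x])
            self.weights = np.append(self.weights, err)
        else:
            # otherwise a plain gradient step on the existing parameters
            phi = self._phi(x)
            self.weights += self.lr * err * phi
            self.bias += self.lr * err

# Off-policy Q-learning on a toy 1-D "docking" task: state = position,
# goal = reach x = 0. One RAN per discrete action approximates Q(s, a).
ACTIONS = np.array([-0.1, 0.0, 0.1])
GAMMA = 0.95
q_nets = [RAN(in_dim=1) for _ in ACTIONS]
rng = np.random.default_rng(0)

def prior_controller(pos, eps=0.2):
    """Hand-coded prior-knowledge controller that generates the behaviour;
    a little noise keeps all actions visited."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return 0 if pos > 0 else 2   # always drive toward the goal

for episode in range(200):
    pos = rng.uniform(-1.0, 1.0)
    for step in range(50):
        a = prior_controller(pos)
        new_pos = float(np.clip(pos + ACTIONS[a] + rng.normal(0, 0.01), -1, 1))
        done = abs(new_pos) < 0.05
        reward = 1.0 if done else -0.01
        # Q-learning target takes the max over actions, so learning is
        # off-policy with respect to the prior controller's choices
        q_next = 0.0 if done else max(net.predict(np.array([new_pos])) for net in q_nets)
        q_nets[a].update(np.array([pos]), reward + GAMMA * q_next)
        pos = new_pos
        if done:
            break

print("Q(0.5, a):", [round(net.predict(np.array([0.5])), 3) for net in q_nets])
```

The off-policy setup is the point of the bootstrap: the RAN is trained on whatever transitions the prior controller happens to generate, while the max operator in the target lets the learned Q-function improve on that controller; pre-fitting the same network by supervised learning before Q-learning refines it would be roughly in the spirit of the layered architecture mentioned in the abstract.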

Place, publisher, year, edition, pages
New York, NY, USA: IEEE, 2006. p. 2656-2662, article id 4058792
National Category
Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
URN: urn:nbn:se:oru:diva-3957
DOI: 10.1109/IROS.2006.281986
ISI: 000245452402127
Scopus ID: 2-s2.0-34250630005
ISBN: 978-1-4244-0258-8 (print)
OAI: oai:DiVA.org:oru-3957
DiVA, id: diva2:138256
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9-15 Oct, 2006
Available from: 2007-08-27. Created: 2007-08-27. Last updated: 2022-08-05. Bibliographically approved.

Open Access in DiVA

Q-RAN: A Constructive Reinforcement Learning (464 kB), 678 downloads
File information
File name: FULLTEXT01.pdf. File size: 464 kB. Checksum: SHA-512
4e094ef89d867b2966639b72c83f7c7ebafbe1c442bd4f4f974a964983e29a244f2f93616067cdba7f4c5de0f5a9c4386b4269f1767472ccee6214a23b52266c
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text | Scopus

Authority records

Lilienthal, Achim J.; Duckett, Tom

Total: 678 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
