Learning Actions to Improve the Perceptual Anchoring of Objects
Örebro University, School of Science and Technology (AASS)
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0002-0579-7181
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0002-3122-693X
2017 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no. 76. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time, and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework that relies on the notation presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used for training different sequential learning algorithms in order to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process per se.
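
As an illustration of the kind of sequential learning described in the abstract, the sketch below shows one possible (hypothetical, not the authors' actual implementation) classifier that maps a tracked object's movement trajectory to an action label such as 'pour' or 'stir'. The feature layout, label set, and use of an LSTM in PyTorch are assumptions made only for this sketch.

```python
# Hypothetical sketch: an LSTM that classifies a tracked object's movement
# trajectory into an action label, as one instance of the "sequential
# learning algorithms" mentioned in the abstract. The 3-dimensional input
# assumes each time step is an (x, y, z) position of the anchored object.
import torch
import torch.nn as nn

ACTIONS = ["pour", "stir"]  # assumed label set for this sketch

class ActionClassifier(nn.Module):
    def __init__(self, feature_dim=3, hidden_dim=64, num_actions=len(ACTIONS)):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, trajectories):
        # trajectories: (batch, time, feature_dim) movement history of an anchor
        _, (h_n, _) = self.lstm(trajectories)
        return self.head(h_n[-1])  # logits over action labels

# Toy usage: a batch of two 50-step trajectories with random values.
model = ActionClassifier()
batch = torch.randn(2, 50, 3)
logits = model(batch)
predicted = [ACTIONS[i] for i in logits.argmax(dim=1)]
```

In this reading, the recognized action label could then be combined with common-sense knowledge (e.g., that 'stir' typically involves a 'spoon') to re-rank candidate anchors, which is the sense in which learned actions can feed back into the anchoring process.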

Place, publisher, year, edition, pages
Lausanne: Frontiers Media, 2017. Vol. 3, no. 76
Keyword [en]
Perceptual anchoring, symbol grounding, action learning, sequential learning algorithms, common-sense knowledge, object classification, object tracking
National Category
Computer Science; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:oru:diva-54025
DOI: 10.3389/frobt.2016.00076
ISI: 000392981800001
OAI: oai:DiVA.org:oru-54025
DiVA: diva2:1057459
Projects
Chist-Era ReGround project
Funder
Swedish Research Council, 2016-05321
Note
Funding Agency: Chist-Era ReGround project

Available from: 2016-12-18. Created: 2016-12-18. Last updated: 2017-10-18. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text

Search in DiVA

By author/editor
Persson, Andreas; Längkvist, Martin; Loutfi, Amy
By organisation
School of Science and Technology
Computer Science; Computer Vision and Robotics (Autonomous Systems)
