Örebro University Publications (oru.se)
Learning Actions to Improve the Perceptual Anchoring of Objects
Örebro University, School of Science and Technology (AASS)
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0002-0579-7181
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0002-3122-693X
2017 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no 76. Article in journal (Refereed). Published
Abstract [en]

In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time; and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework that relies on the notations presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used to train different sequential learning algorithms to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process per se.
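The two ingredients described above, maintaining an object entity's history over time and coupling its movement pattern to an action word, can be sketched as follows. This is a minimal illustrative stand-in, not the paper's actual pipeline: the names (`Anchor`, `classify_action`) are hypothetical, and a hand-written geometric rule replaces the artificial neural network used in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Perceptual anchor: couples a symbol with the history of its percepts."""
    symbol: str                                   # e.g. 'coffee mug'
    history: list = field(default_factory=list)   # (t, (x, y, z)) observations

    def observe(self, t, pos):
        self.history.append((t, pos))

def movement_deltas(anchor):
    """Frame-to-frame displacements of a tracked entity (a crude movement pattern)."""
    pts = [p for _, p in anchor.history]
    return [tuple(b - a for a, b in zip(p, q)) for p, q in zip(pts, pts[1:])]

def classify_action(anchor):
    """Toy stand-in for the learned coupling between movement patterns and
    action words: 'stir' if the entity moves a lot but ends near where it
    started, 'pour' if it mostly descends, 'move' otherwise."""
    deltas = movement_deltas(anchor)
    net = tuple(sum(d[i] for d in deltas) for i in range(3))
    travelled = sum(sum(abs(c) for c in d) for d in deltas)
    if travelled > 0 and sum(abs(c) for c in net) < 0.25 * travelled:
        return 'stir'   # lots of motion, little net displacement
    return 'pour' if net[2] < 0 else 'move'
```

A circular trajectory fed through `observe` would be labelled 'stir', while a steadily descending one would be labelled 'pour'; in the paper, this mapping is learned by sequential learning algorithms rather than hand-coded.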

Place, publisher, year, edition, pages
Lausanne: Frontiers Media S.A., 2017. Vol. 3, no 76
Keywords [en]
Perceptual anchoring, symbol grounding, action learning, sequential learning algorithms, common-sense knowledge, object classification, object tracking
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:oru:diva-54025
DOI: 10.3389/frobt.2016.00076
ISI: 000392981800001
Scopus ID: 2-s2.0-85081941373
OAI: oai:DiVA.org:oru-54025
DiVA, id: diva2:1057459
Projects
Chist-Era ReGround project
Funder
Swedish Research Council, 2016-05321
Note

Funding Agency:

Chist-Era ReGround project

Available from: 2016-12-18 Created: 2016-12-18 Last updated: 2023-12-08. Bibliographically approved
In thesis
1. Studies in Semantic Modeling of Real-World Objects using Perceptual Anchoring
2019 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Autonomous agents, situated in real-world scenarios, need to maintain consonance between the perceived world (through sensory capabilities) and their internal representation of the world in the form of symbolic knowledge. An approach for modeling such representations of objects is through the concept of perceptual anchoring, which, by definition, handles the problem of creating and maintaining, in time and space, the correspondence between symbols and sensor data that refer to the same physical object in the external world.

The work presented in this thesis leverages notations found within perceptual anchoring to address the problem of real-world semantic world modeling, emphasizing, in particular, sensor-driven bottom-up acquisition of perceptual data. The proposed method for handling the attribute values that constitute the perceptual signature of an object is to first integrate and explore available resources of information, such as a Convolutional Neural Network (CNN) for classifying objects at the perceptual level. In addition, a novel anchoring matching function is proposed. This function introduces both the theoretical procedure for comparing attribute values and the use of a learned model that approximates the anchoring matching problem. To verify the proposed method, an evaluation using human judgment to collect annotated ground-truth data of real-world objects is further presented. The collected data is subsequently used to train and validate different classification algorithms in order to learn how to correctly anchor objects, and thereby invoke the correct anchoring functionality.
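The anchoring matching function described above can be illustrated with a small sketch. This is a hypothetical simplification, not the thesis's learned model: the function names (`match`, `anchoring_step`), the similarity measures, the equal attribute weights, and the threshold are all illustrative assumptions.

```python
import math

def match(anchor_attrs, percept_attrs, pos_scale=0.5):
    """Toy anchoring matching function: combines per-attribute
    similarities into a single score in [0, 1]."""
    # Categorical attribute: class label from the object classifier.
    label_sim = 1.0 if anchor_attrs['label'] == percept_attrs['label'] else 0.0
    # Continuous attribute: Euclidean distance mapped to a similarity.
    d = math.dist(anchor_attrs['pos'], percept_attrs['pos'])
    pos_sim = math.exp(-d / pos_scale)
    return 0.5 * label_sim + 0.5 * pos_sim

def anchoring_step(anchors, percept, threshold=0.6):
    """Re-acquire the best-matching existing anchor, or acquire a new one."""
    scored = [(match(a, percept), a) for a in anchors]
    best = max(scored, key=lambda s: s[0], default=(0.0, None))
    if best[0] >= threshold:
        best[1]['pos'] = percept['pos']   # re-acquire: update the anchor
        return best[1]
    anchors.append(dict(percept))         # acquire: create a new anchor
    return anchors[-1]
```

In the thesis, the hand-set weights and threshold sketched here are what the learned model approximating the matching problem replaces.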

There are, however, situations that are difficult to handle purely from the perspective of perceptual anchoring, e.g., situations where an object is moved during occlusion. In the absence of perceptual observations, it is necessary to couple the anchoring procedure with probabilistic object tracking to speculate about occluded objects, and hence, maintain a consistent world model. Motivated by the limitation in the original anchoring definition, which prohibited the modeling of the history of an object, an extension to the anchoring definition is also presented. This extension permits the historical trace of an anchored object to be maintained and used for the purpose of learning additional properties of an object, e.g., learning of the action applied to an object.
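Speculating about an occluded object from its historical trace can be sketched under a constant-velocity assumption. The thesis couples anchoring with probabilistic object tracking for this; the simple extrapolation below (with the hypothetical name `predict_occluded`) is only a minimal stand-in for that machinery.

```python
def predict_occluded(history, t_query):
    """Extrapolate an occluded anchor's position to time t_query,
    assuming it keeps its last observed velocity."""
    (t0, p0), (t1, p1) = history[-2], history[-1]
    vel = tuple((b - a) / (t1 - t0) for a, b in zip(p0, p1))
    return tuple(p + v * (t_query - t1) for p, v in zip(p1, vel))
```

A probabilistic tracker would additionally carry uncertainty that grows while the object remains unobserved, which this point estimate omits.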

Place, publisher, year, edition, pages
Örebro: Örebro University, 2019. p. 93
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 83
Keywords
Perceptual Anchoring, Semantic World Modeling, Sensor-Driven Acquisition of Data, Object Recognition, Object Classification, Symbol Grounding, Probabilistic Object Tracking
National Category
Information Systems
Identifiers
URN: urn:nbn:se:oru:diva-73175
ISBN: 978-91-7529-283-0
Public defence
2019-04-29, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 13:15 (English)
Available from: 2019-03-18 Created: 2019-03-18 Last updated: 2020-02-14. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Persson, Andreas; Längkvist, Martin; Loutfi, Amy

Search in DiVA

By author/editor
Persson, Andreas; Längkvist, Martin; Loutfi, Amy
By organisation
School of Science and Technology
In the same journal
Frontiers in Robotics and AI
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)

Search outside of DiVA

Google
Google Scholar
