Persson, Andreas
Publications (9 of 9)
Persson, A. (2019). Studies in Semantic Modeling of Real-World Objects using Perceptual Anchoring. (Doctoral dissertation). Örebro: Örebro University
2019 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Autonomous agents, situated in real-world scenarios, need to maintain consonance between the perceived world (through sensory capabilities) and their internal representation of the world in the form of symbolic knowledge. An approach for modeling such representations of objects is through the concept of perceptual anchoring, which, by definition, handles the problem of creating and maintaining, in time and space, the correspondence between symbols and sensor data that refer to the same physical object in the external world.

The work presented in this thesis leverages notations found within perceptual anchoring to address the problem of real-world semantic world modeling, emphasizing, in particular, sensor-driven bottom-up acquisition of perceptual data. The proposed method for handling the attribute values that constitute the perceptual signature of an object is to first integrate and explore available resources of information, such as a Convolutional Neural Network (CNN), to classify objects on the perceptual level. In addition, a novel anchoring matching function is proposed. This function introduces both the theoretical procedure for comparing attribute values and the use of a learned model that approximates the anchoring matching problem. To verify the proposed method, an evaluation using human judgment to collect annotated ground-truth data of real-world objects is further presented. The collected data is subsequently used to train and validate different classification algorithms, in order to learn how to correctly anchor objects, and thereby to invoke the correct anchoring functionality.
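The idea of a matching function that combines per-attribute comparisons into a single score can be illustrated as follows. This is a minimal sketch only: the attribute set (position, color histogram), the weights, and the similarity measures are hypothetical stand-ins, not the thesis's actual matching function or its learned model.

```python
import math

def match_score(anchor_attrs, candidate_attrs, weights=None):
    """Combine per-attribute similarities into one anchoring match score.

    Illustrative only: real anchoring functions compare richer perceptual
    signatures and may use a learned model in place of fixed weights.
    """
    weights = weights or {"position": 0.5, "color": 0.5}
    # Spatial similarity: decays exponentially with Euclidean distance.
    pos_sim = math.exp(-math.dist(anchor_attrs["position"],
                                  candidate_attrs["position"]))
    # Color similarity: histogram intersection of normalized histograms.
    col_sim = sum(min(a, b) for a, b in zip(anchor_attrs["color"],
                                            candidate_attrs["color"]))
    return weights["position"] * pos_sim + weights["color"] * col_sim
```

A candidate percept whose score exceeds a threshold would re-acquire an existing anchor; otherwise a new anchor is created.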

There are, however, situations that are difficult to handle purely from the perspective of perceptual anchoring, e.g., situations where an object is moved during occlusion. In the absence of perceptual observations, it is necessary to couple the anchoring procedure with probabilistic object tracking to speculate about occluded objects, and hence, maintain a consistent world model. Motivated by the limitation in the original anchoring definition, which prohibited the modeling of the history of an object, an extension to the anchoring definition is also presented. This extension permits the historical trace of an anchored object to be maintained and used for the purpose of learning additional properties of an object, e.g., learning of the action applied to an object.

Place, publisher, year, edition, pages
Örebro: Örebro University, 2019. p. 93
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 83
Keywords
Perceptual Anchoring, Semantic World Modeling, Sensor-Driven Acquisition of Data, Object Recognition, Object Classification, Symbol Grounding, Probabilistic Object Tracking
National Category
Information Systems
Identifiers
urn:nbn:se:oru:diva-73175 (URN), 978-91-7529-283-0 (ISBN)
Public defence
2019-04-29, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 13:15 (English)
Available from: 2019-03-18. Created: 2019-03-18. Last updated: 2019-04-05. Bibliographically approved.
Persson, A., Längkvist, M. & Loutfi, A. (2017). Learning Actions to Improve the Perceptual Anchoring of Objects. Frontiers in Robotics and AI, 3(76)
2017 (English) In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no. 76. Article in journal (Refereed), Published
Abstract [en]

In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by (1) tracking and maintaining object entities over time, and (2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework which relies on the notations presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used for training different sequential learning algorithms in order to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process per se.
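The extension described above, maintaining the history of a tracked object entity, can be sketched as follows. The class and method names (`Anchor`, `observe`, `movement_deltas`) are hypothetical illustrations, not the paper's API; the point is that a time-stamped trace yields a movement-pattern sequence that a sequential learner could map to action words.

```python
class Anchor:
    """Toy anchored object that keeps a historical trace of observations."""

    def __init__(self, symbol):
        self.symbol = symbol      # e.g. 'coffee mug'
        self.history = []         # list of (time, position) observations

    def observe(self, t, position):
        """Append a new time-stamped position to the object's trace."""
        self.history.append((t, position))

    def movement_deltas(self):
        """Per-step displacement vectors derived from the trace: the kind of
        sequence a sequential learning algorithm could label 'pour' or 'stir'."""
        pts = [p for _, p in self.history]
        return [tuple(b - a for a, b in zip(p0, p1))
                for p0, p1 in zip(pts, pts[1:])]
```

In this sketch, the displacement sequence would serve as the input features for the action-learning step.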

Place, publisher, year, edition, pages
Lausanne: Frontiers Media S.A., 2017
Keywords
Perceptual anchoring, symbol grounding, action learning, sequential learning algorithms, common-sense knowledge, object classification, object tracking
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-54025 (URN), 10.3389/frobt.2016.00076 (DOI), 000392981800001 ()
Projects
Chist-Era ReGround project
Funder
Swedish Research Council, 2016-05321
Note
Funding Agency: Chist-Era ReGround project
Available from: 2016-12-18. Created: 2016-12-18. Last updated: 2019-04-09. Bibliographically approved.
Persson, A. & Loutfi, A. (2016). Fast Matching of Binary Descriptors for Large-scale Applications in Robot Vision. International Journal of Advanced Robotic Systems, 13, Article ID 58.
2016 (English) In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 13, article id 58. Article in journal (Refereed), Published
Abstract [en]

The introduction of computationally efficient binary feature descriptors has raised new opportunities for real-world robot vision applications. However, brute-force feature matching of binary descriptors is only practical for smaller datasets. In the literature, there has therefore been an increasing interest in representing and matching binary descriptors more efficiently. In this article, we follow this trend and present a method for efficiently and dynamically quantizing binary descriptors, through a summarized frequency count, into compact representations (called fsum) for improved feature matching of binary point features. Motivated by the fact that real-world robot applications must adapt to a changing environment, we further present an overview of algorithms for the efficient matching of binary descriptors that can incorporate changes over time, such as clustered search trees and bag-of-features approaches improved by vocabulary adaptation. The focus of this article is on evaluation, particularly large-scale evaluation, against alternatives that exist within the field. Throughout this evaluation it is shown that the fsum approach is efficient in terms of both computational cost and memory requirements, while retaining adequate retrieval accuracy. It is further shown that the presented algorithm is equally suited to binary descriptors of arbitrary type, and that the algorithm is therefore a valid option for several types of vision applications.
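The brute-force baseline the abstract refers to can be made concrete in a few lines: binary descriptors are compared by Hamming distance, and nearest-neighbor search scans the whole database. This is a generic sketch of that baseline (it does not reproduce the fsum quantization itself), with descriptors packed into Python integers for simplicity.

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed into ints:
    XOR the bit strings, then count the set bits."""
    return bin(d1 ^ d2).count("1")

def brute_force_match(query: int, database: list) -> int:
    """Index of the nearest descriptor in the database.

    Cost is O(len(database)) Hamming comparisons per query, which is
    exactly what makes brute force impractical for large datasets and
    motivates summarized/quantized indexes such as fsum."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))
```

For real descriptors (e.g. 256-bit), the same XOR-and-popcount pattern applies per machine word.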

Place, publisher, year, edition, pages
Rijeka, Croatia: InTech, 2016
Keywords
Binary Descriptors, Efficient Feature Matching, Real-world Robotic Vision Applications
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-49931 (URN), 10.5772/62162 (DOI), 000372839300002 ()
Funder
Swedish Research Council, 2011-6104
Available from: 2016-04-26. Created: 2016-04-26. Last updated: 2019-04-05. Bibliographically approved.
Beeson, P., Kortenkamp, D., Bonasso, R. P., Persson, A., Loutfi, A. & Bona, J. P. (2014). An Ontology-Based Symbol Grounding System for Human-Robot Interaction. In: Artificial Intelligence for Human-Robot Interaction: 2014 AAAI Fall Symposium. Paper presented at 2014 AAAI Fall Symposium series, Washington, USA, November 13-15, 2014. AAAI Press
2014 (English) In: Artificial Intelligence for Human-Robot Interaction: 2014 AAAI Fall Symposium, AAAI Press, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an ongoing collaboration to develop a perceptual anchoring framework which creates and maintains the symbol-percept links concerning household objects. The paper presents an approach to non-trivialize the symbol system using ontologies and to allow for HRI by enabling queries about object properties, their affordances, and their perceptual characteristics as viewed from the robot (e.g., last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.

Place, publisher, year, edition, pages
AAAI Press, 2014
National Category
Robotics
Research subject
Computer Science; Human-Computer Interaction
Identifiers
urn:nbn:se:oru:diva-39627 (URN)
Conference
2014 AAAI Fall Symposium series, Washington, USA, November 13-15, 2014
Funder
Swedish Research Council
Available from: 2014-12-12. Created: 2014-12-12. Last updated: 2018-06-14. Bibliographically approved.
Persson, A., Al Moubayed, S. & Loutfi, A. (2014). Fluent human–robot dialogues about grounded objects in home environments. Cognitive Computation, 6(4), 914-927
2014 (English) In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no. 4, p. 914-927. Article in journal (Refereed), Published
Abstract [en]

To provide spoken interaction between robots and human users, an internal representation of the robot's sensory information must be available at a semantic level and accessible to a dialogue system in order to be used in a human-like and intuitive manner. In this paper, we integrate the field of perceptual anchoring (which creates and maintains the symbol-percept correspondence of objects) in robotics with multimodal dialogues in order to achieve a fluent interaction between humans and robots when talking about objects. These everyday objects are located in a so-called symbiotic system where humans, robots, and sensors co-operate in a home environment. To orchestrate the dialogue system, the IrisTK dialogue platform is used. The IrisTK system is based on modelling the interaction as events passed between different modules, e.g., the speech recognizer, the face tracker, etc. This system runs on a mobile robot device, which is part of a distributed sensor network. A perceptual anchoring framework recognizes objects placed in the home and maintains a consistent identity for each object, consisting of its symbolic and perceptual data. Particular effort is placed on creating flexible dialogues where requests about objects can be made in a variety of ways. Experimental validation consists of evaluating the system when many objects are possible candidates for satisfying these requests.

Place, publisher, year, edition, pages
Springer, 2014
Keywords
Human–robot interaction, Perceptual anchoring, Symbol grounding, Spoken dialogue systems, Social robotics
National Category
Robotics
Research subject
Computer Science; Information technology
Identifiers
urn:nbn:se:oru:diva-39388 (URN), 10.1007/s12559-014-9291-y (DOI), 000345994900022 ()
Funder
Swedish Research Council, 2011-6104
Available from: 2014-12-06. Created: 2014-12-06. Last updated: 2019-04-05. Bibliographically approved.
Persson, A. & Loutfi, A. (2013). A Hash Table Approach for Large Scale Perceptual Anchoring. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2013). Paper presented at IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct 13-16, 2013, Manchester, England (pp. 3060-3066).
2013 (English) In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2013), 2013, p. 3060-3066. Conference paper, Published paper (Refereed)
Abstract [en]

Perceptual anchoring deals with the problem of creating and maintaining the connection between percepts and symbols that refer to the same physical object. When approaching long-term use of an anchoring framework, which must cope with large sets of data, it is challenging to anchor objects both efficiently and accurately. An approach to address this problem is through visual perception and computationally efficient binary visual features. In this paper, we present a novel hash table algorithm derived from summarized binary visual features. This algorithm is then contextualized in an anchoring framework. Advantages of the internal structure of the proposed hash tables are presented, as well as improvements through the use of hierarchies structured by semantic knowledge. Through evaluation on a larger set of data, we show that our approach is appropriate for efficient bottom-up anchoring, and performance-wise comparable to a recently presented search tree algorithm.
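The general pattern of hashing on a summary of a binary descriptor can be sketched as follows. This is an illustrative assumption, not the paper's algorithm: here the hash key is simply the per-byte popcount of the descriptor, so only descriptors whose summaries collide need an exact comparison.

```python
from collections import defaultdict

def summary_key(descriptor: bytes) -> tuple:
    """Summarize a binary descriptor by the popcount of each byte.
    A hypothetical stand-in for the paper's summarized binary features."""
    return tuple(bin(b).count("1") for b in descriptor)

class DescriptorHashTable:
    """Bucket descriptors by their summary so lookups touch one bucket."""

    def __init__(self):
        self.buckets = defaultdict(list)

    def add(self, descriptor: bytes, obj_id):
        self.buckets[summary_key(descriptor)].append((descriptor, obj_id))

    def candidates(self, descriptor: bytes):
        """Only descriptors whose summary collides with the query's need
        an exact (e.g. Hamming-distance) comparison."""
        return self.buckets.get(summary_key(descriptor), [])
```

Bucketing trades a small loss in recall (near-identical descriptors may summarize differently) for sublinear candidate retrieval, which is the usual hash-table design choice.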

Series
IEEE International Conference on Systems Man and Cybernetics Conference Proceedings, ISSN 1062-922X
Keywords
Perceptual anchoring, large scale efficient matching, hash table, binary visual features, semantic categorization
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-34860 (URN), 10.1109/SMC.2013.522 (DOI), 000332201903033 (), 978-1-4799-0652-9 (ISBN)
Conference
IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct 13-16, 2013, Manchester, England
Available from: 2014-04-25. Created: 2014-04-25. Last updated: 2018-01-11. Bibliographically approved.
Persson, A., Coradeschi, S., Rajasekaran, B., Krishna, V., Loutfi, A. & Alirezaie, M. (2013). I would like some food: anchoring objects to semantic web information in human-robot dialogue interactions. In: Guido Herrmann, Martin J. Pearson, Alexander Lenz, Paul Bremner, Adam Spiers, Ute Leonards (Ed.), Social Robotics: Proceedings of 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013. Paper presented at International Conference on Social Robotics, ICSR, Bristol, UK, October 27-29, 2013 (pp. 361-370). Springer
2013 (English) In: Social Robotics: Proceedings of 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013 / [ed] Guido Herrmann, Martin J. Pearson, Alexander Lenz, Paul Bremner, Adam Spiers, Ute Leonards, Springer, 2013, p. 361-370. Conference paper, Published paper (Refereed)
Abstract [en]

Ubiquitous robotic systems present a number of interesting application areas for socially assistive robots that aim to improve quality of life. In particular, the combination of smart home environments and relatively inexpensive robots can be a viable technological solution for assisting the elderly and persons with disabilities in their own homes. Such services require an easy interface, like spoken dialogue, and the ability to refer to physical objects using semantic terms. This paper presents an implemented system combining a robot and a sensor network deployed in a test apartment in an elderly residence area. The paper focuses on the creation and maintenance (anchoring) of the connection between the semantic information present in the dialogue and perceived physical objects in the home. Semantic knowledge about concepts and their correlations is retrieved from on-line resources and ontologies, e.g., WordNet, and sensor information is provided by cameras distributed in the apartment.

Place, publisher, year, edition, pages
Springer, 2013
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8239
Keywords
Anchoring framework, semantic web information, dynamic system, human-robot dialogue, sensor network, smart home environment.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-32898 (URN), 10.1007/978-3-319-02675-6_36 (DOI), 000341015200036 (), 2-s2.0-84892418873 (Scopus ID), 978-3-319-02674-9 (ISBN), 978-3-319-02675-6 (ISBN)
Conference
International Conference on Social Robotics, ICSR, Bristol, UK, October 27-29, 2013.
Funder
Swedish Research Council, 2011-6104
Available from: 2013-12-31. Created: 2013-12-31. Last updated: 2018-09-16. Bibliographically approved.
Persson, A. & Loutfi, A. A Database-Centric Architecture for Efficient Matching of Object Instances in Context of Perceptual Anchoring.
(English)Manuscript (preprint) (Other academic)
National Category
Information Systems
Identifiers
urn:nbn:se:oru:diva-73526 (URN)
Available from: 2019-04-05. Created: 2019-04-05. Last updated: 2019-04-05. Bibliographically approved.
Persson, A., Zuidberg Dos Martires, P., Loutfi, A. & De Raedt, L. Semantic Relational Object Tracking.
(English)Manuscript (preprint) (Other academic)
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-73529 (URN)
Available from: 2019-04-05. Created: 2019-04-05. Last updated: 2019-04-05. Bibliographically approved.