oru.se Publications
1 - 9 of 9
  • 1.
    Beeson, Patrick
    et al.
    TRACLabs Inc., Webster TX, USA.
    Kortenkamp, David
    TRACLabs Inc., Webster TX, USA.
    Bonasso, R. Peter
    TRACLabs Inc., Webster TX, USA.
    Persson, Andreas
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Bona, Jonathan P
    State University of New York, Buffalo, USA.
    An Ontology-Based Symbol Grounding System for Human-Robot Interaction, 2014. In: Artificial Intelligence for Human-Robot Interaction: 2014 AAAI Fall Symposium, AAAI Press, 2014. Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing collaboration to develop a perceptual anchoring framework which creates and maintains the symbol-percept links concerning household objects. The paper presents an approach to non-trivialize the symbol system using ontologies and to allow for HRI by enabling queries about objects' properties, their affordances, and their perceptual characteristics as viewed from the robot (e.g., last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.
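
    As a rough, hypothetical illustration of the symbol-percept links described in this abstract, the Python sketch below models an anchor as a record holding a symbol, perceptual attributes, affordances, and a last-seen timestamp, and answers simple property queries; all names and fields are illustrative assumptions, not the ontology or implementation used by the authors.

```python
# Hypothetical sketch of a symbol-percept link ("anchor") for household objects.
# Names and fields are illustrative only; they do not mirror the paper's ontology.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional


@dataclass
class Anchor:
    symbol: str                           # e.g. "mug-1"
    category: str                         # e.g. "coffee mug"
    attributes: Dict[str, str]            # perceptual properties, e.g. {"color": "red"}
    affordances: List[str]                # e.g. ["grasp", "fill", "pour-from"]
    last_seen: Optional[datetime] = None  # when the robot last perceived the object


class AnchorSpace:
    """A minimal store of anchors that supports simple HRI-style queries."""

    def __init__(self) -> None:
        self._anchors: Dict[str, Anchor] = {}

    def add(self, anchor: Anchor) -> None:
        self._anchors[anchor.symbol] = anchor

    def query(self, category: str, **attributes: str) -> List[Anchor]:
        """Return anchors of a category whose attributes match the query."""
        return [a for a in self._anchors.values()
                if a.category == category
                and all(a.attributes.get(k) == v for k, v in attributes.items())]


space = AnchorSpace()
space.add(Anchor("mug-1", "coffee mug", {"color": "red"}, ["grasp", "fill"],
                 datetime(2014, 10, 2, 14, 30)))
print([a.symbol for a in space.query("coffee mug", color="red")])  # ['mug-1']
```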

  • 2.
    Persson, Andreas
    Örebro University, School of Science and Technology.
    Studies in Semantic Modeling of Real-World Objects using Perceptual Anchoring, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Autonomous agents, situated in real-world scenarios, need to maintain consonance between the perceived world (through sensory capabilities) and their internal representation of the world in the form of symbolic knowledge. An approach for modeling such representations of objects is through the concept of perceptual anchoring, which, by definition, handles the problem of creating and maintaining, in time and space, the correspondence between symbols and sensor data that refer to the same physical object in the external world.

    The work presented in this thesis leverages notations found within perceptual anchoring to address the problem of real-world semantic world modeling, emphasizing, in particular, sensor-driven bottom-up acquisition of perceptual data. The proposed method for handling the attribute values that constitute the perceptual signature of an object is to first integrate and explore available resources of information, such as a Convolutional Neural Network (CNN), to classify objects on the perceptual level. In addition, a novel anchoring matching function is proposed. This function both introduces the theoretical procedure for comparing attribute values and establishes the use of a learned model that approximates the anchoring matching problem. To verify the proposed method, an evaluation using human judgment to collect annotated ground truth data of real-world objects is further presented. The collected data is subsequently used to train and validate different classification algorithms, in order to learn how to correctly anchor objects, and thereby learn to invoke correct anchoring functionality.

    There are, however, situations that are difficult to handle purely from the perspective of perceptual anchoring, e.g., situations where an object is moved during occlusion. In the absence of perceptual observations, it is necessary to couple the anchoring procedure with probabilistic object tracking to speculate about occluded objects, and hence, maintain a consistent world model. Motivated by the limitation in the original anchoring definition, which prohibited the modeling of the history of an object, an extension to the anchoring definition is also presented. This extension permits the historical trace of an anchored object to be maintained and used for the purpose of learning additional properties of an object, e.g., learning of the action applied to an object.
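
    As a simplified, assumed illustration of the anchoring matching step described above (not the thesis' actual matching function), the sketch below compares a candidate percept against a stored anchor attribute by attribute and combines the per-attribute similarities into a single match score; in the thesis, such a score is instead approximated by a learned model.

```python
# Simplified sketch of an anchoring matching step: compare the attribute values of
# a new percept against a stored anchor and decide between "re-acquire" (same object)
# and "acquire" (new object). Attribute names, weights, and thresholds are illustrative.
import numpy as np


def color_similarity(hist_a: np.ndarray, hist_b: np.ndarray) -> float:
    """Histogram intersection of two normalized color histograms, in [0, 1]."""
    return float(np.minimum(hist_a, hist_b).sum())


def position_similarity(pos_a: np.ndarray, pos_b: np.ndarray, scale: float = 0.5) -> float:
    """Map Euclidean distance (meters) to a similarity in (0, 1]."""
    return float(np.exp(-np.linalg.norm(pos_a - pos_b) / scale))


def match_score(anchor: dict, percept: dict) -> float:
    """Combine per-attribute similarities; a learned model could replace this rule."""
    s_color = color_similarity(anchor["color_hist"], percept["color_hist"])
    s_pos = position_similarity(anchor["position"], percept["position"])
    s_class = 1.0 if anchor["category"] == percept["category"] else 0.0
    return 0.4 * s_color + 0.4 * s_pos + 0.2 * s_class


anchor = {"category": "mug", "color_hist": np.array([0.7, 0.2, 0.1]),
          "position": np.array([1.0, 0.5, 0.8])}
percept = {"category": "mug", "color_hist": np.array([0.6, 0.3, 0.1]),
           "position": np.array([1.05, 0.48, 0.8])}

score = match_score(anchor, percept)
print("re-acquire" if score > 0.75 else "acquire", round(score, 3))
```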

    List of papers
    1. A Database-Centric Architecture for Efficient Matching of Object Instances in Context of Perceptual Anchoring
    (English) Manuscript (preprint) (Other academic)
    National Category
    Information Systems
    Identifiers
    urn:nbn:se:oru:diva-73526 (URN)
    Available from: 2019-04-05. Created: 2019-04-05. Last updated: 2019-04-05. Bibliographically approved
    2. Semantic Relational Object Tracking
    (English) Manuscript (preprint) (Other academic)
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:oru:diva-73529 (URN)
    Available from: 2019-04-05. Created: 2019-04-05. Last updated: 2019-04-05. Bibliographically approved
    3. Learning Actions to Improve the Perceptual Anchoring of Objects
    2017 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no. 76. Article in journal (Refereed). Published
    Abstract [en]

    In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time; and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework which relies on the notations presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify over several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used for training different sequential learning algorithms in order to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process itself.

    Place, publisher, year, edition, pages
    Lausanne: Frontiers Media S.A., 2017
    Keywords
    Perceptual anchoring, symbol grounding, action learning, sequential learning algorithms, common-sense knowledge, object classification, object tracking
    National Category
    Computer Sciences Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-54025 (URN). 10.3389/frobt.2016.00076 (DOI). 000392981800001 ()
    Projects
    Chist-Era ReGround project
    Funder
    Swedish Research Council, 2016-05321
    Note

    Funding Agency:

    Chist-Era ReGround project

    Available from: 2016-12-18. Created: 2016-12-18. Last updated: 2019-04-09. Bibliographically approved
    4. Fast Matching of Binary Descriptors for Large-scale Applications in Robot Vision
    2016 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 13, article id 58. Article in journal (Refereed). Published
    Abstract [en]

    The introduction of computationally efficient binary feature descriptors has raised new opportunities for real-world robot vision applications. However, brute-force feature matching of binary descriptors is only practical for smaller datasets. In the literature, there has therefore been an increasing interest in representing and matching binary descriptors more efficiently. In this article, we follow this trend and present a method for efficiently and dynamically quantizing binary descriptors through a summarized frequency count into compact representations (called fsum) for improved feature matching of binary point features. With the motivation that real-world robot applications must adapt to a changing environment, we further present an overview of algorithms in the field that concern the efficient matching of binary descriptors and that are able to incorporate changes over time, such as clustered search trees and bag-of-features improved by vocabulary adaptation. The focus of this article is on evaluation, particularly large-scale evaluation against alternatives that exist within the field. Throughout this evaluation, it is shown that the fsum approach is efficient in terms of both computational cost and memory requirements, while retaining adequate retrieval accuracy. It is further shown that the presented algorithm is equally suited to binary descriptors of arbitrary type and that the algorithm is therefore a valid option for several types of vision applications.

    Place, publisher, year, edition, pages
    Rijeka, Croatia: InTech, 2016
    Keywords
    Binary Descriptors, Efficient Feature Matching, Real-world Robotic Vision Applications
    National Category
    Robotics
    Identifiers
    urn:nbn:se:oru:diva-49931 (URN). 10.5772/62162 (DOI). 000372839300002 ()
    Funder
    Swedish Research Council, 2011-6104
    Available from: 2016-04-26. Created: 2016-04-26. Last updated: 2019-04-05. Bibliographically approved
    5. Fluent human–robot dialogues about grounded objects in home environments
    2014 (English). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no. 4, p. 914-927. Article in journal (Refereed). Published
    Abstract [en]

    To provide spoken interaction between robots and human users, an internal representation of the robot's sensory information must be available at a semantic level and accessible to a dialogue system in order to be used in a human-like and intuitive manner. In this paper, we integrate the field of perceptual anchoring (which creates and maintains the symbol-percept correspondence of objects) in robotics with multimodal dialogues in order to achieve a fluent interaction between humans and robots when talking about objects. These everyday objects are located in a so-called symbiotic system where humans, robots, and sensors cooperate in a home environment. To orchestrate the dialogue system, the IrisTK dialogue platform is used. The IrisTK system is based on modelling the interaction of events between different modules, e.g., speech recognizer, face tracker, etc. This system runs on a mobile robot device, which is part of a distributed sensor network. A perceptual anchoring framework recognizes objects placed in the home and maintains a consistent identity for the objects, consisting of their symbolic and perceptual data. Particular effort is placed on creating flexible dialogues where requests for objects can be made in a variety of ways. Experimental validation consists of evaluating the system when many objects are possible candidates for satisfying these requests.

    Place, publisher, year, edition, pages
    Springer, 2014
    Keywords
    Human–robot interaction, Perceptual anchoring, Symbol grounding, Spoken dialogue systems, Social robotics
    National Category
    Robotics
    Research subject
    Computer Science; Information technology
    Identifiers
    urn:nbn:se:oru:diva-39388 (URN). 10.1007/s12559-014-9291-y (DOI). 000345994900022 ()
    Funder
    Swedish Research Council, 2011-6104
    Available from: 2014-12-06. Created: 2014-12-06. Last updated: 2019-04-05. Bibliographically approved
  • 3.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Al Moubayed, Samer
    Örebro University, School of Science and Technology. Department for Speech, Music and Hearing (TMH), Royal Institute of Technology (KTH), Stockholm, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Fluent human–robot dialogues about grounded objects in home environments, 2014. In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no. 4, p. 914-927. Article in journal (Refereed)
    Abstract [en]

    To provide spoken interaction between robots and human users, an internal representation of the robot's sensory information must be available at a semantic level and accessible to a dialogue system in order to be used in a human-like and intuitive manner. In this paper, we integrate the field of perceptual anchoring (which creates and maintains the symbol-percept correspondence of objects) in robotics with multimodal dialogues in order to achieve a fluent interaction between humans and robots when talking about objects. These everyday objects are located in a so-called symbiotic system where humans, robots, and sensors cooperate in a home environment. To orchestrate the dialogue system, the IrisTK dialogue platform is used. The IrisTK system is based on modelling the interaction of events between different modules, e.g., speech recognizer, face tracker, etc. This system runs on a mobile robot device, which is part of a distributed sensor network. A perceptual anchoring framework recognizes objects placed in the home and maintains a consistent identity for the objects, consisting of their symbolic and perceptual data. Particular effort is placed on creating flexible dialogues where requests for objects can be made in a variety of ways. Experimental validation consists of evaluating the system when many objects are possible candidates for satisfying these requests.
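
    To illustrate the kind of request resolution such a dialogue needs (a hypothetical sketch; this does not use the IrisTK API, and the anchors and attributes are invented for the example), the following Python snippet resolves a spoken object description against anchored objects and asks a clarifying question when several objects satisfy the request:

```python
# Toy sketch of resolving a spoken object request against anchored objects and asking
# a clarification question when the request is ambiguous. Modules and utterance
# handling are simplified placeholders, not the dialogue platform used in the paper.
from typing import Dict, List

anchors: List[Dict[str, str]] = [
    {"id": "mug-1", "category": "mug", "color": "red", "location": "kitchen table"},
    {"id": "mug-2", "category": "mug", "color": "blue", "location": "sink"},
    {"id": "book-1", "category": "book", "color": "green", "location": "sofa"},
]


def resolve(request: Dict[str, str]) -> List[Dict[str, str]]:
    """Return all anchors whose attributes are consistent with the request."""
    return [a for a in anchors
            if all(a.get(k) == v for k, v in request.items())]


def respond(request: Dict[str, str]) -> str:
    candidates = resolve(request)
    if not candidates:
        return "I cannot find such an object."
    if len(candidates) == 1:
        a = candidates[0]
        return f"The {a['color']} {a['category']} is on the {a['location']}."
    # Several candidates: ask a clarifying question instead of guessing.
    options = " or ".join(f"the {a['color']} one" for a in candidates)
    return f"I see several {request.get('category', 'object')}s. Do you mean {options}?"


print(respond({"category": "mug"}))                  # clarification question
print(respond({"category": "mug", "color": "red"}))  # grounded answer
```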

  • 4.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Coradeschi, Silvia
    Örebro University, School of Science and Technology.
    Rajasekaran, Balasubramanian
    Department of Science & Technology, Center for Applied Autonomous Sensor Systems (AASS), Örebro University, Örebro, Sweden.
    Krishna, Vamsi
    Department of Science & Technology, Center for Applied Autonomous Sensor Systems (AASS), Örebro University, Örebro, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Alirezaie, Marjan
    Örebro University, School of Science and Technology.
    I would like some food: anchoring objects to semantic web information in human-robot dialogue interactions, 2013. In: Social Robotics: Proceedings of the 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013 / [ed] Guido Herrmann, Martin J. Pearson, Alexander Lenz, Paul Bremner, Adam Spiers, Ute Leonards, Springer, 2013, p. 361-370. Conference paper (Refereed)
    Abstract [en]

    Ubiquitous robotic systems present a number of interesting application areas for socially assistive robots that aim to improve quality of life. In particular, the combination of smart home environments and relatively inexpensive robots can be a viable technological solution for assisting elderly persons and persons with disabilities in their own homes. Such services require an easy interface, such as spoken dialogue, and the ability to refer to physical objects using semantic terms. This paper presents an implemented system combining a robot and a sensor network deployed in a test apartment in an elderly residence area. The paper focuses on the creation and maintenance (anchoring) of the connection between the semantic information present in the dialogue and perceived physical objects in the home. Semantic knowledge about concepts and their correlations is retrieved from online resources and ontologies, e.g., WordNet, and sensor information is provided by cameras distributed in the apartment.
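
    A rough sketch of the semantic-web lookup described above is given below: it expands a generic request term ('food') into more specific object labels via WordNet hyponyms and intersects them with labels the perception system can detect. It assumes the NLTK library with the WordNet corpus installed; the detectable labels are invented for the example, and this is not the paper's implementation.

```python
# Rough illustration of grounding a generic request ("I would like some food") by
# expanding the term via WordNet hyponyms and intersecting with object labels the
# perception system can actually detect. Requires nltk and the WordNet corpus
# (nltk.download('wordnet')); the detectable labels below are made up for the example.
from nltk.corpus import wordnet as wn


def hyponym_lemmas(term: str, max_depth: int = 4) -> set:
    """Collect lemma names of all hyponyms of `term` down to a limited depth."""
    lemmas = set()
    frontier = wn.synsets(term, pos=wn.NOUN)
    for _ in range(max_depth):
        nxt = []
        for syn in frontier:
            for hypo in syn.hyponyms():
                lemmas.update(l.lower().replace("_", " ") for l in hypo.lemma_names())
                nxt.append(hypo)
        frontier = nxt
    return lemmas


detectable = {"apple", "banana", "coffee mug", "sandwich", "remote control"}
food_terms = hyponym_lemmas("food")
# e.g. ['apple', 'banana', 'sandwich'], depending on the WordNet version and depth
print(sorted(detectable & food_terms))
```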

  • 5.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Database-Centric Architecture for Efficient Matching of Object Instances in Context of Perceptual Anchoring. Manuscript (preprint) (Other academic)
  • 6.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Hash Table Approach for Large Scale Perceptual Anchoring, 2013. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2013), 2013, p. 3060-3066. Conference paper (Refereed)
    Abstract [en]

    Perceptual anchoring deals with the problem of creating and maintaining the connection between percepts and symbols that refer to the same physical object. When approaching long-term use of an anchoring framework which must cope with large sets of data, it is challenging to anchor objects both efficiently and accurately. One approach to address this problem is through visual perception and computationally efficient binary visual features. In this paper, we present a novel hash table algorithm derived from summarized binary visual features. This algorithm is later contextualized in an anchoring framework. Advantages of the internal structure of the proposed hash tables are presented, as well as improvements through the use of hierarchies structured by semantic knowledge. Through evaluation on a larger set of data, we show that our approach is appropriate for efficient bottom-up anchoring and is comparable in performance to a recently presented search-tree algorithm.
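
    The following toy sketch conveys the general idea of a hash table keyed on summarized binary visual features: a descriptor is reduced to a short coarse key, descriptors are bucketed under that key, and candidates within a bucket are verified with the exact Hamming distance. The descriptor size, key length, and majority-vote summarization are illustrative assumptions and do not reproduce the paper's algorithm.

```python
# Toy sketch of hashing summarized binary features: reduce a 256-bit descriptor to a
# short coarse key, bucket descriptors under that key, and verify candidates with the
# exact Hamming distance. Parameters are illustrative, not the paper's algorithm.
from typing import Dict, List, Optional, Tuple

import numpy as np


def coarse_key(desc: np.ndarray, key_bits: int = 16) -> int:
    """Summarize a 0/1 descriptor into a short integer key by majority-voting
    over consecutive groups of bits."""
    groups = desc.reshape(key_bits, -1)              # 16 groups of 16 bits each
    bits = (groups.mean(axis=1) >= 0.5).astype(int)
    return int("".join(map(str, bits)), 2)


class BinaryHashTable:
    def __init__(self) -> None:
        self.buckets: Dict[int, List[Tuple[str, np.ndarray]]] = {}

    def add(self, label: str, desc: np.ndarray) -> None:
        self.buckets.setdefault(coarse_key(desc), []).append((label, desc))

    def query(self, desc: np.ndarray) -> Optional[Tuple[str, int]]:
        """Best (label, Hamming distance) among descriptors in the same bucket."""
        candidates = self.buckets.get(coarse_key(desc), [])
        if not candidates:
            return None
        return min(((lbl, int(np.count_nonzero(d != desc))) for lbl, d in candidates),
                   key=lambda t: t[1])


rng = np.random.default_rng(0)
table = BinaryHashTable()
stored = rng.integers(0, 2, size=256)
table.add("cup-feature", stored)

noisy = stored.copy()
noisy[rng.choice(256, size=5, replace=False)] ^= 1   # flip 5 bits
print(table.query(noisy))                            # likely ('cup-feature', 5)
```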

  • 7.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Fast Matching of Binary Descriptors for Large-scale Applications in Robot Vision, 2016. In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 13, article id 58. Article in journal (Refereed)
    Abstract [en]

    The introduction of computationally efficient binary feature descriptors has raised new opportunities for real-world robot vision applications. However, brute-force feature matching of binary descriptors is only practical for smaller datasets. In the literature, there has therefore been an increasing interest in representing and matching binary descriptors more efficiently. In this article, we follow this trend and present a method for efficiently and dynamically quantizing binary descriptors through a summarized frequency count into compact representations (called fsum) for improved feature matching of binary point features. With the motivation that real-world robot applications must adapt to a changing environment, we further present an overview of algorithms in the field that concern the efficient matching of binary descriptors and that are able to incorporate changes over time, such as clustered search trees and bag-of-features improved by vocabulary adaptation. The focus of this article is on evaluation, particularly large-scale evaluation against alternatives that exist within the field. Throughout this evaluation, it is shown that the fsum approach is efficient in terms of both computational cost and memory requirements, while retaining adequate retrieval accuracy. It is further shown that the presented algorithm is equally suited to binary descriptors of arbitrary type and that the algorithm is therefore a valid option for several types of vision applications.
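
    As a toy illustration of the 'summarized frequency count' idea behind fsum (not the published algorithm), the sketch below represents a set of binary descriptors by its per-bit frequency vector and matches a new descriptor set against stored summaries by comparing these compact representations:

```python
# Toy illustration of summarizing a set of binary descriptors by per-bit frequency
# counts and matching a new descriptor set against stored summaries. This conveys the
# general "summarized frequency count" idea only; it is not the published fsum method.
import numpy as np


def frequency_summary(descriptors: np.ndarray) -> np.ndarray:
    """Column-wise frequency of set bits over a (n_descriptors, n_bits) 0/1 matrix."""
    return descriptors.mean(axis=0)


def summary_distance(summary_a: np.ndarray, summary_b: np.ndarray) -> float:
    """L1 distance between two frequency summaries (lower means more similar)."""
    return float(np.abs(summary_a - summary_b).sum())


rng = np.random.default_rng(1)
mug_descs = rng.integers(0, 2, size=(40, 256))       # descriptors seen on object A
book_descs = rng.integers(0, 2, size=(40, 256))      # descriptors seen on object B
database = {"mug": frequency_summary(mug_descs),
            "book": frequency_summary(book_descs)}

# A new observation of object A: a subset of its descriptors.
query = mug_descs[rng.choice(40, size=20, replace=False)]
query_summary = frequency_summary(query)

best = min(database, key=lambda k: summary_distance(database[k], query_summary))
print(best)  # expected: 'mug'
```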

  • 8.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Learning Actions to Improve the Perceptual Anchoring of Objects, 2017. In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no. 76. Article in journal (Refereed)
    Abstract [en]

    In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time; and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework which relies on the notations presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify over several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used for training different sequential learning algorithms in order to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process itself.
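
    As a minimal sketch of the sequential-learning step described above, the snippet below classifies the movement pattern of a tracked object into action labels such as 'pour' and 'stir' with an LSTM; PyTorch, the network size, the (x, y, z) per-frame input features, and the labels are all assumptions, not the architecture used in the paper.

```python
# Minimal sketch (PyTorch assumed available) of classifying the movement pattern of a
# tracked object into action labels such as 'pour' or 'stir' with an LSTM. The network
# size, input features, and labels are illustrative assumptions only.
import torch
import torch.nn as nn

ACTIONS = ["pour", "stir", "move"]


class ActionClassifier(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 32, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, trajectories: torch.Tensor) -> torch.Tensor:
        # trajectories: (batch, time, features); use the last hidden state for the label.
        _, (h_n, _) = self.lstm(trajectories)
        return self.head(h_n[-1])


model = ActionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: 8 tracked-object trajectories, 50 frames each, (x, y, z) per frame.
batch = torch.randn(8, 50, 3)
labels = torch.randint(0, len(ACTIONS), (8,))

logits = model(batch)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(ACTIONS[int(logits[0].argmax())])
```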

  • 9.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Zuidberg Dos Martires, Pedro
    Declaratieve Talen en Artificiele Intelligentie (DTAI), Department of Computer Science, KU Leuven, Heverlee, Belgium.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    De Raedt, Luc
    Declaratieve Talen en Artificiele Intelligentie (DTAI), Department of Computer Science, KU Leuven, Heverlee, Belgium.
    Semantic Relational Object Tracking. Manuscript (preprint) (Other academic)