Fluent human–robot dialogues about grounded objects in home environments
Persson, Andreas: Örebro University, School of Science and Technology (AASS), Örebro, Sweden.
Al Moubayed, Samer: Örebro University, School of Science and Technology, Örebro, Sweden; Department for Speech, Music and Hearing (TMH), Royal Institute of Technology (KTH), Stockholm, Sweden.
Loutfi, Amy: Örebro University, School of Science and Technology (AASS), Örebro, Sweden. ORCID iD: 0000-0002-3122-693X
2014 (English). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no. 4, pp. 914-927. Article in journal (Refereed). Published.
Abstract [en]

To provide spoken interaction between robots and human users, an internal representation of the robot's sensory information must be available at a semantic level and accessible to a dialogue system, so that it can be used in a human-like and intuitive manner. In this paper, we integrate perceptual anchoring in robotics (which creates and maintains the correspondence between symbols and the percepts of objects) with multimodal dialogues in order to achieve a fluent interaction between humans and robots when talking about objects. These everyday objects are located in a so-called symbiotic system in which humans, robots, and sensors co-operate in a home environment. The dialogue is orchestrated by the IrisTK dialogue platform, which is based on modelling the interaction as events exchanged between different modules, e.g. a speech recognizer, a face tracker, etc. The system runs on a mobile robot that is part of a distributed sensor network. A perceptual anchoring framework recognizes objects placed in the home and maintains a consistent identity for each object, consisting of its symbolic and perceptual data. Particular effort is placed on creating flexible dialogues in which requests about objects can be made in a variety of ways. Experimental validation consists of evaluating the system when many objects are possible candidates for satisfying these requests.
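Two of the mechanisms summarized above can be illustrated compactly: a perceptual anchor that binds a symbolic identifier to an object's perceptual data, and a resolution step that matches a spoken request against the current set of anchors, triggering a clarification question when several objects qualify. The Python sketch below is purely illustrative and is not the authors' implementation (IrisTK itself is an event-based Java framework whose API is not used here); names such as `Anchor` and `resolve_request` are hypothetical.

```python
from dataclasses import dataclass, field
import time


@dataclass
class Anchor:
    """Binds a symbolic identifier to the perceptual data of one physical object."""
    symbol: str                                        # e.g. "cup-1"
    category: str                                      # object class from the recognizer, e.g. "cup"
    attributes: dict = field(default_factory=dict)     # perceptual properties, e.g. {"color": "red"}
    last_observed: float = field(default_factory=time.time)


def resolve_request(anchors, category, **constraints):
    """Return all anchors consistent with a request such as 'the red cup'."""
    return [
        a for a in anchors
        if a.category == category
        and all(a.attributes.get(k) == v for k, v in constraints.items())
    ]


if __name__ == "__main__":
    anchors = [
        Anchor("cup-1", "cup", {"color": "red", "location": "kitchen table"}),
        Anchor("cup-2", "cup", {"color": "blue", "location": "sofa"}),
        Anchor("book-1", "book", {"color": "red", "location": "shelf"}),
    ]

    candidates = resolve_request(anchors, "cup", color="red")
    if len(candidates) == 1:
        obj = candidates[0]
        print(f"Fetching {obj.symbol} from the {obj.attributes['location']}.")
    elif len(candidates) > 1:
        # Several objects satisfy the request: a dialogue system would ask a
        # clarification question rather than guess.
        print("Which one do you mean?", [c.symbol for c in candidates])
    else:
        print("I cannot find such an object.")
```

In the setting described by the paper, the perceptual attributes would be supplied and updated by the distributed sensor network and the anchoring framework rather than being hard-coded as in this sketch.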

Place, publisher, year, edition, pages
Springer, 2014. Vol. 6, no. 4, pp. 914-927.
Keywords [en]
Human–robot interaction, Perceptual anchoring, Symbol grounding, Spoken dialogue systems, Social robotics
National Category
Robotics
Research subject
Computer Science; Information technology
Identifiers
URN: urn:nbn:se:oru:diva-39388
DOI: 10.1007/s12559-014-9291-y
ISI: 000345994900022
OAI: oai:DiVA.org:oru-39388
DiVA: diva2:769246
Funder
Swedish Research Council, 2011-6104
Available from: 2014-12-06. Created: 2014-12-06. Last updated: 2015-01-23. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
