Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations
Koc University.
KU Leuven.
Örebro University, School of Science and Technology.
Osnabrück University.
2019 (English). In: Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) / [ed] Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason. Association for Computational Linguistics, 2019, p. 29-39, article id W19-1604. Conference paper, published paper (refereed).
Abstract [en]

Human-robot interaction often occurs in the form of instructions given from a human to a robot. For a robot to successfully follow instructions, a common representation of the world and the objects in it must be shared between the human and the robot so that the instructions can be grounded. This shared representation can be achieved via learning, where the world representation and the language grounding are learned simultaneously. In robotics, however, this is a difficult task due to the cost and scarcity of data. In this paper, we tackle the problem by learning the robot's world representation and the language grounding separately. While this approach addresses the challenge of obtaining sufficient data, it may give rise to inconsistencies between the two learned components. We therefore further propose Bayesian learning to resolve such inconsistencies between the natural language grounding and the robot's world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human. Moreover, we demonstrate the feasibility of our approach in a scenario involving a robotic arm in the physical world.
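
As an illustration of the inconsistency resolution described in the abstract, the minimal Python sketch below performs a single Bayesian update over an object's label, combining an uncertain perceptual prior with likelihood evidence derived from a spatial relation mentioned in an instruction. The scene, the labels, and all probability values are hypothetical and are not taken from the paper; the sketch only shows the general form of the update, P(label | relation) ∝ P(relation | label) · P(label).

```python
# Illustrative sketch only: resolving a disagreement between a robot's
# perceptual world model and the language grounding via a Bayesian update
# over object labels, using a spatial relation ("left of") that is
# implicitly present in the instruction. All labels, probabilities, and
# the toy scene are hypothetical; they do not come from the paper.

def bayes_update(prior, likelihood):
    """Posterior over labels: P(label | evidence) ∝ P(evidence | label) P(label)."""
    unnorm = {label: prior[label] * likelihood.get(label, 0.0) for label in prior}
    z = sum(unnorm.values()) or 1.0
    return {label: p / z for label, p in unnorm.items()}

# Perceptual prior: the vision pipeline is unsure whether the detected
# object is a mug or a bowl (hypothetical beliefs).
perception_prior = {"mug": 0.55, "bowl": 0.45}

# Instruction: "put the mug to the left of the plate".
# Implicit spatio-relational evidence: the referent lies left of the plate.
# Likelihood of observing that relation for each candidate label
# (hypothetical values; in the paper's setting these would come from the
# learned language grounding rather than fixed numbers).
relation_likelihood = {"mug": 0.9, "bowl": 0.2}

posterior = bayes_update(perception_prior, relation_likelihood)
print(posterior)  # e.g. {'mug': 0.846..., 'bowl': 0.153...}
```

In the setting the abstract describes, the prior would come from the robot's separately learned world representation and the likelihood from the grounded spatial-language model; here both are stubbed with fixed numbers to keep the update itself visible.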

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2019, p. 29-39, article id W19-1604.
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems); Human Computer Interaction
Identifiers
URN: urn:nbn:se:oru:diva-79501
DOI: 10.18653/v1/W19-1604
OAI: oai:DiVA.org:oru-79501
DiVA, id: diva2:1389349
Conference
Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), Minneapolis, Minnesota, USA, June 2019
Funder
Swedish Research Council, 2016-05321; EU, Horizon 2020
Note

This work has been supported by the ReGROUND project (http://reground.cs.kuleuven.be), a CHIST-ERA project funded by the EU H2020 framework programme, the Research Foundation - Flanders, the Swedish Research Council (Vetenskapsrådet), and the Scientific and Technological Research Council of Turkey (TUBITAK). The work is also supported by Vetenskapsrådet under grant number 2016-05321 and by TUBITAK under grants 114E628 and 215E201.

Available from: 2020-01-29. Created: 2020-01-29. Last updated: 2020-06-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Persson, Andreas; Loutfi, Amy; De Raedt, Luc; Saffiotti, Alessandro
