oru.se — Örebro University publications
Relational affordances for multiple-object manipulation
Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium.
Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal.
2018 (English) In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 42, no. 1, p. 19-44. Article in journal (Refereed). Published.
Abstract [en]

The concept of affordances has been used in robotics to model a robot's action opportunities and as a basis for making decisions involving objects. Affordances capture the interdependencies between objects and their properties, the actions executed on those objects, and the effects of those actions. However, existing affordance models cannot cope with multiple objects that may interact during action execution. Our approach is unique in that it possesses the following four characteristics. First, our model employs recent advances in probabilistic programming to learn affordance models that take into account (spatial) relations between different objects, such as relative distances. Two-object interaction models are first learned from the robot interacting with the world in a behavioural exploration stage, and are then employed in worlds with an arbitrary number of objects. The model thus generalizes over both the number of objects and the particular objects used in the exploration stage, and it also effectively deals with uncertainty. Second, rather than using a (discrete) action repertoire, the actions are parametrised according to the motor capabilities of the robot, which makes it possible to model and achieve goals at several levels of complexity. It also supports a two-arm parametrised action. Third, the relational affordance model represents the state of the world using both discrete (action and object features) and continuous (effect) random variables. The effects follow a multivariate Gaussian distribution conditioned on the correlated discrete variables (actions and object properties). Fourth, the learned model can be employed in planning for high-level goals that closely correspond to goals formulated in natural language. The goals are specified by means of (spatial) relations between the objects. The model is evaluated in real experiments using an iCub robot given a series of such planning goals of increasing difficulty.
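The effect model described in the abstract can be illustrated with a minimal sketch: discrete action and object variables select a Gaussian distribution over a continuous effect. All names and numbers below are hypothetical placeholders, not values from the paper, and the effect is reduced to one dimension for brevity.

```python
import random

# Hypothetical effect model: each (action, object shape) pair indexes a
# Gaussian over a continuous effect (here, object displacement in cm).
# The parameters are illustrative only, not learned values from the paper.
EFFECT_MODEL = {
    # (action, object_shape) -> (mean displacement, standard deviation)
    ("tap", "sphere"):  (10.0, 2.0),
    ("tap", "box"):     (4.0, 1.0),
    ("push", "sphere"): (15.0, 3.0),
    ("push", "box"):    (8.0, 1.5),
}

def sample_effect(action: str, shape: str, rng: random.Random) -> float:
    """Sample a continuous effect conditioned on the discrete variables."""
    mean, std = EFFECT_MODEL[(action, shape)]
    return rng.gauss(mean, std)

rng = random.Random(0)
print(f"tap/sphere displacement: {sample_effect('tap', 'sphere', rng):.2f} cm")
```

In the paper the effects are multivariate and the distributions are embedded in a relational probabilistic program, so this lookup-table sketch only conveys the conditioning structure (continuous effects given discrete action and object features).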

Place, publisher, year, edition, pages
Springer, 2018. Vol. 42, no. 1, p. 19-44
Keywords [en]
Affordances, Relational affordances, Probabilistic programming, Object manipulation, Planning
HSV category
Identifiers
URN: urn:nbn:se:oru:diva-83321
DOI: 10.1007/s10514-017-9637-x
ISI: 000419487700002
Scopus ID: 2-s2.0-85018987328
OAI: oai:DiVA.org:oru-83321
DiVA id: diva2:1442629
Available from: 2020-06-17. Created: 2020-06-17. Last updated: 2025-02-09. Bibliographically checked.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Person

De Raedt, Luc

Search in DiVA

By author/editor: De Raedt, Luc
In the same journal: Autonomous Robots
