Affordance detection for task-specific grasping using deep learning
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden. (AASS). ORCID iD: 0000-0003-3958-6179
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
2017 (English). In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), IEEE conference proceedings, 2017, pp. 91-98. Conference paper, published paper (refereed)
Abstract [en]

In this paper we utilize the notion of affordances to model relations between task, object, and grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class, and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.
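The abstract describes turning detected affordances into grasp constraints. As a rough illustration only (not the authors' implementation; the task names, array shapes, and the `grasp_constraint_mask` helper are all hypothetical), a per-pixel affordance score map from a detector could be thresholded into a mask of regions that afford a requested task:

```python
import numpy as np

# Hypothetical example task set; the paper mentions five tasks but does
# not name them in this record.
TASKS = ["handover", "pouring", "cutting", "scooping", "hammering"]

def grasp_constraint_mask(affordance_scores, task, threshold=0.5):
    """Return a boolean mask of pixels that afford `task`.

    affordance_scores: (T, H, W) array of per-task scores in [0, 1],
    e.g. the output of an affordance-detection network.
    """
    t = TASKS.index(task)
    return affordance_scores[t] >= threshold

# Stand-in for detector output on a tiny 4x4 image.
rng = np.random.default_rng(0)
scores = rng.random((len(TASKS), 4, 4))
mask = grasp_constraint_mask(scores, "pouring", threshold=0.7)
print(mask.shape)  # (4, 4)
```

A grasp planner could then restrict candidate contact points to the masked region, which is the general idea of constraining grasps by task that the abstract sketches.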

Place, publisher, year, edition, pages
IEEE conference proceedings, 2017. pp. 91-98
Series
IEEE-RAS International Conference on Humanoid Robotics, E-ISSN 2164-0580
National subject category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-71556
DOI: 10.1109/HUMANOIDS.2017.8239542
ISI: 000427350100013
Scopus ID: 2-s2.0-85044473077
OAI: oai:DiVA.org:oru-71556
DiVA id: diva2:1280228
Conference
IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Birmingham, UK, November 15-17, 2017
Available from: 2019-01-18. Created: 2019-01-18. Last updated: 2019-01-23. Bibliographically reviewed.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus

Person records (BETA)

Stork, Johannes Andreas
