Affordance detection for task-specific grasping using deep learning
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden. (AASS) ORCID iD: 0000-0003-3958-6179
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
2017 (English). In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), IEEE conference proceedings, 2017, p. 91-98. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.
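The abstract's core idea, detecting object affordances and then using them to constrain which grasps are admissible for a given task, can be sketched in a few lines. The affordance labels, the task-to-affordance mapping, and the toy prediction below are invented placeholders for illustration, not the authors' actual model or dataset; a real system would obtain the per-pixel labels from a CNN segmentation head.

```python
# Hypothetical sketch of the "affordance detection -> grasp constraint" step.
# Labels and the task mapping are assumptions, not the paper's definitions.
AFFORDANCES = ["grasp", "cut", "scoop", "pound", "support"]

# A task is only allowed on regions carrying a compatible affordance.
TASK_TO_AFFORDANCE = {"handover": "grasp", "cutting": "cut"}


def grasp_constraint_mask(affordance_map, task):
    """Boolean mask of pixels where a grasp for `task` is admissible.

    affordance_map: H x W list of lists of integer affordance labels,
    as a segmentation network might produce after an argmax.
    """
    label = AFFORDANCES.index(TASK_TO_AFFORDANCE[task])
    return [[cell == label for cell in row] for row in affordance_map]


# Toy 4x4 "object": left half affords grasping, right half cutting.
pred = [[0, 0, 1, 1] for _ in range(4)]

mask = grasp_constraint_mask(pred, "handover")
print(sum(cell for row in mask for cell in row))  # 8 admissible pixels
```

A grasp planner (the paper uses an optimization-based one) would then restrict candidate contact regions to this mask rather than the whole object surface.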

Place, publisher, year, edition, pages
IEEE conference proceedings, 2017. p. 91-98
Series
IEEE-RAS International Conference on Humanoid Robotics, E-ISSN 2164-0580
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-71556
DOI: 10.1109/HUMANOIDS.2017.8239542
ISI: 000427350100013
Scopus ID: 2-s2.0-85044473077
OAI: oai:DiVA.org:oru-71556
DiVA, id: diva2:1280228
Conference
IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Birmingham, UK, November 15-17, 2017
Available from: 2019-01-18. Created: 2019-01-18. Last updated: 2019-01-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Stork, Johannes Andreas
