Örebro University Publications (oru.se)
Context-Aware Grasp Generation in Cluttered Scenes
Örebro University, School of Science and Technology (AASS); ICT Department, FPT University, Hanoi, Vietnam.
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0003-3958-6179
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0002-6013-4874
2022 (English). In: 2022 International Conference on Robotics and Automation (ICRA), IEEE, 2022, p. 1492-1498. Conference paper, published paper (refereed).
Abstract [en]

Conventional approaches to autonomous grasping rely on a pre-computed database of known objects to synthesize grasps, which is not possible for novel objects. Recently proposed deep learning-based approaches, on the other hand, have demonstrated the ability to generalize grasps to unknown objects. However, grasp generation remains a challenging problem, especially in cluttered environments under partial occlusion. In this work, we propose an end-to-end deep learning approach for generating 6-DOF collision-free grasps given a 3D scene point cloud. To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. This contextual information increases the likelihood that the generated grasps are collision-free. Our experimental results confirm that the proposed system compares favorably with current state-of-the-art methods at predicting object grasps in cluttered environments.
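The vote-casting step described in the abstract is conceptually similar to Hough-style voting in point-cloud networks: each seed point predicts an offset toward a grasp center, and centers supported by many votes remain detectable even when part of the object is occluded. The sketch below illustrates only this accumulation idea; the function names, the greedy clustering scheme, and the radius are illustrative assumptions, not the paper's implementation (where offsets come from a learned network head):

```python
import numpy as np

def cast_votes(points, offsets):
    """Each seed point votes for a grasp center by adding its predicted offset.

    In a learned pipeline the offsets would be regressed by a network;
    here they are passed in as a plain array for illustration.
    """
    return points + offsets

def accumulate_evidence(votes, radius=0.05):
    """Greedy clustering: group nearby votes and score clusters by vote count.

    Returns (centers, counts) sorted by descending evidence. Clusters backed
    by many votes correspond to grasp hypotheses supported by several seeds,
    which is what lends robustness to partial occlusion.
    """
    remaining = votes.copy()
    centers, counts = [], []
    while len(remaining) > 0:
        # Seed a cluster at the first remaining vote and absorb its neighbours.
        dist = np.linalg.norm(remaining - remaining[0], axis=1)
        member = dist < radius
        centers.append(remaining[member].mean(axis=0))
        counts.append(int(member.sum()))
        remaining = remaining[~member]
    order = np.argsort(counts)[::-1]
    return np.array(centers)[order], np.array(counts)[order]
```

With zero offsets, two well-separated groups of seed points yield two clusters whose vote counts rank the resulting grasp hypotheses.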

Place, publisher, year, edition, pages
IEEE, 2022. p. 1492-1498
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-98437
DOI: 10.1109/ICRA46639.2022.9811371
ISI: 000941265701005
Scopus ID: 2-s2.0-85136323876
ISBN: 9781728196824 (print)
ISBN: 9781728196817 (electronic)
OAI: oai:DiVA.org:oru-98437
DiVA id: diva2:1648882
Conference
IEEE International Conference on Robotics and Automation (ICRA 2022), Philadelphia, USA, May 23-27, 2022
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2022-04-01. Created: 2022-04-01. Last updated: 2023-05-03. Bibliographically approved.

Open Access in DiVA

Context-Aware Grasp Generation in Cluttered Scenes (5899 kB), 259 downloads
File information
File name: FULLTEXT01.pdf. File size: 5899 kB. Checksum (SHA-512):
39e2ec2814bbbfa5ae4ea8505a0aacd6864c0cdcd003cee02edf9b976a592e2cd4dd9a00c8d84bf21aea978ffcb5c3b505fb5136975bacbc280637eb79cd4158
Type: fulltext. Mimetype: application/pdf.

Other links

Publisher's full text; Scopus

Authority records

Hoang, Dinh-Cuong; Stork, Johannes Andreas; Stoyanov, Todor

Total: 259 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
