Örebro University Publications
Voting and Attention-Based Pose Relation Learning for Object Pose Estimation From 3D Point Clouds
CT Department, FPT University, Hanoi, Vietnam.
Örebro University, School of Science and Technology (Centre for Applied Autonomous Sensor Systems). ORCID iD: 0000-0003-3958-6179
Örebro University, School of Science and Technology; Department of Computing and Software, McMaster University, Hamilton, ON, Canada (Centre for Applied Autonomous Sensor Systems). ORCID iD: 0000-0002-6013-4874
2022 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 7, no. 4, p. 8980-8987. Article in journal (Refereed). Published.
Abstract [en]

Estimating the 6DOF pose of objects is an important function in many applications, such as robot manipulation or augmented reality. However, accurate and fast pose estimation from 3D point clouds is challenging because of the complexity of object shapes, measurement noise, and the presence of occlusions. We address this challenging task with an end-to-end learning approach for object pose estimation given a raw point-cloud input. Our architecture pools geometric features together using a self-attention mechanism and adopts a deep Hough voting scheme for pose proposal generation. To build robustness to occlusion, the proposed network generates candidates by casting votes and accumulating evidence for object locations. Specifically, our model learns higher-level features by leveraging the dependency of object parts and object instances, thereby boosting the performance of object pose estimation. Our experiments show that our method outperforms state-of-the-art approaches on public benchmarks including the Siléane dataset [35] and the Fraunhofer IPA dataset [36]. We also deploy the proposed method on a real robot that performs pick-and-place based on the estimated poses.
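The abstract names two core ingredients: self-attention pooling of per-point geometric features, and a deep Hough voting head in which each seed point casts a vote for an object location. The sketch below is a minimal illustration of how such a head could be wired up in PyTorch. It is not the authors' released implementation; the module name AttentionVotingHead, the feature width, the head count, and the seed-point interface are all assumptions made for the example.

import torch
import torch.nn as nn


class AttentionVotingHead(nn.Module):
    """Illustrative sketch: attention over seed features, then per-seed votes."""

    def __init__(self, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Self-attention pools geometric features across seed points,
        # letting the network relate object parts and instances.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Each seed regresses a 3D offset (its "vote") toward an object
        # centre, in the spirit of deep Hough voting.
        self.vote_mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 3),
        )

    def forward(self, seed_xyz: torch.Tensor, seed_feat: torch.Tensor):
        # seed_xyz:  (B, N, 3)  seed point coordinates
        # seed_feat: (B, N, C)  per-point features from a point-cloud backbone
        attended, _ = self.attn(seed_feat, seed_feat, seed_feat)
        offsets = self.vote_mlp(attended)   # (B, N, 3) per-seed offsets
        votes = seed_xyz + offsets          # candidate object locations
        return votes, attended


if __name__ == "__main__":
    head = AttentionVotingHead()
    xyz = torch.randn(2, 256, 3)     # dummy seed coordinates
    feat = torch.randn(2, 256, 128)  # dummy backbone features
    votes, feats = head(xyz, feat)
    print(votes.shape)               # torch.Size([2, 256, 3])

In a full pipeline, votes from many seeds would then be clustered so that accumulated evidence, rather than any single prediction, determines each pose proposal, which is what gives voting schemes their robustness to occlusion.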

Place, publisher, year, edition, pages
IEEE, 2022. Vol. 7, no. 4, p. 8980-8987
Keywords [en]
6D object pose estimation, 3D point cloud, robot manipulation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-100891
DOI: 10.1109/LRA.2022.3189158
ISI: 000838567100053
Scopus ID: 2-s2.0-85134230629
OAI: oai:DiVA.org:oru-100891
DiVA, id: diva2:1691597
Funder
EU, Horizon 2020, 101017274
Knut and Alice Wallenberg Foundation
Available from: 2022-08-30. Created: 2022-08-30. Last updated: 2022-08-30. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Stork, Johannes A.; Stoyanov, Todor
