Sequence Searching With Deep-Learnt Depth for Condition- and Viewpoint-Invariant Route-Based Place Recognition
Queensland University of Technology, Australian Centre for Robotic Vision, Brisbane, Australia.
The University of Adelaide, Australian Centre for Robotic Vision, Adelaide, Australia.
Queensland University of Technology, Australian Centre for Robotic Vision, Brisbane, Australia. (AAS MRO Group)
Queensland University of Technology, Australian Centre for Robotic Vision, Brisbane, Australia.
2015 (English). In: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE conference proceedings, 2015, pp. 18-25. Conference paper (Other academic).
Abstract [en]

Vision-based localization on robots and vehicles remains unsolved when extreme appearance change and viewpoint change occur simultaneously. Current state-of-the-art approaches either address only one of the two problems (for example, FAB-MAP offers viewpoint invariance and SeqSLAM offers appearance invariance) or require extensive training within the test environment, an impractical requirement in many application scenarios. In this paper we significantly improve the viewpoint invariance of the SeqSLAM algorithm by using state-of-the-art deep learning techniques to generate synthetic viewpoints. Our approach differs from other deep learning approaches in that it does not rely on the CNN's ability to learn invariant features, but only on its ability to produce "good enough" depth images from day-time imagery. We evaluate the system on a new multi-lane, day-night car dataset gathered specifically to test both appearance and viewpoint change simultaneously. Results demonstrate that the use of synthetic viewpoints improves the recall achieved at 100% precision by a factor of 2.2 and the maximum recall by a factor of 2.7, enabling correct place recognition across multiple road lanes and significantly reducing the time between correct localizations.
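
The abstract describes two ingredients: synthetic viewpoints rendered from CNN-predicted depth, and SeqSLAM-style sequence search over an image-difference matrix. The sketch below is a hypothetical illustration of that pipeline, not the authors' code: the constant depth map is a placeholder (the paper predicts depth with a deep network), and all function names, the baseline, and the focal-length values are illustrative assumptions.

```python
import numpy as np

def synthesize_viewpoint(image, depth, baseline, focal_length):
    """Forward-warp a grayscale image to a laterally shifted camera pose
    (e.g. an adjacent road lane) using per-pixel depth.
    Horizontal disparity = focal_length * baseline / depth."""
    h, w = image.shape
    disparity = np.round(focal_length * baseline / np.maximum(depth, 1e-3)).astype(int)
    warped = np.zeros_like(image)
    cols = np.arange(w)
    for r in range(h):
        new_cols = np.clip(cols + disparity[r], 0, w - 1)
        warped[r, new_cols] = image[r, cols]  # duplicates simply overwrite
    return warped

def frame_distance(a, b):
    """Mean absolute difference between two frames; stands in for
    SeqSLAM's patch-normalized image difference."""
    return np.abs(a.astype(float) - b.astype(float)).mean()

def sequence_scores(query_frames, ref_frames, ds=10):
    """Simplified SeqSLAM-style search: score each reference index by the
    cost of a constant-velocity alignment of the last `ds` query frames."""
    D = np.array([[frame_distance(q, r) for r in ref_frames]
                  for q in query_frames[-ds:]])
    n_ref = len(ref_frames)
    scores = np.full(n_ref, np.inf)
    for j in range(n_ref - ds + 1):
        scores[j] = D[np.arange(ds), j + np.arange(ds)].sum()
    return scores  # argmin gives the best-matching reference position

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = [rng.integers(0, 255, (48, 64)).astype(np.uint8) for _ in range(30)]
    depth = np.full((48, 64), 10.0)  # placeholder; the paper uses deep-learnt depth
    # Second reference set as if seen from one lane over (~3 m baseline).
    ref_shifted = [synthesize_viewpoint(f, depth, 3.0, 200.0) for f in ref]
    query = ref[10:20]  # pretend we revisit reference frames 10-19
    s = np.minimum(sequence_scores(query, ref),
                   sequence_scores(query, ref_shifted))
    print("best matching reference index:", int(np.argmin(s)))
```

Taking the minimum over the real and synthetically shifted reference sets, as in the last lines of the demo, reflects the paper's core idea: the synthetic views let the sequence search succeed even when the query was captured from a different road lane.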

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015, pp. 18-25.
Keyword [en]
route-based place recognition, deep learning
National Category
Computer Science
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:oru:diva-47928
DOI: 10.1109/CVPRW.2015.7301395
OAI: oai:DiVA.org:oru-47928
DiVA: diva2:900436
Conference
2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, June 7-12, 2015
Note

Funding Agencies:

ARC Centre of Excellence in Robotic Vision CE140100016

ARC Future Fellowship FT140101229

Available from: 2016-02-04. Created: 2016-02-04. Last updated: 2017-03-06. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text (PDF)

Search in DiVA

By author/editor
Lowry, Stephanie