Sequence Searching With Deep-Learnt Depth for Condition- and Viewpoint-Invariant Route-Based Place Recognition
Australian Centre for Robotic Vision, Queensland University of Technology, Brisbane, Australia.
Australian Centre for Robotic Vision, The University of Adelaide, Adelaide, Australia.
Australian Centre for Robotic Vision, Queensland University of Technology, Brisbane, Australia. (AASS MRO Group) ORCID iD: 0000-0003-3788-499X
Australian Centre for Robotic Vision, Queensland University of Technology, Brisbane, Australia.
2015 (English) In: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE conference proceedings, 2015, pp. 18-25. Conference paper, Published paper (Other academic)
Abstract [en]

Vision-based localization on robots and vehicles remains unsolved when extreme appearance change and viewpoint change are present simultaneously. Current state-of-the-art approaches to this challenge either address only one of the two problems, for example FAB-MAP (viewpoint invariance) or SeqSLAM (appearance invariance), or require extensive training within the test environment, an impractical requirement in many application scenarios. In this paper we significantly improve the viewpoint invariance of the SeqSLAM algorithm by using state-of-the-art deep learning techniques to generate synthetic viewpoints. Our approach differs from other deep learning approaches in that it does not rely on the ability of a CNN to learn invariant features, but only to produce "good enough" depth images from day-time imagery. We evaluate the system on a new multi-lane day-night car dataset gathered specifically to test both appearance and viewpoint change simultaneously. Results demonstrate that the use of synthetic viewpoints improves the maximum recall achieved at 100% precision by a factor of 2.2 and the maximum recall by a factor of 2.7, enabling correct place recognition across multiple road lanes and significantly reducing the time between correct localizations.
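
To make the approach concrete, the Python/NumPy sketch below illustrates the two ingredients the abstract combines: warping an image to a synthetic lateral viewpoint using a per-pixel depth map (such as one predicted by a single-image depth CNN), and SeqSLAM-style matching of a query sequence against a reference sequence through a frame-difference matrix. This is an illustrative sketch only, not the authors' implementation; the function names, the pinhole-camera parameters (fx, baseline) and the constant-velocity trajectory search are simplifying assumptions made for the example.

# Hypothetical sketch of depth-based viewpoint synthesis plus
# SeqSLAM-style sequence matching; names and parameters are illustrative.
import numpy as np


def synthesize_lateral_view(image, depth, fx, baseline):
    """Re-render `image` as if the camera had shifted sideways by `baseline`
    metres, using per-pixel depth (metres) and a pinhole model with focal
    length `fx` (pixels). Each pixel is shifted by the induced disparity."""
    h, w = image.shape[:2]
    disparity = fx * baseline / np.clip(depth, 0.1, None)   # pixels
    out = np.zeros_like(image)
    cols = np.arange(w)
    for r in range(h):
        new_cols = np.clip((cols + disparity[r]).astype(int), 0, w - 1)
        out[r, new_cols] = image[r, cols]                    # forward warp
    return out


def difference_matrix(query_desc, ref_desc):
    """Sum-of-absolute-differences between every query and reference frame
    descriptor (rows: reference, columns: query)."""
    return np.array([[np.abs(q - r).sum() for q in query_desc]
                     for r in ref_desc])


def sequence_search(diff, seq_len=10):
    """For the last `seq_len` query frames, find the reference index whose
    straight (constant-velocity) trajectory through `diff` accumulates the
    lowest difference -- the core idea of SeqSLAM's sequence matching."""
    n_ref, n_query = diff.shape
    cols = np.arange(n_query - seq_len, n_query)
    best_idx, best_score = -1, np.inf
    for start in range(n_ref - seq_len):
        rows = start + np.arange(seq_len)                    # velocity = 1
        score = diff[rows, cols].sum()
        if score < best_score:
            best_idx, best_score = start, score
    return best_idx, best_score

In the setting described by the abstract, views synthesized this way would be added alongside the original day-time reference imagery, so that a night-time query driven in a different lane can still accumulate a low-difference sequence against some (real or synthetic) reference viewpoint.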

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015, pp. 18-25
Keywords [en]
route-based place recognition, deep learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:oru:diva-47928
DOI: 10.1109/CVPRW.2015.7301395
Scopus ID: 2-s2.0-84951916851
OAI: oai:DiVA.org:oru-47928
DiVA, id: diva2:900436
Conference
2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, June 7-12, 2015
Note

Funding Agencies:

ARC Centre of Excellence in Robotic Vision CE140100016

ARC Future Fellowship FT140101229

Available from: 2016-02-04 Created: 2016-02-04 Last updated: 2024-01-03 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Person

Lowry, Stephanie
