Learning Occupancy Priors of Human Motion From Semantic Maps of Urban Environments
Örebro University, School of Science and Technology. Bosch Corporate Research, Renningen, Germany. (Mobile Robotics and Olfaction Lab)
Bosch Corporate Research, Renningen, Germany.
Bosch Center for Artificial Intelligence, Renningen, Germany.
Örebro University, School of Science and Technology. (Mobile Robotics and Olfaction Lab) ORCID iD: 0000-0003-0217-9326
2021 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 6, no. 2, p. 3248-3255. Article in journal (Refereed). Published.
Abstract [en]

Understanding and anticipating human activity is an important capability for intelligent systems in mobile robotics, autonomous driving, and video surveillance. While learning from demonstrations with on-site collected trajectory data is a powerful approach to discover recurrent motion patterns, generalization to new environments, where sufficient motion data are not readily available, remains a challenge. In many cases, however, semantic information about the environment is a highly informative cue for the prediction of pedestrian motion or the estimation of collision risks. In this work, we infer occupancy priors of human motion using only semantic environment information as input. To this end, we apply and discuss a traditional Inverse Optimal Control approach, and propose a novel approach based on Convolutional Neural Networks (CNN) to predict future occupancy maps. Our CNN method produces flexible context-aware occupancy estimations for semantically uniform map regions and generalizes well even with small amounts of training data. Evaluated on synthetic and real-world data, it shows superior results compared to several baselines, marking a qualitative step-up in semantic environment assessment.
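
As a rough illustration of the idea the abstract describes (not the architecture from the paper, which this record does not detail), the Python/PyTorch sketch below assumes the semantic map is encoded as one-hot class channels over an H x W grid and normalizes the network output over all grid cells so it reads as an occupancy prior distribution. The class name, layer sizes, and the whole-map softmax normalization are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyPriorCNN(nn.Module):
    """Hypothetical sketch: semantic map in, occupancy prior out."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # Small fully-convolutional stack; depths and kernel sizes are illustrative.
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one occupancy logit per grid cell
        )

    def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
        # semantic_map: (B, num_classes, H, W), one-hot semantic channels.
        logits = self.net(semantic_map)           # (B, 1, H, W)
        b, _, h, w = logits.shape
        # Softmax over all grid cells: each output map sums to 1 and can be
        # read as a prior distribution of human occupancy over the environment.
        probs = F.softmax(logits.view(b, -1), dim=1)
        return probs.view(b, 1, h, w)

# Usage on a random 8-class, 64x64 semantic map:
idx = torch.randint(0, 8, (1, 64, 64))
x = F.one_hot(idx, num_classes=8).permute(0, 3, 1, 2).float()
model = OccupancyPriorCNN(num_classes=8)
prior = model(x)
print(prior.shape, prior.sum().item())  # torch.Size([1, 1, 64, 64]), ~1.0

Normalizing over the whole grid matches the "occupancy prior" reading of the abstract; a per-cell sigmoid would be an equally plausible design choice for per-location occupancy probabilities.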

Place, publisher, year, edition, pages
IEEE, 2021. Vol. 6, no. 2, p. 3248-3255
Keywords [en]
Deep learning for visual perception, human detection and tracking, human motion analysis, human motion prediction, semantic scene understanding
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-91362
DOI: 10.1109/LRA.2021.3062010
ISI: 000633394300012
Scopus ID: 2-s2.0-85101765394
OAI: oai:DiVA.org:oru-91362
DiVA, id: diva2:1546418
Note
Funding Agency: European Commission 732737

Available from: 2021-04-22. Created: 2021-04-22. Last updated: 2024-01-17. Bibliographically approved.

Open Access in DiVA
No full text in DiVA

Other links
Publisher's full text
Scopus

Authority records
Rudenko, Andrey; Lilienthal, Achim

