Örebro University Publications
THÖR-MAGNI Act: Actions for Human Motion Modeling in Robot-Shared Industrial Spaces
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0001-9059-6175
PercInS, Technical University of Munich, Munich, Germany.
Corporate Research, Robert Bosch GmbH, Stuttgart, Germany.
Corporate Research, Robert Bosch GmbH, Stuttgart, Germany.
2025 (English). In: 20th edition of the ACM/IEEE International Conference on Human-Robot Interaction, 2025. Conference paper, published paper (refereed).
Abstract [en]

Accurate human activity and trajectory prediction are crucial for safe and reliable human-robot interaction in dynamic environments shared with mobile robots, such as industrial settings. Datasets with fine-grained action labels for people moving through industrial environments alongside mobile robots are scarce, as most existing datasets focus on social navigation in public spaces. This paper introduces the THÖR-MAGNI Act dataset, a substantial extension of the THÖR-MAGNI dataset, which captures participant movements alongside robots in diverse semantic and spatial contexts. THÖR-MAGNI Act provides 8.3 hours of manually labeled participant actions derived from egocentric videos recorded via eye-tracking glasses. These actions, aligned with the motion cues provided by THÖR-MAGNI, follow a long-tailed distribution with diversified acceleration, velocity, and navigation distance profiles. We demonstrate the utility of THÖR-MAGNI Act on two tasks: action-conditioned trajectory prediction and joint action and trajectory prediction. To address these tasks, we propose two efficient transformer-based models that outperform the baselines. These results underscore the potential of THÖR-MAGNI Act for developing predictive models that enhance human-robot interaction in complex environments.
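To make the first benchmark task concrete, below is a minimal sketch of one plausible shape for an action-conditioned trajectory predictor: a transformer encoder over observed positions and aligned per-step action labels that regresses future positions. This is a hypothetical illustration, not the paper's model; all module names, tensor shapes, the action vocabulary size, and hyperparameters are assumptions.

```python
# Hypothetical sketch of action-conditioned trajectory prediction.
# Not from the THÖR-MAGNI Act codebase; shapes and sizes are assumed.
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    """Encodes an observed 2D trajectory plus aligned action labels and
    regresses the future positions (assumed single-mode output)."""

    def __init__(self, num_actions=8, d_model=64, horizon=12):
        super().__init__()
        self.pos_embed = nn.Linear(2, d_model)            # (x, y) -> d_model
        self.act_embed = nn.Embedding(num_actions, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon * 2)       # flattened future (x, y)
        self.horizon = horizon

    def forward(self, obs_xy, obs_act):
        # obs_xy: (batch, T_obs, 2) observed positions
        # obs_act: (batch, T_obs) integer action labels aligned with positions
        h = self.pos_embed(obs_xy) + self.act_embed(obs_act)
        h = self.encoder(h)                               # (batch, T_obs, d_model)
        summary = h[:, -1]                                # last observed step
        return self.head(summary).view(-1, self.horizon, 2)

# Smoke test with dummy data: 8 observed steps, 12 predicted steps.
model = ActionConditionedPredictor()
xy = torch.randn(4, 8, 2)
act = torch.randint(0, 8, (4, 8))
print(model(xy, act).shape)  # torch.Size([4, 12, 2])
```

The paper's second task, joint action and trajectory prediction, would additionally require an action-classification head alongside the regression head; the sketch above omits this.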

Place, publisher, year, edition, pages
2025.
Keywords [en]
human motion dataset, human motion modeling, human activity prediction
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:oru:diva-119601
OAI: oai:DiVA.org:oru-119601
DiVA id: diva2:1941542
Conference
20th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2025), Melbourne, Australia, March 4-6, 2025
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
EU, Horizon 2020, 101017274
Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-03-03. Bibliographically approved.

Open Access in DiVA

THÖR-MAGNI Act: Actions for Human Motion Modeling in Robot-Shared Industrial Spaces (2408 kB)
File information
File name: FULLTEXT01.pdf
File size: 2408 kB
Checksum (SHA-512): 7ed286efb3305b13abbe6ed66f2a2c78deea330388d4495b69c3a6df658777ef2c5312c69c345357c394436161be62ac9c7ae0005a8f6206c8d5d81135a9deea
Type: fulltext
Mimetype: application/pdf
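
To check a downloaded copy against the SHA-512 checksum listed above, a minimal sketch (assuming the file is saved locally as FULLTEXT01.pdf):

```python
# Verify the downloaded full text against the checksum from the record.
# The local file path is an assumption.
import hashlib

EXPECTED = (
    "7ed286efb3305b13abbe6ed66f2a2c78deea330388d4495b69c3a6df658777ef"
    "2c5312c69c345357c394436161be62ac9c7ae0005a8f6206c8d5d81135a9deea"
)

with open("FULLTEXT01.pdf", "rb") as f:
    digest = hashlib.sha512(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "checksum mismatch")
```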

Other links

arXiv

Authority records

Almeida, Tiago; Stork, Johannes A.; Lilienthal, Achim J.
