EgoTV: Egocentric Task Verification from Natural Language Task Descriptions
Hazra, Rishi (Örebro University, School of Science and Technology, MPI, AASS). ORCID iD: 0000-0003-3422-2085
Meta
Meta
Meta
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

To enable progress towards egocentric agents capable of understanding everyday tasks specified in natural language, we propose a benchmark and a synthetic dataset called Egocentric Task Verification (EgoTV). The goal in EgoTV is to verify the execution of tasks from egocentric videos based on the natural language descriptions of these tasks. EgoTV contains pairs of videos and their task descriptions for multi-step tasks; these tasks involve multiple sub-task decompositions, state changes, object interactions, and sub-task ordering constraints. In addition, EgoTV provides abstracted task descriptions that contain only partial details about how to accomplish a task. Consequently, EgoTV requires causal, temporal, and compositional reasoning over the video and language modalities, which is missing in existing datasets. We also find that existing vision-language models struggle with the all-round reasoning needed for task verification in EgoTV. Inspired by the needs of EgoTV, we propose a novel Neuro-Symbolic Grounding (NSG) approach that leverages symbolic representations to capture the compositional and temporal structure of tasks. We demonstrate NSG's capability for task tracking and verification on our EgoTV dataset and on a real-world dataset derived from CrossTask (CTV). We open-source the EgoTV and CTV datasets and the NSG model for future research on egocentric assistive agents.
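
As the abstract frames it, EgoTV reduces task verification to a binary decision over (egocentric video, natural-language task description) pairs. The sketch below illustrates only that problem setup; the sample fields, the verify function, and the example description are assumptions made for illustration, not the released EgoTV/CTV data format or the NSG model.

```python
# Minimal sketch of the EgoTV verification setup, assuming a generic
# (video, description) -> score model. All names here are hypothetical.
from dataclasses import dataclass

import numpy as np


@dataclass
class EgoTVSample:
    frames: np.ndarray   # egocentric video clip, shape (T, H, W, 3)
    description: str     # natural-language task description
    label: bool          # True iff the video shows the task being executed


def verify(sample: EgoTVSample, score_fn) -> bool:
    """Binary task verification: score_fn maps (frames, description) to an
    entailment score in [0, 1]; threshold at 0.5."""
    return score_fn(sample.frames, sample.description) >= 0.5


# Toy usage with a constant stand-in scorer; a real model (e.g. the paper's
# NSG approach) would instead align sub-task symbols with video segments.
clip = np.zeros((16, 224, 224, 3), dtype=np.uint8)
sample = EgoTVSample(clip, "heat the potato, then slice it", label=True)
print(verify(sample, lambda f, d: 0.7))   # -> True
```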

Place, publisher, year, edition, pages
2023.
Keywords [en]
Video Task Verification, Computer Vision, Language Understanding, Neuro-Symbolic Reasoning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-108102
DOI: 10.48550/arXiv.2303.16975
OAI: oai:DiVA.org:oru-108102
DiVA, id: diva2:1794433
Conference
International Conference on Computer Vision (ICCV 2023), Paris, France, October 2-6, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-09-05 Created: 2023-09-05 Last updated: 2023-12-28
Bibliographically approved

Open Access in DiVA

EgoTV: Egocentric Task Verification from Natural Language Task Descriptions (4510 kB, 98 downloads)
File information
File name: FULLTEXT01.pdf
File size: 4510 kB
Checksum (SHA-512): 9b276419a3b1499b3375320fe8f6c8ded0cc0014ed408933a80a0d4afe0cdc0eab9c3a12846912d843f6cfcf38ffae756a53965c2854568d2b15f20e6b4dba6c
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text


Total: 98 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
