Örebro University Publications (oru.se)
Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver
Örebro University, School of Science and Technology (Centre for Applied Autonomous Sensor Systems, AASS). ORCID iD: 0000-0002-9387-2312
Örebro University, School of Science and Technology (Centre for Applied Autonomous Sensor Systems, AASS). ORCID iD: 0000-0001-7339-8118
Örebro University, School of Science and Technology (Centre for Applied Autonomous Sensor Systems, AASS). ORCID iD: 0000-0002-8380-4113
Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Proceedings, IEEE, 2023, p. 293-300. Conference paper, published paper (refereed).
Abstract [en]

Robots are increasingly deployed in environments shared with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: the active communication of robot intent. Here, we present an anthropomorphic solution in which a humanoid robot, acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD), communicates the intent of its host robot. We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention when required or gives instructions for collaborating on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode, using speech alone, and a multimodal mode, which additionally includes robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior: participants identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

Place, publisher, year, edition, pages
IEEE, 2023. p. 293-300
Series
IEEE RO-MAN, ISSN 1944-9445, E-ISSN 1944-9437
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:oru:diva-110873
DOI: 10.1109/RO-MAN57019.2023.10309629
ISI: 001108678600042
ISBN: 9798350336702 (electronic)
ISBN: 9798350336719 (print)
OAI: oai:DiVA.org:oru-110873
DiVA id: diva2:1830088
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, South Korea, August 28-31, 2023
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2024-01-22. Created: 2024-01-22. Last updated: 2024-01-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Schreiter, Tim; Morillo-Mendez, Lucas; Chadalavada, Ravi T.; Magnusson, Martin; Lilienthal, Achim J.

Search in DiVA

By author/editor
Schreiter, Tim; Morillo-Mendez, Lucas; Chadalavada, Ravi T.; Magnusson, Martin; Lilienthal, Achim J.
By organisation
School of Science and Technology
Computer Vision and Robotics (Autonomous Systems)

Search outside of DiVA

Google, Google Scholar
