Krishna, Sai
Publications (8 of 8)
Krishna, S., Kiselev, A., Kristoffersson, A., Repsilber, D. & Loutfi, A. (2019). A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors, 19(14), Article ID E3142.
A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera
2019 (English) In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no. 14, article id E3142. Article in journal (Refereed), Published
Abstract [en]

Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and to collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of the sensor technology. In the case of estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. The method estimates distance with respect to the camera based on the Euclidean distance between the ear and torso of people in the image plane. The ear and torso characteristic points have been selected based on their relatively high visibility regardless of a person's orientation and a certain degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
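
The geometric idea behind the abstract can be sketched as follows, assuming a simple pinhole camera model. This is not the paper's implementation: the average ear-to-torso length, the focal length, and the keypoint format below are all illustrative assumptions; the paper derives its mapping from experimental data and a 2D pose estimator.

```python
import math

# Illustrative constants (NOT from the paper): an assumed average
# ear-to-torso length in metres and a camera focal length in pixels.
EAR_TORSO_M = 0.45
FOCAL_PX = 600.0

def estimate_distance(ear_xy, torso_xy):
    """Rough camera-to-person distance from two 2D keypoints.

    Under a pinhole model, an object of real-world length L that
    appears with pixel length p lies at depth roughly f * L / p.
    """
    dx = ear_xy[0] - torso_xy[0]
    dy = ear_xy[1] - torso_xy[1]
    pixel_len = math.hypot(dx, dy)
    if pixel_len == 0:
        raise ValueError("coincident keypoints")
    return FOCAL_PX * EAR_TORSO_M / pixel_len
```

The inverse relation (larger keypoint separation means a closer person) is what makes a single uncalibrated RGB image sufficient once the characteristic points are detected.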

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Human–Robot Interaction, distance estimation, single RGB image, social interaction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75583 (URN)
10.3390/s19143142 (DOI)
000479160300109 ()
31319523 (PubMedID)
2-s2.0-85070083052 (Scopus ID)
Note

Funding Agency:

Örebro University

Available from: 2019-08-16 Created: 2019-08-16 Last updated: 2019-11-15. Bibliographically approved
Krishna, S., Kristoffersson, A., Kiselev, A. & Loutfi, A. (2019). Estimating Optimal Placement for a Robot in Social Group Interaction. In: IEEE International Workshop on Robot and Human Communication (ROMAN). Paper presented at The 28th IEEE International Conference on Robot and Human Interactive Communication – RO-MAN 2019, New Delhi, India, October 14-18, 2019. IEEE
Estimating Optimal Placement for a Robot in Social Group Interaction
2019 (English) In: IEEE International Workshop on Robot and Human Communication (ROMAN), IEEE, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present a model to propose an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to F-formation theory. The method automatically calculates a suitable placement for the robot. An evaluation of the method has been performed by conducting an experiment where participants stand in different formations and a robot is teleoperated to join the group. In one condition, the operator positions the robot according to the specified location given by our algorithm. In another condition, operators have the freedom to position the robot according to their personal choice. Follow-up questionnaires were used to determine which of the placements were preferred by the participants. The results indicate that the proposed method for automatic placement of the robot is supported by the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as the results from user experiments to verify the quality of this method. These results suggest that teleoperated robots such as mobile robotic telepresence systems could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
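
A minimal sketch of the kind of computation the abstract describes: estimating the O-space centre of a group and proposing a placement on the group circle. The projection stride, the widest-gap heuristic, and all function names are illustrative assumptions, not the paper's algorithm.

```python
import math

def o_space_centre(people, stride=1.0):
    """Estimate the F-formation O-space centre.

    people: list of (x, y, theta), theta being the facing direction
    in radians. Each person is projected `stride` metres along their
    facing direction; the centre is the mean of the projections.
    """
    pts = [(x + stride * math.cos(t), y + stride * math.sin(t))
           for x, y, t in people]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def robot_placement(people, stride=1.0):
    """Place the robot on the group circle, in the widest angular
    gap between existing members (a simple heuristic)."""
    cx, cy = o_space_centre(people, stride)
    angles = sorted(math.atan2(y - cy, x - cx) for x, y, _ in people)
    # Find the widest gap between consecutive member angles (with wrap-around).
    best_gap, best_mid = -1.0, 0.0
    for i in range(len(angles)):
        a, b = angles[i], angles[(i + 1) % len(angles)]
        gap = (b - a) % (2 * math.pi)
        if gap > best_gap:
            best_gap, best_mid = gap, a + gap / 2
    radius = sum(math.hypot(x - cx, y - cy) for x, y, _ in people) / len(people)
    return (cx + radius * math.cos(best_mid), cy + radius * math.sin(best_mid))
```

For three people standing on a circle and facing its centre, this proposes the open side of the formation, which matches the intuition that a newcomer should close the circle rather than push between members.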

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
F-formations, Robot Positioning Spot, Mobile Robotic Telepresence, HRI
National Category
Engineering and Technology; Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-78832 (URN)
10.1109/RO-MAN46459.2019.8956318 (DOI)
978-1-7281-2622-7 (ISBN)
978-1-7281-2623-4 (ISBN)
Conference
The 28th IEEE International Conference on Robot and Human Interactive Communication – RO-MAN 2019, New Delhi, India, October 14-18, 2019.
Projects
Successful Ageing
Available from: 2019-12-20 Created: 2019-12-20 Last updated: 2020-02-14. Bibliographically approved
Krishna, S., Kristoffersson, A., Kiselev, A. & Loutfi, A. (2019). F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems. Multimodal Technologies and Interaction, 3(4), Article ID 69.
F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems
2019 (English) In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 3, no. 4, article id 69. Article in journal (Refereed), Published
Abstract [en]

F-formations are a set of possible patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment in which teleoperators are requested to navigate to different groups of virtual agents. The simulated environment represents a conference lobby scenario where multiple groups of virtual agents with varying group sizes are placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators are allowed to evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The study has two important outcomes: first, teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system; second, teleoperators prefer additional support in order to correctly navigate the robot into a preferred position that adheres to F-formations.

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
telepresence, mobile robotic telepresence, F-formations, simulation, virtual agents, HRI
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-78830 (URN)
10.3390/mti3040069 (DOI)
Note

Funding Agency:

Örebro University

Available from: 2019-12-20 Created: 2019-12-20 Last updated: 2020-01-28. Bibliographically approved
Krishna, S. (2018). Join the Group Formations using Social Cues in Social Robots. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18). Paper presented at 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018), Stockholm, Sweden, July 10-15, 2018 (pp. 1766-1767). New York: Association for Computing Machinery (ACM)
Join the Group Formations using Social Cues in Social Robots
2018 (English) In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18), New York: Association for Computing Machinery (ACM), 2018, p. 1766-1767. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

This work investigates how agents can spatially orient themselves into formations which provide good conditions for enabling social interaction. To achieve this, we use the socio-psychological notion of F-formations in our project, and based on this concept we detect the positions of other agents in a scene to find the optimum placement. Using both simulation and real robotic systems, the system aims to achieve a functionality which enables an agent to autonomously place itself within a group.

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2018
Series
Proceedings of the ... International Joint Conference on Autonomous Agents and Multiagent Systems AAMAS, E-ISSN 1548-8403
Keywords
F-formations, Social Robots, Human-Robot Interaction
National Category
Robotics
Research subject
Human-Computer Interaction
Identifiers
urn:nbn:se:oru:diva-71875 (URN)
000468231300218 ()
2-s2.0-85054723068 (Scopus ID)
978-1-4503-5649-7 (ISBN)
Conference
17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018), Stockholm, Sweden, July 10-15, 2018
Available from: 2019-01-28 Created: 2019-01-28 Last updated: 2019-06-03. Bibliographically approved
Krishna, S. & Loutfi, A. (2018). Robotics for Successful Ageing. In: Eleonor Kristoffersson & Kerstin Nilsson (Ed.), Successful ageing in an interdisciplinary context: popular science presentations (pp. 29-35). Örebro, Sweden: Örebro University
Robotics for Successful Ageing
2018 (English) In: Successful ageing in an interdisciplinary context: popular science presentations / [ed] Eleonor Kristoffersson & Kerstin Nilsson, Örebro, Sweden: Örebro University, 2018, p. 29-35. Chapter in book (Other (popular science, discussion, etc.))
Abstract [en]

The main idea of the ongoing research is to use robotics to create new opportunities to help older people to remain alone in their apartments, which can be achieved by using robots as an interacting tool between the elderly and their family members or doctors. This can be done by building a system (software) for mobile robots to work autonomously (self-driving) and semi-autonomously (controlled by the user) when necessary, depending on the situation and the surroundings. This system is integrated with social cues, particularly proxemics, to know and understand human space, which is very important for social interaction. In conclusion, we are interested in having a socially intelligent robot, which could use social cues, particularly proxemics, to have a natural interaction with people in groups.

Place, publisher, year, edition, pages
Örebro, Sweden: Örebro University, 2018
National Category
Robotics
Research subject
Human-Computer Interaction
Identifiers
urn:nbn:se:oru:diva-71874 (URN)
978-91-87789-18-2 (ISBN)
Available from: 2019-01-28 Created: 2019-01-28 Last updated: 2019-01-28. Bibliographically approved
Alexopoulou, S., Fart, F., Jonsson, A.-S., Karni, L., Kenalemang, L. M., Krishna, S., . . . Widell, B. (2018). Successful ageing in an interdisciplinary context: popular science presentations. Örebro: Örebro University
Successful ageing in an interdisciplinary context: popular science presentations
2018 (English) Book (Other (popular science, discussion, etc.))
Place, publisher, year, edition, pages
Örebro: Örebro University, 2018. p. 127
National Category
Gerontology, specialising in Medical and Health Sciences; Other Social Sciences not elsewhere specified
Identifiers
urn:nbn:se:oru:diva-66306 (URN)
978-91-87789-18-2 (ISBN)
Available from: 2018-04-03 Created: 2018-04-03 Last updated: 2019-03-26. Bibliographically approved
Terzic, K., Krishna, S. & du Buf, J. M. (2017). Texture features for object salience. Image and Vision Computing, 67, 43-51
Texture features for object salience
2017 (English) In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 67, p. 43-51. Article in journal (Refereed), Published
Abstract [en]

Although texture is important for many vision-related tasks, it is not used in most salience models. As a consequence, there are images where all existing salience algorithms fail. We introduce a novel set of texture features built on top of a fast model of complex cells in striate cortex, i.e., visual area V1. The texture at each position is characterised by the two-dimensional local power spectrum obtained from Gabor filters which are tuned to many scales and orientations. We then apply a parametric model and describe the local spectrum by the combination of two one-dimensional Gaussian approximations: the scale and orientation distributions. The scale distribution indicates whether the texture has a dominant frequency and what frequency it is. Likewise, the orientation distribution attests to the degree of anisotropy. We evaluate the features in combination with the state-of-the-art VOCUS2 salience algorithm. We found that using our novel texture features in addition to colour improves AUC by 3.8% on the PASCAL-S dataset when compared to the colour-only baseline, and by 62% on a novel texture-based dataset.
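
The parametric summary the abstract describes can be sketched as follows: given Gabor filter energies over a grid of scales and orientations at one image position, collapse the 2D spectrum into its scale and orientation marginals and summarise each by a Gaussian (mean and standard deviation over filter indices). This is a simplified sketch, not the paper's code; in particular, orientation is treated here as linear rather than circular, and the array layout is an assumption.

```python
import numpy as np

def texture_descriptor(responses):
    """Summarise a local Gabor power spectrum.

    responses: array of shape (n_scales, n_orientations) holding
    filter energies at one image position. Returns the (mean, std)
    of the scale marginal and of the orientation marginal -- a
    one-dimensional Gaussian summary of each distribution.
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum()
    if total == 0:
        raise ValueError("empty spectrum")
    scale_marg = responses.sum(axis=1) / total    # distribution over scales
    orient_marg = responses.sum(axis=0) / total   # distribution over orientations

    def gauss_fit(p):
        # Moment-matched Gaussian over the filter index axis.
        idx = np.arange(len(p))
        mu = float((idx * p).sum())
        sigma = float(np.sqrt((((idx - mu) ** 2) * p).sum()))
        return mu, sigma

    return gauss_fit(scale_marg), gauss_fit(orient_marg)
```

A sharp scale peak (small scale sigma) signals a dominant frequency, while a broad orientation marginal (large orientation sigma) signals an isotropic texture, matching the two roles the abstract assigns to the distributions.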

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Texture, Colour, Salience, Attention, Benchmark
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-62841 (URN)
10.1016/j.imavis.2017.09.007 (DOI)
000414883800004 ()
2-s2.0-85030120053 (Scopus ID)
Note

Funding Agencies:

EU ICT-2009.2.1-270247

FCT LarSYS UID/EEA/50009/2013, EXPL/EEI-SII/1982/2013

Available from: 2017-11-27 Created: 2017-11-27 Last updated: 2018-08-11. Bibliographically approved
Krishna, S., Kiselev, A. & Loutfi, A. (2017). Towards a Method to Detect F-formations in Real-Time to Enable Social Robots to Join Groups. Paper presented at ECCE Workshop 2017: Robots in Contexts: Human-Robot Interaction as Physically and Socially Embedded, Umeå University, Sweden, 19 September 2017. Umeå, Sweden: Umeå University
Towards a Method to Detect F-formations in Real-Time to Enable Social Robots to Join Groups
2017 (English) In: Towards a Method to Detect F-formations in Real-Time to Enable Social Robots to Join Groups, Umeå, Sweden: Umeå University, 2017. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In this paper, we extend an algorithm to detect constraint-based F-formations for a telepresence robot, and also consider the situation when the robot is in motion. The proposed algorithm is computationally inexpensive, uses egocentric (first-person) vision, requires little memory, works with low-quality vision settings, and runs in real time; it is explicitly designed for a mobile robot. The proposed approach is a first step towards automatically detecting F-formations for the robotics community.

Place, publisher, year, edition, pages
Umeå, Sweden: Umeå University, 2017
Keywords
Social Robot, F-formations, Face Orientation
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-64606 (URN)
Conference
ECCE Workshop 2017: Robots in Contexts: Human-Robot Interaction as Physically and Socially Embedded, Umeå University, Sweden, 19 September 2017
Projects
Successful Ageing
Available from: 2018-01-29 Created: 2018-01-29 Last updated: 2018-02-12. Bibliographically approved