Publications (10 of 33)
Sun, D., Kiselev, A., Liao, Q., Stoyanov, T. & Loutfi, A. (2020). A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement. IEEE Transactions on Human-Machine Systems, 50(1), 55-67.
2020 (English). In: IEEE Transactions on Human-Machine Systems, ISSN 2168-2305, Vol. 50, no 1, p. 55-67. Article in journal (Refereed), Published.
Abstract [en]

Virtual Reality (VR) is regarded as a useful tool for teleoperation systems, providing operators with immersive visual feedback on the robot and the environment. However, without haptic feedback or physical constraints, VR-based teleoperation systems normally have poor maneuverability and may cause operational faults during fine movements. In this paper, we employ Mixed Reality (MR), which combines real and virtual worlds, to develop a novel teleoperation system, proposing both a new system design and new control algorithms. For the system design, an MR interface is developed based on a virtual environment augmented with real-time data from the task space, with the goal of enhancing the operator's visual perception. To allow the operator to be freely decoupled from the control loop and to offload the operator's burden, a new interaction proxy is proposed to control the robot. For the control algorithms, two control modes are introduced to improve long-distance movements and fine movements in MR-based teleoperation. In addition, a set of fuzzy-logic-based methods is proposed to regulate the position, velocity, and force of the robot in order to enhance the system's maneuverability and handle potential operational faults. Barrier Lyapunov Function (BLF) and back-stepping methods are leveraged to design the control laws while simultaneously guaranteeing system stability under state constraints. Experiments conducted using a 6-Degree-of-Freedom (DoF) robotic arm demonstrate the feasibility of the system.
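As a minimal illustration of the barrier idea named in the abstract (not the paper's actual 6-DoF control law, which also involves back-stepping and fuzzy regulation), the following sketch applies a log-type Barrier Lyapunov Function to a toy single-integrator joint; the gain K and the error bound K_B are assumed values.

```python
import numpy as np

# Minimal BLF sketch for a single-integrator joint x_dot = u.
K = 5.0    # feedback gain (assumed)
K_B = 0.2  # state constraint: tracking error must stay inside (-K_B, K_B)

def blf_control(x, x_ref, x_ref_dot):
    """Control input that keeps the tracking error e = x - x_ref bounded.

    With V = 0.5 * log(K_B**2 / (K_B**2 - e**2)) and
    u = x_ref_dot - K * e * (K_B**2 - e**2), the error dynamics become
    e_dot = -K * e * (K_B**2 - e**2), so V_dot = -K * e**2 <= 0 and the
    error never reaches the barrier if it starts inside it.
    """
    e = x - x_ref
    return x_ref_dot - K * e * (K_B**2 - e**2)

# Track a sinusoidal reference and check the constraint numerically.
dt, x = 1e-3, 0.15  # initial error 0.15 < K_B
for k in range(5000):
    t = k * dt
    x += blf_control(x, np.sin(t), np.cos(t)) * dt
    assert abs(x - np.sin(t + dt)) < K_B  # constraint never violated
```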

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Force control, motion regulation, telerobotics, virtual reality
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-77829 (URN); 10.1109/THMS.2019.2960676 (DOI); 000508380700005 (ISI); 2-s2.0-85077905008 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2019-11-11 Created: 2019-11-11 Last updated: 2020-03-10. Bibliographically approved.
Krishna, S., Kiselev, A., Kristoffersson, A., Repsilber, D. & Loutfi, A. (2019). A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors, 19(14), Article ID E3142.
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed), Published.
Abstract [en]

Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human-robot teams. Different sensors can be employed for estimating the distance between a person and a robot, and the number of challenges the estimation method must address rises as the sensor technology becomes simpler. When estimating distances from individual images of a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are visible enough for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. The method estimates distance with respect to the camera from the Euclidean distance between a person's ear and torso in the image plane. These characteristic points were selected for their relatively high visibility regardless of a person's orientation and for a certain degree of uniformity across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
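As a hedged illustration of the core idea (not the paper's fitted model), the sketch below converts the ear-torso pixel distance into a metric estimate via an inverse-linear model calibrated on hypothetical samples; in practice the keypoints would come from an off-the-shelf 2D pose estimator.

```python
import numpy as np

def keypoint_pixel_distance(ear_xy, torso_xy):
    """Euclidean distance in the image plane between two keypoints."""
    return float(np.hypot(ear_xy[0] - torso_xy[0], ear_xy[1] - torso_xy[1]))

def fit_distance_model(pixel_dists, true_dists_m):
    """Fit d = a / p + b by least squares on calibration samples.
    The inverse-linear form follows the pinhole model, where apparent
    size shrinks proportionally to 1/distance."""
    A = np.column_stack([1.0 / np.asarray(pixel_dists), np.ones(len(pixel_dists))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(true_dists_m), rcond=None)
    return a, b

# Calibration with a person at known distances (hypothetical data):
a, b = fit_distance_model([220, 110, 73, 55], [1.0, 2.0, 3.0, 4.0])
p = keypoint_pixel_distance((310, 120), (300, 230))  # one detection
print(f"estimated distance: {a / p + b:.2f} m")      # roughly 2 m here
```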

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Human–Robot Interaction, distance estimation, single RGB image, social interaction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75583 (URN); 10.3390/s19143142 (DOI); 000479160300109 (ISI); 31319523 (PubMedID); 2-s2.0-85070083052 (Scopus ID)
Note
Funding Agency: Örebro University

Available from: 2019-08-16 Created: 2019-08-16 Last updated: 2019-11-15. Bibliographically approved.
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373
2019 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed), Published.
Abstract [en]

This paper first develops a novel force observer using Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Using the proposed force observer, a new bilateral teleoperation system is then proposed that allows the slave industrial robot to be more compliant with the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force-observer algorithms, which rely heavily on exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy supported by machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a Universal Robots UR10 manipulator, a haptic device, and an RGB-D sensor.
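The sketch below shows only the moving-horizon flavor of force estimation on a toy 1-DoF mass, m * q_ddot = tau + f_ext. It is a simplification under stated assumptions: the paper's observer embeds a Type-2 fuzzy neural network inside the MHE precisely because the dynamics are unknown, whereas here the mass is assumed known so the estimate reduces to a least-squares fit over the horizon.

```python
import numpy as np

M, HORIZON = 2.0, 20  # mass [kg] and window length (assumed values)

def mhe_force_estimate(q_ddot_hist, tau_hist):
    """Estimate f_ext from the last HORIZON samples of acceleration and
    commanded torque, assuming f_ext is constant within the window.
    The least-squares solution for a constant offset is the mean
    residual, which also averages out zero-mean disturbance."""
    residuals = (M * np.asarray(q_ddot_hist[-HORIZON:])
                 - np.asarray(tau_hist[-HORIZON:]))
    return float(residuals.mean())

# Noisy measurements of a 3 N contact force recover roughly 3 N:
rng = np.random.default_rng(0)
tau = np.ones(100)                                  # constant command
q_ddot = (tau + 3.0) / M + 0.05 * rng.standard_normal(100)
print(mhe_force_estimate(q_ddot, tau))              # approximately 3.0
```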

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN); 10.1016/j.automatica.2019.04.033 (DOI); 000473380000041 (ISI); 2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23 Created: 2019-05-23 Last updated: 2019-11-13. Bibliographically approved.
Krishna, S., Kristoffersson, A., Kiselev, A. & Loutfi, A. (2019). Estimating Optimal Placement for a Robot in Social Group Interaction. In: IEEE International Workshop on Robot and Human Communication (ROMAN). Paper presented at The 28th IEEE International Conference on Robot and Human Interactive Communication – RO-MAN 2019, New Delhi, India, October 14-18, 2019. IEEE.
2019 (English). In: IEEE International Workshop on Robot and Human Communication (ROMAN), IEEE, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we present a model to propose an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to the F-formation theory. The method automatically calculates a suitable placement for the robot. An evaluation of the method has been performed by conducting an experiment where participants stand in different formations and a robot is teleoperated to join the group. In one condition, the operator positions the robot according to the specified location given by our algorithm. In another condition, operators have the freedom to position the robot according to their personal choice. Follow-up questionnaires were performed to determine which of the placements were preferred by the participants. The results indicate that the proposed method for automatic placement of the robot is supported by the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as the results from user experiments to verify the quality of this method. These results suggest that teleoperated robots such as mobile robot telepresence systems could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
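A hedged sketch of F-formation-based placement, under stated assumptions rather than the paper's exact algorithm: the O-space centre is estimated by projecting each person a fixed "transactional" distance along their facing direction, and the robot is placed on the same circle in the widest angular gap between group members. The stride value and the widest-gap heuristic are illustrative.

```python
import numpy as np

STRIDE = 0.8  # assumed person-to-O-space-centre distance [m]

def place_robot(positions, headings):
    """positions: (N, 2) array of people; headings: (N,) facing angles [rad].
    Returns a suggested (x, y) placement for the robot."""
    pos = np.asarray(positions, dtype=float)
    head = np.asarray(headings, dtype=float)
    # Project each person toward where they are facing; average = O-space centre.
    proj = pos + STRIDE * np.stack([np.cos(head), np.sin(head)], axis=1)
    centre = proj.mean(axis=0)
    # Find the widest free arc between members, seen from the centre.
    d = pos - centre
    angles = np.sort(np.arctan2(d[:, 1], d[:, 0]))
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))
    mid = angles[np.argmax(gaps)] + gaps.max() / 2
    return centre + STRIDE * np.array([np.cos(mid), np.sin(mid)])

# Two people facing each other (vis-a-vis formation): the robot is proposed
# at a side position, completing a triangular arrangement.
print(place_robot([(0.0, 0.0), (1.6, 0.0)], [0.0, np.pi]))  # -> (0.8, 0.8)
```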

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
F-formations, Robot Positioning Spot, Mobile Robotic Telepresence, HRI
National Category
Engineering and Technology; Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-78832 (URN); 10.1109/RO-MAN46459.2019.8956318 (DOI); 978-1-7281-2622-7 (ISBN); 978-1-7281-2623-4 (ISBN)
Conference
The 28th IEEE International Conference on Robot and Human Interactive Communication – RO-MAN 2019, New Delhi, India, October 14-18, 2019.
Projects
Successful Ageing
Available from: 2019-12-20 Created: 2019-12-20 Last updated: 2020-02-14. Bibliographically approved.
Edebol-Carlman, H., Rode, J., König, J., Hutchinson, A., Repsilber, D., Kiselev, A., . . . Brummer, R. J. (2019). Evaluating the effects of probiotic intake on brain activity during an emotional attention task and blood markers related to stress in healthy subjects. Paper presented at Mind, Mood & Microbes, 2nd International Conference on Microbiota-Gut-Brain Axis, Amsterdam, The Netherlands, 17-18 January, 2019.
2019 (English). Conference paper, Oral presentation with published abstract (Refereed).
National Category
Biochemistry and Molecular Biology
Identifiers
urn:nbn:se:oru:diva-73848 (URN)
Conference
Mind, Mood & Microbes, 2nd International Conference on Microbiota-Gut-Brain Axis, Amsterdam, The Netherlands, 17-18 January, 2019
Available from: 2019-04-17 Created: 2019-04-17 Last updated: 2019-04-17. Bibliographically approved.
Krishna, S., Kristoffersson, A., Kiselev, A. & Loutfi, A. (2019). F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems. Multimodal Technologies and Interaction, 3(4), Article ID 69.
2019 (English). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 3, no 4, article id 69. Article in journal (Refereed), Published.
Abstract [en]

F-formations are a set of possible patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment in which teleoperators are requested to navigate to different groups of virtual agents. The simulated environment represents a conference-lobby scenario where multiple groups of virtual agents of varying group sizes are placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators are allowed to evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The study has two important outcomes: first, teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system; second, teleoperators prefer additional support in order to correctly navigate the robot into a preferred position that adheres to F-formations.

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
telepresence, mobile robotic telepresence, F-formations, simulation, virtual agents, HRI
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-78830 (URN); 10.3390/mti3040069 (DOI)
Note
Funding Agency: Örebro University

Available from: 2019-12-20 Created: 2019-12-20 Last updated: 2020-01-28. Bibliographically approved.
Stoyanov, T., Krug, R., Kiselev, A., Sun, D. & Loutfi, A. (2018). Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 6640-6645). IEEE Press.
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645. Conference paper, Published paper (Refereed).
Abstract [en]

This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint-limit and obstacle avoidance or automatic gripper alignment. Thereby, some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; the second is a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed assistive control approach was found to result in higher grasp success rates and shorter trajectories than manual control, without incurring additional cognitive load on the operator.
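The hierarchy idea can be sketched with the classical null-space projection scheme: lower-priority tasks act only in the null space of all higher-priority ones, so an assistive task (e.g. joint-limit avoidance) can never be violated by operator commands ranked below it. This is a minimal approximation, not the paper's real-time whole-body controller; the example Jacobians are hypothetical.

```python
import numpy as np

def sot_velocities(tasks, n_joints):
    """tasks: list of (J, xdot_des) pairs in priority order, where J is the
    task Jacobian (m x n_joints) and xdot_des the desired task velocity.
    Returns joint velocities realizing the hierarchy."""
    q_dot = np.zeros(n_joints)
    P = np.eye(n_joints)  # projector onto the remaining null space
    for J, xdot_des in tasks:
        JP = J @ P
        JP_pinv = np.linalg.pinv(JP)
        # Correct the residual of this task without disturbing higher ones.
        q_dot = q_dot + JP_pinv @ (xdot_des - J @ q_dot)
        P = P @ (np.eye(n_joints) - JP_pinv @ JP)
    return q_dot

# Hypothetical 3-joint example: a 1-D safety task outranks a 2-D command.
J_safety, v_safety = np.array([[1.0, 0.0, 0.0]]), np.array([0.0])
J_cmd = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
v_cmd = np.array([0.5, 0.2])
q_dot = sot_velocities([(J_safety, v_safety), (J_cmd, v_cmd)], 3)
print(q_dot, J_safety @ q_dot)  # joint 1 is held still by the top task
```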

Place, publisher, year, edition, pages
IEEE Press, 2018
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71310 (URN); 10.1109/IROS.2018.8594457 (DOI); 000458872706014 (ISI); 978-1-5386-8094-0 (ISBN); 978-1-5386-8095-7 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Funder
Knowledge Foundation; Swedish Foundation for Strategic Research
Available from: 2019-01-09 Created: 2019-01-09 Last updated: 2019-03-13. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2018). Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning. In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018. Paper presented at FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (AI-MHRI), Stockholm, Sweden, 14-15 July, 2018 (pp. 48-50). MHRI.
2018 (English). In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, MHRI, 2018, p. 48-50. Conference paper, Published paper (Refereed).
Abstract [en]

This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user's verbal and nonverbal social cues using its camera and microphone; the reward will be the positive valence and engagement of the user.
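A hedged sketch of the reward signal this abstract describes: perception models map the camera and microphone streams to per-step valence and engagement estimates, and the scalar reward combines them. The weights, value ranges, and estimator interfaces are assumptions; the abstract does not specify them.

```python
W_VALENCE, W_ENGAGEMENT = 0.5, 0.5  # assumed weighting

def social_reward(valence: float, engagement: float) -> float:
    """valence in [-1, 1] and engagement in [0, 1], both produced by
    (hypothetical) audio-visual perception models; higher reward for
    happier, more engaged users."""
    return W_VALENCE * valence + W_ENGAGEMENT * engagement
```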

Place, publisher, year, edition, pages
MHRI, 2018
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-68709 (URN); 10.21437/AI-MHRI.2018-12 (DOI)
Conference
FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (AI-MHRI), Stockholm, Sweden 14-15 July, 2018
Projects
SOCRATES
Available from: 2018-09-03 Created: 2018-09-03 Last updated: 2020-01-28. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2018). The Relevance of Social Cues in Assistive Training with a Social Robot. In: Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á. (Ed.), 10th International Conference on Social Robotics, ICSR 2018, Proceedings. Paper presented at 10th International Conference on Social Robotics (ICSR 2018), Qingdao, China, November 28-30, 2018 (pp. 462-471). Springer.
2018 (English). In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, p. 462-471. Conference paper, Published paper (Refereed).
Abstract [en]

This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis for determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of the participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features correlate with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.
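A minimal sketch of the selection step named above, using scikit-learn's recursive feature elimination. The synthetic data, the linear estimator, and the target of 10 features are assumptions standing in for the paper's actual extracted facial features and labels.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 200 frames x 40 facial features, with a difficulty
# label that actually depends on features 3 and 17 only.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))
y = (X[:, 3] + X[:, 17] > 0).astype(int)

# RFE repeatedly fits the estimator and drops the weakest feature(s)
# until only the requested subset of discriminating features remains.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of retained features
```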

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 11357
Keywords
Social cues, Facial signals, Robot-assisted training
National Category
Computer Systems; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70817 (URN); 10.1007/978-3-030-05204-1_45 (DOI); 2-s2.0-85058342671 (Scopus ID); 978-3-030-05203-4 (ISBN); 978-3-030-05204-1 (ISBN)
Conference
10th International Conference on Social Robotics (ICSR 2018), Qingdao, China, November 28-30, 2018
Projects
SOCRATES
Funder
EU, Horizon 2020, 721619
Available from: 2018-12-19 Created: 2018-12-19 Last updated: 2020-01-28. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2017). An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security. In: Kheddar, Abderrahmane; Yoshida, Eiichi; Ge, Shuzhi Sam; Suzuki, Kenji; Cabibihan, John-John; Eyssel, Friederike; He, Hongsheng (Ed.), Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings. Paper presented at 9th International Conference on Social Robotics (ICSR 2017), Tsukuba, Japan, November 22-24, 2017 (pp. 628-637). Springer International Publishing.
2017 (English). In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J.-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, p. 628-637. Conference paper, Published paper (Refereed).
Abstract [en]

The aim of the study presented in this paper is to develop a quantitative evaluation tool for the sense of safety and security around robots in eldercare. Based on a review of the literature on measuring safety and security in human-robot interaction, we propose new evaluation tools in the form of semantic differential scale questionnaires. For experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot's non-verbal behaviors from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.
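Internal consistency of a questionnaire is conventionally reported as Cronbach's alpha. The abstract does not name the statistic used, so the sketch below shows the standard computation rather than the paper's exact procedure; the ratings are made-up data.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) matrix of item ratings.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Five respondents rating four semantic-differential items (made-up data):
ratings = [[4, 5, 4, 5], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3], [4, 4, 4, 5]]
print(round(cronbach_alpha(ratings), 2))  # close to 1 => consistent items
```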

Place, publisher, year, edition, pages
Springer International Publishing, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10652
Keywords
Sense of safety, Sense of security, Eldercare, Video-based evaluation, Quantitative evaluation tool
National Category
Computer Systems; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62768 (URN); 10.1007/978-3-319-70022-9_62 (DOI); 000449941100062 (ISI); 2-s2.0-85035814295 (Scopus ID); 978-3-319-70022-9 (ISBN); 978-3-319-70021-2 (ISBN)
Conference
9th International Conference on Social Robotics (ICSR 2017), Tsukuba, Japan, November 22-24, 2017
Projects
SOCRATES
Funder
EU, Horizon 2020, 721619
Available from: 2017-11-22 Created: 2017-11-22 Last updated: 2020-01-28. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-0305-3728
