Publications (10 of 30)
Krishna, S., Kiselev, A., Kristoffersson, A., Repsilber, D. & Loutfi, A. (2019). A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors, 19(14), Article ID E3142.
2019 (English) In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed). Published
Abstract [en]

Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human-robot teams. Different sensors can be employed for estimating the distance between a person and a robot, and the number of challenges a distance estimation method must address rises with the simplicity of the sensor technology. When estimating distances from individual images of a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method builds on established 2D pose estimation, which tolerates partial occlusion, cluttered backgrounds, and relatively low resolution. It estimates the distance to the camera from the Euclidean distance between a person's ear and torso keypoints in the image plane. The ear and torso characteristic points were selected for their relatively high visibility regardless of a person's orientation and for their fair uniformity across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
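
As an illustration of the core geometric idea, the following minimal sketch (not the authors' implementation) maps the pixel distance between hypothetical `ear` and `torso` keypoints, as produced by a 2D pose estimator such as OpenPose, to a metric distance under a pinhole-camera model; the focal length and the assumed real-world ear-to-torso length are placeholder values.

```python
import numpy as np

def ear_torso_distance_px(keypoints):
    """Pixel-space Euclidean distance between ear and torso keypoints.

    `keypoints` is assumed to map body-part names to (x, y) image
    coordinates, e.g. as produced by a 2D pose estimator.
    """
    ear = np.asarray(keypoints["ear"], dtype=float)
    torso = np.asarray(keypoints["torso"], dtype=float)
    return float(np.linalg.norm(ear - torso))

def estimate_distance_m(keypoints, focal_px=600.0, ear_torso_m=0.35):
    """Rough metric distance under a pinhole-camera model.

    Assumes the real-world ear-to-torso length is approximately constant
    across people (`ear_torso_m`, a placeholder value); depth is then
    inversely proportional to the observed pixel length: Z ~ f * H / h.
    """
    h_px = ear_torso_distance_px(keypoints)
    if h_px <= 0:
        raise ValueError("degenerate keypoints")
    return focal_px * ear_torso_m / h_px
```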

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Human–Robot Interaction, distance estimation, single RGB image, social interaction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75583 (URN)
10.3390/s19143142 (DOI)
000479160300109 ()
31319523 (PubMedID)
2-s2.0-85070083052 (Scopus ID)
Note
Funding Agency: Örebro University
Available from: 2019-08-16 Created: 2019-08-16 Last updated: 2019-08-29. Bibliographically approved.
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373
2019 (English) In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed). Published
Abstract [en]

This paper first develops a novel force observer using a Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Using the proposed force observer, a new bilateral teleoperation system is then proposed that allows the slave industrial robot to be more compliant to the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms, which rely heavily on knowing exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy with the support of machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a UR10 industrial robot, a haptic device, and an RGB-D sensor.
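
In its simplest form, moving horizon estimation of an external force reduces to a windowed least-squares problem. The sketch below illustrates only that skeleton for a 1-DOF system with known inertia and damping, assuming the force is constant over the window; the paper's observer replaces this crude fixed model with a learned T2FNN model and explicit disturbance handling.

```python
import numpy as np

def mhe_force_estimate(q_dd, q_d, tau, M=1.0, C=0.5, horizon=10):
    """Windowed least-squares estimate of a constant external force.

    Model (1-DOF, known parameters): M*q_dd + C*q_d = tau + f_ext.
    Over the last `horizon` samples we solve min_f sum_k (r_k - f)^2,
    where r_k = M*q_dd_k + C*q_d_k - tau_k is the dynamics residual.
    """
    q_dd, q_d, tau = (np.asarray(x)[-horizon:] for x in (q_dd, q_d, tau))
    residual = M * q_dd + C * q_d - tau  # what the nominal model cannot explain
    # Least squares under a constant-force hypothesis.
    A = np.ones((len(residual), 1))
    f_hat, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return float(f_hat[0])
```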

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN)
10.1016/j.automatica.2019.04.033 (DOI)
000473380000041 ()
2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23 Created: 2019-05-23 Last updated: 2019-07-24. Bibliographically approved.
Edebol-Carlman, H., Rode, J., König, J., Hutchinson, A., Repsilber, D., Kiselev, A., ... Brummer, R. J. (2019). Evaluating the effects of probiotic intake on brain activity during an emotional attention task and blood markers related to stress in healthy subjects. Paper presented at Mind, Mood & Microbes, 2nd International Conference on Microbiota-Gut-Brain Axis, Amsterdam, The Netherlands, 17-18 January, 2019.
2019 (English). Conference paper, Oral presentation with published abstract (Refereed)
National Category
Biochemistry and Molecular Biology
Identifiers
urn:nbn:se:oru:diva-73848 (URN)
Conference
Mind, Mood & Microbes, 2nd International Conference on Microbiota-Gut-Brain Axis, Amsterdam, The Netherlands, 17-18 January, 2019
Available from: 2019-04-17 Created: 2019-04-17 Last updated: 2019-04-17. Bibliographically approved.
Stoyanov, T., Krug, R., Kiselev, A., Sun, D. & Loutfi, A. (2018). Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 6640-6645). IEEE Press.
2018 (English) In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645. Conference paper, Published paper (Refereed)
Abstract [en]

This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks such as joint-limit and obstacle avoidance or automatic gripper alignment. Some aspects of the teleoperation problem are thereby delegated to the controller and carried out autonomously. The key contributions of this work are twofold: first, a method for unobtrusive integration of autonomy in a telemanipulation system; and second, a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed assistive control approach was found to result in higher grasp success rates and shorter trajectories than manual control, without incurring additional cognitive load on the operator.
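
The task hierarchy at the heart of a stack-of-tasks controller can be illustrated with the classical null-space projection scheme, in which a lower-priority task is executed only insofar as it does not disturb the tasks above it. The sketch below is a generic two-level example, not the paper's controller (which also handles inequality tasks such as joint limits); the operator's end-effector command serves as the top-priority task and a hypothetical gripper-alignment task as the secondary one.

```python
import numpy as np

def stack_of_tasks_qdot(J1, x1_dot, J2, x2_dot):
    """Two-level task-priority resolution (equality tasks only).

    Satisfies J1 qdot = x1_dot exactly (top priority) and tracks
    J2 qdot = x2_dot as well as possible in the remaining null space.
    """
    J1_pinv = np.linalg.pinv(J1)
    n = J1.shape[1]
    N1 = np.eye(n) - J1_pinv @ J1          # null-space projector of task 1
    qdot1 = J1_pinv @ x1_dot               # satisfy the operator command
    # The secondary task acts only where it does not disturb task 1.
    J2N1_pinv = np.linalg.pinv(J2 @ N1)
    qdot2 = J2N1_pinv @ (x2_dot - J2 @ qdot1)
    return qdot1 + N1 @ qdot2
```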

Place, publisher, year, edition, pages
IEEE Press, 2018
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71310 (URN)
10.1109/IROS.2018.8594457 (DOI)
000458872706014 ()
978-1-5386-8094-0 (ISBN)
978-1-5386-8095-7 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Funder
Knowledge Foundation; Swedish Foundation for Strategic Research
Available from: 2019-01-09 Created: 2019-01-09 Last updated: 2019-03-13. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2018). Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning. In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018. Paper presented at FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (AI-MHRI), Stockholm, Sweden, 14-15 July, 2018 (pp. 48-50). MHRI.
2018 (English) In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018, MHRI, 2018, p. 48-50. Conference paper, Published paper (Refereed)
Abstract [en]

This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user's verbal and nonverbal social cues using its camera and microphone; the reward will be the user's positive valence and engagement.
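
The reward described above can be made concrete with a small sketch. The linear weighting and the tabular Q-learning update below are illustrative assumptions, not the authors' design; the valence and engagement scores are presumed to come from upstream audio/visual affect estimators.

```python
import numpy as np

def social_reward(valence, engagement, w_valence=0.5, w_engagement=0.5):
    """Combine per-step affect estimates into a scalar RL reward.

    `valence` in [-1, 1] and `engagement` in [0, 1] are assumed to be
    produced by upstream perception (facial expression, speech, gaze);
    the linear weights are placeholder choices.
    """
    return w_valence * valence + w_engagement * engagement

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Tabular Q-learning update, standing in for the deep RL component."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```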

Place, publisher, year, edition, pages
MHRI, 2018
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-68709 (URN)
10.21437/AI-MHRI.2018-12 (DOI)
Conference
FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (AI-MHRI), Stockholm, Sweden 14-15 July, 2018
Projects
SOCRATES
Available from: 2018-09-03 Created: 2018-09-03 Last updated: 2018-09-04. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2018). The Relevance of Social Cues in Assistive Training with a Social Robot. In: Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A. & Castro-González, Á. (Ed.), 10th International Conference on Social Robotics, ICSR 2018, Proceedings. Paper presented at 10th International Conference on Social Robotics (ICSR 2018), Qingdao, China, November 28-30, 2018 (pp. 462-471). Springer.
2018 (English) In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, p. 462-471. Conference paper, Published paper (Refereed)
Abstract [en]

This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor robot-assisted training in order to maximize performance and comfort. Specifically, it serves as a basis for determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of the participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features correlate with the user's performance and the difficulty level of the exercises. The long-term aim of this work, building on the results presented here, is to develop an algorithm that can eventually be used in robot-assisted training to let a robot tailor a training program to the physical capabilities as well as the social cues of the user.
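
Recursive feature elimination is available off the shelf; the following sketch shows the generic scikit-learn pattern with a hypothetical facial-feature matrix and labels (the estimator and feature set used in the paper are not reproduced here).

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row per video frame, one column per facial
# feature (action-unit intensities, emotion scores, ...), and a label
# such as a binarized exercise-difficulty level.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)

# Recursively drop the weakest features until 8 remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
selector.fit(X, y)
selected = np.flatnonzero(selector.support_)
print("retained facial-feature columns:", selected)
```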

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 11357
Keywords
Social cues, Facial signals, Robot-assisted training
National Category
Computer Systems; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70817 (URN)
10.1007/978-3-030-05204-1_45 (DOI)
2-s2.0-85058342671 (Scopus ID)
978-3-030-05203-4 (ISBN)
978-3-030-05204-1 (ISBN)
Conference
10th International Conference on Social Robotics (ICSR 2018), Qingdao, China, November 28-30, 2018
Projects
SOCRATES
Funder
EU, Horizon 2020, 721619
Available from: 2018-12-19 Created: 2018-12-19 Last updated: 2018-12-20. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2017). An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security. In: Kheddar, A., Yoshida, E., Ge, S.S., Suzuki, K., Cabibihan, J.-J., Eyssel, F. & He, H. (Ed.), Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings. Paper presented at 9th International Conference on Social Robotics (ICSR 2017), Tsukuba, Japan, November 22-24, 2017 (pp. 628-637). Springer International Publishing.
2017 (English) In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A., Yoshida, E., Ge, S.S., Suzuki, K., Cabibihan, J.-J., Eyssel, F. & He, H., Springer International Publishing, 2017, p. 628-637. Conference paper, Published paper (Refereed)
Abstract [en]

The aim of the study presented in this paper is to develop a quantitative tool for evaluating the sense of safety and security evoked by robots in eldercare. Building on the literature on measuring safety and security in human-robot interaction, we propose new evaluation tools in the form of semantic differential scale questionnaires. For experimental validation, we used a Pepper robot programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot's non-verbal behavior, from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and for the sense of security) have good internal consistency.
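
Internal consistency of such questionnaires is conventionally summarized with Cronbach's alpha. The snippet below computes it for a generic response matrix and is an illustration of the measure, not the paper's analysis script.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    Values around 0.7 or higher are usually read as acceptable
    internal consistency.
    """
    R = np.asarray(responses, dtype=float)
    k = R.shape[1]
    item_vars = R.var(axis=0, ddof=1).sum()
    total_var = R.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```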

Place, publisher, year, edition, pages
Springer International Publishing, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10652
Keywords
Sense of safety, Sense of security, Eldercare, Video-based evaluation, Quantitative evaluation tool
National Category
Computer Systems; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62768 (URN)
10.1007/978-3-319-70022-9_62 (DOI)
000449941100062 ()
2-s2.0-85035814295 (Scopus ID)
978-3-319-70022-9 (ISBN)
978-3-319-70021-2 (ISBN)
Conference
9th International Conference on Social Robotics (ICSR 2017), Tsukuba, Japan, November 22-24, 2017
Projects
SOCRATES
Funder
EU, Horizon 2020, 721619
Available from: 2017-11-22 Created: 2017-11-22 Last updated: 2018-11-23. Bibliographically approved.
Alirezaie, M., Kiselev, A., Längkvist, M., Klügl, F. & Loutfi, A. (2017). An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring. Sensors, 17(11), Article ID 2545.
2017 (English) In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 11, article id 2545. Article in journal, Editorial material (Refereed). Published
Abstract [en]

This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, and also about paths that can be taken. This is achieved by a reasoning framework, based on qualitative spatial reasoning, that can answer high-level queries whose answers may depend on the current situation. The framework, called SemCityMap, provides the full pipeline: from enriching the raw image data with rudimentary labels, to integrating knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment (central Stockholm) in combination with a flood simulation. We show that the system provides useful answers to high-level queries, also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions, such as "find all regions close to schools and far from the flooded area". The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints.
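
A query such as "find all regions close to schools and far from the flooded area" maps naturally onto an ontology query language. The following sketch is a hypothetical rdflib/SPARQL rendering under an invented namespace with invented relations (closeTo, farFrom); it illustrates the style of querying only and does not reproduce SemCityMap's actual vocabulary.

```python
from rdflib import Graph

g = Graph()
g.parse("semcitymap.ttl")  # hypothetical export of the enriched map

# Invented vocabulary: ex:Region, ex:School, ex:FloodedArea and the
# qualitative spatial relations ex:closeTo / ex:farFrom.
query = """
PREFIX ex: <http://example.org/semcitymap#>
SELECT ?region WHERE {
    ?region a ex:Region ;
            ex:closeTo ?school ;
            ex:farFrom ?flood .
    ?school a ex:School .
    ?flood  a ex:FloodedArea .
}
"""
for row in g.query(query):
    print(row.region)
```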

Place, publisher, year, edition, pages
MDPI AG, 2017
Keywords
satellite imagery data; natural hazards; ontology; reasoning; path finding
National Category
Computer Systems
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-62134 (URN)
10.3390/s17112545 (DOI)
000416790500107 ()
29113073 (PubMedID)
2-s2.0-85033372857 (Scopus ID)
Projects
Semantic Robot
Available from: 2017-11-05 Created: 2017-11-05 Last updated: 2018-01-03. Bibliographically approved.
Pathi, S. K., Kiselev, A. & Loutfi, A. (2017). Estimating F-Formations for Mobile Robotic Telepresence. Paper presented at 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), March 6-9, 2017, Vienna, Austria (pp. 255-256). Vienna, Austria: ACM Digital Library, Article ID 1127.
2017 (English) In: Estimating F-Formations for Mobile Robotic Telepresence, Vienna, Austria: ACM Digital Library, 2017, p. 255-256, article id 1127. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

In this paper, we present a method for the automatic detection of F-formations for mobile robotic telepresence (MRP). The method consists of two phases: a) estimating face orientation in video frames, and b) estimating the F-formation based on the detected faces. The method works in real time and is tailored to the lower-resolution images typically collected from MRP units.
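
Phase b) can be sketched as a simple geometric test: each detected face contributes a ray in its facing direction, and an F-formation is declared when those rays converge on a shared o-space center. The toy example below rests on that assumption and placeholder thresholds; it is not the paper's algorithm.

```python
import numpy as np

def detect_f_formation(positions, orientations, stride=1.0, tol=0.6):
    """Toy F-formation test from face positions and facing angles.

    Each person is projected `stride` units along their facing
    direction; if all projected points cluster around a common center
    (the o-space), we call it an F-formation.
    """
    p = np.asarray(positions, dtype=float)       # (n, 2) positions
    th = np.asarray(orientations, dtype=float)   # (n,) facing angles, rad
    proj = p + stride * np.stack([np.cos(th), np.sin(th)], axis=1)
    center = proj.mean(axis=0)                   # candidate o-space center
    spread = np.linalg.norm(proj - center, axis=1)
    return bool(np.all(spread < tol)), center

# Two people facing each other two units apart share an o-space.
ok, c = detect_f_formation([(0, 0), (2, 0)], [0.0, np.pi])
print(ok, c)
```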

Place, publisher, year, edition, pages
Vienna, Austria: ACM Digital Library, 2017
Series
ACM/IEEE International Conference on Human-Robot Interaction, ISSN 2167-2148
Keywords
Face Orientation, F-formations, Mobile Robotic Telepresence
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-62829 (URN)
10.1145/3029798.3038304 (DOI)
2-s2.0-85016428725 (Scopus ID)
Conference
12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), March 6-9, 2017, Vienna, Austria
Projects
Successful ageing
Available from: 2017-11-25 Created: 2017-11-25 Last updated: 2018-02-12. Bibliographically approved.
Alirezaie, M., Kiselev, A., Klügl, F., Längkvist, M. & Loutfi, A. (2017). Exploiting Context and Semantics for UAV Path-finding in an Urban Setting. In: Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi (Ed.), Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017. Paper presented at International Workshop on Application of Semantic Web technologies in Robotics co-located with 14th Extended Semantic Web Conference (ESWC), Portoroz, Slovenia, 28th May-1st June, 2017 (pp. 11-20). Technical University Aachen.
2017 (English) In: Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017 / [ed] Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi, Technical University Aachen, 2017, p. 11-20. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we propose an ontology pattern that represents paths in a geo-representation model, to be used in aerial path planning processes. The pattern provides semantics for the constraints of a path planning problem (i.e., flight-forbidden zones) in order to generate collision-free paths. As an illustrative example, the proposed approach has been applied to an ontology containing geo-regions extracted from satellite imagery of a large urban city.
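
The role the ontology plays here, labeling certain regions as flight-forbidden so that the planner never enters them, can be shown with a minimal grid search. The sketch below uses breadth-first search and a hypothetical `forbidden` set standing in for ontology-classified no-fly regions; the paper's representation and planner are considerably richer.

```python
from collections import deque

def plan_path(start, goal, grid_size, forbidden):
    """Breadth-first search on a grid, skipping flight-forbidden cells.

    `forbidden` is a set of (x, y) cells, standing in for regions the
    ontology classifies as no-fly zones. Returns a cell path or None.
    """
    w, h = grid_size
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < w and 0 <= ny < h
                    and nxt not in forbidden and nxt not in parents):
                parents[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable without crossing forbidden zones

print(plan_path((0, 0), (3, 0), (4, 4), forbidden={(1, 0), (1, 1)}))
```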

Place, publisher, year, edition, pages
Technical University Aachen, 2017
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 1935
Keywords
Semantic Web for Robotics, Representation and reasoning for Robotics, Ontology Design Pattern, Path Planning
National Category
Engineering and Technology; Computer Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-64603 (URN)
2-s2.0-85030752502 (Scopus ID)
Conference
International Workshop on Application of Semantic Web technologies in Robotics co-located with 14th Extended Semantic Web Conference (ESWC), Portoroz, Slovenia, 28th May-1st June, 2017
Projects
Semantic Robot
Available from: 2018-01-29 Created: 2018-01-29 Last updated: 2018-09-10. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-0305-3728
