oru.se Publications
1 - 30 of 30
  • 1.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security (2017). In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J.-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, p. 628-637. Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on the measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In the experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot’s non-verbal behaviors from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.

  • 2.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning (2018). In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018, MHRI, 2018, p. 48-50. Conference paper (Refereed)
    Abstract [en]

    This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user’s verbal and nonverbal social cues using its camera and microphone; the reward will be the positive valence and engagement of the user.

  • 3.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Relevance of Social Cues in Assistive Training with a Social Robot (2018). In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, p. 462-471. Conference paper (Refereed)
    Abstract [en]

    This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor a robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis in determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.

  • 4.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology. Örebro University, School of Law, Psychology and Social Work.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Exploiting Context and Semantics for UAV Path-finding in an Urban Setting (2017). In: Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017 / [ed] Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi, Technical University Aachen, 2017, p. 11-20. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an ontology pattern that represents paths in a geo-representation model to be used in aerial path planning processes. This pattern provides semantics related to constraints (i.e., flight-forbidden zones) in a path planning problem in order to generate collision-free paths. Our proposed approach has been applied to an ontology containing geo-regions extracted from satellite imagery of a large urban city as an illustrative example.

  • 5.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring (2017). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 11, article id 2545. Article in journal (Refereed)
    Abstract [en]

    This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, but also about paths that can be taken. This is achieved by a reasoning framework based on qualitative spatial reasoning that is able to find answers to high-level queries that may vary depending on the current situation. This framework, called SemCityMap, provides the full pipeline: from enriching the raw image data with rudimentary labels, to the integration of knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment—central Stockholm—in combination with a flood simulation. We show that the system provides useful answers to high-level queries also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions, such as “find all regions close to schools and far from the flooded area”. The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints.

  • 6.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Open GeoSpatial Data as a Source of Ground Truth for Automated Labelling of Satellite Images (2016). In: SDW 2016: Spatial Data on the Web, Proceedings / [ed] Krzysztof Janowicz et al., CEUR Workshop Proceedings, 2016, p. 5-8. Conference paper (Refereed)
  • 7.
    Edebol-Carlman, Hanna
    et al.
    Örebro University, School of Medical Sciences.
    Rode, Julia
    Örebro University, School of Medical Sciences.
    König, Julia
    Örebro University, School of Medical Sciences.
    Hutchinson, Ashley
    Örebro University, School of Medical Sciences.
    Repsilber, Dirk
    Örebro University, School of Medical Sciences.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Thunberg, Per
    Örebro University, School of Medical Sciences.
    Lathrop Stern, Lori
    Labus, Jennifer
    Brummer, Robert Jan
    Örebro University, School of Medical Sciences.
    Evaluating the effects of probiotic intake on brain activity during an emotional attention task and blood markers related to stress in healthy subjects (2019). Conference paper (Refereed)
  • 8.
    Efremova, Natalia
    et al.
    Plekhanov Russian University, Moscow, Russia.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Cognitive Architectures for Optimal Remote Image Representation for Driving a Telepresence Robot (2014). Conference paper (Refereed)
  • 9.
    Hacker, Benjamin Alexander
    et al.
    Kyoto University, Japan.
    Wankerl, Thomas
    Kyoto University, Japan.
    Kiselev, Andrey
    Kyoto University, Japan.
    Huang, Hung-Hsuan
    Kyoto University, Japan.
    Schlichter, Johann
    Technische Universität München, Germany.
    Abdikeev, Niyaz
    Plekhanov University, Moscow.
    Nishida, Toyoaki
    Kyoto University, Japan.
    Incorporating intentional and emotional behaviors into a Virtual Human for Better Customer-Engineer-Interaction (2009). Conference paper (Refereed)
    Abstract [en]

    Providing customer support for technical products is an essential effort for enterprises to satisfy customers' needs and to challenge business rivals. This paper introduces a virtual human framework for better customer-engineer interaction. We put emphasis on a preferably natural conversation, achieved by continuously analyzing the behaviors and emotions of the human user, inferring his or her intentions, and diversifying active and passive intentional behaviors. The underlying architecture is an extension of the generic embodied conversational agent framework, which was developed to ease the integration of heterogeneous components into an embodied conversational agent system. These extensions are mainly influenced by SAIBA's architecture for a multimodal behavior generation framework. Although the system is only about 50% complete, partial results show that our approach has the potential to create a more natural conversational situation.

  • 10.
    Kiselev, Andrey
    et al.
    Kyoto University, Kyoto, Japan.
    Abdikeev, Niyaz
    Plekhanov Russian Academy of Economics, Moscow, Russia .
    Nishida, Toyoaki
    Kyoto University, Kyoto, Japan.
    Evaluating Humans’ Implicit Attitudes towards an Embodied Conversational Agent (2011). In: Advances in Neural Networks – ISNN 2011: 8th International Symposium on Neural Networks, ISNN 2011, Guilin, China, May 29–June 1, 2011, Proceedings, Part I, Springer Berlin/Heidelberg, 2011, p. -9. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of evaluating embodied conversational agents in terms of their communicative performance. We show our attempt to evaluate humans’ implicit attitudes towards different kinds of information presented by embodied conversational agents using the Implicit Association Test (IAT), rather than gathering explicit data using interviewing methods. We conducted an experiment in which we used the method of indirect measurements with the IAT. The conventional procedure and scoring algorithm of the IAT were used in order to discover possible issues and solutions for future experiments. We discuss key differences between the conventional usage of the IAT and using the IAT in our experiment for evaluating embodied conversational agents with unfamiliar information as test data.

  • 11.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology. Kyoto University, Kyoto, Japan.
    Abdikeev, Niyaz
    Plekhanov Russian Academy of Economics, Moscow, Russia; Universität Ulm, Ulm, Germany.
    Nishida, Toyoaki
    Kyoto University, Kyoto, Japan.
    Measuring Implicit Attitudes in Human-Computer Interactions (2011). In: Rough Sets, Fuzzy Sets, Data Mining and Granular Computing / [ed] Kuznetsov, S.O.; Ślęzak, D.; Hepting, D.H.; Mirkin, B.G., Springer, 2011, p. 350-357. Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing project which attempts to solve the problem of measuring users' satisfaction by utilizing methods of discovering users' implicit attitudes. In the initial stage, the authors attempted to use the Implicit Association Test (IAT) in order to discover users' implicit attitudes towards a virtual character. The conventional IAT procedure and scoring algorithm were used in order to find possible shortcomings of the original method. Results of the initial experiment are shown in the paper, along with a proposed modification of the method and a preliminary verification experiment.

  • 12.
    Kiselev, Andrey
    et al.
    Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan.
    Hacker, Benjamin Alexander
    Department of Informatics, Munich University of Technology, Munich, Germany.
    Wankerl, Thomas
    Department of Informatics, Munich University of Technology, Munich, Germany.
    Abdikeev, Niyaz
    Plekhanov Russian Academy of Economics, Moscow, Russia.
    Nishida, Toyoaki
    Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan.
    Toward incorporating emotions with rationality into a communicative virtual agent (2011). In: AI & Society: The Journal of Human-Centred Systems and Machine Intelligence, ISSN 0951-5666, E-ISSN 1435-5655, Vol. 26, no 3, p. 275-289. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the problem of human–computer interactions when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed. A model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of processing of information. The SAIBA framework is extended, and a model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking, behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented. The first one uses an implicit implementation of emotions. The second one has an explicit realization of a three-layered model of emotions, which is highly interconnected with other components of the system. Details of the model with implicit implementation of emotional behavior are shown, as well as the evaluation methodology and results. Discussion of the extended model of an agent is given in the final part of the paper.

  • 13.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Combining Semi-autonomous Navigation with Manned Behaviour in a Cooperative Driving System for Mobile Robotic Telepresence (2015). In: Computer Vision - ECCV 2014 Workshops, Part IV, Berlin: Springer Berlin/Heidelberg, 2015, Vol. 8928, p. 17-28. Conference paper (Refereed)
    Abstract [en]

    This paper presents an image-based cooperative driving system for a telepresence robot, which allows safe operation in indoor environments and is meant to minimize the burden on novice users operating the robot. The paper focuses on one emerging type of telepresence robot, namely mobile remote presence systems for social interaction. Such systems bring new opportunities for applications in healthcare and elderly care by allowing caregivers to communicate with patients and the elderly from remote locations. However, using such systems can be a difficult task, particularly for caregivers without proper training. The paper presents a first implementation of a vision-based cooperative driving enhancement to a telepresence robot. A preliminary evaluation in a laboratory environment is presented.

  • 14.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 214-215. Conference paper (Refereed)
    Abstract [en]

    One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.

  • 15.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Melendez, Francisco
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Galindo, Cipriano
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Gonzalez-Jimenez, Javier
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Coradeschi, Silvia
    Örebro University, School of Science and Technology.
    Evaluation of using semi-autonomy features in mobile robotic telepresence systems (2015). In: Proceedings of the 2015 7th IEEE International Conference on Cybernetics and Intelligent Systems, CIS 2015 and Robotics, Automation and Mechatronics, RAM 2015, New York, USA: IEEE conference proceedings, 2015, p. 147-152. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be put on users if they are unfamiliar with using robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization and navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper we describe the implementation of such autonomous features, along with a user evaluation study. The evaluation scenario focused on novice users' first experience of using the system. Importantly, the scenario assumed that participants had as little prior information about the system as possible. Four different use-cases were identified from the user behaviour analysis.

  • 16.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Using a mental workload index as a measure of usability of a user interface for social robotic telepresence (2012). In: Workshop in Social Robotics Telepresence, 2012. Conference paper (Refereed)
    Abstract [en]

    This position paper reports on the use of mental workload analysis to measure the usability of a remote user’s interface in the context of social robotic telepresence. The paper discusses the importance of remote/pilot user’s interfaces for successful interaction and presents a study whereby a set of tools for evaluation are proposed. Preliminary experimental analysis is provided when evaluating a specific telepresence robot, called the Giraff.

  • 17.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Sivakumar, Prasanna Kumar
    SASTRA University, Thanjavur, India.
    Swaminathan, Chittaranjan Srinivas
    SASTRA University, Thanjavur, India.
    Robot-human hand-overs in non-anthropomorphic robots (2013). In: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, HRI'13 / [ed] Hideaki Kuzuoka, Vanessa Evers, Michita Imai, Jodi Forlizzi, IEEE Press, 2013, p. 227-228. Conference paper (Refereed)
    Abstract [en]

    Robots that assist and interact with humans will inevitably need to successfully achieve the task of handing over objects. Whether it is to deliver desired objects to the elderly living in their homes or to hand tools to a worker in a factory, the process of robot hand-overs is one worthy of study within the human-robot interaction community. While object hand-overs have been studied in previous works, these works have mainly considered anthropomorphic robots, that is, robots that appear and move similarly to humans. However, recent trends within robotics, and in particular domestic robotics, have witnessed an increase in non-anthropomorphic robotic platforms such as moving tables, teleconferencing robots and vacuum cleaners. The study of robot hand-overs for non-anthropomorphic robots, and in particular of what constitutes a successful hand-over, is the focus of this paper. For the purpose of investigation, the TurtleBot, a moving-table-like device, is used in a home environment.

  • 18.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Mosiello, Giovanni
    Örebro University, School of Science and Technology. Roma Tre University, Rome, Italy.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 104. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interactions. However, simply driving a telepresence robot can become a burden especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way to allow human-robot cooperation for safe driving. It also shows a very first implementation of cooperative driving based on extracting a safe drivable area in real time using the image stream received from the robot.

  • 19.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Potenza, Andre
    Örebro University, School of Science and Technology.
    Bruno, Barbara
    DIBRIS, University Genova, Genova, Italy.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Towards Seamless Autonomous Function Handovers in Human-Robot Teams (2017). Conference paper (Refereed)
    Abstract [en]

    Various human-robot collaboration scenarios may impose different requirements on the robot’s autonomy, ranging from fully autonomous to fully manual operation. The paradigm of sliding autonomy has been introduced to allow adapting a robot’s autonomy in real time, thus improving the flexibility of a human-robot team. In sliding autonomy, functions can be handed over between the human and the robot to address environment changes and optimize performance and workload. This paper examines the process of handing over functions between humans by looking at a particular experiment scenario in which the same function has to be handed over multiple times during the experiment session. We hypothesize that the process of function handover is similar to the already well-studied human-robot handovers of physical objects. In the experiment, we attempt to discover natural similarities and differences between these two types of handovers and suggest further directions of work that are necessary to give the robot the ability to perform function handovers autonomously, without explicit instruction from the human counterpart.

  • 20.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Scherlund, Mårten
    Giraff Technologies AB, Västerås, Sweden.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Efremova, Natalia
    Plekhanov University, Moscow, Russia.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Auditory immersion with stereo sound in a mobile robotic telepresence system (2015). In: 10th ACM/IEEE International Conference on Human-Robot Interaction, 2015, Association for Computing Machinery (ACM), 2015. Conference paper (Refereed)
    Abstract [en]

    Auditory immersion plays a significant role in generating a good feeling of presence for users driving a telepresence robot. In this paper, one of the key characteristics of auditory immersion - sound source localization (SSL) - is studied from the perspective of those who operate telepresence robots from remote locations. A prototype which is capable of delivering soundscape to the user through Interaural Time Difference (ITD) and Interaural Level Difference (ILD) using the ORTF stereo recording technique was developed. The prototype was evaluated in an experiment and the results suggest that the developed method is sufficient for sound source localization tasks.

  • 21.
    Krishna, Sai
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Repsilber, Dirk
    Örebro University, School of Medical Sciences.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera (2019). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed)
    Abstract [en]

    Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of a sensor's technology. When estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are fairly visible, so that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which allows partial occlusions, cluttered background, and relatively low resolution. The method estimates distance with respect to the camera based on the Euclidean distance between the ear and torso of people in the image plane. The ear and torso characteristic points have been selected based on their relatively high visibility regardless of a person's orientation and a certain degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.

  • 22.
    Krishna, Sai
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Towards a Method to Detect F-formations in Real-Time to Enable Social Robots to Join Groups (2017). Umeå, Sweden: Umeå University, 2017. Conference paper (Refereed)
    Abstract [en]

    In this paper, we extend an algorithm to detect constraint-based F-formations for a telepresence robot, and also consider the situation when the robot is in motion. The proposed algorithm is computationally inexpensive, uses egocentric (first-person) vision, requires little memory, works with low-quality vision settings, and runs in real time; it is explicitly designed for a mobile robot. The proposed approach is a first step towards automatically detecting F-formations for the robotics community.

  • 23.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Alirezaie, Marjan
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Interactive Learning with Convolutional Neural Networks for Image Labeling (2016). In: International Joint Conference on Artificial Intelligence (IJCAI), 2016. Conference paper (Refereed)
    Abstract [en]

    Recently, deep learning models, such as Convolutional Neural Networks, have been shown to give good performance for various computer vision tasks. A prerequisite for such models is access to lots of labeled data, since the most successful ones are trained with supervised learning. The process of labeling data is expensive, time-consuming, tedious, and sometimes subjective, which can result in falsely labeled data, which has a negative effect on both training and validation. In this work, we propose a human-in-the-loop intelligent system that allows the agent and the human to collaborate to simultaneously solve the problem of labeling data and at the same time perform scene labeling of an unlabeled image data set with minimal guidance by a human teacher. We evaluate the proposed interactive learning system by comparing the labeled data set from the system to the human-provided labels. The results show that the learning system is capable of almost completely labeling an entire image data set starting from a few labeled examples provided by the human teacher.

  • 24.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Alirezaie, Marjan
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks2016In: Remote Sensing, ISSN 2072-4292, E-ISSN 2072-4292, Vol. 8, no 4, article id 329Article in journal (Refereed)
    Abstract [en]

    The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for interesting new applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition tasks for remote sensing data.
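
    The per-pixel setup described above can be illustrated by classifying, for every pixel, a small window of the multispectral image centered on that pixel. This is an illustrative sketch only; the patch size and the abstract `classifier` callback are assumptions, not the paper's architecture:

    ```python
    import numpy as np

    def extract_patches(image, patch=9):
        """Return one (patch, patch, bands) window per pixel, reflect-padded
        at the borders so every pixel gets a full context window."""
        r = patch // 2
        padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
        h, w, bands = image.shape
        out = np.empty((h, w, patch, patch, bands), dtype=image.dtype)
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + patch, j:j + patch]
        return out.reshape(h * w, patch, patch, bands)

    def per_pixel_classify(image, classifier, patch=9):
        """Per-pixel class map: classifier maps a batch of patches to class ids."""
        patches = extract_patches(image, patch)
        return classifier(patches).reshape(image.shape[:2])
    ```

    In practice a CNN consumes the patch batch; here any callable that returns one class id per patch produces the dense class map that the segmentation stage would then refine.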

  • 25.
    Mosiello, Giovanni
    et al.
    Örebro University, School of Science and Technology. Universitá degli Studi Roma Tre, Rome, Italy.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Using augmented reality to improve usability of the user interface for driving a telepresence robot2013In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 4, no 3, p. 174-181Article in journal (Refereed)
    Abstract [en]

    Mobile Robotic Telepresence (MRP) helps people to communicate in natural ways despite being physically located in different parts of the world. The user interfaces of such systems are as critical as the design and functionality of the robot itself for creating the conditions for natural interaction. This article presents an exploratory study analysing different robot teleoperation interfaces. The goals of this paper are to investigate the possible effect of using augmented reality as the means to drive a robot, to identify key factors of the user interface in order to improve the user experience through a driving interface, and to minimize interface familiarization time for non-experienced users. The study involved 23 participants whose robot driving attempts via different user interfaces were analysed. The results show that an interface incorporating augmented reality resulted in a better driving experience.

  • 26.
    Orlandini, Andrea
    et al.
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Almquist, Lena
    Örebro City Council, Örebro, Sweden.
    Björkman, Patrik
    Giraff Technologies, Västerås, Sweden.
    Cesta, Amedeo
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    Cortellessa, Gabriella
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    Galindo, Cipriano
    University of Malaga, Malaga, Spain.
    Gonzalez-Jimenez, Javier
    University of Malaga, Malaga, Spain.
    Gustafsson, Kalle
    Giraff Technologies, Västerås, Sweden.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Melendez, Francisco
    University of Malaga, Malaga, Spain.
    Nilsson, Malin
    Örebro City Council, Örebro, Sweden.
    Odens Hedman, Lasse
    Giraff Technologies, Västerås, Sweden.
    Odontidou, Eleni
    Giraff Technologies, Västerås, Sweden.
    Ruiz-Sarmiento, Jose-Raul
    University of Malaga, Malaga, Spain.
    Scherlund, Mårten
    Giraff Technologies, Västerås, Sweden.
    Tiberio, Lorenza
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    von Rump, Stephen
    Giraff Technologies, Västerås, Sweden.
    Coradeschi, Silvia
    Örebro University, Örebro, Sweden.
    ExCITE Project: A Review of Forty-Two Months of Robotic Telepresence Technology2016In: Presence - Teleoperators and Virtual Environments, ISSN 1054-7460, E-ISSN 1531-3263, Vol. 25, no 3, p. 204-221Article in journal (Refereed)
    Abstract [en]

    This article reports on the EU project ExCITE with specific focus on the technical development of the telepresence platform over a period of 42 months. The aim of the project was to assess the robustness and validity of the mobile robotic telepresence (MRP) system Giraff as a means to support elderly people and to foster their social interaction and participation. Embracing the idea of user-centered product refinement, the robot was tested over long periods of time in real homes. As such, the system development was driven by a strong involvement of elderly people and their caregivers but also by technical challenges associated with deploying the robot in real-world contexts. The result of the 42-month-long evaluation is a system suitable for use in homes rather than a generic system suitable, for example, for office environments.

  • 27.
    Pathi, Sai Krishna
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Estimating F-Formations for Mobile Robotic Telepresence2017In: Estimating F-Formations for Mobile Robotic Telepresence, Vienna, Austria: ACM Digital Library, 2017, p. 255-256, article id 1127Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a method for the automatic detection of F-formations for mobile robot telepresence (MRP). The method consists of two phases: a) estimating face orientation in video frames, and b) estimating the F-formation based on the detected faces. The method works in real time and is tailored for the lower-resolution images typically collected from MRP units.
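
    The geometric intuition behind F-formation estimation can be sketched as follows: each detected person contributes a point projected forward along their facing direction, and the group forms an F-formation when those points cluster around a shared o-space centre. This is a simplified sketch; the stride and tolerance values are illustrative assumptions, not the paper's parameters:

    ```python
    import math

    def o_space_center(people, stride=1.0):
        """Estimate the o-space centre from (x, y, theta) tuples: each person
        contributes a point one stride ahead along their facing direction,
        and the centre is the mean of those projected points."""
        xs = [x + stride * math.cos(t) for x, y, t in people]
        ys = [y + stride * math.sin(t) for x, y, t in people]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    def is_f_formation(people, stride=1.0, tol=0.5):
        """People form an F-formation if every projected point lies within
        tol of the shared o-space centre."""
        cx, cy = o_space_center(people, stride)
        return all(math.hypot(x + stride * math.cos(t) - cx,
                              y + stride * math.sin(t) - cy) <= tol
                   for x, y, t in people)
    ```

    Two people two metres apart and facing each other project onto the same midpoint and qualify; two people facing the same direction do not, which matches the vis-à-vis versus side-by-side distinction in F-formation theory.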

  • 28.
    Potenza, Andre
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Towards Sliding Autonomy in Mobile Robotic Telepresence: A Position Paper2017Conference paper (Refereed)
    Abstract [en]

    Sliding autonomy is used in teleoperation to adjust a robot's level of local autonomy to match the user's needs. We claim that sliding autonomy can also improve mobile robotic telepresence, but we argue that existing approaches cannot be adopted in this domain without adequate modifications. We address in particular the question of how the need for autonomy, and its appropriate degree, can be inferred from measurable information.

  • 29.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Krug, Robert
    Robotics, Learning and Perception lab, Royal Institute of Technology, Stockholm, Sweden.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Sun, Da
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control2018In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645Conference paper (Refereed)
    Abstract [en]

    This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint limit and obstacle avoidance or automatic gripper alignment. Thereby, some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; the second is a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed assistive control approach was found to result in higher grasp success rates and shorter trajectories than manual control, without incurring additional cognitive load on the operator.
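
    The task hierarchy at the core of a stack-of-tasks controller can be illustrated with classic nullspace projection, where each lower-priority task velocity is resolved in the nullspace of the tasks above it, so assistive tasks can never disturb the operator's primary command. This is a generic sketch of prioritized velocity control, not the paper's controller:

    ```python
    import numpy as np

    def stack_of_tasks_velocity(tasks, n_joints):
        """Compute joint velocities for prioritized tasks.

        tasks: list of (J, xdot) pairs, highest priority first, where J is
        the task Jacobian and xdot the desired task-space velocity.  Each
        task is solved in the nullspace of all higher-priority tasks.
        """
        q_dot = np.zeros(n_joints)
        N = np.eye(n_joints)               # nullspace projector so far
        for J, xdot in tasks:
            JN = J @ N
            JN_pinv = np.linalg.pinv(JN)
            # correct only the residual, inside the remaining nullspace
            q_dot = q_dot + JN_pinv @ (xdot - J @ q_dot)
            N = N - JN_pinv @ JN           # shrink the available nullspace
        return q_dot
    ```

    Because each correction lies in the nullspace of the higher-priority Jacobians, earlier tasks keep their achieved velocities exactly, which is the property that lets assistance be layered under the operator's command unobtrusively.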

  • 30.
    Sun, Da
    et al.
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception2019In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373Article in journal (Refereed)
    Abstract [en]

    This paper first develops a novel force observer using Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Then, using the proposed force observer, a new bilateral teleoperation system is proposed that allows the slave industrial robot to be more compliant to the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms that rely heavily on knowing exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy with the support of machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a universal robot (UR10), a haptic device and an RGB-D sensor.
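
    The paper's observer combines a T2FNN dynamics model with MHE; as a much-simplified illustration of the moving-horizon idea alone, the sketch below attributes the windowed residual of a known 1-DOF model m*a = u + f_ext to the external force. The class name, the horizon length, and the model itself are illustrative assumptions, not the paper's method:

    ```python
    import numpy as np
    from collections import deque

    class MovingHorizonForceObserver:
        """Least-squares estimate of a slowly varying external force over a
        sliding window of measurements, for a known 1-DOF model
        m * accel = u + f_ext."""

        def __init__(self, mass, horizon=10):
            self.mass = mass
            self.window = deque(maxlen=horizon)  # oldest residuals drop out

        def update(self, accel, u):
            # the residual between modeled and applied force is attributed
            # to the external force; averaging the window filters noise
            self.window.append(self.mass * accel - u)
            return float(np.mean(self.window))
    ```

    The horizon trades responsiveness against noise rejection; the paper replaces the fixed analytic model with a learned T2FNN so that the same windowed estimation works for robots with unknown dynamics.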
