1 - 33 of 33
  • 1.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security (2017). In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, pp. 628-637. Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In the experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot’s non-verbal behaviors from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.
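Internal consistency for questionnaires of this kind is conventionally reported as Cronbach's alpha; the abstract does not name the statistic, so that is an assumption here, and the item data below is invented for illustration. A minimal pure-Python sketch:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    item_var = sum(pvariance(col) for col in items)          # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]         # per-respondent totals
    total_var = pvariance(totals)                            # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# Three 5-point items answered by four respondents (made-up data)
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # → 0.818
```

Values above roughly 0.7 are usually read as "good internal consistency", which is the kind of result the abstract reports.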

  • 2.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning (2018). In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018, MHRI, 2018, pp. 48-50. Conference paper (Refereed)
    Abstract [en]

    This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user’s verbal and nonverbal social cues using its camera and microphone; the reward will be the user’s positive valence and engagement.

  • 3.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Relevance of Social Cues in Assistive Training with a Social Robot (2018). In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, pp. 462-471. Conference paper (Refereed)
    Abstract [en]

    This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis for determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of the participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.
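Recursive feature elimination of the kind mentioned above can be sketched as follows. The ranking criterion here (absolute Pearson correlation with the target) and the feature names are illustrative stand-ins, since the abstract does not specify the paper's underlying estimator:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rfe(features, target, n_keep):
    """Recursive feature elimination: repeatedly drop the feature
    whose |correlation| with the target is weakest, until n_keep
    features remain. features maps name -> list of values."""
    kept = dict(features)
    while len(kept) > n_keep:
        weakest = min(kept, key=lambda f: abs(pearson(kept[f], target)))
        del kept[weakest]
    return sorted(kept)

# Made-up example: exercise difficulty as target, three candidate facial features
target = [1, 2, 3, 4, 5]
features = {
    "brow_raise": [1, 2, 3, 4, 5],   # perfectly correlated
    "blink_rate": [2, 1, 4, 3, 5],   # weakly correlated
    "smile":      [5, 4, 3, 2, 1],   # perfectly anti-correlated
}
print(rfe(features, target, 2))  # → ['brow_raise', 'smile']
```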

  • 4.
    Alirezaie, Marjan
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Klügl, Franziska
    Örebro universitet, Institutionen för naturvetenskap och teknik. Örebro universitet, Institutionen för juridik, psykologi och socialt arbete.
    Längkvist, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Exploiting Context and Semantics for UAV Path-finding in an Urban Setting (2017). In: Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017 / [ed] Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi, Technical University Aachen, 2017, pp. 11-20. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an ontology pattern that represents paths in a geo-representation model, to be used in aerial path planning processes. This pattern provides semantics related to constraints (i.e., flight-forbidden zones) in a path planning problem in order to generate collision-free paths. As an illustrative example, our proposed approach has been applied to an ontology containing geo-regions extracted from satellite imagery of a large urban city.

  • 5.
    Alirezaie, Marjan
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Längkvist, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Klügl, Franziska
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring (2017). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no. 11, article id 2545. Journal article (Refereed)
    Abstract [en]

    This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, but also about paths that can be taken. This is achieved by a reasoning framework, based on qualitative spatial reasoning, that is able to answer high-level queries whose answers may vary with the current situation. This framework, called SemCityMap, provides the full pipeline, from enriching the raw image data with rudimentary labels, to the integration of knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment—central Stockholm—in combination with a flood simulation. We show that the system provides useful answers to high-level queries also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions, such as “find all regions close to schools and far from the flooded area”. The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints.

  • 6.
    Alirezaie, Marjan
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Längkvist, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Open GeoSpatial Data as a Source of Ground Truth for Automated Labelling of Satellite Images (2016). In: SDW 2016: Spatial Data on the Web, Proceedings / [ed] Krzysztof Janowicz et al., CEUR Workshop Proceedings, 2016, pp. 5-8. Conference paper (Refereed)
  • 7.
    Edebol-Carlman, Hanna
    et al.
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Rode, Julia
    Örebro universitet, Institutionen för medicinska vetenskaper.
    König, Julia
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Hutchinson, Ashley
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Repsilber, Dirk
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Thunberg, Per
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Lathrop Stern, Lori
    Labus, Jennifer
    Brummer, Robert Jan
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Evaluating the effects of probiotic intake on brain activity during an emotional attention task and blood markers related to stress in healthy subjects (2019). Conference paper (Refereed)
  • 8.
    Efremova, Natalia
    et al.
    Plekhanov Russian University, Moscow, Russia.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Cognitive Architectures for Optimal Remote Image Representation for Driving a Telepresence Robot (2014). Conference paper (Refereed)
  • 9.
    Hacker, Benjamin Alexander
    et al.
    Kyoto University, Japan.
    Wankerl, Thomas
    Kyoto University, Japan.
    Kiselev, Andrey
    Kyoto University, Japan.
    Huang, Hung-Hsuan
    Kyoto University, Japan.
    Schlichter, Johann
    Technische Universität München, Germany.
    Abdikeev, Niyaz
    Plekhanov University, Moscow.
    Nishida, Toyoaki
    Kyoto University, Japan.
    Incorporating intentional and emotional behaviors into a Virtual Human for Better Customer-Engineer-Interaction (2009). Conference paper (Refereed)
    Abstract [en]

    Providing customer support for technical products is an essential effort for enterprises that want to satisfy customers' needs and challenge rivals in business. This paper introduces a virtual human framework for better customer-engineer interaction. We put emphasis on a preferably natural conversation, achieved by continuously analyzing the behaviors and emotions of the human user, inferring his or her intentions, and diversifying active and passive intentional behaviors. The underlying architecture is an extension of the generic embodied conversational agent framework, which was developed to ease the integration of heterogeneous components into an embodied conversational agent system. These extensions are mainly influenced by SAIBA's architecture for a multimodal behavior generation framework. Although the system has only been accomplished to about 50%, partial results show that our approach has the potential to create a more natural conversational situation.

  • 10.
    Kiselev, Andrey
    et al.
    Kyoto University, Kyoto, Japan.
    Abdikeev, Niyaz
    Plekhanov Russian Academy of Economics, Moscow, Russia .
    Nishida, Toyoaki
    Kyoto University, Kyoto, Japan.
    Evaluating Humans’ Implicit Attitudes towards an Embodied Conversational Agent (2011). In: Advances in Neural Networks – ISNN 2011: 8th International Symposium on Neural Networks, ISNN 2011, Guilin, China, May 29–June 1, 2011, Proceedings, Part I, Springer Berlin/Heidelberg, 2011, pp. -9. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of evaluating embodied conversational agents in terms of their communicative performance. We show our attempt to evaluate humans’ implicit attitudes towards different kinds of information presented by embodied conversational agents using the Implicit Association Test (IAT), rather than gathering explicit data using interviewing methods. We conducted an experiment in which we use the method of indirect measurements with the IAT. The conventional procedure and scoring algorithm of the IAT were used in order to discover possible issues and solutions for future experiments. We discuss key differences between the conventional usage of the IAT and using the IAT in our experiment for evaluating embodied conversational agents using unfamiliar information as test data.
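The "conventional scoring algorithm" of the IAT is, in its usual form, a latency-based D score. A simplified sketch with invented reaction times (the full algorithm also handles error trials and scores practice and test blocks separately, which is omitted here):

```python
from statistics import mean, pstdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score: the latency difference between the
    incompatible and compatible blocks, divided by the pooled
    standard deviation of all retained trials. Trials slower than
    10 s are discarded, as in the improved scoring algorithm."""
    comp = [t for t in compatible_ms if t < 10_000]
    incomp = [t for t in incompatible_ms if t < 10_000]
    pooled_sd = pstdev(comp + incomp)
    return (mean(incomp) - mean(comp)) / pooled_sd

# Made-up reaction times in milliseconds; the 12 s outlier is dropped
print(round(iat_d_score([600, 700, 650, 12_000], [900, 1000, 950]), 2))  # → 1.93
```

A positive D indicates slower responses in the incompatible pairing, i.e. an implicit preference for the compatible one.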

  • 11.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik. Kyoto University, Kyoto, Japan.
    Abdikeev, Niyaz
    Plekhanov Russian Academy of Economics, Moscow, Russia; Universität Ulm, Ulm, Germany.
    Nishida, Toyoaki
    Kyoto University, Kyoto, Japan.
    Measuring Implicit Attitudes in Human-Computer Interactions (2011). In: Rough Sets, Fuzzy Sets, Data Mining and Granular Computing / [ed] Kuznetsov, S.O.; Ślęzak, D.; Hepting, D.H.; Mirkin, B.G., Springer, 2011, pp. 350-357. Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing project that attempts to solve the problem of measuring users' satisfaction by utilizing methods of discovering users' implicit attitudes. In the initial stage, the authors attempted to use the Implicit Association Test (IAT) in order to discover users' implicit attitudes towards a virtual character. The conventional IAT procedure and scoring algorithm were used in order to identify possible shortcomings of the original method. Results of the initial experiment are presented in the paper, along with a proposed modification of the method and a preliminary verification experiment.

  • 12.
    Kiselev, Andrey
    et al.
    Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan.
    Hacker, Benjamin Alexander
    Department of Informatics, Munich University of Technology, Munich, Germany.
    Wankerl, Thomas
    Department of Informatics, Munich University of Technology, Munich, Germany.
    Abdikeev, Niyaz
    Plekhanov Russian Academy of Economics, Moscow, Russia.
    Nishida, Toyoaki
    Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan.
    Toward incorporating emotions with rationality into a communicative virtual agent (2011). In: AI & Society: The Journal of Human-Centred Systems and Machine Intelligence, ISSN 0951-5666, E-ISSN 1435-5655, Vol. 26, no. 3, pp. 275-289. Journal article (Refereed)
    Abstract [en]

    This paper addresses the problem of human–computer interactions when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed. A model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of processing of information. The SAIBA framework is extended, and a model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented. The first one uses an implicit implementation of emotions. The second one has an explicit realization of a three-layered model of emotions, which is highly interconnected with other components of the system. Details of the model with implicit implementation of emotional behavior are shown as well as evaluation methodology and results. Discussions about the extended model of an agent are given in the final part of the paper.

  • 13.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Combining Semi-autonomous Navigation with Manned Behaviour in a Cooperative Driving System for Mobile Robotic Telepresence (2015). In: Computer Vision - ECCV 2014 Workshops, Part IV, Berlin: Springer Berlin/Heidelberg, 2015, Vol. 8928, pp. 17-28. Conference paper (Refereed)
    Abstract [en]

    This paper presents an image-based cooperative driving system for a telepresence robot, which allows safe operation in indoor environments and is meant to minimize the burden on novice users operating the robot. The paper focuses on one emerging type of telepresence robot, namely, mobile remote presence systems for social interaction. Such systems bring new opportunities for applications in healthcare and elderly care by allowing caregivers to communicate with patients and the elderly from remote locations. However, using such systems can be a difficult task, particularly for caregivers without proper training. The paper presents a first implementation of a vision-based cooperative driving enhancement to a telepresence robot. A preliminary evaluation in a laboratory environment is presented.

  • 14.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, pp. 214-215. Conference paper (Refereed)
    Abstract [en]

    One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.

  • 15.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Melendez, Francisco
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Galindo, Cipriano
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Gonzalez-Jimenez, Javier
    System Engineering and Automation Department, University of Malaga, Malaga, Spain.
    Coradeschi, Silvia
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Evaluation of using semi-autonomy features in mobile robotic telepresence systems (2015). In: Proceedings of the 2015 7th IEEE International Conference on Cybernetics and Intelligent Systems, CIS 2015 and Robotics, Automation and Mechatronics, RAM 2015, New York, USA: IEEE conference proceedings, 2015, pp. 147-152. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be put on users if they are unfamiliar with using robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization, navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper, we describe the implementation of such autonomous features, along with a user evaluation study. The evaluation scenario focuses on novice users' first experience with the system; importantly, the scenario assumed that participants had as little prior information about the system as possible. Four different use-cases were identified from the analysis of user behaviour.

  • 16.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Using a mental workload index as a measure of usability of a user interface for social robotic telepresence (2012). In: Workshop in Social Robotics Telepresence, 2012. Conference paper (Refereed)
    Abstract [en]

    This position paper reports on the use of mental workload analysis to measure the usability of a remote user's interface in the context of social robotic telepresence. The paper discusses the importance of remote/pilot user interfaces for successful interaction and presents a study in which a set of tools for evaluation is proposed. A preliminary experimental analysis of a specific telepresence robot, called the Giraff, is provided.

  • 17.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Sivakumar, Prasanna Kumar
    SASTRA University, Thanjavur, India.
    Swaminathan, Chittaranjan Srinivas
    SASTRA University, Thanjavur, India.
    Robot-human hand-overs in non-anthropomorphic robots (2013). In: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, HRI'13 / [ed] Hideaki Kuzuoka, Vanessa Evers, Michita Imai, Jodi Forlizzi, IEEE Press, 2013, pp. 227-228. Conference paper (Refereed)
    Abstract [en]

    Robots that assist and interact with humans will inevitably need to hand over objects successfully. Whether it is to deliver desired objects to the elderly living in their homes or to hand tools to a worker in a factory, the process of robot hand-overs is one worthy of study within the human-robot interaction community. While object hand-overs have been studied in previous works, these works have mainly considered anthropomorphic robots, that is, robots that appear and move similarly to humans. However, recent trends within robotics, and in particular domestic robotics, have witnessed an increase in non-anthropomorphic robotic platforms such as moving tables, teleconferencing robots, and vacuum cleaners. The study of robot hand-overs for non-anthropomorphic robots, and in particular of what constitutes a successful hand-over, is the focus of this paper. For the purpose of investigation, the TurtleBot, a moving-table-like device, is used in a home environment.

  • 18.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mosiello, Giovanni
    Örebro universitet, Institutionen för naturvetenskap och teknik. Roma Tre University, Rome, Italy.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 104. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video, we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, allowing human-robot cooperation for safe driving. The video also shows a first implementation of cooperative driving based on extracting a safe drivable area in real time from the image stream received from the robot.

  • 19.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Potenza, Andre
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bruno, Barbara
    DIBRIS, University Genova, Genova, Italy.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Towards Seamless Autonomous Function Handovers in Human-Robot Teams (2017). Conference paper (Refereed)
    Abstract [en]

    Various human-robot collaboration scenarios may impose different requirements on the robot's autonomy, ranging from fully autonomous to fully manual operation. The paradigm of sliding autonomy has been introduced to allow adapting a robot's autonomy in real time, thus improving the flexibility of a human-robot team. In sliding autonomy, functions can be handed over between the human and the robot to address environment changes and to optimize performance and workload. This paper examines the process of handing over functions between humans by looking at a particular experiment scenario in which the same function has to be handed over multiple times during the experiment session. We hypothesize that the process of function handover is similar to the already well-studied human-robot handovers of physical objects. In the experiment, we attempt to discover natural similarities and differences between these two types of handovers and suggest further directions of work that are necessary to give the robot the ability to perform the function handover autonomously, without explicit instruction from the human counterpart.

  • 20.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Scherlund, Mårten
    Giraff Technologies AB, Västerås, Sweden.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Efremova, Natalia
    Plekhanov University, Moscow, Russia.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Auditory immersion with stereo sound in a mobile robotic telepresence system (2015). In: 10th ACM/IEEE International Conference on Human-Robot Interaction, 2015, Association for Computing Machinery (ACM), 2015. Conference paper (Refereed)
    Abstract [en]

    Auditory immersion plays a significant role in generating a good feeling of presence for users driving a telepresence robot. In this paper, one of the key characteristics of auditory immersion - sound source localization (SSL) - is studied from the perspective of those who operate telepresence robots from remote locations. A prototype capable of delivering a soundscape to the user through the Interaural Time Difference (ITD) and Interaural Level Difference (ILD), using the ORTF stereo recording technique, was developed. The prototype was evaluated in an experiment, and the results suggest that the developed method is sufficient for sound source localization tasks.
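The ITD cue mentioned above can be illustrated by estimating the inter-channel lag with a cross-correlation search. This is a didactic sketch with a synthetic click signal, not the ORTF recording pipeline evaluated in the paper:

```python
def estimate_itd(left, right, sr, max_lag=8):
    """Estimate the interaural time difference in seconds as the
    lag (in samples) maximizing the cross-correlation between the
    two channels. A positive result means the sound reached the
    left channel first."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # correlate right[i] against left[i - lag]
        score = sum(right[i] * left[i - lag]
                    for i in range(max(0, lag), min(n, n + lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / sr

# A click arriving in the left channel 4 samples before the right
left = [0] * 10 + [1, 1] + [0] * 20
right = [0] * 14 + [1, 1] + [0] * 16
print(estimate_itd(left, right, sr=8000))  # → 0.0005 (a 0.5 ms lead)
```

Listeners resolve ITDs of a few hundred microseconds, which is why reproducing this cue faithfully matters for localization through a telepresence link.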

  • 21.
    Krishna, Sai
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Repsilber, Dirk
    Örebro universitet, Institutionen för medicinska vetenskaper.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera (2019). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no. 14, article id E3142. Journal article (Refereed)
    Abstract [en]

    Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and to collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of the sensor's technology. In the case of estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene are facing the camera, do not occlude each other, and are fairly visible, so that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which allows partial occlusions, cluttered background, and relatively low resolution. The method estimates distance with respect to the camera based on the Euclidean distance between the ear and torso of people in the image plane. The ear and torso characteristic points have been selected based on their relatively high visibility regardless of a person's orientation and a certain degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
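Under a pinhole-camera assumption, an ear-torso span in pixels maps to metric range roughly by inverse proportionality. The sketch below uses a hypothetical calibration constant `k` and made-up keypoint coordinates; the paper's actual mapping from pixel distance to range is not specified in the abstract:

```python
import math

def ear_torso_pixels(ear, torso):
    """Euclidean distance in the image plane between the ear and
    torso keypoints, each given as (x, y) in pixels."""
    return math.dist(ear, torso)

def estimate_distance(ear, torso, k):
    """Range from the camera, assuming the apparent ear-torso span
    shrinks inversely with distance (pinhole model). k is a
    camera-specific constant calibrated from one known distance."""
    return k / ear_torso_pixels(ear, torso)

# Hypothetical calibration: a 100 px ear-torso span at 2 m -> k = 200 px*m
k = 2.0 * 100.0
print(estimate_distance((320, 90), (320, 190), k))  # → 2.0
```

The appeal of these two keypoints, as the abstract notes, is that they stay visible across body orientations, so the pixel span is available even when the face is not.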

  • 22.
    Krishna, Sai
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Towards a Method to Detect F-formations in Real-Time to Enable Social Robots to Join Groups (2017). In: Towards a Method to Detect F-formations in Real-Time to Enable Social Robots to Join Groups, Umeå, Sweden: Umeå University, 2017. Conference paper (Refereed)
    Abstract [en]

    In this paper, we extend an algorithm to detect constraint-based F-formations for a telepresence robot and also consider the situation when the robot is in motion. The proposed algorithm is computationally inexpensive, uses egocentric (first-person) vision, requires little memory, works with low-quality vision settings, runs in real time, and is explicitly designed for a mobile robot. The proposed approach is a first step towards automatically detecting F-formations for the robotics community.

  • 23.
    Krishna, Sai
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Mälardalen University, Västerås, Sweden.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Estimating Optimal Placement for a Robot in Social Group Interaction2019Inngår i: IEEE International Workshop on Robot and Human Communication (ROMAN), IEEE, 2019Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present a model that proposes an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to F-formation theory. The method automatically calculates a suitable placement for the robot. An evaluation of the method has been performed by conducting an experiment where participants stand in different formations and a robot is teleoperated to join the group. In one condition, the operator positions the robot at the location given by our algorithm. In another condition, operators have the freedom to position the robot according to their personal choice. Follow-up questionnaires were used to determine which of the placements were preferred by the participants. The results indicate that the proposed method for automatic placement of the robot is supported by the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as the results from user experiments that verify the quality of this method. These results suggest that teleoperated robots, such as mobile robotic telepresence systems, could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
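The placement idea above can be sketched as follows. This is an illustrative approximation, not the paper's algorithm: the O-space centre is estimated by projecting each participant forward along their heading (`stride` is an assumed fixed transactional-segment length) and averaging, and the robot is placed on the resulting circle in the widest angular gap between members:

```python
import math

def o_space_center(people, stride=1.0):
    """Estimate the O-space centre shared by an F-formation.

    Each person is (x, y, heading_rad).  Every participant's
    transactional segment points toward the O-space, so a simple
    estimate (an assumption, not the paper's exact model) projects
    each person forward by a fixed stride and averages the points.
    """
    xs = [x + stride * math.cos(h) for x, _, h in people]
    ys = [y + stride * math.sin(h) for _, y, h in people]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def robot_placement(people, stride=1.0):
    """Place the robot on the group's circle, in the widest angular
    gap between members, so it joins without displacing anyone."""
    cx, cy = o_space_center(people, stride)
    angles = sorted(math.atan2(y - cy, x - cx) for x, y, _ in people)
    gaps = []
    for i, a in enumerate(angles):
        width = (angles[(i + 1) % len(angles)] - a) % (2 * math.pi)
        gaps.append((width, a + width / 2))
    _, mid = max(gaps)  # bisector of the widest gap
    radius = sum(math.hypot(x - cx, y - cy) for x, y, _ in people) / len(people)
    return cx + radius * math.cos(mid), cy + radius * math.sin(mid)

# Two people facing each other 2 m apart: the robot is placed to the
# side, completing a roughly triangular arrangement.
print(robot_placement([(0.0, 0.0, 0.0), (2.0, 0.0, math.pi)]))
```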

  • 24.
    Krishna, Sai
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems2019Inngår i: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 3, nr 4, artikkel-id 69Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    F-formations are a set of possible patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment in which teleoperators are requested to navigate to different groups of virtual agents. The simulated environment represents a conference lobby scenario where multiple groups of virtual agents with varying group sizes are placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators are allowed to evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The two important outcomes of this study are, firstly, that teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system and, secondly, that teleoperators prefer additional support in order to correctly navigate the robot into a preferred position that adheres to F-formations.

  • 25.
    Längkvist, Martin
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Alirezaie, Marjan
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Interactive Learning with Convolutional Neural Networks for Image Labeling2016Inngår i: International Joint Conference on Artificial Intelligence (IJCAI), 2016Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Recently, deep learning models such as Convolutional Neural Networks have been shown to give good performance for various computer vision tasks. A prerequisite for such models is access to large amounts of labeled data, since the most successful ones are trained with supervised learning. The process of labeling data is expensive, time-consuming, tedious, and sometimes subjective, which can result in falsely labeled data and has a negative effect on both training and validation. In this work, we propose a human-in-the-loop intelligent system that allows the agent and the human to collaborate to simultaneously solve the problem of labeling data and perform scene labeling of an unlabeled image data set with minimal guidance from a human teacher. We evaluate the proposed interactive learning system by comparing the labeled data set from the system to the human-provided labels. The results show that the learning system is capable of almost completely labeling an entire image data set starting from a few labeled examples provided by the human teacher.
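The human-in-the-loop idea above can be illustrated with a toy classifier. This sketch substitutes a nearest-centroid model for the paper's CNN and uses a simple distance-margin uncertainty test; `threshold` and the `oracle` callable (standing in for the human teacher) are assumptions for illustration:

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dim))

def interactive_labeling(seeds, unlabeled, oracle, threshold=2.0):
    """Human-in-the-loop labeling loop (toy version of the paper's idea).

    seeds: {label: [feature vectors]} provided by the human teacher.
    unlabeled: feature vectors still to be labeled.
    oracle: stands in for the human, queried only when the margin
    between the two nearest class centroids falls below `threshold`.
    """
    labels, queried = {}, 0
    for x in unlabeled:
        ranked = sorted((math.dist(x, centroid(pts)), lab)
                        for lab, pts in seeds.items())
        (d_best, lab_best), (d_second, _) = ranked[0], ranked[1]
        if d_second - d_best < threshold:  # uncertain: ask the human
            lab_best = oracle(x)
            queried += 1
        seeds[lab_best].append(x)          # grow the training set
        labels[tuple(x)] = lab_best
    return labels, queried

seeds = {"road": [(0.0, 0.0)], "water": [(10.0, 10.0)]}
labels, queried = interactive_labeling(
    seeds, [(1.0, 1.0), (9.0, 9.0), (5.0, 5.0)], oracle=lambda x: "road")
print(labels, queried)  # only the ambiguous midpoint needs the human
```

The point mirrored from the abstract: most samples are self-labeled from a few seeds, and the human is consulted only where the model is uncertain.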

  • 26.
    Längkvist, Martin
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Alirezaie, Marjan
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks2016Inngår i: Remote Sensing, ISSN 2072-4292, E-ISSN 2072-4292, Vol. 8, nr 4, artikkel-id 329Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for interesting new applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on land areas with a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition tasks for remote sensing data.
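Per-pixel classification of the kind described above feeds each pixel's local neighbourhood, stacked across all input channels, to the CNN. A minimal sketch of the input construction, where the window size and the band-plus-DSM channel ordering are illustrative assumptions rather than the paper's exact architecture:

```python
def extract_patch(bands, dsm, row, col, size=5):
    """Build the per-pixel CNN input: a size x size neighbourhood of
    every spectral band, with the DSM stacked as an extra channel.

    `bands` is a list of 2-D grids, `dsm` a single 2-D grid; border
    pixels are clamped to the image edge.
    """
    h, w = len(dsm), len(dsm[0])
    half = size // 2
    patch = []
    for channel in bands + [dsm]:
        window = []
        for dr in range(-half, half + 1):
            r = min(max(row + dr, 0), h - 1)
            window.append([channel[r][min(max(col + dc, 0), w - 1)]
                           for dc in range(-half, half + 1)])
        patch.append(window)
    return patch  # shape: (n_bands + 1, size, size)

# Two constant spectral bands plus a DSM on a 4 x 4 tile.
bands = [[[1.0] * 4 for _ in range(4)], [[2.0] * 4 for _ in range(4)]]
dsm = [[3.0] * 4 for _ in range(4)]
patch = extract_patch(bands, dsm, 0, 0, size=3)
print(len(patch), len(patch[0]), len(patch[0][0]))  # 3 3 3
```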

  • 27.
    Mosiello, Giovanni
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik. Universitá degli Studi Roma Tre, Rome, Italy.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Using augmented reality to improve usability of the user interface for driving a telepresence robot2013Inngår i: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 4, nr 3, s. 174-181Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Mobile Robotic Telepresence (MRP) helps people to communicate in natural ways despite being physically located in different parts of the world. The user interfaces of such systems are as critical as the design and functionality of the robot itself for creating conditions for natural interaction. This article presents an exploratory study analysing different robot teleoperation interfaces. The goals of this paper are to investigate the possible effect of using augmented reality as the means to drive a robot, to identify key factors of the user interface in order to improve the user experience through a driving interface, and to minimize interface familiarization time for non-experienced users. The study involved 23 participants whose robot driving attempts via different user interfaces were analysed. The results show that a user interface incorporating augmented reality resulted in a better driving experience.

  • 28.
    Orlandini, Andrea
    et al.
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Almquist, Lena
    Örebro City Council, Örebro, Sweden.
    Björkman, Patrik
    Giraff Technologies, Västerås, Sweden.
    Cesta, Amedeo
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    Cortellessa, Gabriella
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    Galindo, Cipriano
    University of Malaga, Malaga, Spain.
    Gonzalez-Jimenez, Javier
    University of Malaga, Malaga, Spain.
    Gustafsson, Kalle
    Giraff Technologies, Västerås, Sweden.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Melendez, Francisco
    University of Malaga, Malaga, Spain.
    Nilsson, Malin
    Örebro City Council, Örebro, Sweden.
    Odens Hedman, Lasse
    Giraff Technologies, Västerås, Sweden.
    Odontidou, Eleni
    Giraff Technologies, Västerås, Sweden.
    Ruiz-Sarmiento, Jose-Raul
    University of Malaga, Malaga, Spain.
    Scherlund, Mårten
    Giraff Technologies, Västerås, Sweden.
    Tiberio, Lorenza
    Institute of Cognitive Sciences and Technologies, Consiglio Nazionale delle Ricerche, Rome, Italy.
    von Rump, Stephen
    Giraff Technologies, Västerås, Sweden.
    Coradeschi, Silvia
    Örebro University, Örebro, Sweden.
    ExCITE Project: A Review of Forty-Two Months of Robotic Telepresence Technology2016Inngår i: Presence - Teleoperators and Virtual Environments, ISSN 1054-7460, E-ISSN 1531-3263, Vol. 25, nr 3, s. 204-221Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This article reports on the EU project ExCITE, with specific focus on the technical development of the telepresence platform over a period of 42 months. The aim of the project was to assess the robustness and validity of the mobile robotic telepresence (MRP) system Giraff as a means to support elderly people and to foster their social interaction and participation. Embracing the idea of user-centered product refinement, the robot was tested over long periods of time in real homes. As such, the system development was driven by a strong involvement of elderly people and their caregivers, but also by the technical challenges associated with deploying the robot in real-world contexts. The result of the 42-month-long evaluation is a system suitable for use in homes rather than a generic system suitable, for example, for office environments.

  • 29.
    Pathi, Sai Krishna
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Estimating F-Formations for Mobile Robotic Telepresence2017Inngår i: Estimating F-Formations for Mobile Robotic Telepresence, Vienna, Austria: ACM Digital Library, 2017, s. 255-256, artikkel-id 1127Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present a method for the automatic detection of F-formations for mobile robotic telepresence (MRP). The method consists of two phases: a) estimating face orientations in video frames and b) estimating the F-formation based on the detected faces. The method works in real time and is tailored for the lower-resolution images that are typically collected from MRP units.

  • 30.
    Potenza, Andre
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Towards Sliding Autonomy in Mobile Robotic Telepresence: A Position Paper2017Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Sliding autonomy is used in teleoperation to adjust a robot's level of local autonomy to match the user's needs. We claim that sliding autonomy can also improve mobile robotic telepresence, but we argue that existing approaches cannot be adopted in this domain without adequate modifications. We address in particular the question of how the need for autonomy, and its appropriate degree, can be inferred from measurable information.
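The kind of inference the position paper calls for can be illustrated with a toy heuristic. The signals (latency, packet loss, obstacle clearance) and weights below are assumptions chosen for illustration; the paper argues that such a mapping is needed but does not prescribe one:

```python
def autonomy_level(latency_ms, lost_packets, min_clearance_m):
    """Map measurable signals of operator difficulty to a degree of
    local autonomy in [0, 1] (0 = fully manual, 1 = fully autonomous).
    """
    score = 0.0
    score += min(latency_ms / 500.0, 1.0) * 0.4             # sluggish video feedback
    score += min(lost_packets / 10.0, 1.0) * 0.3            # unreliable link
    score += (1.0 - min(min_clearance_m / 1.0, 1.0)) * 0.3  # cluttered space
    return score

# Good link and open space favour manual driving; a degraded link in a
# tight corridor hands more control to the robot.
print(autonomy_level(latency_ms=20, lost_packets=0, min_clearance_m=2.0))
print(autonomy_level(latency_ms=500, lost_packets=10, min_clearance_m=0.1))
```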

  • 31.
    Stoyanov, Todor
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Krug, Robert
    Robotics, Learning and Perception lab, Royal Institute of Technology, Stockholm, Sweden.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Sun, Da
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control2018Inngår i: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, s. 6640-6645Konferansepaper (Fagfellevurdert)
    Abstract [en]

    This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint limit and obstacle avoidance or automatic gripper alignment. Thereby, some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for the unobtrusive integration of autonomy in a telemanipulation system; the second is a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed approach of assistive control was found to result in higher grasp success rates and shorter trajectories than achieved through manual control, without incurring additional cognitive load on the operator.
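The hierarchical resolution at the core of a stack-of-tasks controller can be sketched with a two-joint toy example. The Jacobian pseudoinverses and nullspace projector below are hand-computed illustrative values; the paper's SoT framework handles full whole-body control:

```python
def mat_vec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def sot_velocity(J1_pinv, N1, J2_pinv, dx1, dx2):
    """Two-level stack-of-tasks resolution:

        qdot = J1+ . dx1  +  N1 . (J2+ . dx2)

    The high-priority task (e.g. the operator's command) is tracked
    exactly; the assistive task acts only in the nullspace N1 of the
    primary Jacobian, so it can never disturb the operator's motion.
    """
    primary = mat_vec(J1_pinv, dx1)
    secondary = mat_vec(N1, mat_vec(J2_pinv, dx2))
    return [p + s for p, s in zip(primary, secondary)]

# Toy 2-joint arm: the primary task fully claims joint 1, so the
# assistive task is projected onto joint 2 alone.
J1_pinv = [[1.0], [0.0]]           # pseudoinverse of J1 = [1 0]
N1 = [[0.0, 0.0], [0.0, 1.0]]      # I - J1+ J1
J2_pinv = [[0.5], [0.5]]           # pseudoinverse of J2 = [1 1]
qdot = sot_velocity(J1_pinv, N1, J2_pinv, [2.0], [1.0])
print(qdot)  # -> [2.0, 0.5]
```

This is what makes the integration "unobtrusive": the assistive component is confined to motions the primary task does not use.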

  • 32.
    Sun, Da
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Liao, Qianfang
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement2019Inngår i: Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Virtual Reality (VR) is regarded as a useful tool for teleoperation systems, providing operators with immersive visual feedback on the robot and the environment. However, without any haptic feedback or physical constraints, VR-based teleoperation systems normally have poor maneuverability and may cause operational faults in some fine movements. In this paper, we employ Mixed Reality (MR), which combines real and virtual worlds, to develop a novel teleoperation system. A new system design and new control algorithms are proposed. For the system design, an MR interface is developed based on a virtual environment augmented with real-time data from the task space, with the goal of enhancing the operator's visual perception. To allow the operator to be freely decoupled from the control loop and to offload the operator's burden, a new interaction proxy is proposed to control the robot. For the control algorithms, two control modes are introduced to improve the long-distance movements and fine movements of MR-based teleoperation. In addition, a set of fuzzy-logic-based methods is proposed to regulate the position, velocity and force of the robot in order to enhance the system's maneuverability and deal with potential operational faults. Barrier Lyapunov Function (BLF) and back-stepping methods are leveraged to design the control laws and simultaneously guarantee system stability under state constraints. Experiments conducted using a 6-Degree-of-Freedom (DoF) robotic arm prove the feasibility of the system.

  • 33.
    Sun, Da
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Liao, Qianfang
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception2019Inngår i: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, s. 358-373Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This paper first develops a novel force observer using Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Then, using the proposed force observer, a new bilateral teleoperation system is proposed that allows the slave industrial robot to be more compliant to the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms that rely heavily on knowing exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy with the support of machine vision is proposed to enhance the adaptability of the slave robot and the perception of the operator in various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a universal robot (UR10), a haptic device and an RGB-D sensor.
