Publications (10 of 145)
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement.
A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement
2019 (English). Article in journal (Refereed). Accepted.
Abstract [en]

Virtual Reality (VR) is regarded as a useful tool for teleoperation systems, providing operators with immersive visual feedback on the robot and the environment. However, without haptic feedback or physical constraints, VR-based teleoperation systems normally have poor maneuverability and may cause operational faults in fine movements. In this paper, we employ Mixed Reality (MR), which combines real and virtual worlds, to develop a novel teleoperation system, proposing both a new system design and new control algorithms. For the system design, an MR interface is developed based on a virtual environment augmented with real-time data from the task space, with the goal of enhancing the operator's visual perception. To allow the operator to be freely decoupled from the control loop and to offload the operator's burden, a new interaction proxy is proposed to control the robot. For the control algorithms, two control modes are introduced to improve the long-distance and fine movements of MR-based teleoperation. In addition, a set of fuzzy-logic-based methods is proposed to regulate the position, velocity and force of the robot, in order to enhance the system's maneuverability and handle potential operational faults. Barrier Lyapunov Function (BLF) and back-stepping methods are leveraged to design the control laws and simultaneously guarantee system stability under state constraints. Experiments conducted with a 6-Degree-of-Freedom (DoF) robotic arm demonstrate the feasibility of the system.
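
The abstract does not reproduce the control laws; as a rough illustration of the constraint-handling idea (a generic textbook form, not necessarily the paper's exact choice), a log-type Barrier Lyapunov Function for keeping a tracking error e within a bound k_b is

    V(e) = \frac{1}{2}\ln\frac{k_b^{2}}{k_b^{2}-e^{2}}, \qquad |e| < k_b,

which grows unboundedly as |e| approaches k_b; designing the back-stepping control law so that \dot{V} \le 0 keeps V finite and hence the error strictly inside the constraint.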

National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-77829 (URN)
Note

This paper was accepted on 10/11/2019 and will be available online one month later.

Available from: 2019-11-11 Created: 2019-11-11 Last updated: 2019-11-11
Krishna, S., Kiselev, A., Kristoffersson, A., Repsilber, D. & Loutfi, A. (2019). A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors, 19(14), Article ID E3142.
A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed). Published.
Abstract [en]

Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and to collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation method rises with the simplicity of the sensor technology. When estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. The method estimates distance with respect to the camera from the Euclidean distance between the ear and torso of a person in the image plane. The ear and torso characteristic points were selected for their relatively high visibility regardless of a person's orientation and a certain degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
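
The abstract does not give the mapping from pixel distance to metric distance; the sketch below illustrates only the core geometric idea under an assumed inverse-proportional calibration model (the constant k and the function names are hypothetical, not from the paper):

    import math

    def pixel_distance(ear_xy, torso_xy):
        """Euclidean ear-torso distance in the image plane."""
        return math.hypot(ear_xy[0] - torso_xy[0], ear_xy[1] - torso_xy[1])

    def estimate_distance_m(ear_xy, torso_xy, k=1500.0):
        """Toy camera-to-person distance estimate.

        Assumes the apparent ear-torso extent shrinks roughly in inverse
        proportion to distance; k is a hypothetical camera-specific constant
        that would be fitted on images with known ground-truth distances.
        """
        p = pixel_distance(ear_xy, torso_xy)
        if p <= 0:
            raise ValueError("degenerate keypoints")
        return k / p

    # Keypoints in (x, y) pixel coordinates, e.g. from a 2D pose estimator:
    print(estimate_distance_m((320, 180), (330, 420)))  # about 6.2 m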

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Human–Robot Interaction, distance estimation, single RGB image, social interaction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75583 (URN)
10.3390/s19143142 (DOI)
000479160300109 (ISI)
31319523 (PubMedID)
2-s2.0-85070083052 (Scopus ID)
Note

Funding Agency:

Örebro University

Available from: 2019-08-16 Created: 2019-08-16 Last updated: 2019-11-15. Bibliographically approved.
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373
Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception
2019 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed). Published.
Abstract [en]

This paper first develops a novel force observer that uses Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Using the proposed force observer, a new bilateral teleoperation system is then proposed that allows the slave industrial robot to be more compliant to the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms, which rely heavily on exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Using the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy supported by machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios, based on the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a Universal Robots UR10 manipulator, a haptic device and an RGB-D sensor.
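
The abstract does not state the estimator's objective; as a generic illustration (a standard MHE form, not the paper's exact T2FNN-based formulation), a moving horizon estimator over the last N samples solves

    \min_{\hat{x}_{k-N},\,\hat{w}_{k-N},\dots,\hat{w}_{k-1}}
      \Gamma(\hat{x}_{k-N})
      + \sum_{i=k-N}^{k-1} \left( \|y_i - h(\hat{x}_i)\|_{R^{-1}}^{2}
      + \|\hat{w}_i\|_{Q^{-1}}^{2} \right)
    \quad \text{s.t.} \quad \hat{x}_{i+1} = f(\hat{x}_i, u_i) + \hat{w}_i,

where the arrival cost \Gamma summarizes data older than the window; in the paper's setting the model f is approximated by the T2FNN, and the estimated disturbance terms carry the external force/torque information.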

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN)
10.1016/j.automatica.2019.04.033 (DOI)
000473380000041 (ISI)
2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23 Created: 2019-05-23 Last updated: 2019-11-13. Bibliographically approved.
Akalin, N., Kristoffersson, A. & Loutfi, A. (2019). Evaluating the Sense of Safety and Security in Human-Robot Interaction with Older People. In: Oliver Korn (Ed.), Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction (pp. 237-264). Springer
Evaluating the Sense of Safety and Security in Human-Robot Interaction with Older People
2019 (English). In: Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction / [ed] Oliver Korn, Springer, 2019, p. 237-264. Chapter in book (Refereed).
Abstract [en]

For many applications where interaction between robots and older people takes place, safety and security are key dimensions to consider. ‘Safety’ refers to a perceived threat of physical harm, whereas ‘security’ is a broad term covering many aspects of health, well-being, and aging. This chapter presents a quantitative tool for evaluating the sense of safety and security around robots in elder care. Based on a review of the literature on measuring safety and security in human-robot interaction, we propose new evaluation tools specifically tailored to assessing interaction between robots and older people.

Place, publisher, year, edition, pages
Springer, 2019
Series
Human-Computer Interaction Series, ISSN 1571-5035, E-ISSN 2524-4477
Keywords
Sense of safety and security, Quantitative evaluation tool, Social robots, Elder care
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78493 (URN)
10.1007/978-3-030-17107-0_12 (DOI)
978-3-030-17106-3 (ISBN)
978-3-030-17107-0 (ISBN)
Available from: 2019-12-08 Created: 2019-12-08 Last updated: 2019-12-11. Bibliographically approved.
Alirezaie, M., Längkvist, M., Sioutis, M. & Loutfi, A. (2019). Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation. Semantic Web, 10(5), 863-880
Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation
2019 (English). In: Semantic Web, ISSN 1570-0844, E-ISSN 2210-4968, Vol. 10, no 5, p. 863-880. Article in journal (Refereed). Published.
Abstract [en]

Understanding why machine learning algorithms may fail is usually the task of a human expert, who uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee that is able to extract qualitative features of the errors emerging from deep machine learning frameworks and to suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. The proposed interaction between a neural network classifier and a semantic referee is shown to improve the performance of semantic segmentation on satellite imagery.
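
The abstract leaves the referee's rules abstract; the toy sketch below conveys the flavor of characterizing an error region by its spatial relations and suggesting a correction (the region encoding, the rule, and all names are illustrative stand-ins for the paper's ontological reasoner):

    # Toy semantic referee: suggest relabeling dark 'water' regions that are
    # adjacent to a building as 'shadow'. Purely illustrative rule.
    def referee(regions):
        suggestions = []
        for r in regions:
            if (r["label"] == "water"
                    and "building" in r["adjacent"]
                    and r["intensity"] < 0.2):
                suggestions.append((r["id"], "shadow"))
        return suggestions

    regions = [
        {"id": 7, "label": "water", "intensity": 0.1, "adjacent": {"building"}},
        {"id": 9, "label": "water", "intensity": 0.6, "adjacent": {"grass"}},
    ]
    print(referee(regions))  # [(7, 'shadow')]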

Place, publisher, year, edition, pages
IOS Press, 2019
Keywords
Deep Neural Network, Semantic Referee, Ontological and Spatial Reasoning, Semantic Segmentation, OntoCity, Geo Data
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-77266 (URN)
10.3233/SW-190362 (DOI)
000488082100003 (ISI)
Projects
Semantic Robot
Funder
Swedish Research Council
Note

Funding Agency:

Swedish Knowledge Foundation, under the research profile on Semantic Robots (20140033)

Available from: 2019-10-14 Created: 2019-10-14 Last updated: 2019-10-25. Bibliographically approved.
Kristoffersson, A., Ulfvarson, J. & Loutfi, A. (2019). Teknik i hemmet - tekniska förutsättningar (1 ed.). In: Mirjam Ekstedt & Maria Flink (Eds.), Hemsjukvård: olika perspektiv på trygg och säker vård (pp. 396-421). Liber
Teknik i hemmet - tekniska förutsättningar
2019 (Swedish). In: Hemsjukvård: olika perspektiv på trygg och säker vård / [ed] Mirjam Ekstedt & Maria Flink, Liber, 2019, 1, p. 396-421. Chapter in book (Other (popular science, discussion, etc.)).
Abstract [en]

With the fourth industrial revolution (Industry 4.0), a new generation of technology will become available. Robotics and virtual reality are predicted to transform not only workplaces but also to advance other domains, such as smart cities and the possibility of lifestyle and health monitoring at home. The number of available consumer and medical-technology products is increasing rapidly. The healthcare system faces challenges such as the need to develop tools for handling new technology, but also to change work processes and adapt the organization to be able to manage the technology.

This chapter provides an overview of upcoming technologies, suggestions for how technology can be used in home environments, an overview of how such technology has been evaluated and, not least, a reflection on how these technologies can harmonize with current organizational processes.

Place, publisher, year, edition, pages
Liber, 2019. Edition: 1
Keywords
eHealth, Welfare technology, Smart homes
National Category
Gerontology, specialising in Medical and Health Sciences; Other Health Sciences; Computer Sciences
Research subject
Computer Science; Caring sciences
Identifiers
urn:nbn:se:oru:diva-73447 (URN)
978-91-47-11277-7 (ISBN)
Available from: 2019-04-02 Created: 2019-04-02 Last updated: 2019-04-02. Bibliographically approved.
Akalin, N., Kristoffersson, A. & Loutfi, A. (2019). The Influence of Feedback Type in Robot-Assisted Training. Multimodal Technologies and Interaction, 3(4)
The Influence of Feedback Type in Robot-Assisted Training
2019 (English). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no 4. Article in journal (Refereed). Published.
Abstract [en]

Robot-assisted training, where social robots can be used as motivational coaches, provides an interesting application area. This paper examines how feedback given by a robot agent influences the various facets of participant experience in robot-assisted training. Specifically, we investigated the effects of feedback type on robot acceptance, sense of safety and security, attitude towards robots and task performance. In the experiment, 23 older participants performed basic arm exercises with a social robot as a guide and received feedback. Different feedback conditions were administered, such as flattering, positive and negative feedback. Our results suggest that the robot with flattering and positive feedback was appreciated by older people in general, even if the feedback did not necessarily correspond to objective measures such as performance. Participants in these groups felt better about the interaction and the robot.

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute, 2019
Keywords
feedback, acceptance, flattering robot, sense of safety and security, robot-assisted training
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78492 (URN)
10.3390/mti3040067 (DOI)
Funder
EU, Horizon 2020, 721619
Available from: 2019-12-08 Created: 2019-12-08 Last updated: 2019-12-11. Bibliographically approved.
Alirezaie, M., Längkvist, M., Sioutis, M. & Loutfi, A. (2018). A Symbolic Approach for Explaining Errors in Image Classification Tasks. Paper presented at the 27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 13-19, 2018.
A Symbolic Approach for Explaining Errors in Image Classification Tasks
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Machine learning algorithms, despite their increasing success in handling object recognition tasks, still seldom perform without error. Often the process of understanding why an algorithm has failed falls to a human who, using domain knowledge and contextual information, can discover systematic shortcomings in either the data or the algorithm. This paper presents an approach in which the process of reasoning about errors emerging from a machine learning framework is automated using symbolic techniques. By utilizing spatial and geometrical reasoning between objects in a scene, the system is able to describe misclassified regions in relation to their context. The system is demonstrated in the remote sensing domain, where objects and entities are detected in satellite images.

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68000 (URN)
Conference
27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 13-19, 2018
Note

IJCAI Workshop on Learning and Reasoning: Principles & Applications to Everyday Spatial and Temporal Knowledge

Available from: 2018-07-18 Created: 2018-07-18 Last updated: 2018-07-26. Bibliographically approved.
Stoyanov, T., Krug, R., Kiselev, A., Sun, D. & Loutfi, A. (2018). Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 6640-6645). IEEE Press
Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645. Conference paper, Published paper (Refereed).
Abstract [en]

This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint-limit and obstacle avoidance or automatic gripper alignment. Some aspects of the teleoperation problem are thereby delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; the second is a user-study evaluation of the proposed system in the context of teleoperated pick-and-place tasks. The proposed assistive control approach was found to result in higher grasp success rates and shorter trajectories than manual control, without imposing additional cognitive load on the operator.
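
The abstract does not spell out the task-priority resolution; a classic two-level formulation (a standard construction, not necessarily the paper's exact controller) resolves joint velocities so that a lower-priority task is executed only in the null space of a higher-priority one:

    \dot{q} = J_1^{+}\dot{x}_1 + (J_2 N_1)^{+}\left(\dot{x}_2 - J_2 J_1^{+}\dot{x}_1\right),
    \qquad N_1 = I - J_1^{+}J_1,

where J_i are the task Jacobians and J^{+} denotes the Moore-Penrose pseudoinverse; in an assisted-teleoperation setting, safety tasks such as joint-limit avoidance would sit above the operator's command in the hierarchy.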

Place, publisher, year, edition, pages
IEEE Press, 2018
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71310 (URN)
10.1109/IROS.2018.8594457 (DOI)
000458872706014 (ISI)
978-1-5386-8094-0 (ISBN)
978-1-5386-8095-7 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Funder
Knowledge Foundation; Swedish Foundation for Strategic Research
Available from: 2019-01-09 Created: 2019-01-09 Last updated: 2019-03-13. Bibliographically approved.
Längkvist, M., Jendeberg, J., Thunberg, P., Loutfi, A. & Lidén, M. (2018). Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks. Computers in Biology and Medicine, 97, 153-160
Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks
2018 (English). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 97, p. 153-160. Article in journal (Refereed). Published.
Abstract [en]

Computed tomography (CT) is the method of choice for diagnosing ureteral stones (kidney stones that obstruct the ureter). The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin-slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity between stones and non-stone structures, and in efficiently handling large high-resolution CT volumes. We address these challenges with a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans.
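
The abstract does not describe the network; the sketch below illustrates only the 2.5D input idea (three orthogonal patches around a candidate voxel stacked as channels of a small 2D CNN), with illustrative layer sizes and assuming PyTorch:

    import torch
    import torch.nn as nn

    class Stone25D(nn.Module):
        """Toy 2.5D classifier: axial, coronal and sagittal 32x32 patches
        centered on a candidate voxel are stacked as 3 input channels."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, 2)  # stone vs. non-stone

        def forward(self, x):  # x: (batch, 3, 32, 32)
            return self.classifier(self.features(x).flatten(1))

    patches = torch.randn(4, 3, 32, 32)  # 4 candidate locations
    logits = Stone25D()(patches)         # shape (4, 2)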

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Computer aided detection, Ureteral stone, Convolutional neural networks, Computed tomography, Training set selection, False positive reduction
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:oru:diva-67139 (URN)
10.1016/j.compbiomed.2018.04.021 (DOI)
000435623700015 (ISI)
29730498 (PubMedID)
2-s2.0-85046800526 (Scopus ID)
Note

Funding Agencies:

Nyckelfonden OLL-597511

Vinnova under the project "Interactive Deep Learning for 3D image analysis"  

Available from: 2018-06-04 Created: 2018-06-04 Last updated: 2018-08-30. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3122-693X