oru.se Publications
Publications (10 of 142)
Krishna, S., Kiselev, A., Kristoffersson, A., Repsilber, D. & Loutfi, A. (2019). A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors, 19(14), Article ID E3142.
A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed). Published.
Abstract [en]

Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of the sensor technology. When estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene are facing the camera, do not occlude each other, and are sufficiently visible that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. The method estimates distance with respect to the camera based on the Euclidean distance between the ear and torso of people in the image plane. The ear and torso characteristic points have been selected for their relatively high visibility regardless of a person's orientation and for a certain degree of uniformity across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
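
A minimal sketch of the core geometric idea (not the authors' code): under a pinhole-camera assumption, the ear-torso span measured in pixels shrinks roughly as 1/distance, so a single calibrated constant maps the pixel measurement to metres. The keypoint coordinates and the constant k below are illustrative placeholders.

    import numpy as np

    def estimate_distance(ear_px, torso_px, k=950.0):
        """Camera-to-person distance in metres from two 2D pose keypoints.

        ear_px, torso_px: (x, y) image coordinates of the ear and torso
        keypoints returned by a 2D pose estimator.
        k: calibration constant (roughly focal length times an assumed
        metric ear-torso length); 950.0 is a made-up placeholder.
        """
        span = np.linalg.norm(np.asarray(ear_px) - np.asarray(torso_px))
        if span < 1e-6:
            raise ValueError("keypoints coincide; distance undefined")
        return k / span  # pinhole model: distance inversely proportional to span

    # Example: an ear-torso span of ~190 px maps to roughly 5 m.
    print(estimate_distance((312, 140), (305, 330)))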

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Human–Robot Interaction, distance estimation, single RGB image, social interaction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75583 (URN); 10.3390/s19143142 (DOI); 000479160300109 (ISI); 31319523 (PubMedID); 2-s2.0-85070083052 (Scopus ID)
Note

Funding Agency:

Örebro University

Available from: 2019-08-16. Created: 2019-08-16. Last updated: 2019-08-29. Bibliographically approved.
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373
Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception
2019 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed). Published.
Abstract [en]

This paper first develops a novel force observer using a Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Then, using the proposed force observer, a new bilateral teleoperation system is proposed that allows the slave industrial robot to be more compliant with the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms that rely heavily on exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Using the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy with the support of machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a universal robot (UR10), a haptic device and an RGB-D sensor.
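
As a rough illustration of the moving-horizon idea (with a deliberately simplified, known linear 1-DOF model M*a + B*v = tau + f_ext standing in for the T2FNN-learned dynamics of the paper), the external force can be recovered by least squares over a sliding window of recent samples:

    import numpy as np
    from scipy.optimize import least_squares

    M, B = 2.0, 0.5    # assumed inertia and damping of the toy model
    HORIZON = 10       # number of past samples in the estimation window

    def mhe_force(acc, vel, tau):
        """Estimate a (locally constant) external force over the last
        HORIZON samples by minimizing the model residuals."""
        acc, vel, tau = (np.asarray(x)[-HORIZON:] for x in (acc, vel, tau))
        residual = lambda f: M * acc + B * vel - tau - f[0]
        return least_squares(residual, x0=[0.0]).x[0]

    # Synthetic check: samples generated under a true external force of 1.5 N.
    t = np.linspace(0, 1, 50)
    vel, acc = np.sin(t), np.cos(t)
    tau = M * acc + B * vel - 1.5
    print(mhe_force(acc, vel, tau))  # -> 1.5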

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN); 10.1016/j.automatica.2019.04.033 (DOI); 000473380000041 (ISI); 2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23. Created: 2019-05-23. Last updated: 2019-07-24. Bibliographically approved.
Alirezaie, M., Längkvist, M., Sioutis, M. & Loutfi, A. (2019). Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation. Semantic Web, 10(5), 863-880
Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation
2019 (English). In: Semantic Web, ISSN 1570-0844, E-ISSN 2210-4968, Vol. 10, no 5, p. 863-880. Article in journal (Refereed). Published.
Abstract [en]

Understanding why machine learning algorithms may fail is usually the task of a human expert, who uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee, which is able to extract qualitative features of the errors emerging from deep machine learning frameworks and suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. We demonstrate the proposed interaction between a neural network classifier and a semantic referee by improving the performance of semantic segmentation on satellite imagery data.
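
A tiny, self-contained illustration of the kind of qualitative spatial predicate such a referee can reason with (the paper's full ontological machinery, based on OntoCity, is not reproduced here; the regions and labels below are invented):

    def inside(a, b):
        """True if axis-aligned box a = (xmin, ymin, xmax, ymax) lies inside box b."""
        return a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2] and a[3] <= b[3]

    # A region classified as water lying inside a building footprint is
    # physically implausible; the referee flags it and suggests a correction.
    water, building = (4, 4, 6, 6), (2, 2, 8, 8)
    if inside(water, building):
        print("implausible: water inside building -> suggest relabeling")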

Place, publisher, year, edition, pages
IOS Press, 2019
Keywords
Deep Neural Network, Semantic Referee, Ontological and Spatial Reasoning, Semantic Segmentation, OntoCity, Geo Data
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-77266 (URN); 10.3233/SW-190362 (DOI); 000488082100003 (ISI)
Projects
Semantic Robot
Funder
Swedish Research Council
Note

Funding Agency:

Swedish Knowledge Foundation under the research profile on Semantic Robots, 20140033

Available from: 2019-10-14. Created: 2019-10-14. Last updated: 2019-10-18. Bibliographically approved.
Kristoffersson, A., Ulfvarson, J. & Loutfi, A. (2019). Teknik i hemmet - tekniska förutsättningar (1 ed.). In: Mirjam Ekstedt & Maria Flink (Ed.), Hemsjukvård: olika perspektiv på trygg och säker vård (pp. 396-421). Liber
Teknik i hemmet - tekniska förutsättningar
2019 (Swedish). In: Hemsjukvård: olika perspektiv på trygg och säker vård / [ed] Mirjam Ekstedt & Maria Flink, Liber, 2019, 1, p. 396-421. Chapter in book (Other (popular science, discussion, etc.))
Abstract [sv]

With the fourth industrial revolution (Industry 4.0), a new generation of technology will become available. Robotics and virtual reality are predicted to transform not only workplaces but also to develop other domains, such as smart cities and the possibility of lifestyle and health monitoring at home. The number of available consumer and medical-technology products is increasing rapidly. The healthcare system faces challenges such as the need to develop tools for handling new technology, but also the need to change work processes and adapt the organization in order to manage the technology.

This chapter provides an overview of upcoming technologies, suggestions for how technology can be used in home environments, an overview of how such technology has been evaluated and, not least, a reflection on how these technologies can be harmonized with current organizational processes.

Place, publisher, year, edition, pages
Liber, 2019 Edition: 1
Keywords
E-health, Welfare technology, Smart homes
National Category
Gerontology, specialising in Medical and Health Sciences; Other Health Sciences; Computer Sciences
Research subject
Computer Science; Caring sciences
Identifiers
urn:nbn:se:oru:diva-73447 (URN); 978-91-47-11277-7 (ISBN)
Available from: 2019-04-02. Created: 2019-04-02. Last updated: 2019-04-02. Bibliographically approved.
Alirezaie, M., Längkvist, M., Sioutis, M. & Loutfi, A. (2018). A Symbolic Approach for Explaining Errors in Image Classification Tasks. Paper presented at 27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 13-19, 2018.
A Symbolic Approach for Explaining Errors in Image Classification Tasks
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Machine learning algorithms, despite their increasing success in handling object recognition tasks, still seldom perform without error. Often the process of understanding why the algorithm has failed is the task of the human who, using domain knowledge and contextual information, can discover systematic shortcomings in either the data or the algorithm. This paper presents an approach where the process of reasoning about errors emerging from a machine learning framework is automated using symbolic techniques. By utilizing spatial and geometrical reasoning between objects in a scene, the system is able to describe misclassified regions in relation to their context. The system is demonstrated in the remote sensing domain, where objects and entities are detected in satellite images.

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68000 (URN)
Conference
27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 13-19, 2018
Note

IJCAI Workshop on Learning and Reasoning: Principles & Applications to Everyday Spatial and Temporal Knowledge

Available from: 2018-07-18. Created: 2018-07-18. Last updated: 2018-07-26. Bibliographically approved.
Stoyanov, T., Krug, R., Kiselev, A., Sun, D. & Loutfi, A. (2018). Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018 (pp. 6640-6645). IEEE Press
Assisted Telemanipulation: A Stack-Of-Tasks Approach to Remote Manipulator Control
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2018, p. 6640-6645. Conference paper, Published paper (Refereed).
Abstract [en]

This article presents an approach for assisted teleoperation of a robot arm, formulated within a real-time stack-of-tasks (SoT) whole-body motion control framework. The approach leverages the hierarchical nature of the SoT framework to integrate operator commands with assistive tasks, such as joint limit and obstacle avoidance or automatic gripper alignment. Thereby, some aspects of the teleoperation problem are delegated to the controller and carried out autonomously. The key contributions of this work are two-fold: the first is a method for unobtrusive integration of autonomy in a telemanipulation system; the second is a user study evaluating the proposed system in the context of teleoperated pick-and-place tasks. The proposed assistive control approach was found to result in higher grasp success rates and shorter trajectories than manual control, without incurring additional cognitive load on the operator.
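
A minimal numpy sketch of the stack-of-tasks principle at the velocity level (toy Jacobians and targets, not the paper's controller): the highest-priority task is satisfied exactly, and the operator's command is resolved in its null space so it can never violate the assistive task above it.

    import numpy as np

    def sot_velocity(J1, dx1, J2, dx2):
        """Two-level stack of tasks via null-space projection."""
        J1_pinv = np.linalg.pinv(J1)
        N1 = np.eye(J1.shape[1]) - J1_pinv @ J1       # null-space projector of task 1
        dq = J1_pinv @ dx1                            # satisfy task 1 exactly
        dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)  # task 2, lower priority
        return dq

    # Toy 3-joint arm: task 1 pins joint 1 (e.g. a joint-limit task),
    # task 2 carries the operator's command on joints 2 and 3.
    J1 = np.array([[1.0, 0.0, 0.0]])
    J2 = np.array([[0.0, 1.0, 1.0]])
    print(sot_velocity(J1, np.array([0.0]), J2, np.array([0.3])))  # joint 1 stays at 0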

Place, publisher, year, edition, pages
IEEE Press, 2018
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-71310 (URN); 10.1109/IROS.2018.8594457 (DOI); 000458872706014 (ISI); 978-1-5386-8094-0 (ISBN); 978-1-5386-8095-7 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018
Funder
Knowledge Foundation; Swedish Foundation for Strategic Research
Available from: 2019-01-09. Created: 2019-01-09. Last updated: 2019-03-13. Bibliographically approved.
Längkvist, M., Jendeberg, J., Thunberg, P., Loutfi, A. & Lidén, M. (2018). Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks. Computers in Biology and Medicine, 97, 153-160
Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks
2018 (English). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 97, p. 153-160. Article in journal (Refereed). Published.
Abstract [en]

Computed tomography (CT) is the method of choice for diagnosing ureteral stones, i.e. kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity between stones and non-stone structures, and in efficiently dealing with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans.
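
For readers unfamiliar with 2.5D input, here is a minimal PyTorch sketch of the idea (layer sizes are illustrative, not the paper's architecture): the three orthogonal slices through a candidate location are stacked as the channels of a single 2D image, so an ordinary 2D CNN can exploit 3D context at a fraction of the cost of a full 3D network.

    import torch
    import torch.nn as nn

    class CNN25D(nn.Module):
        """Binary stone/non-stone classifier on 2.5D patches."""
        def __init__(self, patch=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classify = nn.Linear(32 * (patch // 4) ** 2, 2)

        def forward(self, x):
            # x: (batch, 3, patch, patch) with axial, coronal and sagittal
            # slices through the candidate voxel stacked as channels.
            return self.classify(self.features(x).flatten(1))

    # One candidate voxel -> three orthogonal 32x32 slices -> two logits.
    logits = CNN25D()(torch.randn(1, 3, 32, 32))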

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Computer aided detection, Ureteral stone, Convolutional neural networks, Computed tomography, Training set selection, False positive reduction
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:oru:diva-67139 (URN); 10.1016/j.compbiomed.2018.04.021 (DOI); 000435623700015 (ISI); 29730498 (PubMedID); 2-s2.0-85046800526 (Scopus ID)
Note

Funding Agencies:

Nyckelfonden, OLL-597511

Vinnova under the project "Interactive Deep Learning for 3D image analysis"  

Available from: 2018-06-04. Created: 2018-06-04. Last updated: 2018-08-30. Bibliographically approved.
Banaee, H., Schaffernicht, E. & Loutfi, A. (2018). Data-Driven Conceptual Spaces: Creating Semantic Representations for Linguistic Descriptions of Numerical Data. The journal of artificial intelligence research, 63, 691-742
Data-Driven Conceptual Spaces: Creating Semantic Representations for Linguistic Descriptions of Numerical Data
2018 (English). In: The journal of artificial intelligence research, ISSN 1076-9757, E-ISSN 1943-5037, Vol. 63, p. 691-742. Article in journal (Refereed). Published.
Abstract [en]

There is an increasing need to derive semantics from real-world observations to facilitate natural information sharing between machine and human. Conceptual spaces theory is a possible approach and has been proposed as a mid-level representation between symbolic and sub-symbolic representations, whereby concepts are represented in a geometrical space that is characterised by a number of quality dimensions. Currently, much of the work has demonstrated how conceptual spaces are created in a knowledge-driven manner, relying on prior knowledge to form concepts and identify quality dimensions. This paper presents a method to create semantic representations using data-driven conceptual spaces, which are then used to derive linguistic descriptions of numerical data. Our contribution is a principled approach to automatically construct a conceptual space from a set of known observations wherein the quality dimensions and domains are not known a priori. The novelty of the approach is the ability to select and group semantic features to discriminate between concepts in a data-driven manner while preserving the semantic interpretation that is needed to infer linguistic descriptions for interaction with humans. Two data sets, representing leaf images and time series signals, are used to evaluate the method. An empirical evaluation for each case study assesses how well linguistic descriptions generated from the conceptual spaces identify unknown observations. Furthermore, comparisons are made with descriptions derived from alternative approaches for generating semantic models.
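
As a toy illustration of the pipeline (the feature names, data and nearest-prototype rule are invented for this sketch; the paper's construction of quality dimensions is considerably richer): concepts become regions around prototypes in a feature space, and a linguistic description of a new observation names the nearest concept.

    import numpy as np

    dimensions = ["length_cm", "width_cm"]          # assumed quality dimensions
    observations = {                                # known labelled examples
        "oak leaf":    np.array([[7.0, 4.0], [8.0, 4.5]]),
        "pine needle": np.array([[6.0, 0.10], [7.5, 0.15]]),
    }
    prototypes = {c: x.mean(axis=0) for c, x in observations.items()}

    def describe(x):
        """Name the concept whose prototype is nearest to observation x."""
        c = min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))
        return f"this observation looks like a {c}"

    print(describe(np.array([7.2, 3.8])))           # -> "... oak leaf"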

Place, publisher, year, edition, pages
AAAI Press, 2018
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-70433 (URN); 10.1613/jair.1.11258 (DOI); 000455091500015 (ISI); 2-s2.0-85057746407 (Scopus ID)
Available from: 2018-12-04. Created: 2018-12-04. Last updated: 2019-01-23. Bibliographically approved.
Lidén, M., Jendeberg, J., Längkvist, M., Loutfi, A. & Thunberg, P. (2018). Discrimination between distal ureteral stones and pelvic phleboliths in CT using a deep neural network: more than local features needed. Paper presented at European Congress of Radiology (ECR) 2018, Vienna, Austria, 28 Feb.-4 Mar., 2018.
Discrimination between distal ureteral stones and pelvic phleboliths in CT using a deep neural network: more than local features needed
2018 (English). Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

Purpose: To develop a deep learning method for assisting radiologists in the discrimination between distal ureteral stones and pelvic phleboliths in thin slice CT images, and to evaluate whether this differentiation is possible using only local features.

Methods and materials: A limited field-of-view image data bank was retrospectively created, consisting of 5x5x5 cm selections from 1 mm thick unenhanced CT images centered around 218 pelvic phleboliths and 267 distal ureteral stones in 336 patients. 50 stones and 50 phleboliths formed a validation cohort and the remainder a training cohort. Ground truth was established by a radiologist using the complete CT examination during inclusion. The limited field-of-view CT stacks were independently reviewed and classified as containing a distal ureteral stone or a phlebolith by seven radiologists. Each cropped stack consisted of 50 slices (5x5 cm field-of-view) and was displayed in a standard PACS reading environment. A convolutional neural network using three perpendicular images (2.5D-CNN) from the limited field-of-view CT stacks was trained for classification.

Results: The 2.5D-CNN obtained 89% accuracy (95% confidence interval 81%-94%) for classification in the unseen validation cohort, while the accuracy of the radiologists reviewing the same cohort was 86% (range 76%-91%). There was no statistically significant difference between the 2.5D-CNN and the radiologists.

Conclusion: The 2.5D-CNN achieved radiologist-level classification accuracy between distal ureteral stones and pelvic phleboliths using only local features. The mean accuracy of 86% for radiologists using the limited field-of-view indicates that distant anatomical information, which helps identify the ureter's course, is needed.

National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:oru:diva-67372 (URN)
Conference
European Congress of Radiology (ECR) 2018, Vienna, Austria, 28 Feb.-4 Mar., 2018
Available from: 2018-06-20. Created: 2018-06-20. Last updated: 2018-06-20. Bibliographically approved.
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2018). Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning. In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018. Paper presented at FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (AI-MHRI), Stockholm, Sweden, 14-15 July, 2018 (pp. 48-50). MHRI
Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning
2018 (English). In: Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 2018, MHRI, 2018, p. 48-50. Conference paper, Published paper (Refereed).
Abstract [en]

This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user's verbal and nonverbal social cues using its camera and microphone; the reward will be the positive valence and engagement of the user.
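
A minimal sketch of the reward signal described above (the weights and the perception-module outputs are assumptions, not taken from the abstract):

    def social_reward(valence, engagement, w_valence=0.5, w_engagement=0.5):
        """Scalar RL reward from the estimated user state.

        valence, engagement: scores in [0, 1] produced by perception
        modules analysing the camera and microphone streams.
        """
        return w_valence * valence + w_engagement * engagement

    # After each robot action, the observed user state is converted to a
    # reward that the deep RL agent learns to maximize over the interaction.
    print(social_reward(valence=0.8, engagement=0.6))  # -> 0.7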

Place, publisher, year, edition, pages
MHRI, 2018
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-68709 (URN); 10.21437/AI-MHRI.2018-12 (DOI)
Conference
FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (AI-MHRI), Stockholm, Sweden, 14-15 July, 2018
Projects
SOCRATES
Available from: 2018-09-03. Created: 2018-09-03. Last updated: 2018-09-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3122-693X
