Publications (10 of 152)
Sun, D., Kiselev, A., Liao, Q., Stoyanov, T. & Loutfi, A. (2020). A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement. IEEE Transactions on Human-Machine Systems, 50(1), 55-67.
A New Mixed Reality-based Teleoperation System for Telepresence and Maneuverability Enhancement
2020 (English). In: IEEE Transactions on Human-Machine Systems, ISSN 2168-2305, Vol. 50, no. 1, p. 55-67. Article in journal (Refereed). Published.
Abstract [en]

Virtual Reality (VR) is regarded as a useful tool for teleoperation systems, providing operators with immersive visual feedback on the robot and the environment. However, without haptic feedback or physical constraints, VR-based teleoperation systems typically have poor maneuverability and may cause operational faults during fine movements. In this paper, we employ Mixed Reality (MR), which combines real and virtual worlds, to develop a novel teleoperation system. A new system design and new control algorithms are proposed. For the system design, an MR interface is developed based on a virtual environment augmented with real-time data from the task space, with the goal of enhancing the operator's visual perception. To allow the operator to be freely decoupled from the control loop and to offload the operator's burden, a new interaction proxy is proposed to control the robot. For the control algorithms, two control modes are introduced to improve both long-distance and fine movements in MR-based teleoperation. In addition, a set of fuzzy-logic-based methods is proposed to regulate the position, velocity, and force of the robot in order to enhance the system's maneuverability and handle potential operational faults. Barrier Lyapunov Function (BLF) and back-stepping methods are leveraged to design the control laws and simultaneously guarantee system stability under state constraints. Experiments conducted with a 6-Degree-of-Freedom (DoF) robotic arm demonstrate the feasibility of the system.
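
As an illustrative sketch of the barrier-style control idea (not the paper's actual controller; the first-order plant, gains, and constraint bound below are assumptions), a log-type BLF tracking law makes the feedback gain grow without bound as the error approaches the state constraint, which is what keeps the state inside its bounds:

```python
import numpy as np

def blf_control(x, x_des, x_des_dot, k=2.0, k_b=0.1):
    """Log-type Barrier-Lyapunov-Function tracking law for x_dot = u.

    With V = 0.5*log(k_b**2 / (k_b**2 - e**2)), this u gives
    V_dot = -k*e**2 / (k_b**2 - e**2) <= 0, so the error e = x - x_des
    can never reach the barrier |e| = k_b.
    """
    e = x - x_des
    return x_des_dot - k * e / (k_b**2 - e**2)

# Simulate tracking a sine reference; the error must stay below k_b = 0.1.
dt, x, worst = 1e-3, 0.05, 0.0
for i in range(5000):
    t = i * dt
    x += blf_control(x, 0.3 * np.sin(t), 0.3 * np.cos(t)) * dt
    worst = max(worst, abs(x - 0.3 * np.sin(t + dt)))
print(f"largest tracking error: {worst:.4f} (bound 0.1)")
```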

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Force control, motion regulation, telerobotics, virtual reality
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-77829 (URN), 10.1109/THMS.2019.2960676 (DOI), 000508380700005 (ISI), 2-s2.0-85077905008 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2019-11-11. Created: 2019-11-11. Last updated: 2020-02-07. Bibliographically approved.
Köckemann, U., Alirezaie, M., Renoux, J., Tsiftes, N., Ahmed, M. U., Morberg, D., . . . Loutfi, A. (2020). Open-Source Data Collection and Data Sets for Activity Recognition in Smart Homes. Sensors, 20(3), Article ID E879.
Open-Source Data Collection and Data Sets for Activity Recognition in Smart Homes
2020 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 20, no. 3, article id E879. Article in journal (Refereed). Published.
Abstract [en]

As research in smart homes and activity recognition increases, it is increasingly important to have benchmark systems and data upon which researchers can compare methods. While synthetic data can be useful for certain method development, real data sets that are open and shared are equally important. This paper presents the E-care@home system, its installation in a real home setting, and a series of data sets collected using it. Our first contribution, the E-care@home system, is a collection of software modules for data collection, labeling, and various reasoning tasks such as activity recognition, person counting, and configuration planning. It supports a heterogeneous set of sensors that can be extended easily and connects collected sensor data to higher-level Artificial Intelligence (AI) reasoning modules. Our second contribution is a series of open data sets which can be used to recognize activities of daily living. In addition to these data sets, we describe the technical infrastructure developed to collect the data and the physical environment. Each data set is annotated with ground-truth information, making it relevant for researchers interested in benchmarking different algorithms for activity recognition.
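
As an illustration of how such annotated smart-home data can be consumed (the file names and CSV layout below are assumptions, not the published format), one can join a sensor-event stream with activity-label intervals and derive simple windowed features:

```python
import pandas as pd

# Assumed layout -- the released data sets may use a different schema:
#   events.csv: timestamp,sensor_id,value
#   labels.csv: start,end,activity
events = pd.read_csv("events.csv", parse_dates=["timestamp"])
labels = pd.read_csv("labels.csv", parse_dates=["start", "end"])

def attach_labels(events, labels):
    """Assign each sensor event the ground-truth activity whose interval covers it."""
    events = events.copy()
    events["activity"] = "unknown"
    for row in labels.itertuples():
        in_interval = events["timestamp"].between(row.start, row.end)
        events.loc[in_interval, "activity"] = row.activity
    return events

labeled = attach_labels(events, labels)
# Per-minute event counts per sensor: a common baseline feature matrix
# for benchmarking activity-recognition algorithms on such data.
features = (labeled.set_index("timestamp")
                   .groupby("sensor_id")["value"]
                   .resample("1min").count()
                   .unstack(0, fill_value=0))
```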

Place, publisher, year, edition, pages
MDPI, 2020
Keywords
Data collection software, prototype installation, smart home data sets
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-79928 (URN), 10.3390/s20030879 (DOI), 32041376 (PubMedID)
Available from: 2020-02-20. Created: 2020-02-20. Last updated: 2020-02-20.
Krishna, S., Kiselev, A., Kristoffersson, A., Repsilber, D. & Loutfi, A. (2019). A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors, 19(14), Article ID E3142.
A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no. 14, article id E3142. Article in journal (Refereed). Published.
Abstract [en]

Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and to collaborate with people as part of human-robot teams. Different sensors can be employed to estimate the distance between a person and a robot, and the simpler a sensor's technology, the more challenges the distance estimation method must address. When estimating distances from individual images of a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method builds on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. It estimates distance with respect to the camera from the Euclidean distance between ear and torso of people in the image plane. The ear and torso characteristic points were selected for their relatively high visibility regardless of a person's orientation and their degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
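
A minimal sketch of the core geometric step follows; the keypoint names, the torso proxy point, and the pixel-to-metre mapping are illustrative assumptions (the paper learns the actual mapping from data):

```python
import numpy as np

def ear_torso_pixel_distance(keypoints):
    """Euclidean pixel distance between an ear and a torso reference point.

    `keypoints` maps part names to (x, y) image coordinates, e.g. as produced
    by a 2D pose estimator such as OpenPose; the key names are assumptions.
    """
    ear = keypoints.get("right_ear") or keypoints.get("left_ear")
    torso = keypoints.get("neck")  # assumption: torso point approximated by the neck
    if ear is None or torso is None:
        return None
    return float(np.hypot(ear[0] - torso[0], ear[1] - torso[1]))

def pixel_to_metric(d_px, a, b):
    """Map pixel distance to metric camera distance with a fitted inverse model.

    Assumed form d_metric ~ a / d_px + b (apparent size shrinks with distance);
    the coefficients would be fitted to calibration data.
    """
    return a / d_px + b

kp = {"right_ear": (312.0, 140.0), "neck": (305.0, 205.0)}
d_px = ear_torso_pixel_distance(kp)
print(f"{d_px:.1f}px -> {pixel_to_metric(d_px, a=180.0, b=0.2):.2f} m (illustrative coefficients)")
```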

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
Human–Robot Interaction, distance estimation, single RGB image, social interaction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75583 (URN), 10.3390/s19143142 (DOI), 000479160300109 (ISI), 31319523 (PubMedID), 2-s2.0-85070083052 (Scopus ID)
Note
Funding Agency: Örebro University
Available from: 2019-08-16. Created: 2019-08-16. Last updated: 2019-11-15. Bibliographically approved.
Sun, D., Liao, Q., Stoyanov, T., Kiselev, A. & Loutfi, A. (2019). Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception. Automatica, 106, 358-373.
Bilateral telerobotic system using Type-2 fuzzy neural network based moving horizon estimation force observer for enhancement of environmental force compliance and human perception
2019 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 106, p. 358-373. Article in journal (Refereed). Published.
Abstract [en]

This paper first develops a novel force observer that uses Type-2 Fuzzy Neural Network (T2FNN)-based Moving Horizon Estimation (MHE) to estimate external force/torque information and simultaneously filter out system disturbances. Using the proposed force observer, a new bilateral teleoperation system is then proposed that allows the slave industrial robot to be more compliant to the environment and enhances the situational awareness of the human operator by providing multi-level force feedback. Compared with existing force observer algorithms that rely heavily on exact mathematical models, the proposed force estimation strategy can derive more accurate external force/torque information for robots with complex mechanisms and unknown dynamics. Applying the estimated force information, an external-force-regulated Sliding Mode Control (SMC) strategy supported by machine vision is proposed to enhance the adaptability of the slave robot and the operator's perception of various scenarios by virtue of the detected location of the task object. The proposed control system is validated on an experimental platform consisting of a Universal Robots UR10 manipulator, a haptic device, and an RGB-D sensor.
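
The moving-horizon idea can be sketched with a deliberately crude stand-in: a 1-DoF rigid-body model whose residual over a sliding window yields the external-force estimate. The paper replaces such a fixed model with a Type-2 fuzzy neural network; the model, parameters, and noise below are assumptions:

```python
import numpy as np

def mhe_force(q_ddot, q_dot, u, m=2.0, c=0.5, horizon=20):
    """Sliding-window least-squares force estimate for m*q_ddot + c*q_dot = u + f_ext.

    Assuming f_ext is locally constant over the horizon, the least-squares
    minimizer of the model residuals is simply their mean.
    """
    r = m * np.asarray(q_ddot) + c * np.asarray(q_dot) - np.asarray(u)
    return float(np.mean(r[-horizon:]))

# Synthetic check: recover a 1.5 N constant contact force under noise.
rng = np.random.default_rng(0)
n, f_true = 200, 1.5
q_dot = rng.normal(0.0, 0.05, n)
u = rng.normal(0.0, 0.10, n)
q_ddot = (u + f_true - 0.5 * q_dot) / 2.0 + rng.normal(0.0, 0.02, n)
print(f"estimated f_ext = {mhe_force(q_ddot, q_dot, u):.2f} N (true {f_true} N)")
```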

Place, publisher, year, edition, pages
Pergamon Press, 2019
Keywords
Force estimation and control, Type-2 fuzzy neural network, Moving horizon estimation, Bilateral teleoperation, Machine vision
National Category
Control Engineering
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-74377 (URN), 10.1016/j.automatica.2019.04.033 (DOI), 000473380000041 (ISI), 2-s2.0-85065901728 (Scopus ID)
Funder
Swedish Research Council
Available from: 2019-05-23. Created: 2019-05-23. Last updated: 2019-11-13. Bibliographically approved.
Krishna, S., Kristoffersson, A., Kiselev, A. & Loutfi, A. (2019). Estimating Optimal Placement for a Robot in Social Group Interaction. In: IEEE International Workshop on Robot and Human Communication (ROMAN). Paper presented at the 28th IEEE International Conference on Robot and Human Interactive Communication – RO-MAN 2019, New Delhi, India, October 14-18, 2019. IEEE.
Estimating Optimal Placement for a Robot in Social Group Interaction
2019 (English). In: IEEE International Workshop on Robot and Human Communication (ROMAN), IEEE, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we present a model that proposes an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to F-formation theory, and the method automatically calculates a suitable placement for the robot. The method was evaluated in an experiment where participants stood in different formations and a robot was teleoperated to join the group. In one condition, the operator positioned the robot at the location given by our algorithm; in another, operators were free to position the robot according to their personal choice. Follow-up questionnaires determined which placements the participants preferred. The results indicate that the proposed method for automatic placement of the robot was supported by the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as in the results from user experiments verifying the quality of this method. These results suggest that teleoperated robots, such as mobile robot telepresence systems, could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
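
The geometry can be illustrated with a small sketch (the stride length and the largest-gap heuristic are assumptions standing in for the paper's actual O-space estimation): each member contributes a point ahead of them along their heading, the mean of those points approximates the O-space centre, and the robot joins at the widest angular gap on the group's circle.

```python
import numpy as np

def placement_for_robot(positions, headings, stride=0.8):
    """Suggest a joining spot for the robot in an F-formation (illustrative)."""
    pos = np.asarray(positions, dtype=float)
    fwd = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    center = (pos + stride * fwd).mean(axis=0)            # O-space centre estimate
    radius = np.linalg.norm(pos - center, axis=1).mean()  # group circle radius
    d = pos - center
    ang = np.sort(np.arctan2(d[:, 1], d[:, 0]))
    gaps = np.diff(np.append(ang, ang[0] + 2 * np.pi))    # angular gaps between members
    i = int(np.argmax(gaps))
    spot = ang[i] + gaps[i] / 2.0                         # middle of the widest gap
    return center + radius * np.array([np.cos(spot), np.sin(spot)])

# Two people 2 m apart, facing each other: the robot should join from the side.
print(placement_for_robot([(0.0, 0.0), (2.0, 0.0)], [0.0, np.pi]))  # ~[1, 1]
```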

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
F-formations, Robot Positioning Spot, Mobile Robotic Telepresence, HRI
National Category
Engineering and Technology; Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-78832 (URN), 10.1109/RO-MAN46459.2019.8956318 (DOI), 978-1-7281-2622-7 (ISBN), 978-1-7281-2623-4 (ISBN)
Conference
The 28th IEEE International Conference on Robot and Human Interactive Communication – RO-MAN 2019, New Delhi, India, October 14-18, 2019.
Projects
Successful Ageing
Available from: 2019-12-20. Created: 2019-12-20. Last updated: 2020-02-14. Bibliographically approved.
Akalin, N., Kristoffersson, A. & Loutfi, A. (2019). Evaluating the Sense of Safety and Security in Human-Robot Interaction with Older People. In: Oliver Korn (Ed.), Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction (pp. 237-264). Springer.
Evaluating the Sense of Safety and Security in Human-Robot Interaction with Older People
2019 (English). In: Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction / [ed] Oliver Korn, Springer, 2019, p. 237-264. Chapter in book (Refereed).
Abstract [en]

For many applications where interaction between robots and older people takes place, safety and security are key dimensions to consider. ‘Safety’ refers to a perceived threat of physical harm, whereas ‘security’ is a broad term covering many aspects of health, well-being, and aging. This chapter presents a quantitative tool for evaluating the sense of safety and security around robots in elder care. Based on a review of the literature on measuring safety and security in human-robot interaction, we propose new evaluation tools specially tailored to assess interaction between robots and older people.
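
As a minimal illustration of how such a quantitative instrument might be scored (the items, scale, and reverse-coding below are assumptions, not the chapter's actual questionnaire):

```python
def score_questionnaire(responses, reverse_items, scale_max=5):
    """Score a Likert-scale instrument: reverse-code flagged items, then
    average per respondent. Items and scale are illustrative assumptions.
    """
    scores = []
    for r in responses:  # r maps item id -> rating in 1..scale_max
        vals = [(scale_max + 1 - v) if i in reverse_items else v
                for i, v in r.items()]
        scores.append(sum(vals) / len(vals))
    return scores

answers = [{"q1": 4, "q2": 2, "q3": 5}, {"q1": 3, "q2": 4, "q3": 2}]
print(score_questionnaire(answers, reverse_items={"q2"}))  # per-respondent means
```
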

Place, publisher, year, edition, pages
Springer, 2019
Series
Human-Computer Interaction Series, ISSN 1571-5035, E-ISSN 2524-4477
Keywords
Sense of safety and security, Quantitative evaluation tool, Social robots, Elder care
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-78493 (URN), 10.1007/978-3-030-17107-0_12 (DOI), 978-3-030-17106-3 (ISBN), 978-3-030-17107-0 (ISBN)
Available from: 2019-12-08. Created: 2019-12-08. Last updated: 2019-12-11. Bibliographically approved.
Krishna, S., Kristoffersson, A., Kiselev, A. & Loutfi, A. (2019). F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems. Multimodal Technologies and Interaction, 3(4), Article ID 69.
F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems
2019 (English). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 3, no. 4, article id 69. Article in journal (Refereed). Published.
Abstract [en]

F-formations are a set of possible patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment, representing a conference lobby, in which teleoperators are requested to navigate to groups of virtual agents of varying sizes placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators are allowed to evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The study has two important outcomes: firstly, teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system; secondly, teleoperators prefer additional support in order to correctly navigate the robot into a preferred position that adheres to F-formations.

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
telepresence, mobile robotic telepresence, F-formations, simulation, virtual agents, HRI
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-78830 (URN), 10.3390/mti3040069 (DOI)
Note
Funding Agency: Örebro University
Available from: 2019-12-20. Created: 2019-12-20. Last updated: 2020-01-28. Bibliographically approved.
Can, O. A., Zuidberg Dos Martires, P., Persson, A., Gaal, J., Loutfi, A., De Raedt, L., . . . Saffiotti, A. (2019). Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations. In: Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason (Ed.), Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP). Paper presented at the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), Minneapolis, Minnesota, USA, June 2019 (pp. 29-39). Association for Computational Linguistics, Article ID W19-1604.
Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations
2019 (English). In: Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) / [ed] Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason, Association for Computational Linguistics, 2019, p. 29-39, article id W19-1604. Conference paper, Published paper (Refereed).
Abstract [en]

Human-robot interaction often occurs in the form of instructions given from a human to a robot. For a robot to successfully follow instructions, a common representation of the world and the objects in it should be shared between the human and the robot so that the instructions can be grounded. This representation can be achieved via learning, where both the world representation and the language grounding are learned simultaneously. In robotics, however, this can be difficult due to the cost and scarcity of data. In this paper, we tackle the problem by separately learning the robot's world representation and the language grounding. While this approach addresses the challenge of obtaining sufficient data, it may give rise to inconsistencies between the two learned components. We therefore propose Bayesian learning to resolve such inconsistencies between the natural-language grounding and the robot's world representation by exploiting spatio-relational information that is implicitly present in human-given instructions. We demonstrate the feasibility of our approach in a scenario involving a robotic arm in the physical world.
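
A toy sketch of the resolution step (the positions, the prior, and the soft 'left of' model are invented for illustration; the paper uses probabilistic logic over learned groundings): the spatial relation implicit in an instruction re-weights the visual classifier's belief about which anchored object is meant.

```python
import numpy as np

# Hypothetical anchored objects with 2D positions from the robot's world model.
objects = {"obj1": (0.2, 0.5), "obj2": (0.9, 0.5), "box": (0.6, 0.5)}

# Prior over which anchor grounds "the cup", e.g. from a visual classifier.
prior = {"obj1": 0.4, "obj2": 0.6}

def p_left_of(a, b, sharpness=10.0):
    """Soft 'left of' likelihood: a sigmoid in the x-offset (assumed model)."""
    return 1.0 / (1.0 + np.exp(-sharpness * (b[0] - a[0])))

# Instruction: "pick up the cup left of the box" -- the implicit spatial
# relation re-weights the classifier's prior, Bayes-style.
post = {k: prior[k] * p_left_of(objects[k], objects["box"]) for k in prior}
z = sum(post.values())
post = {k: v / z for k, v in post.items()}
print(post)  # obj1 now dominates despite the weaker visual prior
```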

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2019
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems); Human Computer Interaction
Identifiers
urn:nbn:se:oru:diva-79501 (URN), 10.18653/v1/W19-1604 (DOI)
Conference
Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), Minneapolis, Minnesota, USA, June, 2019
Funder
Swedish Research Council, 2016-05321; EU, Horizon 2020
Note
This work has been supported by the ReGROUND project (http://reground.cs.kuleuven.be), which is a CHIST-ERA project funded by the EU H2020 framework program, the Research Foundation - Flanders, the Swedish Research Council (Vetenskapsrådet), and the Scientific and Technological Research Council of Turkey (TUBITAK). The work is also supported by Vetenskapsrådet under grant number 2016-05321 and by TUBITAK under grants 114E628 and 215E201.
Available from: 2020-01-29. Created: 2020-01-29. Last updated: 2020-01-29. Bibliographically approved.
Alirezaie, M., Längkvist, M., Sioutis, M. & Loutfi, A. (2019). Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation. Semantic Web, 10(5), 863-880.
Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation
2019 (English). In: Semantic Web, ISSN 1570-0844, E-ISSN 2210-4968, Vol. 10, no. 5, p. 863-880. Article in journal (Refereed). Published.
Abstract [en]

Understanding why machine learning algorithms may fail is usually the task of a human expert, who uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee, which is able to extract qualitative features of the errors emerging from deep machine learning frameworks and suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. We show how the proposed interaction between a neural network classifier and a semantic referee improves the performance of semantic segmentation on satellite imagery.
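
In spirit, the referee maps qualitative spatial features of an error region to a suggested correction. The toy rule below replaces the paper's ontological reasoning (over, e.g., OntoCity) with a hand-written rule; the feature keys and the rule itself are illustrative assumptions:

```python
def semantic_referee(region):
    """Toy referee rule in the spirit of the paper (ontology replaced by an if-rule).

    `region` describes a misclassified segment by qualitative spatial features,
    e.g. {"predicted": "water", "adjacent_to": {"building"}, "elevation": "flat"}.
    A real implementation reasons over an ontology; these keys are assumptions.
    """
    if (region["predicted"] == "water"
            and "building" in region["adjacent_to"]
            and region.get("elevation") == "flat"):
        # Dark pixels next to a building on flat ground are typically shadow.
        return {"correction": "shadow",
                "reason": "adjacent to building on flat ground, no depression"}
    return {"correction": None, "reason": "no rule fired"}

print(semantic_referee({"predicted": "water", "adjacent_to": {"building"},
                        "elevation": "flat"}))
```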

Place, publisher, year, edition, pages
IOS Press, 2019
Keywords
Deep Neural Network, Semantic Referee, Ontological and Spatial Reasoning, Semantic Segmentation, OntoCity, Geo Data
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-77266 (URN), 10.3233/SW-190362 (DOI), 000488082100003 (ISI)
Projects
Semantic Robot
Funder
Swedish Research Council
Note
Funding Agency: Swedish Knowledge Foundation, research profile on Semantic Robots (20140033)
Available from: 2019-10-14. Created: 2019-10-14. Last updated: 2019-10-25. Bibliographically approved.
Persson, A., Zuidberg Dos Martires, P., Loutfi, A. & De Raedt, L. (2019). Semantic Relational Object Tracking. IEEE Transactions on Cognitive and Developmental Systems.
Semantic Relational Object Tracking
2019 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

This paper addresses the topic of semantic world modeling by conjoining probabilistic reasoning and object anchoring. The proposed approach uses a so-called bottom-up object anchoring method that relies on rich continuous attribute values measured from perceptual sensor data. A novel anchoring matching function learns to maintain object entities in space and time, and is validated using a large set of humanly annotated ground-truth data of real-world objects. For more complex scenarios, a high-level probabilistic object tracker is integrated with the anchoring framework and handles the tracking of occluded objects by reasoning about the state of unobserved objects. We demonstrate the performance of the integrated approach through scenarios such as the shell game, where anchored objects are retained by preserving relations through probabilistic reasoning.
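
The shell-game reasoning can be caricatured with a toy relational particle filter (a sketch under assumed dynamics, not the paper's integrated anchoring framework): each particle hypothesizes which cup hides the anchored object, observed swaps update the hypotheses, and imperfect perception is modelled by occasionally missing a swap.

```python
import random

def shell_game_filter(start_cup, moves, n_cups=3, n_particles=2000, p_seen=0.9):
    """Toy relational particle filter for the shell game (illustrative sketch).

    The object is anchored under `start_cup`, then occluded.  Each observed
    swap (a, b) is applied to a particle with probability `p_seen`, so the
    belief spreads over cups as perception uncertainty accumulates.
    """
    particles = [start_cup] * n_particles
    for a, b in moves:
        particles = [
            (b if p == a else a if p == b else p) if random.random() < p_seen else p
            for p in particles
        ]
    return [particles.count(c) / n_particles for c in range(n_cups)]

random.seed(1)
# Object anchored under cup 0, then two swaps occur while it is occluded;
# most belief mass should end up on cup 2.
print(shell_game_filter(start_cup=0, moves=[(0, 1), (1, 2)]))
```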

Keywords
Semantic World Modeling, Perceptual Anchoring, Probabilistic Reasoning, Probabilistic Logic Programming, Object Tracking, Relational Particle Filtering
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-73529 (URN), 10.1109/TCDS.2019.2915763 (DOI), 2-s2.0-85068148528 (Scopus ID)
Available from: 2019-04-05. Created: 2019-04-05. Last updated: 2020-02-14. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3122-693X
