oru.se Publications
1 - 50 of 89
  • 1.
    Agrawal, Vikas
    et al.
IBM Research, India.
    Archibald, Christopher
    Mississippi State University, Starkville, United States.
    Bhatt, Mehul
    University of Bremen, Bremen, Germany.
    Bui, Hung Hai
    Laboratory for Natural Language Understanding, Sunnyvale CA, United States.
    Cook, Diane J.
    Washington State University, Pullman WA, United States.
    Cortés, Juan
    University of Toulouse, Toulouse, France.
    Geib, Christopher W.
    Drexel University, Philadelphia PA, United States.
    Gogate, Vibhav
    Department of Computer Science, University of Texas, Dallas, United States.
    Guesgen, Hans W.
    Massey University, Palmerston North, New Zealand.
    Jannach, Dietmar
Technical University Dortmund, Dortmund, Germany.
    Johanson, Michael
    University of Alberta, Edmonton, Canada.
    Kersting, Kristian
    Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS), Sankt Augustin, Germany; The University of Bonn, Bonn, Germany.
    Konidaris, George
    Massachusetts Institute of Technology (MIT), Cambridge MA, United States.
    Kotthoff, Lars
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Michalowski, Martin
    Adventium Labs, Minneapolis MN, United States.
    Natarajan, Sriraam
    Indiana University, Bloomington IN, United States.
    O’Sullivan, Barry
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Pickett, Marc
    Naval Research Laboratory, Washington DC, United States.
    Podobnik, Vedran
Telecommunication Department of the Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
    Poole, David
    Department of Computer Science, University of British Columbia, Vancouver, Canada.
    Shastri, Lokendra
Infosys, India.
    Shehu, Amarda
    George Mason University, Washington, United States.
    Sukthankar, Gita
    University of Central Florida, Orlando FL, United States.
The AAAI-13 Conference Workshops (2013). In: The AI Magazine, ISSN 0738-4602, Vol. 34, no 4, p. 108-115. Article in journal (Refereed)
    Abstract [en]

    The AAAI-13 Workshop Program, a part of the 27th AAAI Conference on Artificial Intelligence, was held Sunday and Monday, July 14-15, 2013, at the Hyatt Regency Bellevue Hotel in Bellevue, Washington, USA. The program included 12 workshops covering a wide range of topics in artificial intelligence, including Activity Context-Aware System Architectures (WS-13-05); Artificial Intelligence and Robotics Methods in Computational Biology (WS-13-06); Combining Constraint Solving with Mining and Learning (WS-13-07); Computer Poker and Imperfect Information (WS-13-08); Expanding the Boundaries of Health Informatics Using Artificial Intelligence (WS-13-09); Intelligent Robotic Systems (WS-13-10); Intelligent Techniques for Web Personalization and Recommendation (WS-13-11); Learning Rich Representations from Low-Level Sensors (WS-13-12); Plan, Activity, and Intent Recognition (WS-13-13); Space, Time, and Ambient Intelligence (WS-13-14); Trading Agent Design and Analysis (WS-13-15); and Statistical Relational Artificial Intelligence (WS-13-16).

  • 2.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments (2017). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no 3, p. 600-621. Article in journal (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.
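The NDT representation underlying this abstract can be summarized in a few lines: partition the LIDAR points into voxels and describe each voxel by the sample mean and covariance of its points. The sketch below is illustrative only (function names and the minimum-point threshold are choices of this summary, not the paper's code); the paper's NDT-TM additionally stores permeability and reflectivity statistics per cell and feeds the resulting features to a support-vector machine.

```python
from collections import defaultdict

def ndt_cells(points, cell_size=1.0):
    """Group 3D points into voxels and compute per-cell mean and covariance.

    Returns {cell_index: (mean, covariance)}, where covariance is a 3x3
    row-major list of lists. Cells with fewer than 3 points are skipped,
    since their sample covariance is degenerate.
    """
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(p)

    stats = {}
    for key, pts in cells.items():
        if len(pts) < 3:
            continue
        n = len(pts)
        mean = [sum(p[d] for p in pts) / n for d in range(3)]
        cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pts) / (n - 1)
                for j in range(3)] for i in range(3)]
        stats[key] = (mean, cov)
    return stats
```

The per-cell Gaussians are what make the map compact: traversability features are computed per cell rather than per point.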

  • 3.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security (2017). In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, p. 628-637. Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot’s non-verbal behaviors from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.
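Internal consistency of questionnaires like these is conventionally quantified with Cronbach's alpha. As a hedged illustration (this is the standard textbook formula, not the authors' analysis script):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows (one score per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    with sample (n-1) variances. Values near 1 indicate that the items
    measure the same underlying construct.
    """
    k = len(scores[0])  # number of items

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

For perfectly correlated items the statistic reaches 1.0, which is the sense in which "good internal consistency" is usually reported.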

  • 4.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
The Relevance of Social Cues in Assistive Training with a Social Robot (2018). In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, p. 462-471. Conference paper (Refereed)
    Abstract [en]

    This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor a robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis in determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.
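A recursive feature elimination loop of the kind described can be sketched in a few lines. The scoring callback below stands in for whatever model-based relevance measure a real pipeline would re-compute each round; all names are illustrative, not the authors' implementation.

```python
def rfe(features, score_subset, keep=2):
    """Recursive feature elimination sketch.

    Each round, re-score the current subset (as a real pipeline would
    re-fit its model), then drop the lowest-scoring feature until only
    `keep` features remain. `score_subset(names)` returns {name: score}.
    """
    selected = list(features)
    while len(selected) > keep:
        scores = score_subset(selected)
        worst = min(selected, key=lambda f: scores[f])
        selected.remove(worst)
    return selected
```

In the paper's setting, the features would be facial-action descriptors and the score a classifier-derived relevance; here any callable works.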

  • 5.
    Akbari, Aliakbar
    et al.
    Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya (UPC)—Barcelona Tech, Barcelona, Spain.
    Lagriffoul, Fabien
    Örebro University, School of Science and Technology.
    Rosell, Jan
    Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya (UPC)—Barcelona Tech, Barcelona, Spain.
Combined heuristic task and motion planning for bi-manual robots (2019). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 43, no 6, p. 1575-1590. Article in journal (Refereed)
    Abstract [en]

    Planning efficiently at both the task and motion levels opens up new challenges for robotic manipulation, such as constrained table-top problems for bi-manual robots. In this scope, the appropriate combination of the task and motion planning levels plays an important role. Accordingly, a heuristic-based task and motion planning approach is proposed, in which the computation of the heuristic addresses a geometrically relaxed problem, i.e., it only reasons upon object placements, grasp poses, and inverse kinematics solutions. Motion paths are evaluated lazily, i.e., only after an action has been selected by the heuristic. This reduces the number of calls to the motion planner, while backtracking is reduced because the heuristic captures most of the geometric constraints. The approach has been validated in simulation and on a real robot, with different classes of table-top manipulation problems. Empirical comparison with recent approaches solving similar problems is also reported, showing that the proposed approach results in significant improvement both in terms of planning time and success rate.
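The lazy-evaluation idea can be illustrated with a toy best-first search in which the expensive geometric check runs only when a node is actually popped for expansion, so branches the heuristic never selects never pay for motion planning. This is a minimal sketch under assumed interfaces (`successors`, `heuristic`, `motion_feasible`), not the paper's planner.

```python
import heapq

def plan(start, goal, successors, heuristic, motion_feasible):
    """Greedy best-first task search with lazy motion checking.

    Successor states are queued on heuristic value alone; the expensive
    check motion_feasible(action) runs only when a node is popped.
    Returns (action path, number of motion checks performed).
    """
    counter = 0  # tie-breaker so states never need to be comparable
    frontier = [(heuristic(start), counter, start, [], None)]
    seen = set()
    checks = 0
    while frontier:
        _, _, state, path, action = heapq.heappop(frontier)
        if action is not None:
            checks += 1
            if not motion_feasible(action):  # lazy geometric check
                continue
        if state == goal:
            return path, checks
        if state in seen:
            continue
        seen.add(state)
        for act, nxt in successors(state):
            counter += 1
            heapq.heappush(frontier,
                           (heuristic(nxt), counter, nxt, path + [act], act))
    return None, checks
```

Counting `checks` makes the claimed benefit measurable: fewer motion-planner calls than an eager search that validates every generated successor.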

  • 6.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Learning to detect misaligned point clouds (2018). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed)
    Abstract [en]

    Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible because no matching algorithms exist today that can provide any certain methods for detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.

  • 7.
    Amigoni, Francesco
    et al.
    Politecnico di Milano, Milan, Italy.
    Yu, Wonpil
    Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea.
    Andre, Torsten
    University of Klagenfurt, Klagenfurt, Austria.
    Holz, Dirk
    University of Bonn, Bonn, Germany.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Matteucci, Matteo
    Politecnico di Milano, Milan, Italy.
    Moon, Hyungpil
    Sungkyunkwan University, Suwon, South Korea.
    Yokozuka, Masashi
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Biggs, Geoffrey
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Madhavan, Raj
    Amrita University, Clarksburg MD, United States of America.
A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots (2018). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 25, no 1, p. 65-76. Article in journal (Refereed)
    Abstract [en]

    The availability of environment maps for autonomous robots enables them to complete several tasks. A new IEEE standard, IEEE 1873-2015, Robot Map Data Representation for Navigation (MDR) [15], sponsored by the IEEE Robotics and Automation Society (RAS) and approved by the IEEE Standards Association Standards Board in September 2015, defines a common representation for two-dimensional (2-D) robot maps and is intended to facilitate interoperability among navigating robots. The standard defines an extensible markup language (XML) data format for exchanging maps between different systems. This article illustrates how metric maps, topological maps, and their combinations can be represented according to the standard.
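As an illustration of the kind of XML map exchange the standard enables, the snippet below parses a small 2-D occupancy grid. The element and attribute names here are hypothetical, chosen for this sketch; the actual MDR schema is defined in IEEE 1873-2015 itself.

```python
import xml.etree.ElementTree as ET

# Illustrative only: tag and attribute names are NOT the MDR schema.
MAP_XML = """
<map type="metric" resolution="0.05" unit="m">
  <origin x="0.0" y="0.0" theta="0.0"/>
  <cells rows="2" cols="2">1 0 0 1</cells>
</map>
"""

def load_map(xml_text):
    """Parse a toy 2-D grid map exchanged as XML into a dictionary."""
    root = ET.fromstring(xml_text)
    cells = root.find("cells")
    rows, cols = int(cells.get("rows")), int(cells.get("cols"))
    values = [int(v) for v in cells.text.split()]
    grid = [values[r * cols:(r + 1) * cols] for r in range(rows)]
    return {"type": root.get("type"),
            "resolution": float(root.get("resolution")),
            "grid": grid}
```

The point of a shared format is exactly this: any consumer that agrees on the schema can reconstruct the grid without knowing which robot produced it.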

  • 8.
    Antonova, Rika
    et al.
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kokic, Mia
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation (2018). In: Proceedings of Machine Learning Research: Conference on Robot Learning 2018, PMLR, 2018, Vol. 87, p. 641-650. Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
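The alternation idea can be sketched as a per-iteration Bernoulli draw over two kernels. This is a simplified illustration (the bias parameter and function names are choices of this sketch, not the authors' code); a full implementation would plug the chosen kernel into a Gaussian-process surrogate at each Bayesian-optimization iteration.

```python
import math
import random

def rbf(x, y, length=1.0):
    """Uninformed stationary RBF kernel on scalars."""
    return math.exp(-((x - y) ** 2) / (2 * length ** 2))

def make_bak(informed, uninformed, p_informed=0.7, seed=0):
    """Bernoulli alternation between a simulation-informed kernel and an
    uninformed one.

    Returns next_kernel(); call it once per optimization iteration to
    draw which kernel the surrogate uses. Alternating protects the
    search when the informed kernel is misleading, since uninformed
    iterations can still explore globally. The 0.7 bias is arbitrary.
    """
    rng = random.Random(seed)

    def next_kernel():
        return informed if rng.random() < p_informed else uninformed

    return next_kernel
```

Seeding the generator keeps experiments reproducible while preserving the stochastic alternation.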

  • 9.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2018). Manuscript (preprint) (Other academic)
  • 10.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
Time-dependent gas distribution modelling (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, p. 157-170. Article in journal (Refereed)
    Abstract [en]

    Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used for e.g. air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time-constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot model evolving gas plumes well, for example, or major changes in gas dispersion due to a sudden change of the environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time-dependency and a relation to a time-scale in generating the gas distribution model, either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as in several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative for deriving an estimate of the current gas distribution, as long as sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art gas distribution modelling approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM in different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.
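A recency weight that relates measurement and prediction time can be illustrated with a one-dimensional kernel estimate. The exponential decay form and parameter names below are assumptions for illustration; TD Kernel DM+V itself builds on the Kernel DM+V grid model rather than this direct weighted average.

```python
import math

def td_gas_estimate(samples, x_query, t_query, sigma=1.0, tau=10.0):
    """Recency-weighted kernel estimate of gas concentration.

    samples: list of (position, time, concentration) tuples.
    Each sample gets a spatial Gaussian weight and a recency weight
    exp(-(t_query - t) / tau), so older measurements contribute less.
    """
    num = den = 0.0
    for pos, t, c in samples:
        w_space = math.exp(-((x_query - pos) ** 2) / (2 * sigma ** 2))
        w_time = math.exp(-max(t_query - t, 0.0) / tau)
        w = w_space * w_time
        num += w * c
        den += w
    return num / den if den > 0 else 0.0
```

With a small time-scale `tau` the estimate tracks the newest measurement; as `tau` grows, it converges to the time-invariant average, which is the distinction the paper evaluates.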

  • 11.
    Bekiroglu, Yasemin
    et al.
    School of Mechanical Engineering, University of Birmingham, Birmingham, UK.
    Damianou, Andreas
    Department of Computer Science, University of Sheffield, Sheffield, UK.
    Detry, Renaud
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Stork, Johannes Andreas
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Kragic, Danica
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Ek, Carl Henrik
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
Probabilistic consolidation of grasp experience (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, p. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 12.
    Bhatt, Mehul
    et al.
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems (2009). In: International Journal of Robotics and Automation, ISSN 0826-8185, Vol. 24, no 3, p. 235-244. Article in journal (Refereed)
    Abstract [en]

    Ambient intelligence environments necessitate representing and reasoning about dynamic spatial scenes and configurations. The ability to perform predictive and explanatory analyses of spatial scenes is crucial to serving a useful intelligent function within such environments. We present a formal qualitative model that combines existing qualitative theories about space with a formal logic-based calculus suited to modelling dynamic environments, or reasoning about action and change in general. With this approach, it is possible to represent and reason about arbitrary dynamic spatial environments within a unified framework. We clarify and elaborate on our ideas with examples grounded in a smart environment.

  • 13.
    Bhatt, Mehul
    et al.
    Department of Computer Science, La Trobe University, Germany.
    Loke, Seng
    Department of Computer Science, La Trobe University, Germany.
Modelling Dynamic Spatial Systems in the Situation Calculus (2008). In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252, Vol. 8, no 1-2, p. 86-130. Article in journal (Refereed)
    Abstract [en]

    We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of "qualitative spatial calculi" that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of the key fundamental epistemological issues concerning the frame and the ramification problems that arise whilst modelling change within such domains. The main advantage of the proposed approach is that based on the structure and semantics of the proposed framework, fundamental reasoning tasks such as projection and explanation directly follow. Within the specialised spatial reasoning domain, these translate to spatial planning/re-configuration, causal explanation and spatial simulation. Our approach is based on the hypothesis that alternate formalisations of existing qualitative spatial calculi using high-level tools such as the situation calculus are essential for their utilisation in diverse application domains such as intelligent systems, cognitive robotics and event-based GIS.

  • 14.
    Bruno, Barbara
    et al.
    University of Genova, Genova, Italy.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kamide, Hiroko
    Nagoya University, Nagoya, Japan.
    Kanoria, Sanjeev
    Advinia Health Care Limited LTD, London, UK.
    Lee, Jaeryoung
    Chubu University, Kasugai, Japan.
    Lim, Yuto
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kumar Pandey, Amit
    SoftBank Robotics.
    Papadopoulos, Chris
    University of Bedfordshire, Luton, UK.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, London, UK.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
Paving the Way for Culturally Competent Robots: a Position Paper (2017). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A.; Suzuki, K.; Zollo, L., New York: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 553-560. Conference paper (Refereed)
    Abstract [en]

    Cultural competence is a well-known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state-of-the-art in culturally competent robotics, we identify the key robot capabilities which enable culturally competent behaviours and discuss methodologies for their development and evaluation.

  • 15.
    Bruno, Barbara
    et al.
    University of Genoa, Genoa, Italy.
    Recchiuto, Carmine Tommaso
    University of Genoa, Genoa, Italy.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, The Burroughs, Hendon, London, UK.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Koulouglioti, Christina
    Middlesex University Higher Education Corporation, The Burroughs, Hendon, London, UK.
    Menicatti, Roberto
    University of Genoa, Genoa, Italy.
    Mastrogiovanni, Fulvio
    University of Genoa, Genoa, Italy.
Zaccaria, Renato
    University of Genoa, Genoa, Italy.
    Sgorbissa, Antonio
    University of Genoa, Genoa, Italy.
Knowledge Representation for Culturally Competent Personal Robots: Requirements, Design Principles, Implementation, and Assessment (2019). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 11, no 3, p. 515-538. Article in journal (Refereed)
    Abstract [en]

    Culture, intended as the set of beliefs, values, ideas, language, norms and customs which compose a person's life, is an essential element for any robot for personal assistance to know. Culture, intended as that person's background, can be an invaluable source of information to drive and speed up the process of discovering and adapting to the person's habits, preferences and needs. This article discusses the requirements posed by cultural competence on the knowledge management system of a robot. We propose a framework for cultural knowledge representation that relies on (i) a three-layer ontology for storing concepts of relevance, culture-specific information and statistics, and person-specific information and preferences; (ii) an algorithm for the acquisition of person-specific knowledge, which uses culture-specific knowledge to drive the search; and (iii) a Bayesian network for speeding up the adaptation to the person by propagating the effects of acquiring one specific piece of information onto interconnected concepts. We have conducted a preliminary evaluation of the framework involving 159 Italian and German volunteers and considering 122 habits, attitudes and social norms.
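The adaptation step described above can be illustrated at its smallest scale: updating the probability of one habit from a culture-derived prior after a single observation. This single-node Bayes update is a sketch only; the framework in the paper propagates such evidence across interconnected concepts in a Bayesian network.

```python
def bayes_update(prior, likelihood_true, likelihood_false, observed):
    """Update P(habit) from a culture-derived prior given one observation.

    prior: P(habit), taken e.g. from culture-specific statistics.
    likelihood_true:  P(observation | habit)
    likelihood_false: P(observation | not habit)
    observed: whether the observation occurred.
    """
    if not observed:
        likelihood_true = 1 - likelihood_true
        likelihood_false = 1 - likelihood_false
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1 - prior))
```

Starting from an informative cultural prior rather than 0.5 is exactly what lets the robot converge on person-specific preferences with fewer questions.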

  • 16.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
Compressed Voxel-Based Mapping Using Unsupervised Learning (2017). In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15. Article in journal (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as to the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets due to the rejection of high-frequency noise content.

  • 17.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no 2, p. 1148-1155. Article in journal (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.

  • 18.
    Daoutis, Marios
    Örebro University, School of Science and Technology.
    Knowledge based perceptual anchoring: grounding percepts to concepts in cognitive robots (2013). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, p. 1–4. Article in journal (Refereed)
    Abstract [en]

    Perceptual anchoring is the process of creating and maintaining a connection between the sensor data corresponding to a physical object and its symbolic description. It is a subset of the symbol grounding problem, introduced by Harnad (Phys. D, Nonlinear Phenom. 42(1–3):335–346, 1990) and investigated over the past years in several disciplines including robotics. This PhD dissertation focuses on a method for grounding sensor data of physical objects to the corresponding semantic descriptions, in the context of cognitive robots, where the challenge is to establish the connection between percepts and concepts referring to objects, their relations and properties. We examine how knowledge representation can be used together with an anchoring framework so as to complement the meaning of percepts while supporting better linguistic interaction through the use of the corresponding concepts. The proposed method addresses the need to represent and process both perceptual and semantic knowledge, which are often expressed at different abstraction levels and originate from different modalities. We then focus on the integration of anchoring with a large-scale knowledge base system and with perceptual routines. This integration is applied in a number of studies in the context of a smart home, with evaluations spanning from spatial and commonsense reasoning to linguistic interaction and concept acquisition.

  • 19.
    Daoutis, Marios
    et al.
    Örebro University, School of Science and Technology.
    Coradeschi, Silvia
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Towards concept anchoring for cognitive robots (2012). In: Intelligent Service Robotics, ISSN 1861-2784, Vol. 5, no. 4, p. 213–228. Article in journal (Refereed)
    Abstract [en]

    We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.

  • 20.
    Della Corte, Bartolomeo
    et al.
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 902–909. Article in journal (Refereed)
    Abstract [en]

    The ability to maintain and continuously update the geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In addition, our pipeline observes the evolution of the parameters during long-term operation to detect possible changes in the parameter set. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.
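The time-delay component of such a calibration can be illustrated in isolation with a much simpler tool: recovering the lag between two equally sampled sensor streams from their cross-correlation peak. This is an illustrative stand-in, not the letter's unified optimization; the 100 Hz rate and 70 ms delay are made-up values:

```python
import numpy as np

def estimate_delay(reference, delayed, dt):
    """Estimate the time offset between two equally sampled signals via the
    peak of their cross-correlation (a sketch, not the letter's method)."""
    ref = reference - reference.mean()
    sig = delayed - delayed.mean()
    corr = np.correlate(sig, ref, mode='full')
    lag_samples = int(np.argmax(corr)) - (len(ref) - 1)
    return lag_samples * dt

# Hypothetical velocity stream sampled at 100 Hz, observed by a
# second sensor with a 70 ms delay.
rng = np.random.default_rng(1)
dt = 0.01
ref = rng.normal(size=1000)
delayed = np.roll(ref, 7)  # 7 samples = 70 ms
estimated = estimate_delay(ref, delayed, dt)
```

In a motion-based pipeline the two streams would be, for example, angular rates predicted from odometry and measured by an IMU; the paper folds this estimate into a joint problem with the kinematic and extrinsic parameters.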

  • 21.
    Dubba, Krishna Sandeep Reddy
    et al.
    School of Computing, University of Leeds, Leeds, UK.
    Cohn, Anthony G.
    School of Computing, University of Leeds, Leeds, UK.
    Hogg, David C.
    School of Computing, University of Leeds, Leeds, UK.
    Bhatt, Mehul
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Learning Relational Event Models from Video (2015). In: The Journal of Artificial Intelligence Research, ISSN 1076-9757, E-ISSN 1943-5037, Vol. 53, p. 41–90. Article in journal (Refereed)
    Abstract [en]

    Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content-based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but such techniques have not previously been successfully applied to the very large datasets that result from video data. In this paper, we present a novel framework, REMIND (Relational Event Model INDuction), for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning-from-interpretations setting and a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over-generalization. Furthermore, we present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework that integrates an abduction step, which improves learning performance when there is noise in the input data. Experimental results on several hours of video data from two challenging real-world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable for real-world scenarios.

  • 22.
    Echelmeyer, Wolfgang
    et al.
    University of Reutlingen, Reutlingen, Germany.
    Kirchheim, Alice
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Akbiyik, Hülya
    University of Reutlingen, Reutlingen, Germany.
    Bonini, Marco
    University of Reutlingen, Reutlingen, Germany.
    Performance Indicators for Robotics Systems in Logistics Applications (2011). Conference paper (Refereed)
    Abstract [en]

    The transfer of research results to market-ready products is often a costly and time-consuming process. In order to generate successful products, researchers must cooperate with industrial companies; both the industrial and academic partners need to have a detailed understanding of the requirements of all parties concerned. Academic researchers need to identify the performance indicators for technical systems within a business environment and be able to apply them.

    In service logistics today, nearly all standardized mass goods are unloaded manually, one reason for this being the undefined position and orientation of the goods in the carrier. A study of the qualitative and quantitative properties of goods transported in containers shows that autonomous systems have huge economic relevance: in 2008, more than 8.4 billion twenty-foot equivalent units (TEU) were imported and unloaded manually at European ports, corresponding to more than 331,000 billion single goods items.

    Besides economic relevance, the opinion of market participants is an important factor for the success of new systems on the market. The main outcomes of a study of the challenges, opportunities and barriers in robotic logistics allow for an estimation of the economic efficiency of performance indicators, performance flexibility and soft factors. The economic efficiency of the performance parameters is applied to the parcel robot, a cognitive system that unloads parcels autonomously from containers. In the following article, the results of the study are presented and the resulting conclusions discussed.

  • 23.
    Efremova, Natalia
    et al.
    Plekhanov Russian University, Moscow, Russia.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Cognitive Architectures for Optimal Remote Image Representation for Driving a Telepresence Robot (2014). Conference paper (Refereed)
  • 24.
    Fan, Hongqi
    et al.
    Örebro University, School of Science and Technology. National Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha, China.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Li, Tiancheng
    School of Sciences, University of Salamanca, Salamanca, Spain.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment (2018). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no. 9, p. 2977–2993. Article in journal (Refereed)
    Abstract [en]

    Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and the sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in connecting the idea of dynamic occupancy with the concepts of phase space density in gas kinetics and the PHD in multiple target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, yet has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model that accounts for occlusions are proposed to further improve filtering efficiency. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states for the static part and an ordinary particle filter with a time-varying number of particles in a continuous state space for the dynamic part. This filter has linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.

  • 25.
    Ferri, Gabriele
    et al.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mondini, Alessio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Manzi, Alessandro
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mazzolai, Barbara
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Laschi, Cecilia
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mattoli, Virgilio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Reggente, Matteo
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Lettere, Marco
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Dario, Paolo.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    DustCart, a Mobile Robot for Urban Environments: Experiments of Pollution Monitoring and Mapping during Autonomous Navigation in Urban Scenarios (2010). In: Proceedings of ICRA Workshop on Networked and Mobile Robot Olfaction in Natural, Dynamic Environments, 2010. Conference paper (Refereed)
    Abstract [en]

    In the framework of the DustBot European project, aimed at developing a new multi-robot system for urban hygiene management, we have developed a two-wheeled robot: DustCart. DustCart aims at providing a solution to door-to-door garbage collection: the robot, called by a user, navigates autonomously to his/her house, collects the garbage from the user, and discharges it in a designated area. An additional feature of DustCart is the capability to monitor air pollution by means of an on-board Air Monitoring Module (AMM). The AMM integrates sensors to monitor several atmospheric pollutants, such as carbon monoxide (CO), particulate matter (PM10), nitrogen dioxide (NO2) and ozone (O3), plus temperature (T) and relative humidity (rHu). An Ambient Intelligence (AmI) platform manages the robots' operations through a wireless connection. The AmI platform collects measurements taken by different robots and processes them to create a pollution distribution map. In this paper we describe the DustCart robot system, focusing on the AMM and on the process of creating the pollutant distribution maps. We report results of experiments with one DustCart robot moving in urban scenarios and producing gas distribution maps using the Kernel DM+V algorithm. These experiments can be considered one of the first attempts to use robots as mobile monitoring devices that complement traditional fixed stations.
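The Kernel DM+V algorithm mentioned above builds, in essence, a kernel-weighted average of point measurements on a grid. A minimal sketch of the mean map follows (the "DM" part only, omitting the variance map; the grid, kernel width, and readings are hypothetical):

```python
import numpy as np

def kernel_dm_mean_map(positions, concentrations, grid_x, grid_y, sigma=0.5):
    """Kernel-weighted mean concentration map over a 2D grid.
    positions: (N, 2) measurement locations; concentrations: (N,) readings."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    weights_sum = np.zeros_like(xx)
    weighted_conc = np.zeros_like(xx)
    for (px, py), c in zip(positions, concentrations):
        # A Gaussian kernel spreads each point measurement over nearby cells.
        w = np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))
        weights_sum += w
        weighted_conc += w * c
    # Guard against division by (near-)zero in cells far from any reading.
    return weighted_conc / np.maximum(weights_sum, 1e-12)

# Hypothetical readings: high concentration near (1, 1), low near (3, 3).
pos = np.array([[1.0, 1.0], [3.0, 3.0]])
conc = np.array([10.0, 1.0])
gmap = kernel_dm_mean_map(pos, conc, np.linspace(0, 4, 9), np.linspace(0, 4, 9))
```

The full DM+V algorithm additionally maintains a variance map and a confidence map from the same kernel weights, which the AmI platform can fuse across robots.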

  • 26.
    Grosinger, Jasmin
    et al.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Making Robots Proactive through Equilibrium Maintenance (2016). In: 25th International Joint Conference on Artificial Intelligence, 2016. Conference paper (Refereed)
  • 27.
    Hang, Kaiyu
    et al.
    Li, Miao
    Stork, Johannes Andreas
    Bekiroglu, Yasemin
    Billard, Aude
    Kragic, Danica
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps (2014). Conference paper (Other academic)
  • 28.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Li, Miao
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bekiroglu, Yasemin
    Department of Mechanical Engineering, School of Engineering, University of Birmingham, Birmingham, UK.
    Pokorny, Florian T.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Billard, Aude
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Kragic, Danica
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space: A unified framework for grasp planning and in-hand grasp adaptation (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 4, p. 960–972. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile, and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage, and external disturbances. For this purpose we introduce the Hierarchical Fingertip Space as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

  • 29.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space for multi-fingered precision grasping (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Press, 2014, p. 1641–1648. Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose the concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and establishes a basis for the grasp search space. We propose a model for a hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand, also considering noisy and incomplete point cloud data.

  • 30.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Combinatorial optimization for hierarchical contact-level grasping (2014). In: 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, p. 381–388. Conference paper (Refereed)
    Abstract [en]

    We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.

  • 31.
    Hang, Kaiyu
    et al.
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pollard, Nancy S.
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    Kragic, Danica
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    A Framework For Optimal Grasp Contact Planning (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, p. 704–711. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping for which we define domain specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and resultant effective branching factor.

  • 32.
    Haustein, Joshua A.
    et al.
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Arnekvist, Isac
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hang, Kaiyu
    GRAB Lab, Yale University, New Haven, USA.
    Kragic, Danica
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Learning Manipulation States and Actions for Efficient Non-prehensile Rearrangement Planning (2019). Manuscript (preprint) (Other academic)
  • 33.
    Kamarudin, Kamarulzaman
    et al.
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Shakaff, Ali Yeon Md
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Mamduh, Syed Muhammad
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Zakaria, Ammar
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Visvanathan, Retnam
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Yeon, Ahmad Shakaff Ali
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Kamarudin, Latifah Munirah
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Integrating SLAM and gas distribution mapping (SLAM-GDM) for real-time gas source localization (2018). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no. 17, p. 903–917. Article in journal (Refereed)
    Abstract [en]

    Gas distribution mapping (GDM) learns models of the spatial distribution of gas concentrations across 2D/3D environments, among others for the purpose of localizing gas sources. GDM requires run-time robot positioning in order to associate measurements with locations in a global coordinate frame. Most approaches assume that the robot has perfect knowledge about its position, which does not necessarily hold in realistic scenarios. We argue that a simultaneous localization and mapping (SLAM) algorithm should be used together with GDM to allow operation in an unknown environment. This paper proposes a SLAM-GDM approach that combines Hector SLAM and Kernel DM+V through a map merging technique. We argue that Hector SLAM is suitable for the SLAM-GDM approach since it does not perform loop closure or global corrections, which would otherwise require re-computing the gas distribution map. Real-time experiments were conducted in an environment with single and multiple gas sources. The results showed that the predictions of gas source location were often accurate to within 0.5–1.5 m across all trials for the large indoor area tested. The results also verified that the proposed SLAM-GDM approach and the designed system achieve real-time operation.

  • 34.
    Khaliq, Ali Abdul
    et al.
    Örebro University, School of Science and Technology.
    Köckemann, Uwe
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Bruno, Barbara
    University of Genova, Genova, Italy.
    Recchiuto, Carmine Tommaso
    University of Genova, Genova, Italy.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
    Bui, Ha-Duong
    Japan Advanced Institute of Science and Technology, Ishikawa, Japan.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Ishikawa, Japan.
    Culturally aware Planning and Execution of Robot Actions (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, p. 326–332. Conference paper (Refereed)
    Abstract [en]

    The way in which humans behave, speak and interact is deeply influenced by their culture. For example, greeting is done differently in France, in Sweden or in Japan; and the average interpersonal distance changes from one cultural group to the other. In order to successfully coexist with humans, robots should also adapt their behavior to the culture, customs and manners of the persons they interact with. In this paper, we deal with an important ingredient of cultural adaptation: how to generate robot plans that respect given cultural preferences, and how to execute them in a way that is sensitive to those preferences. We present initial results in this direction in the context of the CARESSES project, a joint EU-Japan effort to build culturally competent assistive robots.

  • 35.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 214–215. Conference paper (Refereed)
    Abstract [en]

    One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.

  • 36.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Mosiello, Giovanni
    Örebro University, School of Science and Technology. Roma Tre University, Rome, Italy.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 104. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence (MRP) systems have been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, allowing human-robot cooperation for safe driving. It also shows a first implementation of cooperative driving, based on extracting a safe drivable area in real time from the image stream received from the robot.

  • 37.
    Kokic, Mia
    et al.
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Haustein, Joshua A.
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Affordance detection for task-specific grasping using deep learning (2017). In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), IEEE conference proceedings, 2017, p. 91–98. Conference paper (Refereed)
    Abstract [en]

    In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.

  • 38.
    Krishna, Sai
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Repsilber, Dirk
    Örebro University, School of Medical Sciences.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera (2019). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed)
    Abstract [en]

    Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of the sensor technology. When estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusion, cluttered backgrounds, and relatively low resolution. The method estimates distance with respect to the camera based on the Euclidean distance between ear and torso of people in the image plane. The ear and torso characteristic points were selected for their relatively high visibility regardless of a person's orientation and for a certain degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
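    As a rough illustration of the geometry behind the method, the sketch below maps an ear-torso pixel distance to a metric estimate under an assumed inverse-proportional model; the calibration constant k and the keypoint values are hypothetical, not taken from the paper.

    ```python
    import math

    def ear_torso_pixel_distance(ear, torso):
        """Euclidean distance (in pixels) between ear and torso keypoints."""
        return math.hypot(ear[0] - torso[0], ear[1] - torso[1])

    def estimate_distance_m(ear, torso, k=1200.0):
        """Map the pixel distance to a metric distance estimate.

        Assumes the apparent ear-torso separation shrinks roughly in inverse
        proportion to the person's distance from the camera; k is a
        hypothetical calibration constant (pixels * metres) that would be
        fitted from training data.
        """
        px = ear_torso_pixel_distance(ear, torso)
        if px <= 0:
            raise ValueError("keypoints coincide; cannot estimate distance")
        return k / px
    ```

    In practice the calibration would be learned per camera, and the 2D pose estimator supplies the keypoints.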

  • 39.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Kragic, Danica
    Centre for Autonomous Systems, Computer Vision and Active Perception Lab, CSC, KTH Stockholm, Stockholm, Sweden.
    Bekiroglu, Yasemin
    School of Mechanical Engineering, University of Birmingham, Birmingham, United Kingdom.
    Analytic Grasp Success Prediction with Tactile Feedback (2016). In: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, New York, USA: IEEE, 2016, p. 165-171. Conference paper (Refereed)
    Abstract [en]

    Predicting grasp success is useful for avoiding failures in many robotic applications. Based on reasoning in wrench space, we address the question of how well analytic grasp success prediction works if tactile feedback is incorporated. Tactile information can alleviate contact placement uncertainties and facilitates contact modeling. We introduce a wrench-based classifier and evaluate it on a large set of real grasps. The key finding of this work is that exploiting tactile information allows wrench-based reasoning to perform on a level with existing methods based on learning or simulation. Different from these methods, the suggested approach has no need for training data, requires little modeling effort and is computationally efficient. Furthermore, our method affords task generalization by considering the capabilities of the grasping device and expected disturbance forces/moments in a physically meaningful way.
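    To make the wrench-space terminology concrete, the following sketch computes the 6D wrench contributed by a single contact force on an object; the function and the frame convention are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def contact_wrench(force, contact_point):
        """6D wrench [f; tau] generated by a contact force applied at a point.

        The contact point is expressed relative to the object frame origin;
        the torque component is the cross product p x f. This is the basic
        quantity aggregated in wrench-space grasp analysis.
        """
        f = np.asarray(force, dtype=float)
        p = np.asarray(contact_point, dtype=float)
        return np.concatenate([f, np.cross(p, f)])
    ```

    A wrench-based classifier would collect such wrenches from all (tactile-sensed) contacts and test whether their convex combinations can resist expected disturbance wrenches.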

  • 40.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    The Next Step in Robot Commissioning: Autonomous Picking and Palletizing (2016). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no 1, p. 546-553. Article in journal (Refereed)
    Abstract [en]

    So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.

  • 41.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Dantam, Neil T.
    Colorado School of Mines, Golden CO, USA.
    Garrett, Caelan
    Massachusetts Institute of Technology, Cambridge MA, USA.
    Akbari, Aliakbar
    Universidad Politécnica de Catalunya, Barcelona, Spain.
    Srivastava, Siddharth
    Arizona State University, Tempe AZ, USA.
    Kavraki, Lydia E.
    Rice University, Houston TX, USA.
    Platform-Independent Benchmarks for Task and Motion Planning (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 4, p. 3765-3772. Article in journal (Refereed)
    Abstract [en]

    We present the first platform-independent evaluation method for task and motion planning (TAMP). Previously, various problems have been used to test individual planners on specific aspects of TAMP. However, no common set of metrics, formats, and problems has been accepted by the community. We propose a set of benchmark problems covering the challenging aspects of TAMP and a planner-independent specification format for these problems. Our objective is to better evaluate and compare TAMP planners, foster communication and progress within the field, and lay a foundation to better understand this class of planning problems.

  • 42.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Constraint propagation on interval bounds for dealing with geometric backtracking (2012). In: Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), Institute of Electrical and Electronics Engineers (IEEE), 2012, p. 957-964. Conference paper (Refereed)
    Abstract [en]

    The combination of task and motion planning presents us with a new problem that we call geometric backtracking. This problem arises from the fact that a single symbolic state or action can be geometrically instantiated in infinitely many ways. When a symbolic action cannot be geometrically validated, we may need to backtrack in the space of geometric configurations, which greatly increases the complexity of the whole planning process. In this paper, we address this problem using intervals to represent geometric configurations, and constraint propagation techniques to shrink these intervals according to the geometric constraints of the problem. After propagation, either (i) the intervals are shrunk, thus reducing the search space in which geometric backtracking may occur, or (ii) the constraints are inconsistent, indicating the infeasibility of the sequence of actions without further effort. We illustrate our approach on scenarios in which a two-arm robot manipulates a set of objects, and report experiments that show how the search space is reduced.
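    A minimal sketch of the interval-shrinking idea for a single precedence constraint between two one-dimensional placements; the plain (lo, hi) tuple representation and the constraint form are simplifying assumptions, not the paper's solver.

    ```python
    def propagate_leq(a, b):
        """Propagate the constraint a <= b over intervals a=(lo, hi), b=(lo, hi).

        Returns the shrunk interval pair, or None if the constraint is
        inconsistent, mirroring the two outcomes described above: a reduced
        search space, or early detection of an infeasible action sequence.
        """
        a_lo, a_hi = a
        b_lo, b_hi = b
        new_a = (a_lo, min(a_hi, b_hi))  # a cannot exceed b's upper bound
        new_b = (max(b_lo, a_lo), b_hi)  # b cannot fall below a's lower bound
        if new_a[0] > new_a[1] or new_b[0] > new_b[1]:
            return None  # empty interval: the constraint set is inconsistent
        return new_a, new_b
    ```

    A full propagator would iterate such rules over all geometric constraints until a fixed point is reached.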

  • 43.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Constraints on intervals for reducing the search space of geometric configurations (2012). In: Combining Task and Motion Planning for Real-World Applications (ICAPS workshop) / [ed] Marcello Cirillo, Brian Gerkey, Federico Pecora, Mike Stilman, 2012, p. 5-12. Conference paper (Refereed)
  • 44.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 957-964. Article in journal (Refereed)
    Abstract [en]

    This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
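    The binary-signature matching step can be illustrated with a simple Hamming-distance search; representing the 256-bit signatures as Python integers is an assumption made for brevity, and the signatures themselves would come from the VLAD-projected descriptors.

    ```python
    def hamming(a: int, b: int) -> int:
        """Hamming distance between two binary signatures stored as integers."""
        return bin(a ^ b).count("1")

    def best_match(query: int, database: list) -> int:
        """Index of the database signature closest to the query signature."""
        return min(range(len(database)), key=lambda i: hamming(query, database[i]))
    ```

    Because each comparison is a single XOR plus a popcount, matching compact signatures is what yields the two-orders-of-magnitude speed-up over direct feature matching reported above.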

  • 45.
    Luber, Matthias
    et al.
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    Stork, Johannes Andreas
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    Tipaldi, Gian Diego
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    Arras, Kai O.
    Social Robotics Lab, Department of Computer Science, University of Freiburg, Germany.
    People tracking with human motion predictions from social forces (2010). In: 2010 IEEE International Conference on Robotics and Automation, Proceedings, IEEE conference proceedings, 2010, p. 464-469. Conference paper (Refereed)
    Abstract [en]

    For many tasks in populated environments, robots need to keep track of current and future motion states of people. Most approaches to people tracking make weak assumptions on human motion such as constant velocity or acceleration. But even over a short period, human behavior is more complex and influenced by factors such as the intended goal, other people, objects in the environment, and social rules. This motivates the use of more sophisticated motion models for people tracking especially since humans frequently undergo lengthy occlusion events. In this paper, we consider computational models developed in the cognitive and social science communities that describe individual and collective pedestrian dynamics for tasks such as crowd behavior analysis. In particular, we integrate a model based on a social force concept into a multi-hypothesis target tracker. We show how the refined motion predictions translate into more informed probability distributions over hypotheses and finally into a more robust tracking behavior and better occlusion handling. In experiments in indoor and outdoor environments with data from a laser range finder, the social force model leads to more accurate tracking with up to two times fewer data association errors.
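    A toy version of the repulsive term of a social force model, to show the kind of motion prediction the tracker uses; the exponential-decay form and the parameters a and b are illustrative, not those of the paper's tracker.

    ```python
    import math

    def social_force(p, others, a=2.0, b=1.0):
        """Repulsive social force on a pedestrian at position p from nearby people.

        Each neighbour contributes a force of magnitude a * exp(-d / b) directed
        away from it, where d is the separation distance; a (strength) and
        b (range) are hypothetical parameters.
        """
        fx = fy = 0.0
        for q in others:
            dx, dy = p[0] - q[0], p[1] - q[1]
            d = math.hypot(dx, dy)
            if d == 0:
                continue  # coincident positions give no defined direction
            mag = a * math.exp(-d / b)
            fx += mag * dx / d
            fy += mag * dy / d
        return fx, fy
    ```

    In a tracker, this force would be added to goal-attraction terms to predict each hypothesis's next state instead of assuming constant velocity.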

  • 46.
    Lundell, Jens
    et al.
    Intelligent Robotics Group, Aalto University, Helsinki, Finland.
    Krug, Robert
    Royal Institute of Technology, Stockholm, Sweden.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Kyrki, Ville
    Intelligent Robotics Group, Aalto University, Helsinki, Finland.
    Safe-To-Explore State Spaces: Ensuring Safe Exploration in Policy Search with Hierarchical Task Optimization (2018). In: IEEE-RAS Conference on Humanoid Robots / [ed] Asfour, T, IEEE, 2018, p. 132-138. Conference paper (Refereed)
    Abstract [en]

    Policy search reinforcement learning allows robots to acquire skills by themselves. However, the learning procedure is inherently unsafe, as the robot has no a priori way to predict the consequences of the exploratory actions it takes. Therefore, exploration can lead to collisions with the potential to harm the robot and/or the environment. In this work we address the safety aspect by constraining exploration to safe-to-explore state spaces. These are formed by decomposing target skills (e.g., grasping) into higher-ranked sub-tasks (e.g., collision avoidance, joint limit avoidance) and lower-ranked movement tasks (e.g., reaching). Sub-tasks are defined as concurrent controllers (policies) in different operational spaces, together with associated Jacobians representing their joint-space mapping. Safety is ensured by only learning policies corresponding to lower-ranked sub-tasks in the redundant null space of higher-ranked ones. As a side benefit, learning in sub-manifolds of the state space also improves sample efficiency. Reaching skills performed in simulation and grasping skills performed on a real robot validate the usefulness of the proposed approach.
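    The null-space construction that underpins the safety guarantee can be sketched in a few lines; the example Jacobian and velocity are made up for illustration, not controllers from the paper.

    ```python
    import numpy as np

    def nullspace_projector(J):
        """Projector N = I - pinv(J) @ J onto the null space of task Jacobian J.

        Any joint velocity multiplied by N produces zero velocity in the task
        space of J, so the higher-ranked task is left undisturbed.
        """
        return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

    def safe_joint_velocity(J_high, qdot_explore):
        """Project an exploratory joint velocity into the null space of the
        higher-ranked task, so exploration cannot violate that task."""
        return nullspace_projector(J_high) @ qdot_explore
    ```

    Stacking projectors for successively ranked tasks gives the hierarchical structure described above.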

  • 47.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kucner, Tomasz
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Quantitative Evaluation of Coarse-To-Fine Loading Strategies for Material Rehandling (2015). In: Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), New York: IEEE conference proceedings, 2015, p. 450-455. Conference paper (Refereed)
    Abstract [en]

    Autonomous handling of piled materials is an emerging topic in automation science and engineering. A central question for material rehandling tasks (transporting materials that have been assembled in piles) is “where to dig in order to optimise performance?” In particular, we are interested in the application of autonomous wheel loaders to handling piles of gravel. Still, the methodology proposed in this paper also applies to granular materials in other applications. Although initial work on suggesting strategies for where to dig has been done by a few other groups, there has been a lack of structured evaluation of the usefulness of the proposed strategies. In an attempt to further the field, we present a quantitative evaluation of loading strategies: both coarse ones, aiming to maintain a good pile shape over long-term operation, and refined ones, aiming to detect the locally best attack pose for acquiring a good fill grade in the bucket. Using real-world data from a semi-automated test platform, we present an assessment of how previously proposed pile shape measures can be mapped to the amount of material in the bucket after loading. We also present experimental data for long-term strategies, using simulations based on real-world 3D scan data from a production site.

  • 48.
    Mannucci, Anna
    et al.
    Research Center E. Piaggio, University of Pisa, Pisa, Italy.
    Pallottino, Lucia
    Research Center E. Piaggio, University of Pisa, Pisa, Italy.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Provably Safe Multi-Robot Coordination With Unreliable Communication (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 4, p. 3232-3239. Article in journal (Refereed)
    Abstract [en]

    Coordination is a core problem in multi-robot systems, since it is key to ensuring safety and efficiency. Both centralized and decentralized solutions have been proposed; however, most assume perfect communication. This letter proposes a centralized method that removes this assumption and is suitable for fleets of robots driven by generic second-order dynamics. We formally prove that: first, safety is guaranteed if communication errors are limited to delays; and second, the probability of unsafety is bounded by a function of the channel model in networks with packet loss. The approach exploits knowledge of the network's non-idealities to ensure the best possible performance of the fleet. The method is validated via several experiments with simulated robots.

  • 49.
    Marzinotto, Alejandro
    et al.
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Rope through Loop Insertion for Robotic Knotting: A Virtual Magnetic Field Formulation (2016). Report (Other academic)
    Abstract [en]

    Inserting one end of a rope through a loop is a common and important action required for creating most types of knots. To perform this action, we need to pass the end of the rope through an area that is enclosed by another segment of rope. As for all knotting actions, the robot must exercise control over a semi-compliant and flexible body whose complex 3D shape is difficult to perceive and follow. Additionally, the target loop often deforms during the insertion. We address this problem by defining a virtual magnetic field through the loop's interior and use the Biot-Savart law to guide the robotic manipulator that holds the end of the rope. This approach directly defines, for any manipulator position, a motion vector that results in a path passing through the loop. The motion vector is derived directly from the position of the loop and changes as soon as the loop moves or deforms. In simulation, we test the insertion action against dynamic loop deformation of varying intensity. We also combine insertion with grasp and release actions, coordinated by a hybrid control system, to tie knots both in simulation and with a NAO robot.
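    A discretised Biot-Savart sum over a polyline loop, as a sketch of how such a guidance vector could be computed; the segment-midpoint approximation and the strength parameter are assumptions, and the field is a virtual guidance quantity rather than a physical one.

    ```python
    import numpy as np

    def biot_savart_field(point, loop, strength=1.0):
        """Virtual magnetic field at `point` induced by a closed polyline `loop`
        (an N x 3 array of vertices), via a discretised Biot-Savart sum.

        Each segment contributes strength * (dl x r) / |r|^3, with r taken from
        the segment midpoint to the query point (a simple approximation).
        Following this field yields a path that passes through the loop.
        """
        p = np.asarray(point, dtype=float)
        verts = np.asarray(loop, dtype=float)
        field = np.zeros(3)
        for i in range(len(verts)):
            a, b = verts[i], verts[(i + 1) % len(verts)]
            dl = b - a
            r = p - (a + b) / 2.0
            norm = np.linalg.norm(r)
            if norm < 1e-9:
                continue  # query point on the loop: contribution undefined
            field += strength * np.cross(dl, r) / norm**3
        return field
    ```

    Recomputing the field from the current loop vertices at every control step is what makes the motion vector adapt as the loop moves or deforms.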

  • 50.
    Marzinotto, Alejandro
    et al.
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Dimarogonas, Dimos V.
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab., Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Cooperative grasping through topological object representation (2014). In: 2014 IEEE-RAS International Conference on Humanoid Robots, IEEE, 2014, p. 685-692. Conference paper (Refereed)
    Abstract [en]

    We present a cooperative grasping approach based on a topological representation of objects. Using point cloud data we extract loops on objects suitable for generating entanglement. We use the Gauss Linking Integral to derive controllers for multi-agent systems that generate hooking grasps on such loops while minimizing the entanglement between robots. The approach copes well with noisy point cloud data, and it is computationally simple and robust. We demonstrate the method for object grasping and transportation, through a hooking maneuver, with two coordinated NAO robots.
