oru.se Publications
1 - 50 of 104
  • 1.
    Agrawal, Vikas
    et al.
    IBM Research, India.
    Archibald, Christopher
    Mississippi State University, Starkville, United States.
    Bhatt, Mehul
    University of Bremen, Bremen, Germany.
    Bui, Hung Hai
    Laboratory for Natural Language Understanding, Sunnyvale CA, United States.
    Cook, Diane J.
    Washington State University, Pullman WA, United States.
    Cortés, Juan
    University of Toulouse, Toulouse, France.
    Geib, Christopher W.
    Drexel University, Philadelphia PA, United States.
    Gogate, Vibhav
    Department of Computer Science, University of Texas, Dallas, United States.
    Guesgen, Hans W.
    Massey University, Palmerston North, New Zealand.
    Jannach, Dietmar
    Technical University Dortmund, Dortmund, Germany.
    Johanson, Michael
    University of Alberta, Edmonton, Canada.
    Kersting, Kristian
    Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS), Sankt Augustin, Germany; The University of Bonn, Bonn, Germany.
    Konidaris, George
    Massachusetts Institute of Technology (MIT), Cambridge MA, United States.
    Kotthoff, Lars
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Michalowski, Martin
    Adventium Labs, Minneapolis MN, United States.
    Natarajan, Sriraam
    Indiana University, Bloomington IN, United States.
    O’Sullivan, Barry
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Pickett, Marc
    Naval Research Laboratory, Washington DC, United States.
    Podobnik, Vedran
    Telecommunication Department of the Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
    Poole, David
    Department of Computer Science, University of British Columbia, Vancouver, Canada.
    Shastri, Lokendra
    Infosys, India.
    Shehu, Amarda
    George Mason University, Washington, United States.
    Sukthankar, Gita
    University of Central Florida, Orlando FL, United States.
    The AAAI-13 Conference Workshops (2013). In: The AI Magazine, ISSN 0738-4602, Vol. 34, no 4, p. 108-115. Article in journal (Refereed)
    Abstract [en]

    The AAAI-13 Workshop Program, a part of the 27th AAAI Conference on Artificial Intelligence, was held Sunday and Monday, July 14-15, 2013, at the Hyatt Regency Bellevue Hotel in Bellevue, Washington, USA. The program included 12 workshops covering a wide range of topics in artificial intelligence, including Activity Context-Aware System Architectures (WS-13-05); Artificial Intelligence and Robotics Methods in Computational Biology (WS-13-06); Combining Constraint Solving with Mining and Learning (WS-13-07); Computer Poker and Imperfect Information (WS-13-08); Expanding the Boundaries of Health Informatics Using Artificial Intelligence (WS-13-09); Intelligent Robotic Systems (WS-13-10); Intelligent Techniques for Web Personalization and Recommendation (WS-13-11); Learning Rich Representations from Low-Level Sensors (WS-13-12); Plan, Activity, and Intent Recognition (WS-13-13); Space, Time, and Ambient Intelligence (WS-13-14); Trading Agent Design and Analysis (WS-13-15); and Statistical Relational Artificial Intelligence (WS-13-16).

  • 2.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
    Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments (2017). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no 3, p. 600-621. Article in journal (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.

  • 3.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security (2017). In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J.-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, p. 628-637. Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot’s non-verbal behaviors from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.

  • 4.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Relevance of Social Cues in Assistive Training with a Social Robot (2018). In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, p. 462-471. Conference paper (Refereed)
    Abstract [en]

    This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor a robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis in determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.
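    The recursive feature elimination step described in the abstract above can be sketched generically. This is a minimal illustration with an invented correlation-based ranking criterion and invented feature names; it is not the estimator or feature set used in the paper:

```python
def pearson(xs, ys):
    """Absolute Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vx * vy)) if vx > 0 and vy > 0 else 0.0

def recursive_feature_elimination(features, target, keep):
    """Repeatedly drop the feature least correlated with the target.

    `features` maps feature name -> list of values; returns the kept names.
    """
    remaining = dict(features)
    while len(remaining) > keep:
        weakest = min(remaining, key=lambda name: pearson(remaining[name], target))
        del remaining[weakest]
    return set(remaining)
```

    With a real estimator, the ranking criterion would come from model weights rather than raw correlation, but the elimination loop has the same shape.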

  • 5.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Evaluating the Sense of Safety and Security in Human-Robot Interaction with Older People (2019). In: Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction / [ed] Oliver Korn, Springer, 2019, p. 237-264. Chapter in book (Refereed)
    Abstract [en]

    For many applications where interaction between robots and older people takes place, safety and security are key dimensions to consider. ‘Safety’ refers to a perceived threat of physical harm, whereas ‘security’ is a broad term which refers to many aspects related to health, well-being, and aging. This chapter presents a quantitative evaluation tool of the sense of safety and security for robots in elder care. By investigating the literature on measurement of safety and security in human–robot interaction, we propose new evaluation tools specially tailored to assess interaction between robots and older people.

  • 6.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Influence of Feedback Type in Robot-Assisted Training (2019). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no 4. Article in journal (Refereed)
    Abstract [en]

    Robot-assisted training, where social robots can be used as motivational coaches, provides an interesting application area. This paper examines how feedback given by a robot agent influences the various facets of participant experience in robot-assisted training. Specifically, we investigated the effects of feedback type on robot acceptance, sense of safety and security, attitude towards robots and task performance. In the experiment, 23 older participants performed basic arm exercises with a social robot as a guide and received feedback. Different feedback conditions were administered, such as flattering, positive and negative feedback. Our results suggest that the robot with flattering and positive feedback was appreciated by older people in general, even if the feedback did not necessarily correspond to objective measures such as performance. Participants in these groups felt better about the interaction and the robot.

  • 7.
    Akbari, Aliakbar
    et al.
    Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya (UPC)—Barcelona Tech, Barcelona, Spain.
    Lagriffoul, Fabien
    Örebro University, School of Science and Technology.
    Rosell, Jan
    Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya (UPC)—Barcelona Tech, Barcelona, Spain.
    Combined heuristic task and motion planning for bi-manual robots (2019). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 43, no 6, p. 1575-1590. Article in journal (Refereed)
    Abstract [en]

    Planning efficiently at the task and motion levels makes it possible to address new challenges in robotic manipulation, such as constrained table-top problems for bi-manual robots. In this scope, the appropriate combination of the task and motion planning levels plays an important role. Accordingly, a heuristic-based task and motion planning approach is proposed, in which the computation of the heuristic addresses a geometrically relaxed problem, i.e., it only reasons upon object placements, grasp poses, and inverse kinematics solutions. Motion paths are evaluated lazily, i.e., only after an action has been selected by the heuristic. This reduces the number of calls to the motion planner, while backtracking is reduced because the heuristic captures most of the geometric constraints. The approach has been validated in simulation and on a real robot, with different classes of table-top manipulation problems. Empirical comparison with recent approaches solving similar problems is also reported, showing that the proposed approach results in significant improvements in both planning time and success rate.

  • 8.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Learning to detect misaligned point clouds (2018). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed)
    Abstract [en]

    Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching without manual verification is still not possible, because no existing matching algorithm can reliably detect misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.

  • 9.
    Amigoni, Francesco
    et al.
    Politecnico di Milano, Milan, Italy.
    Yu, Wonpil
    Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea.
    Andre, Torsten
    University of Klagenfurt, Klagenfurt, Austria.
    Holz, Dirk
    University of Bonn, Bonn, Germany.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Matteucci, Matteo
    Politecnico di Milano, Milan, Italy.
    Moon, Hyungpil
    Sungkyunkwan University, Suwon, South Korea.
    Yokozuka, Masashi
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Biggs, Geoffrey
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Madhavan, Raj
    Amrita University, Clarksburg MD, United States of America.
    A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots (2018). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 25, no 1, p. 65-76. Article in journal (Refereed)
    Abstract [en]

    The availability of environment maps for autonomous robots enables them to complete several tasks. A new IEEE standard, IEEE 1873-2015, Robot Map Data Representation for Navigation (MDR) [15], sponsored by the IEEE Robotics and Automation Society (RAS) and approved by the IEEE Standards Association Standards Board in September 2015, defines a common representation for two-dimensional (2-D) robot maps and is intended to facilitate interoperability among navigating robots. The standard defines an extensible markup language (XML) data format for exchanging maps between different systems. This article illustrates how metric maps, topological maps, and their combinations can be represented according to the standard.
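    As an illustration of exchanging a 2-D metric map as XML, a minimal round trip with Python's standard library might look like the sketch below. The element and attribute names here are invented for this sketch and do not follow the actual MDR schema defined by the standard:

```python
import xml.etree.ElementTree as ET

def grid_map_to_xml(cells, resolution):
    """Serialize a 2-D occupancy grid to a small XML document.

    `cells` is a list of rows of occupancy values in [0, 1];
    `resolution` is the cell size in meters.
    """
    root = ET.Element("map", {"type": "metric", "resolution": str(resolution)})
    grid = ET.SubElement(root, "grid",
                         {"rows": str(len(cells)), "cols": str(len(cells[0]))})
    for r, row in enumerate(cells):
        for c, value in enumerate(row):
            ET.SubElement(grid, "cell",
                          {"row": str(r), "col": str(c), "occupancy": str(value)})
    return ET.tostring(root, encoding="unicode")

def xml_to_grid_map(xml_text):
    """Parse the XML document back into (cells, resolution)."""
    root = ET.fromstring(xml_text)
    grid = root.find("grid")
    rows, cols = int(grid.get("rows")), int(grid.get("cols"))
    cells = [[0.0] * cols for _ in range(rows)]
    for cell in grid.findall("cell"):
        cells[int(cell.get("row"))][int(cell.get("col"))] = float(cell.get("occupancy"))
    return cells, float(root.get("resolution"))
```

    Exchanging a map between two systems then reduces to passing this document; the point of a shared schema, as in the standard, is that both sides agree on the element semantics.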

  • 10.
    Antonova, Rika
    et al.
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kokic, Mia
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation (2018). In: Proceedings of Machine Learning Research: Conference on Robot Learning 2018, PMLR, 2018, Vol. 87, p. 641-650. Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
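    A minimal sketch of the alternation idea described above, assuming invented kernel forms and a per-call Bernoulli draw (the paper's actual kernels and alternation schedule may differ):

```python
import math
import random

def uninformed_kernel(x, y, length_scale=1.0):
    """Generic squared-exponential kernel: uses no simulation knowledge."""
    return math.exp(-((x - y) ** 2) / (2 * length_scale ** 2))

def informed_kernel(x, y, score):
    """Kernel shaped by a simulation-derived score function (a stand-in here)."""
    return math.exp(-((score(x) - score(y)) ** 2) / 2)

def alternation_kernel(x, y, p_informed, score, rng=random):
    """Bernoulli alternation: draw which kernel to trust for this evaluation."""
    if rng.random() < p_informed:
        return informed_kernel(x, y, score)
    return uninformed_kernel(x, y)
```

    With `p_informed` near 1 the search exploits simulation knowledge; near 0 it falls back to uninformed exploration, which is what keeps sim-to-real discrepancies from trapping the optimizer.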

  • 11.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology. Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2019). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V; Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, IEEE, 2019, p. 36-42. Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning methods are capable of solving complex problems, but the resulting policies might perform poorly in environments that are even slightly different. In robotics especially, training and deployment conditions often vary, and data collection is expensive, making retraining undesirable. Training in simulation allows for feasible training times but suffers from a reality gap when policies are applied in real-world settings. This raises the need for efficient adaptation of policies acting in new environments.

    We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that can adapt given different values of this latent variable. Our method learns both the generative mapping and an approximate posterior of the latent variables, enabling identification of policies for new tasks by searching only in the latent space, rather than the space of all policies. The low-dimensional space and master policy found by our method enable policies to quickly adapt to new environments. We demonstrate the method both on a pendulum swing-up task in simulation and for simulation-to-real transfer on a pushing task.

  • 12.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2018). Manuscript (preprint) (Other academic)
  • 13.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Time-dependent gas distribution modelling (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, p. 157-170. Article in journal (Refereed)
    Abstract [en]

    Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used, for example, for air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time-constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot model evolving gas plumes well, for example, or major changes in gas dispersion due to a sudden change of the environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time-dependency and a relation to a time-scale in generating the gas distribution model, either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as in several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative for estimating the current gas distribution, as long as sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art gas distribution modelling approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM in different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.
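    The recency-weighting idea can be sketched in a few lines. The weighting form and parameter names below are assumptions chosen for illustration, not the paper's exact TD Kernel DM+V formulation:

```python
import math

def td_kernel_estimate(query_pos, target_time, samples, sigma=1.0, tau=10.0):
    """Recency-weighted kernel estimate of gas concentration at a 1-D position.

    `samples` is a list of (position, time, concentration) tuples.
    Each sample is weighted by a spatial Gaussian kernel and an
    exponential recency term exp(-|target_time - t| / tau), so
    measurements far from the prediction time contribute little.
    """
    num, den = 0.0, 0.0
    for pos, t, conc in samples:
        spatial = math.exp(-((query_pos - pos) ** 2) / (2 * sigma ** 2))
        recency = math.exp(-abs(target_time - t) / tau)
        weight = spatial * recency
        num += weight * conc
        den += weight
    return num / den if den > 0 else 0.0
```

    Shrinking `tau` discounts old measurements faster, which matches the finding above that recent measurements are more informative when spatial coverage suffices.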

  • 14.
    Bacciu, Davide
    et al.
    Università di Pisa, Pisa, Italy.
    Di Rocco, Maurizio
    Örebro University, Örebro, Sweden.
    Dragone, Mauro
    Heriot-Watt University, Edinburgh, UK.
    Gallicchio, Claudio
    Università di Pisa, Pisa, Italy.
    Micheli, Alessio
    Università di Pisa, Pisa, Italy.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    An ambient intelligence approach for learning in smart robotic environments (2019). In: Computational Intelligence, ISSN 0824-7935, E-ISSN 1467-8640. Article in journal (Refereed)
    Abstract [en]

    Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. To reduce the amount of preparation and preprogramming required for their deployment in real-world applications, it is important to make these systems self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to proactively and smoothly adapt to subtle changes in the environment and in the habits and preferences of its user(s), in the presence of appropriately defined performance measuring functions.

  • 15.
    Behrens, Jan Kristof
    et al.
    Robert Bosch GmbH, Corporate Research, Renningen, Germany.
    Lange, Ralph
    Robert Bosch GmbH, Corporate Research, Renningen, Germany.
    Mansouri, Masoumeh
    Örebro University, School of Science and Technology.
    A Constraint Programming Approach to Simultaneous Task Allocation and Motion Scheduling for Industrial Dual-Arm Manipulation Tasks (2019). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V; Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, IEEE, 2019, p. 8705-8711. Conference paper (Refereed)
    Abstract [en]

    Modern lightweight dual-arm robots bring the physical capabilities to quickly take over tasks at typical industrial workplaces designed for workers. Low setup times, including the time needed to instruct and specify new tasks, are crucial to stay competitive. We propose a constraint programming approach to simultaneous task allocation and motion scheduling for such industrial manipulation and assembly tasks. Our approach covers the robot as well as connected machines. The key concept is Ordered Visiting Constraints, a descriptive and extensible model to specify such tasks with their spatiotemporal requirements and combinatorial or ordering constraints. Our solver integrates such task models and robot motion models into constraint optimization problems and solves them efficiently using various heuristics to produce makespan-optimized robot programs. For large manipulation tasks with 200 objects, our solver, implemented using Google's Operations Research tools, requires less than a minute to compute usable plans. The proposed task model is robot-independent and can easily be deployed to other robotic platforms. This portability is validated through several simulation-based experiments.

  • 16.
    Bekiroglu, Yasemin
    et al.
    School of Mechanical Engineering, University of Birmingham, Birmingham, UK.
    Damianou, Andreas
    Department of Computer Science, University of Sheffield, Sheffield, UK.
    Detry, Renaud
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Stork, Johannes Andreas
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Kragic, Danica
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Ek, Carl Henrik
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Probabilistic consolidation of grasp experience (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, p. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 17.
    Bhatt, Mehul
    et al.
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems (2009). In: International Journal of Robotics and Automation, ISSN 0826-8185, Vol. 24, no 3, p. 235-244. Article in journal (Refereed)
    Abstract [en]

    Ambient intelligence environments necessitate representing and reasoning about dynamic spatial scenes and configurations. The ability to perform predictive and explanatory analyses of spatial scenes is crucial to serving a useful intelligent function within such environments. We present a formal qualitative model that combines existing qualitative theories about space with a formal logic-based calculus suited to modelling dynamic environments, or reasoning about action and change in general. With this approach, it is possible to represent and reason about arbitrary dynamic spatial environments within a unified framework. We clarify and elaborate on our ideas with examples grounded in a smart environment.

  • 18.
    Bhatt, Mehul
    et al.
    Department of Computer Science, La Trobe University, Australia.
    Loke, Seng
    Department of Computer Science, La Trobe University, Australia.
    Modelling Dynamic Spatial Systems in the Situation Calculus (2008). In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252, Vol. 8, no 1-2, p. 86-130. Article in journal (Refereed)
    Abstract [en]

    We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of "qualitative spatial calculi" that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of the key fundamental epistemological issues concerning the frame and the ramification problems that arise whilst modelling change within such domains. The main advantage of the proposed approach is that based on the structure and semantics of the proposed framework, fundamental reasoning tasks such as projection and explanation directly follow. Within the specialised spatial reasoning domain, these translate to spatial planning/re-configuration, causal explanation and spatial simulation. Our approach is based on the hypothesis that alternate formalisations of existing qualitative spatial calculi using high-level tools such as the situation calculus are essential for their utilisation in diverse application domains such as intelligent systems, cognitive robotics and event-based GIS.

  • 19.
    Bhatt, Mehul
    et al.
    Örebro University, School of Science and Technology.
    Suchan, Jakob
    University of Bremen, Bremen, Germany.
    Vardarajan, Srikrishna
    CoDesign Lab EU.
    Deep Semantics for Explainable Visuospatial Intelligence: Perspectives on Integrating Commonsense Spatial Abstractions and Low-Level Neural Features2019In: Proceedings of the 2019 International Workshop on Neural-Symbolic Learning and Reasoning: Annual workshop of the Neural-Symbolic Learning and Reasoning Association / [ed] Derek Doran; Artur d'Avila Garcez; Freddy Lecue, 2019Conference paper (Refereed)
    Abstract [en]

    High-level semantic interpretation of (dynamic) visual imagery calls for general and systematic methods integrating techniques in knowledge representation and computer vision. Towards this, we position "deep semantics" as denoting the existence of declarative models (e.g., pertaining to "space and motion") and corresponding formalisations and methods supporting (domain-independent) explainability capabilities such as semantic question-answering, relational (and relationally-driven) visuospatial learning, and (non-monotonic) visuospatial abduction. Rooted in recent work, we summarise and report the status quo on deep visuospatial semantics, and our approach to neurosymbolic integration and explainable visuospatial computing in that context, with developed methods and tools in diverse settings such as behavioural research in psychology, art and social sciences, and autonomous driving.

  • 20.
    Bruno, Barbara
    et al.
    University of Genova, Genova, Italy.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kamide, Hiroko
    Nagoya University, Nagoya, Japan.
    Kanoria, Sanjeev
    Advinia Health Care Limited LTD, London, UK.
    Lee, Jaeryoung
    Chubu University, Kasugai, Japan.
    Lim, Yuto
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kumar Pandey, Amit
    SoftBank Robotics.
    Papadopoulos, Chris
    University of Bedfordshire, Luton, UK.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, London, UK.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
    Paving the Way for Culturally Competent Robots: a Position Paper2017In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A; Suzuki, K; Zollo, L, New York: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 553-560Conference paper (Refereed)
    Abstract [en]

    Cultural competence is a well-known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state-of-the-art in culturally competent robotics, we identify the key robot capabilities which enable culturally competent behaviours and discuss methodologies for their development and evaluation.

  • 21.
    Bruno, Barbara
    et al.
    University of Genoa, Genoa, Italy.
    Recchiuto, Carmine Tommaso
    University of Genoa, Genoa, Italy.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, The Burroughs, Hendon, London, UK.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Koulouglioti, Christina
    Middlesex University Higher Education Corporation, The Burroughs, Hendon, London, UK.
    Menicatti, Roberto
    University of Genoa, Genoa, Italy.
    Mastrogiovanni, Fulvio
    University of Genoa, Genoa, Italy.
    Zaccarial, Renato
    University of Genoa, Genoa, Italy.
    Sgorbissa, Antonio
    University of Genoa, Genoa, Italy.
    Knowledge Representation for Culturally Competent Personal Robots: Requirements, Design Principles, Implementation, and Assessment2019In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 11, no 3, p. 515-538Article in journal (Refereed)
    Abstract [en]

    Culture, intended as the set of beliefs, values, ideas, language, norms and customs which compose a person's life, is an essential element for any personal assistance robot to know. Culture, intended as that person's background, can be an invaluable source of information to drive and speed up the process of discovering and adapting to the person's habits, preferences and needs. This article discusses the requirements that cultural competence poses on the knowledge management system of a robot. We propose a framework for cultural knowledge representation that relies on (i) a three-layer ontology for storing concepts of relevance, culture-specific information and statistics, and person-specific information and preferences; (ii) an algorithm for the acquisition of person-specific knowledge, which uses culture-specific knowledge to drive the search; and (iii) a Bayesian network for speeding up adaptation to the person by propagating the effects of acquiring one specific piece of information onto interconnected concepts. We conducted a preliminary evaluation of the framework involving 159 Italian and German volunteers and considering 122 habits, attitudes and social norms.

  • 22.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
    Compressed Voxel-Based Mapping Using Unsupervised Learning2017In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15Article in journal (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
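    The core compression scheme described above, projecting voxel blocks onto a PCA-derived low-dimensional basis, can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' implementation; the block size, rank and all names are our own choices:

    ```python
    import numpy as np

    def fit_pca_basis(patches, k):
        """Fit a rank-k PCA basis to flattened voxel blocks (one block per row)."""
        mean = patches.mean(axis=0)
        # Principal directions are the right singular vectors of the centered data.
        _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
        return mean, vt[:k].T  # basis has shape (D, k), orthonormal columns

    def compress(patch, mean, basis):
        """Project a block onto the basis; the k coefficients are the 'code'."""
        return (patch - mean) @ basis

    def decompress(code, mean, basis):
        """Lossily reconstruct the block from its code."""
        return mean + code @ basis.T

    # Toy demo: 100 random 4x4x4 blocks (64 voxels), compressed to 8 coefficients.
    rng = np.random.default_rng(0)
    blocks = rng.normal(size=(100, 64))
    mean, basis = fit_pca_basis(blocks, k=8)
    codes = np.stack([compress(b, mean, basis) for b in blocks])
    recon = decompress(codes[0], mean, basis)
    print(codes.shape)  # (100, 8): an 8x reduction per block
    ```

    The paper's evaluation then trades the compression ratio k/D against reconstruction fidelity, with auto-encoder networks playing the same encoder/decoder role as the linear projection above.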

  • 23.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs2016In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no 2, p. 1148-1155Article in journal (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF), which is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of integral invariant features, specifically designed for volumetric images, to the 3D extensions of the Harris and Shi & Tomasi corner detectors. We also study the impact of different methods for obtaining the gradients used in their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of a TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner detectors perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the causes of these failures that sheds light on possible mitigation strategies.
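    As a rough illustration of what a 3D extension of a corner detector over a TSDF volume looks like, the following sketch computes a Harris-style response from a locally averaged gradient structure tensor. It is our own simplified toy (box smoothing, a hand-picked k constant), not the evaluation pipeline of the paper:

    ```python
    import numpy as np

    def box_smooth(v):
        """Average each voxel over its 3x3x3 neighbourhood (edge-replicated)."""
        out = np.zeros(v.shape, dtype=float)
        p = np.pad(v, 1, mode="edge")
        nx, ny, nz = v.shape
        for dx in range(3):
            for dy in range(3):
                for dz in range(3):
                    out += p[dx:dx + nx, dy:dy + ny, dz:dz + nz]
        return out / 27.0

    def harris_response_3d(tsdf, k=0.02):
        """Harris-style response det(M) - k * trace(M)^3, where M is the
        locally smoothed gradient structure tensor of the volume."""
        gx, gy, gz = np.gradient(tsdf.astype(float))
        ixx, iyy, izz = box_smooth(gx * gx), box_smooth(gy * gy), box_smooth(gz * gz)
        ixy, ixz, iyz = box_smooth(gx * gy), box_smooth(gx * gz), box_smooth(gy * gz)
        trace = ixx + iyy + izz
        det = (ixx * (iyy * izz - iyz ** 2)
               - ixy * (ixy * izz - iyz * ixz)
               + ixz * (ixy * iyz - iyy * ixz))
        return det - k * trace ** 3

    # Toy volume: distance to the central voxel; gradients point in all
    # directions around the centre, so the response is highest there.
    coords = np.indices((9, 9, 9))
    dist = np.sqrt(((coords - 4) ** 2).sum(axis=0))
    resp = harris_response_3d(dist)
    ```

    Along a flat surface the gradients are parallel, the tensor is near rank-1 and det(M) vanishes, which is exactly why corner-like voxels stand out.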

  • 24.
    Chadalavada, Ravi Teja
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany.
    Palm, Rainer
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction2020In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830Article in journal (Refereed)
    Abstract [en]

    Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistics applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with the autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for the projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans interacting with an autonomous forklift for clues that could reveal directional intent. Our analysis shows that people primarily gazed at the side of the robot they ultimately decided to pass by. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

  • 25.
    Daoutis, Marios
    Örebro University, School of Science and Technology.
    Knowledge based perceptual anchoring: grounding percepts to concepts in cognitive robots2013In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, p. 1-4Article in journal (Refereed)
    Abstract [en]

    Perceptual anchoring is the process of creating and maintaining a connection between the sensor data corresponding to a physical object and its symbolic description. It is a subset of the symbol grounding problem, introduced by Harnad (Phys. D, Nonlinear Phenom. 42(1–3):335–346, 1990) and investigated over the past years in several disciplines including robotics. This PhD dissertation focuses on a method for grounding sensor data of physical objects to the corresponding semantic descriptions, in the context of cognitive robots, where the challenge is to establish the connection between percepts and concepts referring to objects, their relations and properties. We examine how knowledge representation can be used together with an anchoring framework, so as to complement the meaning of percepts while supporting better linguistic interaction through the use of the corresponding concepts. The proposed method addresses the need to represent and process both perceptual and semantic knowledge, which are often expressed at different abstraction levels and originate from different modalities. We then focus on the integration of anchoring with a large-scale knowledge base system and with perceptual routines. This integration is applied in a number of studies in the context of a smart home, with evaluations spanning from spatial and commonsense reasoning to linguistic interaction and concept acquisition.

  • 26.
    Daoutis, Marios
    et al.
    Örebro University, School of Science and Technology.
    Coradeschi, Silvia
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Towards concept anchoring for cognitive robots2012In: Intelligent Service Robotics, ISSN 1861-2784, Vol. 5, no 4, p. 213-228Article in journal (Refereed)
    Abstract [en]

    We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.

  • 27.
    Della Corte, Bartolomeo
    et al.
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation2019In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 902-909Article in journal (Refereed)
    Abstract [en]

    The ability to maintain and continuously update the geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In addition, our pipeline observes the evolution of the parameters during long-term operation to detect possible changes in the parameter set. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.

  • 28.
    Dubba, Krishna Sandeep Reddy
    et al.
    School of Computing, University of Leeds, Leeds, UK.
    Cohn, Anthony G.
    School of Computing, University of Leeds, Leeds, UK.
    Hogg, David C.
    School of Computing, University of Leeds, Leeds, UK.
    Bhatt, Mehul
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Learning Relational Event Models from Video2015In: The journal of artificial intelligence research, ISSN 1076-9757, E-ISSN 1943-5037, Vol. 53, p. 41-90Article in journal (Refereed)
    Abstract [en]

    Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content-based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but such techniques have not been successfully applied to the very large datasets which result from video data. In this paper, we present a novel framework REMIND (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning-from-interpretations setting and a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over-generalization. Furthermore, we present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events in previously unseen videos. We also present an extension of the framework that integrates an abduction step to improve learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real-world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable for real-world scenarios.

  • 29.
    Echelmeyer, Wolfgang
    et al.
    University of Reutlingen, Reutlingen, Germany.
    Kirchheim, Alice
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Akbiyik, Hülya
    University of Reutlingen, Reutlingen, Germany.
    Bonini, Marco
    University of Reutlingen, Reutlingen, Germany.
    Performance Indicators for Robotics Systems in Logistics Applications2011Conference paper (Refereed)
    Abstract [en]

    The transfer of research results to market-ready products is often a costly and time-consuming process. In order to generate successful products, researchers must cooperate with industrial companies; both the industrial and academic partners need to have a detailed understanding of the requirements of all parties concerned. Academic researchers need to identify the performance indicators for technical systems within a business environment and be able to apply them.

    In service logistics today, nearly all standardized mass goods are unloaded manually, one reason for this being the undefined position and orientation of the goods in the carrier. A study of the qualitative and quantitative properties of goods transported in containers shows that autonomous systems have a huge economic relevance. In 2008, more than 8.4 billion twenty-foot equivalent units (TEU) were imported and unloaded manually at European ports, corresponding to more than 331,000 billion single goods items.

    Besides the economic relevance, the opinion of market participants is an important factor for the success of new systems on the market. The main outcomes of a study regarding the challenges, opportunities and barriers in robotic logistics allow for the estimation of the economic efficiency of performance indicators, performance flexibility and soft factors. The economic efficiency of the performance parameters is applied to the parcel robot, a cognitive system that unloads parcels autonomously from containers. In the following article, the results of the study are presented and the resulting conclusions discussed.

  • 30.
    Efremova, Natalia
    et al.
    Plekhanov Russian University, Moskow, Russia.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Cognitive Architectures for Optimal Remote Image Representation for Driving a Telepresence Robot2014Conference paper (Refereed)
  • 31.
    Fan, Hongqi
    et al.
    Örebro University, School of Science and Technology. National Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha, China.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Li, Tiancheng
    School of Sciences, University of Salamanca, Salamanca, Spain.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment2018In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 9, p. 2977-2993Article in journal (Refereed)
    Abstract [en]

    Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and the sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in connecting the idea of dynamic occupancy with the concepts of phase space density in gas kinetics and the PHD in multiple target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, but has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model considering occlusions are proposed to further improve the filtering efficiency. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states and an ordinary particle filter with a time-varying number of particles in a continuous state space to process the static part and the dynamic part, respectively. This filter has a linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.

  • 32.
    Ferri, Gabriele
    et al.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mondini, Alessio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Manzi, Alessandro
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mazzolai, Barbara
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Laschi, Cecilia
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mattoli, Virgilio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Reggente, Matteo
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Lettere, Marco
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Dario, Paolo.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    DustCart, a Mobile Robot for Urban Environments: Experiments of Pollution Monitoring and Mapping during Autonomous Navigation in Urban Scenarios2010In: Proceedings of ICRA Workshop on Networked and Mobile Robot Olfaction in Natural, Dynamic Environments, 2010Conference paper (Refereed)
    Abstract [en]

    In the framework of the DustBot European project, aimed at developing a new multi-robot system for urban hygiene management, we have developed a two-wheeled robot: DustCart. DustCart aims at providing a solution to door-to-door garbage collection: the robot, called by a user, navigates autonomously to his/her house, collects the garbage from the user and discharges it in a dedicated area. An additional feature of DustCart is the capability to monitor air pollution by means of an on-board Air Monitoring Module (AMM). The AMM integrates sensors to monitor several atmospheric pollutants, such as carbon monoxide (CO), particulate matter (PM10), nitrogen dioxide (NO2) and ozone (O3), plus temperature (T) and relative humidity (rHu). An Ambient Intelligence platform (AmI) manages the robots' operations through a wireless connection. The AmI is able to collect measurements taken by different robots and to process them to create a pollution distribution map. In this paper we describe the DustCart robot system, focusing on the AMM and on the process of creating the pollutant distribution maps. We report results of experiments with one DustCart robot moving in urban scenarios and producing gas distribution maps using the Kernel DM+V algorithm. These experiments can be considered one of the first attempts to use robots as mobile monitoring devices that complement the traditional fixed stations.
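    The Kernel DM+V algorithm mentioned in the abstract builds a grid map of per-cell gas concentration statistics by spatially extrapolating point measurements with a Gaussian kernel. A heavily simplified sketch of that idea (omitting the confidence map and kernel cutoff radius of the real algorithm; function and parameter names are ours):

    ```python
    import numpy as np

    def kernel_dm_v(positions, readings, grid_x, grid_y, sigma=1.0):
        """Kernel-weighted mean and variance maps from point gas readings.

        Each measurement contributes to every grid cell with a Gaussian
        weight based on the distance between cell and measurement pose."""
        gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
        wsum = np.zeros_like(gx, dtype=float)
        wr = np.zeros_like(gx, dtype=float)
        weights = []
        for (px, py), r in zip(positions, readings):
            w = np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2.0 * sigma ** 2))
            weights.append(w)
            wsum += w
            wr += w * r
        mean = wr / np.maximum(wsum, 1e-12)
        var = np.zeros_like(gx, dtype=float)
        for w, r in zip(weights, readings):
            var += w * (r - mean) ** 2
        var /= np.maximum(wsum, 1e-12)
        return mean, var

    # Toy robot track with constant readings: the mean map is flat and the
    # variance map is (numerically) zero everywhere.
    positions = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
    readings = [5.0, 5.0, 5.0]
    grid = np.linspace(0.0, 2.0, 5)
    mean, var = kernel_dm_v(positions, readings, grid, grid)
    ```

    The variance map is what makes such maps useful for monitoring: cells whose readings disagree over time flag transient gas plumes rather than stable sources.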

  • 33.
    Gabellieri, Chiara
    et al.
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Palleschi, Alessandro
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Mannucci, Anna
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Pierallini, Michele
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Stefanini, Elisa
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Catalano, Manuel G.
    Istituto Italiano di Tecnologia, Genova GE, Italy.
    Caporale, Danilo
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Settimi, Alessandro
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Garabini, Manolo
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Pallottino, Lucia
    Centro di Ricerca “E. Piaggio” e Departimento di Ingnegneria dell’Informazione, Università di Pisa, Pisa, Italia.
    Towards an Autonomous Unwrapping System for Intralogistics2019In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 4, p. 4603-4610Article in journal (Refereed)
    Abstract [en]

    Warehouse logistics is a rapidly growing market for robots. However, one key procedure that has not received much attention is the unwrapping of pallets to prepare them for object picking. In fact, to prevent the goods from falling and to protect them, pallets are normally wrapped in plastic when they enter the warehouse. Currently, unwrapping is mainly performed by human operators, due to the complexity of its planning and control phases. Autonomous solutions exist, but they are usually designed for specific situations, require a large footprint and offer low flexibility. In this work, we propose a novel integrated robotic solution for autonomous plastic film removal relying on an impedance-controlled robot. The main contribution is twofold: on one side, we discuss a strategy to plan the Cartesian impedance and trajectory to execute the cut without damaging the goods; on the other side, we present a cutting device that we designed for this purpose. The proposed solution offers high versatility and requires only a reduced footprint, thanks to the adopted technologies and the integration with a mobile base. Experimental results are shown to validate the proposed approach.

  • 34.
    Grosinger, Jasmin
    et al.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Making Robots Proactive through Equilibrium Maintenance2016In: 25th International Joint Conference on Artificial Intelligence, 2016Conference paper (Refereed)
  • 35.
    Gürpinar, Cemal
    et al.
    Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey.
    Uluer, Pinar
    Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey; Faculty of Engineering and Technology, Galatasaray University, Istanbul, Turkey.
    Akalin, Neziha
    Örebro University, School of Science and Technology.
    Köse, Hatice
    Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey.
    Sign Recognition System for an Assistive Robot Sign Tutor for Children2019In: International Journal of Social Robotics, ISSN 1875-4791, p. 1-15Article in journal (Refereed)
    Abstract [en]

    This paper presents a sign recognition system for a sign tutoring assistive humanoid robot. In this study, a specially designed 5-fingered robot platform with an expressive face (Robovie R3) is used for interaction and communication with deaf or hard-of-hearing children using signs and visual cues. The robot is able to accurately recognize and generate a selected set of signs from Turkish sign language using various hand, arm and head gestures as relevant feedback. This paper focuses on the sign recognition system that allows the robot to recognize the human participant's signing during the interaction. The system is based on two different approaches: a conventional method involving an artificial neural network combined with a hidden Markov model, and a deep learning based method involving long short-term memory. The system is tested in both offline and real-time settings within an interaction game scenario with deaf or hard-of-hearing children. During the study, besides testing the sign recognition system, participants' subjective evaluations and impressions were also collected and examined. Based on the questionnaires, the robot is perceived as likable and intelligent by the children, and the proposed sign recognition system enables robust real-time interaction and communication of the assistive robot with children in sign language.

  • 36. Hang, Kaiyu
    et al.
    Li, Miao
    Stork, Johannes Andreas
    Bekiroglu, Yasemin
    Billard, Aude
    Kragic, Danica
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps2014Conference paper (Other academic)
  • 37.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Li, Miao
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bekiroglu, Yasemin
    Department of Mechanical Engineering, School of Engineering, University of Birmingham, Birmingham, UK.
    Pokorny, Florian T.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Billard, Aude
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Kragic, Danica
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space: A unified framework for grasp planning and in-hand grasp adaptation, 2016. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no 4, p. 960-972. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile, and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage, and external disturbances. For this purpose we introduce the Hierarchical Fingertip Space as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

  • 38.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space for multi-fingered precision grasping, 2014. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Press, 2014, p. 1641-1648. Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose the concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and establishes a basis for the grasp search space. We propose a model for hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level without neglecting object shape or hand kinematics. Experimental evaluation is performed for the Barrett hand, also considering noisy and incomplete point cloud data.

  • 39.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Combinatorial optimization for hierarchical contact-level grasping, 2014. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, p. 381-388. Conference paper (Refereed)
    Abstract [en]

    We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.
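The multilevel-refinement pattern described in the abstract above can be illustrated with a toy sketch. This is an invented, simplified example under stated assumptions, not the authors' implementation: the "hierarchy" is realized as progressively finer move sizes over a one-dimensional objective, standing in for the coarse-to-fine grasp optimization on a surface.

```python
# Toy illustration of a multilevel refinement metaheuristic: optimize
# coarsely with large moves, then refine with progressively finer ones.
# All function names and the objective are invented for this sketch.

def local_search(x, objective, step):
    """Hill-climb in moves of size `step` until no neighbor improves."""
    improved = True
    while improved:
        improved = False
        for cand in (x - step, x + step):
            if objective(cand) > objective(x):
                x, improved = cand, True
    return x

def multilevel_optimize(objective, x0, levels):
    """Coarse-to-fine: start with large steps, halve the step each level."""
    x = x0
    step = 2 ** levels              # coarsest move size
    while step >= 1:
        x = local_search(x, objective, step)  # refine at this level
        step //= 2
    return x
```

With a unimodal objective, the coarse levels make large jumps toward the optimum cheaply and the fine levels polish the result, mirroring how the paper initializes a grasp at the top of the hierarchy and locally refines it until convergence at each level.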

  • 40.
    Hang, Kaiyu
    et al.
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pollard, Nancy S.
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    Kragic, Danica
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    A Framework For Optimal Grasp Contact Planning, 2017. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 2, p. 704-711. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping for which we define domain specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and resultant effective branching factor.
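The search formulation above can be sketched as follows. This is a hedged illustration under invented assumptions (the toy quality function and all names are not from the paper): states are sets of candidate contacts, the start state contains all of them, each search step removes one contact, and a uniform-cost search finds a minimal-cost path down to a k-contact grasp.

```python
# Uniform-cost search over "supercontact" states (sets of contacts).
# Step cost = grasp quality given up by removing a contact; here the
# quality of a 1D contact set is the span of surface it covers (a toy
# stand-in for a real grasp quality function).

import heapq
from itertools import count

def quality(contacts):
    """Toy quality: the span of object surface the contacts cover."""
    return max(contacts) - min(contacts) if contacts else 0.0

def search_grasp(points, k):
    """Search from the full contact set down to a minimal-cost k-set."""
    start = frozenset(points)
    tie = count()                          # tie-breaker so the heap
    frontier = [(0.0, next(tie), start)]   # never compares frozensets
    seen = set()
    while frontier:
        cost, _, state = heapq.heappop(frontier)
        if state in seen:
            continue
        seen.add(state)
        if len(state) == k:
            return sorted(state)           # minimal-cost k-contact grasp
        for p in state:                    # successor: remove one contact
            child = state - {p}
            step = quality(state) - quality(child)  # quality given up
            heapq.heappush(frontier, (cost + step, next(tie), child))
    return None
```

Since removing a contact can only shrink the covered span, step costs are non-negative and uniform-cost search is admissible; adding a heuristic on the remaining removals would give the A*-style bounded-suboptimal variants the paper discusses.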

  • 41.
    Haustein, Joshua A.
    et al.
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Arnekvist, Isac
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hang, Kaiyu
    GRAB Lab, Yale University, New Haven, USA.
    Kragic, Danica
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Learning Manipulation States and Actions for Efficient Non-prehensile Rearrangement Planning, 2019. Manuscript (preprint) (Other academic)
  • 42.
    Kamarudin, Kamarulzaman
    et al.
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Shakaff, Ali Yeon Md
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Mamduh, Syed Muhammad
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Zakaria, Ammar
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Visvanathan, Retnam
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Yeon, Ahmad Shakaff Ali
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Kamarudin, Latifah Munirah
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Integrating SLAM and gas distribution mapping (SLAM-GDM) for real-time gas source localization, 2018. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no 17, p. 903-917. Article in journal (Refereed)
    Abstract [en]

    Gas distribution mapping (GDM) learns models of the spatial distribution of gas concentrations across 2D/3D environments, for purposes such as localizing gas sources. GDM requires run-time robot positioning in order to associate measurements with locations in a global coordinate frame. Most approaches assume that the robot has perfect knowledge of its position, which does not necessarily hold in realistic scenarios. We argue that a simultaneous localization and mapping (SLAM) algorithm should be used together with GDM to allow operation in an unknown environment. This paper proposes a SLAM-GDM approach that combines Hector SLAM and Kernel DM+V through a map merging technique. We argue that Hector SLAM is suitable for the SLAM-GDM approach since it does not perform loop closure or global corrections, which would otherwise require re-computing the gas distribution map. Real-time experiments were conducted in an environment with single and multiple gas sources. The results showed that the predicted gas source locations were typically accurate to within 0.5-1.5 m for the large indoor area being tested. The results also verified that the proposed SLAM-GDM approach and the designed system achieve real-time operation.
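The gas-mapping side of the pipeline can be sketched in the spirit of Kernel DM+V. This is a deliberate simplification, not the authors' implementation: each concentration measurement contributes to nearby grid cells with a Gaussian kernel weight, and a cell's estimate is the kernel-weighted mean of the measurements.

```python
# Minimal kernel-based gas distribution map: a Nadaraya-Watson style
# weighted mean of point concentration measurements over grid cells.

import math

def gas_map(measurements, grid, sigma=0.5):
    """measurements: [(x, y, concentration)]; grid: [(x, y)] cell centers.
    Returns {cell: estimated concentration}."""
    result = {}
    for cx, cy in grid:
        wsum, csum = 0.0, 0.0
        for mx, my, c in measurements:
            d2 = (cx - mx) ** 2 + (cy - my) ** 2
            w = math.exp(-d2 / (2 * sigma ** 2))  # Gaussian kernel weight
            wsum += w
            csum += w * c
        result[(cx, cy)] = csum / wsum if wsum > 1e-12 else 0.0
    return result
```

In the SLAM-GDM setting, the (x, y) positions of the measurements would come from the SLAM pose estimate rather than being assumed known, which is exactly why the paper argues against SLAM back-ends that retroactively correct past poses.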

  • 43.
    Khaliq, Ali Abdul
    et al.
    Örebro University, School of Science and Technology.
    Köckemann, Uwe
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Bruno, Barbara
    University of Genova, Genova, Italy.
    Recchiuto, Carmine Tommaso
    University of Genova, Genova, Italy.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
    Bui, Ha-Duong
    Japan Advanced Institute of Science and Technology, Ishikawa, Japan.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Ishikawa, Japan.
    Culturally aware Planning and Execution of Robot Actions, 2018. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, p. 326-332. Conference paper (Refereed)
    Abstract [en]

    The way in which humans behave, speak and interact is deeply influenced by their culture. For example, greeting is done differently in France, in Sweden or in Japan; and the average interpersonal distance changes from one cultural group to the other. In order to successfully coexist with humans, robots should also adapt their behavior to the culture, customs and manners of the persons they interact with. In this paper, we deal with an important ingredient of cultural adaptation: how to generate robot plans that respect given cultural preferences, and how to execute them in a way that is sensitive to those preferences. We present initial results in this direction in the context of the CARESSES project, a joint EU-Japan effort to build culturally competent assistive robots.

  • 44.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems, 2014. In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 214-215. Conference paper (Refereed)
    Abstract [en]

    One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.

  • 45.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Mosiello, Giovanni
    Örebro University, School of Science and Technology. Roma Tre University, Rome, Italy.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems, 2014. In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 104. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video we introduce a project that aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, allowing human-robot cooperation for safe driving. It also shows a first implementation of cooperative driving based on extracting a safe drivable area in real time from the image stream received from the robot.

  • 46.
    Kokic, Mia
    et al.
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Haustein, Joshua A.
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Affordance detection for task-specific grasping using deep learning, 2017. In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), IEEE conference proceedings, 2017, p. 91-98. Conference paper (Refereed)
    Abstract [en]

    In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.

  • 47.
    Krishna, Sai
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Repsilber, Dirk
    Örebro University, School of Medical Sciences.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera, 2019. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 14, article id E3142. Article in journal (Refereed)
    Abstract [en]

    Estimating distances between people and robots plays a crucial role in understanding social Human-Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human-robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges the estimation method must address rises with the simplicity of the sensor technology. When estimating distances from individual images of a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method builds on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. It estimates distance with respect to the camera based on the Euclidean distance between the ear and torso of people in the image plane. These characteristic points were selected for their relatively high visibility regardless of a person's orientation and their fairly uniform placement across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
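The geometric intuition behind a method like the one above can be sketched with a pinhole camera model. This is a hedged simplification, not the paper's actual estimator: under a pinhole model, the pixel length of a body segment of roughly known physical length is inversely proportional to its distance from the camera. The focal length and ear-to-torso segment length below are invented placeholder values.

```python
# Pinhole-model distance from the pixel length of a known body segment:
#   pixel_len = focal_px * segment_m / distance
# => distance  = focal_px * segment_m / pixel_len

def estimate_distance(pixel_len, focal_px=600.0, segment_m=0.45):
    """Return the estimated camera-to-person distance in meters.
    focal_px and segment_m are illustrative placeholders; a real system
    would calibrate the camera and fit the segment length from data."""
    if pixel_len <= 0:
        raise ValueError("segment not visible in the image")
    return focal_px * segment_m / pixel_len
```

In practice the keypoints would come from a 2D pose estimator, and the mapping from pixel distance to metric distance would be fit empirically rather than assumed exactly pinhole, which is presumably how the paper accommodates varying body proportions.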

  • 48.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Kragic, Danica
    Centre for Autonomous Systems, Computer Vision and Active Perception Lab, CSC, KTH Stockholm, Stockholm, Sweden.
    Bekiroglu, Yasemin
    School of Mechanical Engineering, University of Birmingham, Birmingham, United Kingdom.
    Analytic Grasp Success Prediction with Tactile Feedback, 2016. In: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, New York, USA: IEEE, 2016, p. 165-171. Conference paper (Refereed)
    Abstract [en]

    Predicting grasp success is useful for avoiding failures in many robotic applications. Based on reasoning in wrench space, we address the question of how well analytic grasp success prediction works if tactile feedback is incorporated. Tactile information can alleviate contact placement uncertainties and facilitates contact modeling. We introduce a wrench-based classifier and evaluate it on a large set of real grasps. The key finding of this work is that exploiting tactile information allows wrench-based reasoning to perform on a level with existing methods based on learning or simulation. Different from these methods, the suggested approach has no need for training data, requires little modeling effort and is computationally efficient. Furthermore, our method affords task generalization by considering the capabilities of the grasping device and expected disturbance forces/moments in a physically meaningful way.
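The wrench-space reasoning mentioned above can be illustrated with a minimal planar check. This is a simplification under invented assumptions, not the paper's classifier: a planar grasp can resist a disturbance in every direction when the origin lies inside the convex hull of the contact wrenches, and for 2D vectors that holds exactly when the largest angular gap between their directions is smaller than pi.

```python
# Minimal 2D "force closure"-style test: is the origin strictly inside
# the convex hull of the given wrench vectors? Equivalent (for nonzero
# planar vectors) to the largest angular gap being smaller than pi.

import math

def origin_in_hull_2d(wrenches):
    """True if the origin is strictly inside the convex hull of 2D vectors."""
    angles = sorted(math.atan2(y, x) for x, y in wrenches)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < math.pi
```

The full 6D wrench-space version used for spatial grasps requires a convex hull or linear-programming test instead of this angular argument, but the question asked is the same: can the contact wrenches positively span all disturbance directions?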

  • 49.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    The Next Step in Robot Commissioning: Autonomous Picking and Palletizing, 2016. In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no 1, p. 546-553. Article in journal (Refereed)
    Abstract [en]

    So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.

  • 50.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Dantam, Neil T.
    Colorado School of Mines, Golden CO, USA.
    Garrett, Caelan
    Massachusetts Institute of Technology, Cambridge MA, USA.
    Akbari, Aliakbar
    Universidad Politécnica de Catalunya, Barcelona, Spain.
    Srivastava, Siddharth
    Arizona State University, Tempe AZ, USA.
    Kavraki, Lydia E.
    Rice University, Houston TX, USA.
    Platform-Independent Benchmarks for Task and Motion Planning, 2018. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 4, p. 3765-3772. Article in journal (Refereed)
    Abstract [en]

    We present the first platform-independent evaluation method for task and motion planning (TAMP). Previously, various problems have been used to test individual planners on specific aspects of TAMP. However, no common set of metrics, formats, and problems has been accepted by the community. We propose a set of benchmark problems covering the challenging aspects of TAMP and a planner-independent specification format for these problems. Our objective is to better evaluate and compare TAMP planners, foster communication and progress within the field, and lay a foundation to better understand this class of planning problems.
