oru.se Publications
1 - 50 of 117
  • 1.
    Agrawal, Vikas
    et al.
    IBM Research, India.
    Archibald, Christopher
    Mississippi State University, Starkville, United States.
    Bhatt, Mehul
    University of Bremen, Bremen, Germany.
    Bui, Hung Hai
    Laboratory for Natural Language Understanding, Sunnyvale CA, United States.
    Cook, Diane J.
    Washington State University, Pullman WA, United States.
    Cortés, Juan
    University of Toulouse, Toulouse, France.
    Geib, Christopher W.
    Drexel University, Philadelphia PA, United States.
    Gogate, Vibhav
    Department of Computer Science, University of Texas, Dallas, United States.
    Guesgen, Hans W.
    Massey University, Palmerston North, New Zealand.
    Jannach, Dietmar
    Technical university Dortmund, Dortmund, Germany.
    Johanson, Michael
    University of Alberta, Edmonton, Canada.
    Kersting, Kristian
    Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS), Sankt Augustin, Germany; The University of Bonn, Bonn, Germany.
    Konidaris, George
    Massachusetts Institute of Technology (MIT), Cambridge MA, United States.
    Kotthoff, Lars
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Michalowski, Martin
    Adventium Labs, Minneapolis MN, United States.
    Natarajan, Sriraam
    Indiana University, Bloomington IN, United States.
    O’Sullivan, Barry
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Pickett, Marc
    Naval Research Laboratory, Washington DC, United States.
    Podobnik, Vedran
    Telecommunication Department of the Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
    Poole, David
    Department of Computer Science, University of British Columbia, Vancouver, Canada.
    Shastri, Lokendra
    Infosys, India.
    Shehu, Amarda
    George Mason University, Washington, United States.
    Sukthankar, Gita
    University of Central Florida, Orlando FL, United States.
    The AAAI-13 Conference Workshops (2013). In: The AI Magazine, ISSN 0738-4602, Vol. 34, no. 4, pp. 108-115. Journal article (Refereed)
    Abstract [en]

    The AAAI-13 Workshop Program, a part of the 27th AAAI Conference on Artificial Intelligence, was held Sunday and Monday, July 14-15, 2013, at the Hyatt Regency Bellevue Hotel in Bellevue, Washington, USA. The program included 12 workshops covering a wide range of topics in artificial intelligence, including Activity Context-Aware System Architectures (WS-13-05); Artificial Intelligence and Robotics Methods in Computational Biology (WS-13-06); Combining Constraint Solving with Mining and Learning (WS-13-07); Computer Poker and Imperfect Information (WS-13-08); Expanding the Boundaries of Health Informatics Using Artificial Intelligence (WS-13-09); Intelligent Robotic Systems (WS-13-10); Intelligent Techniques for Web Personalization and Recommendation (WS-13-11); Learning Rich Representations from Low-Level Sensors (WS-13-12); Plan, Activity, and Intent Recognition (WS-13-13); Space, Time, and Ambient Intelligence (WS-13-14); Trading Agent Design and Analysis (WS-13-15); and Statistical Relational Artificial Intelligence (WS-13-16).

  • 2.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
    Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments (2017). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no. 3, pp. 600-621. Journal article (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.
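    The NDT-TM representation described above fits a Gaussian to the points in each map cell and augments it with extra statistics. A minimal sketch of that per-cell bookkeeping (the cell size, the dictionary layout, and the diagonal-variance shortcut are our assumptions for illustration, not the authors' implementation, which uses full covariances and additional permeability/reflectivity features):

    ```python
    import math
    from collections import defaultdict

    CELL = 1.0  # cell side length in metres (illustrative choice)

    def cell_index(p):
        # Map a 3-D point to the integer index of its cell.
        return tuple(int(math.floor(c / CELL)) for c in p)

    def ndt_cells(points):
        """Group 3-D points into cells; per cell, compute point count,
        mean, and (diagonal) variance as a stand-in for the full
        Gaussian that NDT maps maintain."""
        cells = defaultdict(list)
        for p in points:
            cells[cell_index(p)].append(p)
        stats = {}
        for idx, pts in cells.items():
            n = len(pts)
            mean = [sum(col) / n for col in zip(*pts)]
            var = [sum((c - m) ** 2 for c in col) / n
                   for col, m in zip(zip(*pts), mean)]
            stats[idx] = {"n": n, "mean": mean, "var": var}
        return stats

    points = [(0.2, 0.2, 0.1), (0.4, 0.3, 0.2), (1.5, 0.5, 0.0)]
    stats = ndt_cells(points)
    ```

    In the paper, features of this kind (plus permeability and reflectivity statistics) feed a support-vector machine that labels each cell traversable or not.
    
    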

  • 3.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security (2017). In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J.-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, pp. 628-637. Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In experimental validation, we used the Pepper robot, programmed in the way to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot’s non-verbal behaviors from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.

  • 4.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Relevance of Social Cues in Assistive Training with a Social Robot (2018). In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S.; Cabibihan, J.-J.; Salichs, M.A.; Broadbent, E.; He, H.; Wagner, A.; Castro-González, Á., Springer, 2018, pp. 462-471. Conference paper (Refereed)
    Abstract [en]

    This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor a robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis in determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.

  • 5.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Evaluating the Sense of Safety and Security in Human-Robot Interaction with Older People (2019). In: Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction / [ed] Oliver Korn, Springer, 2019, pp. 237-264. Book chapter (Refereed)
    Abstract [en]

    For many applications where interaction between robots and older people takes place, safety and security are key dimensions to consider. ‘Safety’ refers to a perceived threat of physical harm, whereas ‘security’ is a broad term which refers to many aspects related to health, well-being, and aging. This chapter presents a quantitative evaluation tool of the sense of safety and security for robots in elder care. By investigating the literature on measurement of safety and security in human–robot interaction, we propose new evaluation tools specially tailored to assess interaction between robots and older people.

  • 6.
    Akalin, Neziha
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Influence of Feedback Type in Robot-Assisted Training (2019). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no. 4. Journal article (Refereed)
    Abstract [en]

    Robot-assisted training, where social robots can be used as motivational coaches, provides an interesting application area. This paper examines how feedback given by a robot agent influences the various facets of participant experience in robot-assisted training. Specifically, we investigated the effects of feedback type on robot acceptance, sense of safety and security, attitude towards robots and task performance. In the experiment, 23 older participants performed basic arm exercises with a social robot as a guide and received feedback. Different feedback conditions were administered, such as flattering, positive and negative feedback. Our results suggest that the robot with flattering and positive feedback was appreciated by older people in general, even if the feedback did not necessarily correspond to objective measures such as performance. Participants in these groups felt better about the interaction and the robot.

  • 7.
    Akbari, Aliakbar
    et al.
    Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya (UPC)—Barcelona Tech, Barcelona, Spain.
    Lagriffoul, Fabien
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Rosell, Jan
    Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya (UPC)—Barcelona Tech, Barcelona, Spain.
    Combined heuristic task and motion planning for bi-manual robots (2019). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 43, no. 6, pp. 1575-1590. Journal article (Refereed)
    Abstract [en]

    Planning efficiently at task and motion levels allows tackling new challenges in robotic manipulation, for instance constrained table-top problems for bi-manual robots. In this scope, the appropriate combination of task and motion planning levels plays an important role. Accordingly, a heuristic-based task and motion planning approach is proposed, in which the computation of the heuristic addresses a geometrically relaxed problem, i.e., it only reasons upon object placements, grasp poses, and inverse kinematics solutions. Motion paths are evaluated lazily, i.e., only after an action has been selected by the heuristic. This reduces the number of calls to the motion planner, while backtracking is reduced because the heuristic captures most of the geometric constraints. The approach has been validated in simulation and on a real robot, with different classes of table-top manipulation problems. Empirical comparison with recent approaches solving similar problems is also reported, showing that the proposed approach results in significant improvement both in terms of planning time and success rate.

  • 8.
    Almqvist, Håkan
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kucner, Tomasz Piotr
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Learning to detect misaligned point clouds (2018). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no. 5, pp. 662-677. Journal article (Refereed)
    Abstract [en]

    Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible because no matching algorithms exist today that can provide any certain methods for detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.
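    One of the simplest geometric-consistency signals for a point-cloud pair is the residual distance after registration. The sketch below shows that idea only; the article evaluates and combines several more sophisticated methods, and the threshold and function names here are invented for illustration:

    ```python
    import math

    def mean_nn_distance(cloud_a, cloud_b):
        """Mean distance from each point in cloud_a to its nearest
        neighbour in cloud_b (brute force; a real implementation
        would use a k-d tree)."""
        total = 0.0
        for p in cloud_a:
            total += min(math.dist(p, q) for q in cloud_b)
        return total / len(cloud_a)

    def aligned(cloud_a, cloud_b, threshold=0.1):
        # Hypothetical decision rule: small residual => aligned.
        return mean_nn_distance(cloud_a, cloud_b) < threshold
    ```

    A learned classifier, as in the paper, replaces the fixed threshold with a decision trained on labelled aligned/misaligned pairs.
    
    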

  • 9.
    Amigoni, Francesco
    et al.
    Politecnico di Milano, Milan, Italy.
    Yu, Wonpil
    Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea.
    Andre, Torsten
    University of Klagenfurt, Klagenfurt, Austria.
    Holz, Dirk
    University of Bonn, Bonn, Germany.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Matteucci, Matteo
    Politecnico di Milano, Milan, Italy.
    Moon, Hyungpil
    Sungkyunkwan University, Suwon, South Korea.
    Yokozuka, Masashi
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Biggs, Geoffrey
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Madhavan, Raj
    Amrita University, Clarksburg MD, United States of America.
    A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots (2018). In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 25, no. 1, pp. 65-76. Journal article (Refereed)
    Abstract [en]

    The availability of environment maps for autonomous robots enables them to complete several tasks. A new IEEE standard, IEEE 1873-2015, Robot Map Data Representation for Navigation (MDR) [15], sponsored by the IEEE Robotics and Automation Society (RAS) and approved by the IEEE Standards Association Standards Board in September 2015, defines a common representation for two-dimensional (2-D) robot maps and is intended to facilitate interoperability among navigating robots. The standard defines an extensible markup language (XML) data format for exchanging maps between different systems. This article illustrates how metric maps, topological maps, and their combinations can be represented according to the standard.
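    The idea of exchanging maps as XML can be sketched with the standard library. Note that the element and attribute names below are invented for this example; the actual MDR schema is defined by the IEEE 1873-2015 document itself and is not reproduced here:

    ```python
    import xml.etree.ElementTree as ET

    def grid_map_xml(width, height, resolution, cells):
        """Serialize a 2-D occupancy grid as a small XML document
        (illustrative element names, not the MDR schema)."""
        root = ET.Element("map", type="grid")
        meta = ET.SubElement(root, "metadata")
        ET.SubElement(meta, "resolution").text = str(resolution)
        grid = ET.SubElement(root, "cells",
                             width=str(width), height=str(height))
        grid.text = " ".join(str(c) for c in cells)
        return ET.tostring(root, encoding="unicode")

    xml_str = grid_map_xml(2, 2, 0.05, [0, 1, 1, 0])
    ```

    Any consumer that parses the agreed schema can then reconstruct the grid, which is the interoperability point the standard addresses.
    
    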

  • 10.
    Antonova, Rika
    et al.
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kokic, Mia
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation (2018). In: Proceedings of Machine Learning Research: Conference on Robot Learning 2018, PMLR, 2018, Vol. 87, pp. 641-650. Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
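    The alternation idea can be caricatured in a few lines: each iteration, a Bernoulli draw decides whether the next query comes from an "informed" score (learned in simulation) or from uninformed exploration, so a misleading simulation prior cannot dominate the search. This is a toy reading of the abstract, not the authors' Bayesian-optimization code; all names and the discrete candidate set are our assumptions:

    ```python
    import random

    def bo_alternate(objective, candidates, informed_score,
                     iters=20, p=0.5, seed=0):
        """Alternate between an informed proposal and uninformed
        exploration; return the best candidate observed."""
        rng = random.Random(seed)
        observed = {}
        for _ in range(iters):
            if rng.random() < p:
                # informed proposal: trust the simulation-trained score
                x = max(candidates, key=informed_score)
            else:
                # uninformed proposal: explore an unobserved candidate
                unseen = [c for c in candidates if c not in observed]
                x = rng.choice(unseen) if unseen else rng.choice(candidates)
            observed[x] = objective(x)
        return max(observed, key=observed.get)

    # Misleading informed score (favours 0) vs. true optimum at 3:
    best = bo_alternate(lambda x: -(x - 3) ** 2, list(range(10)),
                        informed_score=lambda x: -x)
    ```

    In the real method both proposals are Gaussian-process surrogates with different kernels; the Bernoulli switch is the part this sketch keeps.
    
    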

  • 11.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik. Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2019). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A.; Althoefer, K.; Arai, F.; Arrichiello, F.; Caputo, B.; Castellanos, J.; Hauser, K.; Isler, V.; Kim, J.; Liu, H.; Oh, P.; Santos, V.; Scaramuzza, D.; Ude, A.; Voyles, R.; Yamane, K.; Okamura, A., IEEE, 2019, pp. 36-42. Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning methods are capable of solving complex problems, but resulting policies might perform poorly in environments that are even slightly different. In robotics especially, training and deployment conditions often vary, and data collection is expensive, making retraining undesirable. Simulation training allows for feasible training times but suffers from a reality gap when applied in real-world settings. This raises the need for efficient adaptation of policies acting in new environments.

    We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that can adapt given different values of this latent variable. Our method learns both the generative mapping and an approximate posterior of the latent variables, enabling identification of policies for new tasks by searching only in the latent space, rather than the space of all policies. The low-dimensional space, and master policy found by our method enables policies to quickly adapt to new environments. We demonstrate the method on both a pendulum swing-up task in simulation, and for simulation-to-real transfer on a pushing task.
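    The core benefit claimed above is that adaptation searches a low-dimensional latent space instead of the space of all policies. A toy version of that search (our reading of the abstract; the actual method learns a generative model of Q-functions, which this sketch replaces with a hypothetical parametrised policy family):

    ```python
    import random

    def adapt_in_latent_space(reward_fn, policy_family,
                              n_samples=200, seed=1):
        """Sample 1-D latent values and keep the one whose induced
        policy earns the highest reward on the new task."""
        rng = random.Random(seed)
        best_z, best_r = None, float("-inf")
        for _ in range(n_samples):
            z = rng.uniform(-1.0, 1.0)          # latent variable
            r = reward_fn(policy_family(z))
            if r > best_r:
                best_z, best_r = z, r
        return best_z

    # Hypothetical task: the environment rewards actions close to 0.3.
    policy_family = lambda z: (lambda obs: z)   # constant-action policies
    reward_fn = lambda policy: -abs(policy(None) - 0.3)
    z_star = adapt_in_latent_space(reward_fn, policy_family)
    ```

    Random search over one latent dimension converges quickly here; the paper instead uses the learned approximate posterior to guide this identification step.
    
    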

  • 12.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2018). Manuscript (preprint) (Other academic)
  • 13.
    Asadi, Sahar
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Fan, Han
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Hernandez Bennetts, Victor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Time-dependent gas distribution modelling (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, pp. 157-170. Journal article (Refereed)
    Abstract [en]

    Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used for, e.g., air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time-constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot, for example, model evolving gas plumes well, or major changes in gas dispersion due to a sudden change in environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time-dependency and a relation to a time-scale in generating the gas distribution model, either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as in several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative for deriving an estimate of the current gas distribution, as long as sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM under different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.
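    The recency-weighting idea is easy to sketch: each measurement contributes with a spatial kernel weight multiplied by a weight that decays with the age of the measurement. This is a minimal reading of the description above; the kernel shapes, parameter names, and the simple weighted average are illustrative, not the TD Kernel DM+V implementation:

    ```python
    import math

    def td_estimate(query_pos, t_query, measurements,
                    sigma=1.0, tau=60.0):
        """Recency-weighted kernel estimate of gas concentration.

        measurements: list of (position, time, concentration)."""
        num = den = 0.0
        for pos, t, conc in measurements:
            w_space = math.exp(-math.dist(pos, query_pos) ** 2
                               / (2 * sigma ** 2))
            w_time = math.exp(-(t_query - t) / tau)  # recency weight
            w = w_space * w_time
            num += w * conc
            den += w
        return num / den if den else None

    meas = [((0.0, 0.0), 0.0, 10.0), ((0.0, 0.0), 100.0, 2.0)]
    # At the same location, the recent reading dominates the estimate:
    est = td_estimate((0.0, 0.0), 100.0, meas)
    ```

    With `tau` large the estimate approaches the time-invariant average; with `tau` small it tracks the most recent measurements, which is the trade-off the paper's time-scale parameter controls.
    
    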

  • 14.
    Bacciu, Davide
    et al.
    Università di Pisa, Pisa, Italy.
    Di Rocco, Maurizio
    Örebro University, Örebro, Sweden.
    Dragone, Mauro
    Heriot-Watt University, Edinburgh, UK.
    Gallicchio, Claudio
    Università di Pisa, Pisa, Italy.
    Micheli, Alessio
    Università di Pisa, Pisa, Italy.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    An ambient intelligence approach for learning in smart robotic environments (2019). In: Computational intelligence, ISSN 0824-7935, E-ISSN 1467-8640, Vol. 35, no. 4, pp. 1060-1087. Journal article (Refereed)
    Abstract [en]

    Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. To reduce the amount of preparation and preprogramming required for their deployment in real-world applications, it is important to make these systems self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to proactively and smoothly adapt to subtle changes in the environment and in the habits and preferences of their user(s), in presence of appropriately defined performance measuring functions.

  • 15.
    Behrens, Jan Kristof
    et al.
    Robert Bosch GmbH, Corporate Research, Renningen, Germany.
    Lange, Ralph
    Robert Bosch GmbH, Corporate Research, Renningen, Germany.
    Mansouri, Masoumeh
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    A Constraint Programming Approach to Simultaneous Task Allocation and Motion Scheduling for Industrial Dual-Arm Manipulation Tasks (2019). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A.; Althoefer, K.; Arai, F.; Arrichiello, F.; Caputo, B.; Castellanos, J.; Hauser, K.; Isler, V.; Kim, J.; Liu, H.; Oh, P.; Santos, V.; Scaramuzza, D.; Ude, A.; Voyles, R.; Yamane, K.; Okamura, A., IEEE, 2019, pp. 8705-8711. Conference paper (Refereed)
    Abstract [en]

    Modern lightweight dual-arm robots bring the physical capabilities to quickly take over tasks at typical industrial workplaces designed for workers. Low setup times, including the time to instruct and specify new tasks, are crucial to stay competitive. We propose a constraint programming approach to simultaneous task allocation and motion scheduling for such industrial manipulation and assembly tasks. Our approach covers the robot as well as connected machines. The key concept is Ordered Visiting Constraints, a descriptive and extensible model to specify such tasks with their spatiotemporal requirements and combinatorial or ordering constraints. Our solver integrates such task models and robot motion models into constraint optimization problems and solves them efficiently using various heuristics to produce makespan-optimized robot programs. For large manipulation tasks with 200 objects, our solver implemented using Google's Operations Research tools requires less than a minute to compute usable plans. The proposed task model is robot-independent and can easily be deployed to other robotic platforms. This portability is validated through several simulation-based experiments.
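    At its smallest, simultaneous task allocation and scheduling is a combinatorial choice of which arm does what, minimising the makespan. The brute-force toy below shows only that core (the paper uses constraint programming with Ordered Visiting Constraints and full motion models; the task names, durations, and sequential two-arm model here are invented):

    ```python
    from itertools import product

    def best_allocation(task_durations):
        """Assign each task to one of two arms and minimise the
        makespan, assuming each arm runs its tasks sequentially."""
        tasks = list(task_durations)
        best = (float("inf"), None)
        for assignment in product((0, 1), repeat=len(tasks)):
            loads = [0.0, 0.0]
            for task, arm in zip(tasks, assignment):
                loads[arm] += task_durations[task]
            makespan = max(loads)
            if makespan < best[0]:
                best = (makespan, dict(zip(tasks, assignment)))
        return best

    makespan, plan = best_allocation({"pick_a": 3, "pick_b": 2,
                                      "insert": 4})
    ```

    A constraint solver explores the same space with propagation and heuristics instead of enumeration, which is what makes the 200-object instances reported above tractable.
    
    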

  • 16.
    Bekiroglu, Yasemin
    et al.
    School of Mechanical Engineering, University of Birmingham, Birmingham, UK.
    Damianou, Andreas
    Department of Computer Science, University of Sheffield, Sheffield, UK.
    Detry, Renaud
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Stork, Johannes Andreas
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Kragic, Danica
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Ek, Carl Henrik
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Probabilistic consolidation of grasp experience (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, pp. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 17.
    Bhatt, Mehul
    et al.
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems (2009). In: International Journal of Robotics and Automation, ISSN 0826-8185, Vol. 24, no. 3, pp. 235-244. Journal article (Refereed)
    Abstract [en]

    Ambient intelligence environments necessitate representing and reasoning about dynamic spatial scenes and configurations. The ability to perform predictive and explanatory analyses of spatial scenes is crucial to serving a useful intelligent function within such environments. We present a formal qualitative model that combines existing qualitative theories about space with a formal logic-based calculus suited to modelling dynamic environments, or reasoning about action and change in general. With this approach, it is possible to represent and reason about arbitrary dynamic spatial environments within a unified framework. We clarify and elaborate on our ideas with examples grounded in a smart environment.

  • 18.
    Bhatt, Mehul
    et al.
    Department of Computer Science, La Trobe University, Melbourne, Australia.
    Loke, Seng
    Department of Computer Science, La Trobe University, Melbourne, Australia.
    Modelling Dynamic Spatial Systems in the Situation Calculus (2008). In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252, Vol. 8, no. 1-2, pp. 86-130. Journal article (Refereed)
    Abstract [en]

    We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of "qualitative spatial calculi" that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of the key fundamental epistemological issues concerning the frame and the ramification problems that arise whilst modelling change within such domains. The main advantage of the proposed approach is that based on the structure and semantics of the proposed framework, fundamental reasoning tasks such as projection and explanation directly follow. Within the specialised spatial reasoning domain, these translate to spatial planning/re-configuration, causal explanation and spatial simulation. Our approach is based on the hypothesis that alternate formalisations of existing qualitative spatial calculi using high-level tools such as the situation calculus are essential for their utilisation in diverse application domains such as intelligent systems, cognitive robotics and event-based GIS.

  • 19.
    Bhatt, Mehul
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Suchan, Jakob
    University of Bremen, Bremen, Germany.
    Vardarajan, Srikrishna
    CoDesign Lab EU.
Deep Semantics for Explainable Visuospatial Intelligence: Perspectives on Integrating Commonsense Spatial Abstractions and Low-Level Neural Features (2019). In: Proceedings of the 2019 International Workshop on Neural-Symbolic Learning and Reasoning: Annual workshop of the Neural-Symbolic Learning and Reasoning Association / [ed] Derek Doran; Artur d'Avila Garcez; Freddy Lecue, 2019. Conference paper (Refereed)
    Abstract [en]

High-level semantic interpretation of (dynamic) visual imagery calls for general and systematic methods integrating techniques in knowledge representation and computer vision. Towards this, we position "deep semantics", denoting the existence of declarative models (e.g., pertaining to "space and motion") and corresponding formalisation and methods supporting (domain-independent) explainability capabilities such as semantic question-answering, relational (and relationally-driven) visuospatial learning, and (non-monotonic) visuospatial abduction. Rooted in recent work, we summarise and report the status quo on deep visuospatial semantics, and our approach to neurosymbolic integration and explainable visuospatial computing in that context, with developed methods and tools in diverse settings such as behavioural research in psychology, art & social sciences, and autonomous driving.

  • 20.
    Bhattacharyya, Subhajit
    et al.
    ECE Department, Mallabhum Institute of Technology, West Bengal, India.
    Chakraborty, Subham
    CSE Department, Mallabhum Institute of Technology, West Bengal, India.
Reconstruction of Human Faces from Its Eigenfaces (2014). In: International Journal of Advanced Research In Computer Science and Software Engineering, ISSN 2277-6451, E-ISSN 2277-128X, Vol. 4, no. 1, pp. 209-215. Journal article (Refereed)
    Abstract [en]

Eigenface or Principal Components Analysis (PCA) methods have demonstrated their success in face recognition, detection and tracking. In this paper we use this concept to reconstruct or represent a face as a linear combination of a set of basis images. The basis images are nothing but the eigenfaces. The idea is similar to representing a signal as a linear combination of complex sinusoids, as in the Fourier series. The main advantage is that the number of eigenfaces required is smaller than the number of face images in the database. The selection of the number of eigenfaces is therefore important. Here we investigate the minimum number of eigenfaces required for faithful reconstruction of a face image.
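The reconstruction the abstract describes can be sketched with plain PCA: a face is approximated as the mean face plus a weighted sum of the top-k eigenfaces. A minimal NumPy sketch, where the face data and the choice of k are made up for illustration and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 40 face images of 32x32 pixels, flattened to vectors.
faces = rng.random((40, 32 * 32))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces = principal components of the centered data, obtained via SVD.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                  # number of eigenfaces kept (fewer than the 40 images)
eigenfaces = vt[:k]     # each row is one eigenface (an orthonormal basis image)

# Reconstruct a face as mean + linear combination of the k eigenfaces.
weights = eigenfaces @ (faces[0] - mean_face)
reconstruction = mean_face + weights @ eigenfaces

# Residual error shrinks as k grows; with k = 40 it would be (numerically) zero.
error = np.linalg.norm(faces[0] - reconstruction)
```

Because the eigenfaces are orthonormal, projecting onto more of them can only reduce the residual, which is the trade-off the paper studies.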

  • 21.
    Bruno, Barbara
    et al.
    University of Genova, Genova, Italy.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kamide, Hiroko
    Nagoya University, Nagoya, Japan.
    Kanoria, Sanjeev
Advinia Health Care Ltd, London, UK.
    Lee, Jaeryoung
    Chubu University, Kasugai, Japan.
    Lim, Yuto
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kumar Pandey, Amit
    SoftBank Robotics.
    Papadopoulos, Chris
    University of Bedfordshire, Luton, UK.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, London, UK.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
Paving the Way for Culturally Competent Robots: a Position Paper (2017). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A; Suzuki, K; Zollo, L, New York: Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 553-560. Conference paper (Refereed)
    Abstract [en]

Cultural competence is a well known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state-of-the-art in culturally competent robotics, we identify the key robot capabilities which enable culturally competent behaviours and discuss methodologies for their development and evaluation.

  • 22.
    Bruno, Barbara
    et al.
    University of Genoa, Genoa, Italy.
    Recchiuto, Carmine Tommaso
    University of Genoa, Genoa, Italy.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, The Burroughs, Hendon, London, UK.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Koulouglioti, Christina
    Middlesex University Higher Education Corporation, The Burroughs, Hendon, London, UK.
    Menicatti, Roberto
    University of Genoa, Genoa, Italy.
    Mastrogiovanni, Fulvio
    University of Genoa, Genoa, Italy.
Zaccaria, Renato
    University of Genoa, Genoa, Italy.
    Sgorbissa, Antonio
    University of Genoa, Genoa, Italy.
Knowledge Representation for Culturally Competent Personal Robots: Requirements, Design Principles, Implementation, and Assessment (2019). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 11, no. 3, pp. 515-538. Journal article (Refereed)
    Abstract [en]

Culture, intended as the set of beliefs, values, ideas, language, norms and customs which compose a person's life, is an essential element for any robot for personal assistance to know. Culture, intended as that person's background, can be an invaluable source of information to drive and speed up the process of discovering and adapting to the person's habits, preferences and needs. This article discusses the requirements posed by cultural competence on the knowledge management system of a robot. We propose a framework for cultural knowledge representation that relies on (i) a three-layer ontology for storing concepts of relevance, culture-specific information and statistics, and person-specific information and preferences; (ii) an algorithm for the acquisition of person-specific knowledge, which uses culture-specific knowledge to drive the search; (iii) a Bayesian network for speeding up the adaptation to the person by propagating the effects of acquiring one specific piece of information onto interconnected concepts. We have conducted a preliminary evaluation of the framework involving 159 Italian and German volunteers and considering 122 habits, attitudes and social norms.
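As a toy illustration of the propagation idea behind such a network (not the paper's actual model or variables), observing one habit can update the belief over a related, interconnected preference via Bayes' rule; all names and numbers below are invented:

```python
import numpy as np

# Hypothetical binary cultural variable: prior over ("prefers tea", "prefers coffee").
prior = np.array([0.5, 0.5])

# Hypothetical likelihoods: P(observed habit "hot beverage at 5 pm" | preference).
likelihood = np.array([0.8, 0.3])

# Posterior after the observation, normalised to sum to one.
posterior = prior * likelihood
posterior = posterior / posterior.sum()
# Belief in "prefers tea" rises, and in a full network this shift would
# propagate further to other connected concepts.
```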

  • 23.
    Burgues, Javier
    et al.
    Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology, Barcelona, Spain; Department of Electronics and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain.
    Hernandez Bennetts, Victor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Marco, Santiago
    Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology, Barcelona, Spain; Department of Electronics and Biomedical Engineering, Universitat de Barcelona, Barcelona, Spain.
Gas Distribution Mapping and Source Localization Using a 3D Grid of Metal Oxide Semiconductor Sensors (2020). In: Sensors and Actuators B: Chemical, ISSN 0925-4005, E-ISSN 1873-3077, Vol. 304, article id 127309. Journal article (Refereed)
    Abstract [en]

    The difficulty to obtain ground truth (i.e. empirical evidence) about how a gas disperses in an environment is one of the major hurdles in the field of mobile robotic olfaction (MRO), impairing our ability to develop efficient gas source localization strategies and to validate gas distribution maps produced by autonomous mobile robots. Previous ground truth measurements of gas dispersion have been mostly based on expensive tracer optical methods or 2D chemical sensor grids deployed only at ground level. With the ever-increasing trend towards gas-sensitive aerial robots, 3D measurements of gas dispersion become necessary to characterize the environment these platforms can explore. This paper presents ten different experiments performed with a 3D grid of 27 metal oxide semiconductor (MOX) sensors to visualize the temporal evolution of gas distribution produced by an evaporating ethanol source placed at different locations in an office room, including variations in height, release rate and air flow. We also studied which features of the MOX sensor signals are optimal for predicting the source location, considering different lengths of the measurement window. We found strongly time-varying and counter-intuitive gas distribution patterns that disprove some assumptions commonly held in the MRO field, such as that heavy gases disperse along ground level. Correspondingly, ground-level gas distributions were rarely useful for localizing the gas source and elevated measurements were much more informative. We make the dataset and the code publicly available to enable the community to develop, validate, and compare new approaches related to gas sensing in complex environments.

  • 24.
    Can, Ozan Arkan
    et al.
    Koc University.
    Zuidberg Dos Martires, Pedro
    KU Leuven.
    Persson, Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Gaal, Julian
    Osnabrück University.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    De Raedt, Luc
    KU Leuven.
    Yuret, Deniz
    Koc University.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations (2019). In: Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) / [ed] Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason, Association for Computational Linguistics, 2019, pp. 29-39, article id W19-1604. Conference paper (Refereed)
    Abstract [en]

    Human-robot interaction often occurs in the form of instructions given from a human to a robot. For a robot to successfully follow instructions, a common representation of the world and objects in it should be shared between humans and the robot so that the instructions can be grounded. Achieving this representation can be done via learning, where both the world representation and the language grounding are learned simultaneously. However, in robotics this can be a difficult task due to the cost and scarcity of data. In this paper, we tackle the problem by separately learning the world representation of the robot and the language grounding. While this approach can address the challenges in getting sufficient data, it may give rise to inconsistencies between both learned components. Therefore, we further propose Bayesian learning to resolve such inconsistencies between the natural language grounding and a robot’s world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human. Moreover, we demonstrate the feasibility of our approach on a scenario involving a robotic arm in the physical world.

  • 25.
    Canelhas, Daniel R.
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schaffernicht, Erik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
Compressed Voxel-Based Mapping Using Unsupervised Learning (2017). In: Robotics, E-ISSN 2218-6581, Vol. 6, no. 3, article id 15. Journal article (Refereed)
    Abstract [en]

In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.

  • 26.
    Canelhas, Daniel R.
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no. 2, pp. 1148-1155. Journal article (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.
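For background, the truncated signed distance field (TSDF) named in the abstract stores in each voxel the signed distance to the nearest surface, clamped to a truncation band. A toy 1D sketch with made-up numbers (a single camera ray, not the paper's implementation):

```python
import numpy as np

def tsdf_1d(voxel_depths, surface_depth, truncation):
    """Projective TSDF along a single camera ray.

    Positive in free space in front of the surface, negative behind it,
    clamped to [-1, 1] by the truncation distance.
    """
    sdf = surface_depth - voxel_depths
    return np.clip(sdf / truncation, -1.0, 1.0)

# Voxels every 0.1 m along the ray; the surface is observed at 1.0 m.
voxels = np.arange(0.0, 2.0, 0.1)
field = tsdf_1d(voxels, surface_depth=1.0, truncation=0.3)
# The zero crossing of `field` marks the surface location; features and
# gradients are computed on a full 3D version of such a field.
```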

  • 27.
    Chadalavada, Ravi Teja
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schindler, Maike
    Faculty of Human Sciences, University of Cologne, Germany.
    Palm, Rainer
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction (2020). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 61, article id 101830. Journal article (Refereed)
    Abstract [en]

    Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. 
We discuss implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

  • 28.
    Daoutis, Marios
    Örebro universitet, Institutionen för naturvetenskap och teknik.
Knowledge based perceptual anchoring: grounding percepts to concepts in cognitive robots (2013). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, pp. 1-4. Journal article (Refereed)
    Abstract [en]

    Perceptual anchoring is the process of creating and maintaining a connection between the sensor data corresponding to a physical object and its symbolic description. It is a subset of the symbol grounding problem, introduced by Harnad (Phys. D, Nonlinear Phenom. 42(1–3):335–346, 1990) and investigated over the past years in several disciplines including robotics. This PhD dissertation focuses on a method for grounding sensor data of physical objects to the corresponding semantic descriptions, in the context of cognitive robots where the challenge is to establish the connection between percepts and concepts referring to objects, their relations and properties. We examine how knowledge representation can be used together with an anchoring framework, so as to complement the meaning of percepts while supporting better linguistic interaction with the use of the corresponding concepts. The proposed method addresses the need to represent and process both perceptual and semantic knowledge, often expressed in different abstraction levels, while originating from different modalities. We then focus on the integration of anchoring with a large scale knowledge base system and with perceptual routines. This integration is applied in a number of studies, where in the context of a smart home, several evaluations spanning from spatial and commonsense reasoning to linguistic interaction and concept acquisition.

  • 29.
    Daoutis, Marios
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Coradeschi, Silvia
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
Towards concept anchoring for cognitive robots (2012). In: Intelligent Service Robotics, ISSN 1861-2784, Vol. 5, no. 4, pp. 213-228. Journal article (Refereed)
    Abstract [en]

    We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.

  • 30.
    Della Corte, Bartolomeo
    et al.
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
    Andreasson, Henrik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering “Antonio Ruberti” Sapienza, University of Rome, Rome, Italy.
Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms With Time Delay Estimation (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, pp. 902-909. Journal article (Refereed)
    Abstract [en]

The ability to maintain and continuously update geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it, and their time delays. In this letter, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters, and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In combination with that, our pipeline observes the parameter evolution in long-term operation to detect possible changes in the parameter set. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.
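As a toy illustration of one ingredient of such a pipeline (not the authors' actual formulation), the time delay between two sensor streams observing the same motion can be estimated as the lag that maximises their cross-correlation; the signal and delay below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical motion profile observed by two sensors; the second stream
# is delayed by 15 samples and slightly noisier.
t = np.linspace(0.0, 10.0, 1000)
signal = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)
true_delay = 15
delayed = np.roll(signal, true_delay) + 0.01 * rng.standard_normal(t.size)

# Cross-correlate the mean-removed streams and pick the lag at the peak.
corr = np.correlate(delayed - delayed.mean(), signal - signal.mean(), mode="full")
lags = np.arange(-signal.size + 1, signal.size)
estimated_delay = lags[np.argmax(corr)]  # should land near true_delay
```

In a full calibration pipeline this delay estimate would be refined jointly with the kinematic and extrinsic parameters rather than computed in isolation.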

  • 31.
    Dubba, Krishna Sandeep Reddy
    et al.
    School of Computing, University of Leeds, Leeds, UK.
    Cohn, Anthony G.
    School of Computing, University of Leeds, Leeds, UK.
    Hogg, David C.
    School of Computing, University of Leeds, Leeds, UK.
    Bhatt, Mehul
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
Learning Relational Event Models from Video (2015). In: Journal of Artificial Intelligence Research, ISSN 1076-9757, E-ISSN 1943-5037, Vol. 53, pp. 41-90. Journal article (Refereed)
    Abstract [en]

Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but such techniques have not been successfully applied to the very large datasets which result from video data. In this paper, we present a novel framework REMIND (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning from interpretations setting and using a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over-generalization. Furthermore, we present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework by integrating an abduction step that improves the learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable to real world scenarios.

  • 32.
    Echelmeyer, Wolfgang
    et al.
    University of Reutlingen, Reutlingen, Germany.
    Kirchheim, Alice
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Akbiyik, Hülya
    University of Reutlingen, Reutlingen, Germany.
    Bonini, Marco
    University of Reutlingen, Reutlingen, Germany.
Performance Indicators for Robotics Systems in Logistics Applications (2011). Conference paper (Refereed)
    Abstract [en]

    The transfer of research results to market-ready products is often a costly and time-consuming process. In order to generate successful products, researchers must cooperate with industrial companies; both the industrial and academic partners need to have a detailed understanding of the requirements of all parties concerned. Academic researchers need to identify the performance indicators for technical systems within a business environment and be able to apply them.

In service logistics today, nearly all standardized mass goods are unloaded manually; one reason for this is the undefined position and orientation of the goods in the carrier. A study regarding the qualitative and quantitative properties of goods that are transported in containers shows that there is a huge economic relevance for autonomous systems. In 2008, more than 8.4 billion twenty-foot equivalent units (TEU) were imported and unloaded manually at European ports, corresponding to more than 331,000 billion single goods items.

Besides the economic relevance, the opinion of market participants is an important factor for the success of new systems on the market. The main outcomes of a study regarding the challenges, opportunities and barriers in robotic logistics allow for the estimation of the economic efficiency of performance indicators, performance flexibility and soft factors. The economic efficiency of the performance parameters is applied to the parcel robot, a cognitive system that unloads parcels autonomously from containers. In the following article, the results of the study are presented and the resultant conclusions discussed.

  • 33.
    Efremova, Natalia
    et al.
Plekhanov Russian University, Moscow, Russia.
    Kiselev, Andrey
    Örebro universitet, Institutionen för naturvetenskap och teknik.
Cognitive Architectures for Optimal Remote Image Representation for Driving a Telepresence Robot (2014). Conference paper (Refereed)
  • 34.
    Fan, Hongqi
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik. National Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha, China.
    Kucner, Tomasz Piotr
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Li, Tiancheng
    School of Sciences, University of Salamanca, Salamanca, Spain.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment (2018). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no. 9, pp. 2977-2993. Journal article (Refereed)
    Abstract [en]

Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and the sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in the connection of the idea of dynamic occupancy with the concepts of the phase space density in gas kinetics and the PHD in multiple target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, but has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model considering occlusions are proposed to improve the filtering efficiency further. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states and an ordinary particle filter with a time-varying number of particles in a continuous state space to process the static part and the dynamic part, respectively. This filter has a linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.

  • 35.
    Ferri, Gabriele
    et al.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mondini, Alessio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Manzi, Alessandro
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mazzolai, Barbara
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Laschi, Cecilia
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mattoli, Virgilio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Reggente, Matteo
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lettere, Marco
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Dario, Paolo
    Scuola Superiore Sant'Anna, Pisa, Italy.
    DustCart, a Mobile Robot for Urban Environments: Experiments of Pollution Monitoring and Mapping during Autonomous Navigation in Urban Scenarios (2010). In: Proceedings of ICRA Workshop on Networked and Mobile Robot Olfaction in Natural, Dynamic Environments, 2010. Conference paper (Refereed)
    Abstract [en]

    In the framework of the DustBot European project, aimed at developing a new multi-robot system for urban hygiene management, we have developed a two-wheeled robot: DustCart. DustCart aims at providing a solution to door-to-door garbage collection: the robot, called by a user, navigates autonomously to his/her house, collects the garbage from the user and discharges it in a dedicated area. An additional feature of DustCart is the capability to monitor air pollution by means of an on-board Air Monitoring Module (AMM). The AMM integrates sensors to monitor several atmospheric pollutants, such as carbon monoxide (CO), particulate matter (PM10), nitrogen dioxide (NO2) and ozone (O3), plus temperature (T) and relative humidity (rHu). An Ambient Intelligence (AmI) platform manages the robots' operations through a wireless connection. The AmI platform is able to collect measurements taken by different robots and to process them into a pollution distribution map. In this paper we describe the DustCart robot system, focusing on the AMM and on the process of creating the pollutant distribution maps. We report results of experiments with one DustCart robot moving in urban scenarios and producing gas distribution maps using the Kernel DM+V algorithm. These experiments can be considered one of the first attempts to use robots as mobile monitoring devices that complement traditional fixed stations.
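Kernel DM+V, the gas-mapping algorithm named above, predicts a concentration map by Gaussian-kernel weighting of pose-stamped measurements. A minimal sketch of the mean-map part only (the variance map and the importance weighting of the full algorithm are omitted; `sigma` is the kernel bandwidth and all values are illustrative):

```python
import numpy as np

def kernel_dm_mean(positions, concentrations, cells, sigma=1.0):
    """Gaussian-kernel weighted mean concentration at each query cell.

    positions: (n, 2) measurement locations; concentrations: (n,) readings;
    cells: iterable of 2D cell centres.
    """
    means = []
    for c in cells:
        # weight of each measurement decays with squared distance to the cell
        w = np.exp(-np.sum((positions - c) ** 2, axis=1) / (2.0 * sigma ** 2))
        means.append(float(w @ concentrations) / w.sum())
    return np.array(means)
```

A cell on top of a measurement reproduces that reading; a cell midway between two equally distant measurements averages them.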

  • 36.
    Gabellieri, Chiara
    et al.
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Palleschi, Alessandro
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Mannucci, Anna
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Pierallini, Michele
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Stefanini, Elisa
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Catalano, Manuel G.
    Istituto Italiano di Tecnologia, Genova GE, Italy.
    Caporale, Danilo
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Settimi, Alessandro
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Magnusson, Martin
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Garabini, Manolo
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Pallottino, Lucia
    Centro di Ricerca “E. Piaggio” e Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy.
    Towards an Autonomous Unwrapping System for Intralogistics (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, pp. 4603-4610. Article in journal (Refereed)
    Abstract [en]

    Warehouse logistics is a rapidly growing market for robots. However, one key procedure that has not received much attention is the unwrapping of pallets to prepare them for object picking. To prevent the goods from falling, and to protect them, pallets are normally wrapped in plastic film when they enter the warehouse. Currently, unwrapping is mainly performed by human operators, due to the complexity of its planning and control phases. Autonomous solutions exist, but they are usually designed for specific situations, require a large footprint and offer little flexibility. In this work, we propose a novel integrated robotic solution for autonomous plastic film removal relying on an impedance-controlled robot. The contribution is twofold: on one side, we discuss a strategy to plan the Cartesian impedance and trajectory needed to execute the cut without damaging the goods; on the other, we present a cutting device designed for this purpose. The proposed solution is highly versatile and requires only a reduced footprint, thanks to the adopted technologies and the integration with a mobile base. Experimental results validate the proposed approach.
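The Cartesian impedance behaviour mentioned above reduces, in its simplest form, to a spring-damper law around a reference pose. A minimal sketch with hypothetical gain matrices (the planned stiffness/trajectory profile for safe cutting is the paper's contribution and is not reproduced here):

```python
import numpy as np

def impedance_force(x, v, x_des, K, D):
    """Cartesian impedance law: virtual spring toward the reference, damped.

    x, v: current position and velocity; x_des: reference position;
    K, D: stiffness and damping matrices (illustrative values in the test).
    A low stiffness along the cut-normal direction would let the blade
    yield to the goods instead of pushing into them.
    """
    return K @ (x_des - x) - D @ v
```

With diagonal `K = diag(100, 10)` the controller is ten times stiffer along the first axis than the second, i.e. compliant in one chosen direction.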

  • 37.
    Grosinger, Jasmin
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Making Robots Proactive through Equilibrium Maintenance (2016). In: 25th International Joint Conference on Artificial Intelligence, 2016. Conference paper (Refereed)
  • 38.
    Gürpinar, Cemal
    et al.
    Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey.
    Uluer, Pinar
    Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey; Faculty of Engineering and Technology, Galatasaray University, Istanbul, Turkey.
    Akalin, Neziha
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Köse, Hatice
    Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey.
    Sign Recognition System for an Assistive Robot Sign Tutor for Children (2019). In: International Journal of Social Robotics, ISSN 1875-4791, pp. 1-15. Article in journal (Refereed)
    Abstract [en]

    This paper presents a sign recognition system for a sign tutoring assistive humanoid robot. In this study, a specially designed five-fingered robot platform with an expressive face (Robovie R3) is used for interaction and communication with deaf or hard-of-hearing children using signs and visual cues. The robot is able to accurately recognize and generate a selected set of signs from Turkish sign language, using various hand, arm and head gestures as relevant feedback. This paper focuses on the sign recognition system that allows the robot to recognize the human participant's signing during the interaction. The system is based on two different approaches: a conventional method combining an artificial neural network with a hidden Markov model, and a deep learning method based on long short-term memory networks. The system is tested in both offline and real-time settings within an interaction game scenario with deaf or hard-of-hearing children. Besides testing the sign recognition system, participants' subjective evaluations and impressions were also collected and examined. Based on the questionnaires, the robot is perceived as likable and intelligent by the children, and the proposed sign recognition system enables robust real-time interaction and communication of the assistive robot with children in sign language.
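The HMM-based recognition route named above amounts to scoring a gesture sequence under one model per sign and picking the most likely one. A self-contained sketch with discrete observation symbols and made-up toy models (the paper's models operate on continuous gesture features, not on these two hypothetical signs):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled HMM forward pass: log-likelihood of a discrete symbol sequence.

    pi: initial state distribution; A: state transition matrix;
    B: per-state emission probabilities over symbols.
    """
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return float(ll)

def classify(obs, models):
    """Pick the sign whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

Each model is a `(pi, A, B)` triple; classification is a simple argmax over per-sign log-likelihoods.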

  • 39. Hang, Kaiyu
    et al.
    Li, Miao
    Stork, Johannes Andreas
    Bekiroglu, Yasemin
    Billard, Aude
    Kragic, Danica
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps (2014). Conference paper (Other academic)
  • 40.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Li, Miao
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bekiroglu, Yasemin
    Department of Mechanical Engineering, School of Engineering, University of Birmingham, Birmingham, UK.
    Pokorny, Florian T.
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Billard, Aude
    Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
    Kragic, Danica
    Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space: A unified framework for grasp planning and in-hand grasp adaptation (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 4, pp. 960-972. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile, and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage, and external disturbances. For this purpose we introduce the Hierarchical Fingertip Space as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.
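The abstract's grasp force adaptation step, applied before any regrasping, can be caricatured as keeping each contact inside its friction cone with a safety margin. This is a hypothetical rule for illustration, not the authors' impedance-based controller; `margin` and `f_max` are made-up parameters:

```python
def adapt_grip_force(f_n, f_t, mu, margin=1.2, f_max=20.0):
    """Raise the normal force so the tangential load stays inside the
    friction cone (|f_t| <= mu * f_n) with a safety margin, capped at f_max.

    f_n, f_t: measured normal and tangential contact forces;
    mu: friction coefficient.
    """
    required = margin * abs(f_t) / mu
    return min(max(f_n, required), f_max)
```

When even `f_max` cannot satisfy the cone constraint, a system like the one described above would fall back to regrasping or finger gaiting.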

  • 41.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hierarchical fingertip space for multi-fingered precision grasping (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE Press, 2014, pp. 1641-1648. Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose the concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and it establishes a basis for the grasp search space. We propose a model for a hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand, also considering noisy and incomplete point cloud data.
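A hierarchical encoding of candidate contact regions, as described above, can be sketched as recursive median splits of the object point cloud: coarse patches at the top, near-singleton patches at the bottom. This kd-tree-style split is a stand-in for the paper's Fingertip Space hierarchy, not its actual construction:

```python
import numpy as np

def build_hierarchy(points, depth):
    """Return one list of point-index patches per level.

    Each level splits every patch at the median of its widest axis,
    so patches shrink roughly by half per level (kd-style refinement).
    """
    levels = [[np.arange(len(points))]]
    for _ in range(depth):
        nxt = []
        for patch in levels[-1]:
            if len(patch) <= 1:
                nxt.append(patch)        # singleton: nothing left to split
                continue
            pts = points[patch]
            axis = int(np.argmax(np.ptp(pts, axis=0)))   # widest spread
            med = np.median(pts[:, axis])
            left = patch[pts[:, axis] <= med]
            right = patch[pts[:, axis] > med]
            nxt.extend([p for p in (left, right) if len(p) > 0])
        levels.append(nxt)
    return levels
```

A grasp planner can then search coarse patches first and descend only into the promising ones, which is the multilevel-refinement idea.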

  • 42.
    Hang, Kaiyu
    et al.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pokorny, Florian T.
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Computer Vision and Active Perception Lab, Centre for Autonomous Systems, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden.
    Combinatorial optimization for hierarchical contact-level grasping (2014). In: 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, pp. 381-388. Conference paper (Refereed)
    Abstract [en]

    We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.
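The multilevel-refinement idea above, optimize coarsely, then refine locally at ever finer levels, can be illustrated in one dimension. This is a generic coarse-to-fine sketch, not the grasp-specific metaheuristic; the level count and grid size are arbitrary:

```python
import numpy as np

def multilevel_maximize(f, lo, hi, levels=6, pts=5):
    """Coarse-to-fine maximization of f on [lo, hi].

    Evaluate a small grid, keep the best point as the new centre, halve
    the search span, and repeat: each level is a local refinement of the
    solution found at the coarser level above it.
    """
    center, span = (lo + hi) / 2.0, (hi - lo) / 2.0
    for _ in range(levels):
        xs = np.linspace(center - span, center + span, pts)
        center = xs[int(np.argmax([f(x) for x in xs]))]
        span /= 2.0
    return float(center)
```

On a unimodal objective the incumbent stays within the shrinking window, so the error decreases geometrically with the number of levels, at far fewer evaluations than a single fine grid.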

  • 43.
    Hang, Kaiyu
    et al.
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden.
    Pollard, Nancy S.
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    Kragic, Danica
    Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
    A Framework For Optimal Grasp Contact Planning (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, pp. 704-711. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping for which we define domain specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and resultant effective branching factor.
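The path-finding formulation above relies on heuristic search where admissible heuristics preserve optimality. A generic A* sketch over an abstract neighbor function (the supercontact search space and its domain-specific heuristics are not reproduced here):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search; with an admissible h (never overestimating the remaining
    cost) the returned cost is optimal.

    neighbors(u) yields (successor, edge_cost) pairs; h(u) estimates the
    cost-to-go from u.
    """
    frontier, g, came = [(h(start), start)], {start: 0.0}, {}
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == goal:
            path = [u]
            while path[-1] in came:      # walk parent pointers back
                path.append(came[path[-1]])
            return g[goal], path[::-1]
        for v, c in neighbors(u):
            if g[u] + c < g.get(v, float("inf")):
                g[v] = g[u] + c
                came[v] = u
                heapq.heappush(frontier, (g[v] + h(v), v))
    return float("inf"), []
```

Relaxing admissibility by inflating `h` gives the bounded-suboptimal behaviour the abstract mentions: faster search with a provable factor on the solution cost.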

  • 44.
    Haustein, J A
    et al.
    Royal Institute of Technology, Stockholm, Sweden.
    Arnekvist, I
    Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Royal Institute of Technology, Stockholm, Sweden.
    Hang, K
    Yale University, New Haven, USA.
    Kragic, D
    Royal Institute of Technology, Stockholm, Sweden.
    Non-prehensile Rearrangement Planning with Learned Manipulation States and Actions (2018). Conference paper (Other academic)
    Abstract [en]

    In this work we combine sampling-based motion planning with reinforcement learning and generative modeling to solve non-prehensile rearrangement problems. Our algorithm explores the composite configuration space of objects and robot as a search over robot actions, forward simulated in a physics model. This search is guided by a generative model that provides robot states from which an object can be transported towards a desired state, and a learned policy that provides corresponding robot actions. As an efficient generative model, we apply Generative Adversarial Networks.
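The search structure described above (sample actions, forward-simulate them in physics, let a generative model propose what to try) can be caricatured with a one-dimensional pushing toy. Here the `propose` callable stands in for the learned GAN and `simulate` for the physics engine; both are illustrative stubs:

```python
import random

def plan_pushes(start, target, propose, simulate, tol=0.05, iters=100):
    """Greedy sampling planner over forward-simulated push actions.

    At each step, sample a batch of candidate actions from the proposal
    distribution, simulate each, and commit to the one that brings the
    object state closest to the target.
    """
    state, plan = start, []
    for _ in range(iters):
        if abs(state - target) <= tol:
            break
        candidates = [propose(state, target) for _ in range(8)]
        best = min(candidates, key=lambda a: abs(simulate(state, a) - target))
        state = simulate(state, best)
        plan.append(best)
    return plan
```

Replaying the returned plan through the same simulator reproduces the final state, which is how such open-loop plans are validated.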

  • 45.
    Haustein, J A
    et al.
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Hang, K
    Yale University, New Haven, USA.
    Stork, Johannes Andreas
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kragic, D
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Object placement planning and optimization for robot manipulators. Manuscript (preprint) (Other academic)
    Abstract [en]

    We address the problem of motion planning for a robotic manipulator tasked with placing a grasped object in a cluttered environment. In this task, we need to locate a collision-free pose for the object that (a) facilitates stable placement of the object, (b) is reachable by the robot manipulator, and (c) optimizes a user-given placement objective. Because of the placement objective, this problem is more challenging than classical motion planning, where the target pose is defined from the start. To solve this task, we propose an anytime algorithm that integrates sampling-based motion planning for the robot manipulator with a novel hierarchical search for suitable placement poses. We evaluate our approach on a dual-arm robot for two different placement objectives, and observe its effectiveness even in challenging scenarios.
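The anytime property claimed above means the planner can be interrupted at any moment and still return the best feasible placement found so far. A minimal generator-based sketch (all names are illustrative; the actual algorithm also interleaves hierarchical pose search and arm motion planning):

```python
def anytime_placement(candidates, feasible, objective):
    """Stream candidate placements; after each one, yield the best
    feasible placement seen so far, so callers can stop at any time.

    feasible(p) models collision/reachability checks; objective(p) is
    the user-given placement score to maximize.
    """
    best = None
    for pose in candidates:
        if feasible(pose) and (best is None or objective(pose) > objective(best)):
            best = pose
        yield best
```

The incumbent never gets worse as more candidates are processed, which is exactly the anytime guarantee.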

  • 46.
    Haustein, Joshua A.
    et al.
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Arnekvist, Isac
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Hang, Kaiyu
    GRAB Lab, Yale University, New Haven, USA.
    Kragic, Danica
    Robotics, Perception and Learning Lab (RPL), CAS, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Learning Manipulation States and Actions for Efficient Non-prehensile Rearrangement Planning (2019). Manuscript (preprint) (Other academic)
  • 47.
    Kamarudin, Kamarulzaman
    et al.
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Shakaff, Ali Yeon Md
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Hernandez Bennetts, Victor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mamduh, Syed Muhammad
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Zakaria, Ammar
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Visvanathan, Retnam
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Yeon, Ahmad Shakaff Ali
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Kamarudin, Latifah Munirah
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Integrating SLAM and gas distribution mapping (SLAM-GDM) for real-time gas source localization (2018). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no. 17, pp. 903-917. Article in journal (Refereed)
    Abstract [en]

    Gas distribution mapping (GDM) learns models of the spatial distribution of gas concentrations across 2D/3D environments, among other purposes, for localizing gas sources. GDM requires run-time robot positioning in order to associate measurements with locations in a global coordinate frame. Most approaches assume that the robot has perfect knowledge about its position, which does not necessarily hold in realistic scenarios. We argue that a simultaneous localization and mapping (SLAM) algorithm should be used together with GDM to allow operation in an unknown environment. This paper proposes a SLAM-GDM approach that combines Hector SLAM and Kernel DM+V through a map merging technique. We argue that Hector SLAM is suitable for the SLAM-GDM approach since it does not perform loop closure or global corrections, which would otherwise require re-computing the gas distribution map. Real-time experiments were conducted in an environment with single and multiple gas sources. The results showed that the predictions of the gas source location were, in all trials, often accurate to around 0.5-1.5 m for the large indoor area being tested. The results also verified that the proposed SLAM-GDM approach and the designed system achieve real-time operation.
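At its core, the map merging above means stamping pose-tagged gas readings into a grid that shares the SLAM map's coordinate frame. A crude per-cell-average sketch where the argmax cell serves as the source estimate (the actual system interpolates with Kernel DM+V rather than averaging per cell):

```python
import numpy as np

def merge_gas_map(shape, poses, readings, resolution=1.0):
    """Accumulate (x, y)-tagged gas readings into a grid of the given shape.

    Returns the per-cell mean concentration and the (row, col) index of
    the highest-mean cell, a naive gas source location estimate.
    """
    total, count = np.zeros(shape), np.zeros(shape)
    for (x, y), c in zip(poses, readings):
        i, j = int(y / resolution), int(x / resolution)   # world -> grid
        total[i, j] += c
        count[i, j] += 1.0
    mean = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    return mean, np.unravel_index(int(np.argmax(mean)), shape)
```

Because Hector SLAM applies no global corrections, previously stamped cells never need to be moved, which is the suitability argument made in the abstract.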

  • 48.
    Khaliq, Ali Abdul
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Köckemann, Uwe
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Bruno, Barbara
    University of Genova, Genova, Italy.
    Recchiuto, Carmine Tommaso
    University of Genova, Genova, Italy.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
    Bui, Ha-Duong
    Japan Advanced Institute of Science and Technology, Ishikawa, Japan.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Ishikawa, Japan.
    Culturally aware Planning and Execution of Robot Actions (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 326-332. Conference paper (Refereed)
    Abstract [en]

    The way in which humans behave, speak and interact is deeply influenced by their culture. For example, greeting is done differently in France, in Sweden or in Japan; and the average interpersonal distance changes from one cultural group to the other. In order to successfully coexist with humans, robots should also adapt their behavior to the culture, customs and manners of the persons they interact with. In this paper, we deal with an important ingredient of cultural adaptation: how to generate robot plans that respect given cultural preferences, and how to execute them in a way that is sensitive to those preferences. We present initial results in this direction in the context of the CARESSES project, a joint EU-Japan effort to build culturally competent assistive robots.

  • 49.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, pp. 214-215. Conference paper (Refereed)
    Abstract [en]

    One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.

  • 50.
    Kiselev, Andrey
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Mosiello, Giovanni
    Örebro universitet, Institutionen för naturvetenskap och teknik. Roma Tre University, Rome, Italy.
    Kristoffersson, Annica
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems (2014). In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, pp. 104-104. Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, allowing human-robot cooperation for safe driving. It also shows a first implementation of cooperative driving based on extracting a safe drivable area in real time from the image stream received from the robot.
