oru.se Publications
1 - 34 of 34
  • 1.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    6D scan registration using depth-interpolated local image features (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no. 2, p. 157-165. Article in journal (Refereed)
    Abstract [en]

    This paper describes a novel registration approach that is based on a combination of visual and 3D range information. To identify correspondences, local visual features obtained from images of a standard color camera are compared, and the depth of matching features (and their position covariance) is determined from the range measurements of a 3D laser scanner. The matched depth-interpolated image features allow registration with known correspondences to be applied. We compare several ICP variants in this paper and suggest an extension that considers the spatial distance between matching features to eliminate false correspondences. Experimental results are presented in both outdoor and indoor environments. In addition to pair-wise registration, we also propose a global registration method that registers all scan poses simultaneously.
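
A toy sketch of the step described above: feature matches are first gated by the spatial distance between the paired points (a stand-in for the paper's false-correspondence elimination), and the surviving pairs yield a closed-form rigid transform. This is an illustrative 2-D simplification, not the authors' 6D implementation; the function name and threshold are assumptions.

```python
import math

def register_2d(matches, max_dist=1.0):
    """Estimate a 2D rigid transform (theta, tx, ty) from matched point pairs,
    discarding pairs whose spatial distance exceeds max_dist."""
    kept = [(p, q) for p, q in matches if math.dist(p, q) <= max_dist]
    if not kept:
        raise ValueError("no correspondences survived distance gating")
    n = len(kept)
    # Centroids of the surviving source and target points.
    cx = sum(p[0] for p, _ in kept) / n
    cy = sum(p[1] for p, _ in kept) / n
    dx = sum(q[0] for _, q in kept) / n
    dy = sum(q[1] for _, q in kept) / n
    # Closed-form optimal rotation for the 2D least-squares problem.
    s_sin = sum((p[0] - cx) * (q[1] - dy) - (p[1] - cy) * (q[0] - dx)
                for p, q in kept)
    s_cos = sum((p[0] - cx) * (q[0] - dx) + (p[1] - cy) * (q[1] - dy)
                for p, q in kept)
    theta = math.atan2(s_sin, s_cos)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = dx - (cx * math.cos(theta) - cy * math.sin(theta))
    ty = dy - (cx * math.sin(theta) + cy * math.cos(theta))
    return theta, tx, ty
```

A distant outlier pair is simply dropped by the gate, so it cannot corrupt the least-squares estimate; full ICP would iterate this step, re-matching after each transform update.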

  • 2.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro University, Department of Technology.
    Self-localization in non-stationary environments using omni-directional vision (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 7, p. 541-551. Article in journal (Refereed)
    Abstract [en]

    This paper presents an image-based approach for localization in non-static environments using local feature descriptors, and its experimental evaluation in a large, dynamic, populated environment where the time interval between the collected data sets is up to two months. By using local features together with panoramic images, robustness and invariance to large changes in the environment can be achieved. Results from global place recognition with no evidence accumulation and a Monte Carlo localization method are shown. To test the approach even further, experiments were conducted with up to 90% virtual occlusion in addition to the dynamic changes in the environment.
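
The Monte Carlo localization mentioned above can be sketched as a particle filter over place hypotheses. How the image-similarity likelihood is computed from local features is the paper's contribution; here it is an opaque callback, and the function and parameter names are illustrative.

```python
import random

def mcl_step(particles, match_score, motion=lambda p: p):
    """One Monte Carlo localization update over discrete place hypotheses.

    particles: list of place hypotheses; match_score(place) returns the
    likelihood that the current panoramic view was taken at that place;
    motion(place) applies the (here trivial) motion model.
    """
    moved = [motion(p) for p in particles]
    weights = [match_score(p) for p in moved]
    total = sum(weights)
    if total == 0:
        return moved                       # no information: keep hypotheses
    # Importance resampling: draw particles in proportion to their weight.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating this step concentrates particles on places whose stored features keep matching the incoming images, which is what makes the approach robust to partial occlusion.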

  • 3.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Time-dependent gas distribution modelling (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, p. 157-170. Article in journal (Refereed)
    Abstract [en]

    Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used, e.g., for air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time-constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot model evolving gas plumes well, for example, or major changes in gas dispersion due to a sudden change of the environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time-dependency and a relation to a time-scale in generating the gas distribution model, either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as in several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative for deriving an estimate of the current gas distribution, as long as sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art gas distribution modelling approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM in different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.
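
The recency weighting described above can be sketched as a kernel estimate in which each measurement carries a spatial Gaussian weight multiplied by an exponential recency term, so older readings contribute less. This is a simplified reading, not the authors' TD Kernel DM+V; the function name and the meta-parameters sigma and tau are assumptions.

```python
import math

def predict_concentration(x, samples, sigma=1.0, tau=10.0, t_query=None):
    """Recency-weighted kernel estimate of gas concentration at position x.

    samples: list of ((px, py), concentration, timestamp) tuples.
    Each sample is weighted by a spatial Gaussian kernel of width sigma
    and by a recency factor exp(-(t_query - t_i) / tau).
    """
    if t_query is None:
        t_query = max(t for _, _, t in samples)
    wsum = csum = 0.0
    for (px, py), c, t in samples:
        w_space = math.exp(-((x[0] - px) ** 2 + (x[1] - py) ** 2)
                           / (2 * sigma ** 2))
        w_time = math.exp(-(t_query - t) / tau)
        w = w_space * w_time
        wsum += w
        csum += w * c
    return csum / wsum if wsum > 0 else 0.0
```

With two readings at the same spot, one old and one recent, the estimate is pulled almost entirely toward the recent value, which is the behaviour the sub-sampling comparison in the paper also found advantageous.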

  • 4.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Monitoring the execution of robot plans using semantic knowledge (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 11, p. 942-954. Article in journal (Refereed)
    Abstract [en]

    Even the best laid plans can fail, and robot plans executed in real world domains tend to do so often. The ability of a robot to reliably monitor the execution of plans and detect failures is essential to its performance and its autonomy. In this paper, we propose a technique to increase the reliability of monitoring symbolic robot plans. We use semantic domain knowledge to derive implicit expectations of the execution of actions in the plan, and then match these expectations against observations. We present two realizations of this approach: a crisp one, which assumes deterministic actions and reliable sensing, and uses a standard knowledge representation system (LOOM); and a probabilistic one, which takes into account uncertainty in action effects, in sensing, and in world states. We perform an extensive validation of these realizations through experiments performed both in simulation and on real robots.

  • 5.
    Cielniak, Grzegorz
    et al.
    School of Computer Science, University of Lincoln, Lincoln, United Kingdom.
    Duckett, Tom
    School of Computer Science, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Data association and occlusion handling for vision-based people tracking by mobile robots (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no. 5, p. 435-443. Article in journal (Refereed)
    Abstract [en]

    This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components using several real world data sets.

  • 6.
    Coradeschi, Silvia
    et al.
    Örebro University, Department of Technology.
    Saffiotti, Alessandro
    Örebro University, Department of Technology.
    An introduction to the anchoring problem (2003). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 43, no. 2-3, p. 85-96. Article in journal (Refereed)
    Abstract [en]

    Anchoring is the problem of connecting, inside an artificial system, symbols and sensor data that refer to the same physical objects in the external world. This problem needs to be solved in any robotic system that incorporates a symbolic component. However, it is only recently that the anchoring problem has started to be addressed as a problem per se, and a few general solutions have begun to appear in the literature. This paper introduces the special issue on perceptual anchoring of the Robotics and Autonomous Systems journal. Our goal is to provide a general overview of the anchoring problem, and to highlight some of its subtle points.

  • 7.
    Duckett, Tom
    et al.
    Lincoln School of Computer Science, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Special Issue: Selected Papers from the 5th European Conference on Mobile Robots (ECMR 2011) (2013). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 61, no. 10, p. 1049-1050. Article in journal (Other academic)
  • 8.
    Duckett, Tom
    et al.
    Örebro University, Department of Technology.
    Nehmzow, Ulrich
    University of Manchester.
    Mobile robot self-localisation using occupancy histograms and a mixture of Gaussian location hypotheses (2001). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 34, no. 2-3, p. 117-129. Article in journal (Refereed)
    Abstract [en]

    The topic of mobile robot self-localisation is often divided into the sub-problems of global localisation and position tracking. Both are now well understood individually, but few mobile robots can deal simultaneously with the two problems in large, complex environments. In this paper, we present a unified approach to global localisation and position tracking which is based on a topological map augmented with metric information. This method combines a new scan matching technique, using histograms extracted from local occupancy grids, with an efficient algorithm for tracking multiple location hypotheses over time. The method was validated with experiments in a series of real world environments, including its integration into a complete navigating robot. The results show that the robot can localise itself reliably in large, indoor environments using minimal computational resources.
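
The histogram-based matching idea can be reduced to a toy form: find the cyclic shift of one histogram that best aligns it with another, scoring shifts by the sum of absolute bin differences. This is an illustrative stand-in, not the paper's actual scan matcher.

```python
def histogram_match(h1, h2):
    """Find the cyclic shift of h2 that best matches h1.

    Returns (best_shift, best_score), where a lower score (sum of
    absolute bin differences) means a better match. A shift in bin
    units would correspond to a candidate rotation/translation offset
    between the two local occupancy grids.
    """
    n = len(h1)
    best_shift, best_score = 0, float("inf")
    for s in range(n):
        score = sum(abs(h1[i] - h2[(i + s) % n]) for i in range(n))
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift, best_score
```

Because a histogram compresses a whole local grid into a short vector, this comparison is cheap enough to run against many stored places, which is what makes tracking multiple location hypotheses affordable.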

  • 9.
    Fabrizi, Elisabetta
    et al.
    La Sapienza - Roma.
    Saffiotti, Alessandro
    Örebro University, Department of Technology.
    Augmenting topology-based maps with geometric information (2002). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 40, no. 2, p. 91-97. Article in journal (Refereed)
    Abstract [en]

    Topology-based maps are a new representation of the workspace of a mobile robot, which capture the structure of the free space in the environment in terms of the basic topological notions of connectivity and adjacency. A topology-based map can represent the environment in terms of open spaces (rooms and corridors) connected by narrow passages (doors and junctions). In this paper, we show how to enrich a topology-based map with geometric information useful for the generation and execution of navigation plans. Both the topology-based map and its geometric information are automatically extracted from sensor data. We illustrate the use of topology-based maps for planned behavior-based navigation on a real robot.

  • 10.
    Galindo, Cipriano
    et al.
    Dept. of System Engineering and Automation, University of Malaga, Spain.
    Fernández-Madrigal, Juan-Antonio
    Dept. of System Engineering and Automation, University of Malaga, Spain.
    González, Javier
    Dept. of System Engineering and Automation, University of Malaga, Spain.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Robot Task Planning Using Semantic Maps (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 11, p. 955-966. Article in journal (Refereed)
    Abstract [en]

    Task planning for mobile robots usually relies solely on spatial information and on shallow domain knowledge, like labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation and localization), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. In this paper, we focus on semantic knowledge, and show how this type of knowledge can be profitably used for robot task planning. We start by defining a specific type of semantic maps, which integrate hierarchical spatial information and semantic knowledge. We then proceed to describe how these semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving the planning efficiency in large domains. We show several experiments that demonstrate the effectiveness of our solutions in a domain involving robot navigation in a domestic environment.

  • 11.
    Galindo, Cipriano
    et al.
    University of Malaga, Malaga, Spain.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Inferring robot goals from violations of semantic knowledge (2013). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 61, no. 10, p. 1131-1143. Article in journal (Refereed)
    Abstract [en]

    A growing body of literature shows that endowing a mobile robot with semantic knowledge and with the ability to reason from this knowledge can greatly increase its capabilities. In this paper, we present a novel use of semantic knowledge: to encode information about how things should be, i.e. norms, and to enable the robot to infer deviations from these norms in order to generate goals to correct these deviations. For instance, if a robot has semantic knowledge that perishable items must be kept in a refrigerator, and it observes a bottle of milk on a table, this robot will generate the goal to bring that bottle into a refrigerator. The key move is to properly encode norms in an ontology so that each norm violation results in a detectable inconsistency. A goal is then generated to bring the world back into a consistent state, and a planner is used to transform this goal into actions. Our approach provides a mobile robot with a limited form of goal autonomy: the ability to derive its own goals to pursue generic aims. We illustrate our approach in a full mobile robot system that integrates a semantic map, a knowledge representation and reasoning system, a task planner, and standard perception and navigation routines.
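
The milk-and-refrigerator example above can be sketched propositionally: norms map an object category to a required relation and location, and any observation violating its norm spawns a corrective goal. The paper encodes norms in a description-logic ontology and detects violations as logical inconsistencies, which is considerably more general; names below are illustrative.

```python
# Norms: category -> (required_relation, required_location).
NORMS = {
    "perishable": ("in", "refrigerator"),  # perishables belong in a refrigerator
}

def generate_goals(world):
    """Scan observed facts for norm violations and emit corrective goals.

    world: list of (object, category, relation, location) observations.
    Returns goals of the form ("achieve", object, relation, location),
    which a task planner would then turn into actions.
    """
    goals = []
    for obj, category, relation, location in world:
        if category in NORMS:
            req_rel, req_loc = NORMS[category]
            if (relation, location) != (req_rel, req_loc):
                goals.append(("achieve", obj, req_rel, req_loc))
    return goals
```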

  • 12.
    Hertzberg, Joachim
    et al.
    University of Osnabrück, Inst. of Computer Science, Knowledge-Based Systems Research Group Osnabrück, Germany.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Using semantic knowledge in robotics (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 11, p. 875-877. Article in journal (Refereed)
    Abstract [en]

    There is a growing tendency to introduce high-level semantic knowledge into robotic systems and beyond. This tendency is visible in different forms within several areas of robotics. Recent work in mapping and localization tries to extract semantically meaningful structures from sensor data during map building, or to use semantic knowledge in the map building process, or both. A similar trend characterizes the cognitive vision approach to scene understanding. Recent efforts in human–robot interaction try to endow the robot with some understanding of the human meaning of words, gestures and expressions. Ontological knowledge is increasingly being used in distributed systems in order to allow automatic re-configuration in the areas of flexible automation and of ubiquitous robotics. Ontological knowledge was also used recently to improve the inter-operability of robotic components developed for different systems.

    While these trends have many questions and issues in common, work on each one of them is often pursued in isolation within a specific area, without being aware of the related achievements in other areas. The aim of this special issue is to collect in a single place a set of advanced, high-quality papers that tackle the problem of using semantic knowledge in robotics in many of its different forms.

    The submissions to this special issue made it clear that there are many ways in which semantic knowledge may play a role in robotics. Interestingly, they also revealed that there are many ways in which the term semantic knowledge is being interpreted. Before turning to the technical papers, then, it is worth spending a few words on this matter.

  • 13.
    Larsson, Sören
    et al.
    Örebro University, Department of Technology.
    Kjellander, Johan
    Örebro University, Department of Technology.
    Motion control and data capturing for laser scanning with an industrial robot (2006). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no. 6, p. 453-460. Article in journal (Refereed)
    Abstract [en]

    Reverse engineering is concerned with the problem of creating computer aided design (CAD) models of real objects by interpreting point data measured from their surfaces. For complex objects, it is important that the measuring device is free to move along arbitrary paths and make its measurements from suitable directions. This paper shows how a standard industrial robot with a laser profile scanner can be used to achieve that freedom. The system is planned to be part of a future automatic system for the Reverse Engineering of unknown objects.

  • 14.
    Larsson, Sören
    et al.
    Örebro University, Department of Technology.
    Kjellander, Johan A. P.
    Örebro University, Department of Technology.
    Path planning for laser scanning with an industrial robot (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 7, p. 615-624. Article in journal (Refereed)
  • 15.
    Lilienthal, Achim J.
    et al.
    University of Tübingen, WSI, Tübingen, Germany.
    Duckett, Tom
    Örebro University, Department of Technology.
    Building gas concentration gridmaps with a mobile robot (2004). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 48, no. 1, p. 3-16. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the problem of mapping the structure of a gas distribution by creating concentration gridmaps from the data collected by a mobile robot equipped with gas sensors. In contrast to metric gridmaps extracted from sonar or laser range scans, a single measurement from a gas sensor provides information about a comparatively small area. To overcome this problem, a mapping technique is introduced that uses a Gaussian weighting function to model the decreasing likelihood that a particular reading represents the true concentration with respect to the distance from the point of measurement. This method is evaluated in terms of its suitability regarding the slow response and recovery of the gas sensors, and experimental comparisons of different exploration strategies are presented. The stability of the mapped structures and the capability to use concentration gridmaps to locate a gas source are also discussed.
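
The Gaussian weighting described above can be sketched as follows: each reading contributes to nearby cells with a weight that decays with distance from the measurement point, and a cell's estimate is the weight-normalised mean of the readings it received. Grid size, cell size, and the kernel width sigma here are illustrative, not the paper's values.

```python
import math

def build_gridmap(readings, size=(20, 20), cell=0.5, sigma=0.6):
    """Accumulate gas-sensor readings into a concentration gridmap.

    readings: list of (x, y, concentration) measurements in metres.
    Returns a size[1] x size[0] grid of concentration estimates;
    cells that received (numerically) no weight are reported as 0.0.
    """
    w = [[0.0] * size[0] for _ in range(size[1])]   # accumulated weights
    r = [[0.0] * size[0] for _ in range(size[1])]   # weighted readings
    for x, y, c in readings:
        for j in range(size[1]):
            for i in range(size[0]):
                cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
                g = math.exp(-((cx - x) ** 2 + (cy - y) ** 2)
                             / (2 * sigma ** 2))
                w[j][i] += g
                r[j][i] += g * c
    return [[r[j][i] / w[j][i] if w[j][i] > 1e-12 else 0.0
             for i in range(size[0])] for j in range(size[1])]
```

The normalisation by accumulated weight is what lets sparse, point-like gas measurements fill in a smooth map, and the same weights can serve as a per-cell confidence measure.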

  • 16.
    Loutfi, Amy
    et al.
    Örebro University, Department of Technology.
    Broxvall, Mathias
    Örebro University, Department of Technology.
    Coradeschi, Silvia
    Örebro University, Department of Technology.
    Karlsson, Lars
    Örebro University, Department of Technology.
    Object recognition: a new application for smelling robots (2005). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no. 4, p. 272-289. Article in journal (Refereed)
    Abstract [en]

    Olfaction is a challenging new sensing modality for intelligent systems. With the emergence of electronic noses, it is now possible to detect and recognize a range of different odours for a variety of applications. In this work, we introduce a new application where electronic olfaction is used in cooperation with other types of sensors on a mobile robot in order to acquire the odour property of objects. We examine the problem of deciding when, how and where the electronic nose (e-nose) should be activated by planning for active perception, and we consider the problem of integrating the information provided by the e-nose with both prior information and information from other sensors (e.g., vision). Experiments performed on a mobile robot equipped with an e-nose are presented.

  • 17.
    Lundh, Robert
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Autonomous functional configuration of a network robot system (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 10, p. 819-830. Article in journal (Refereed)
    Abstract [en]

    We consider distributed systems of networked robots in which: (1) each robot includes sensing, acting and/or processing modular functionalities; and (2) robots can help each other by offering those functionalities. A functional configuration is any way to allocate and connect functionalities among the robots. An interesting feature of a system of this type is the possibility to use different functional configurations to make the same set of robots perform different tasks, or to perform the same task under different conditions. In this paper, we propose an approach to automatically generate at run time a functional configuration of a network robot system to perform a given task in a given environment, and to dynamically change this configuration in response to failures. Our approach is based on artificial intelligence planning techniques, and it is provably sound, complete and optimal. Moreover, our configuration planner can be combined with an action planner to deal with tasks that require sequences of configurations. We illustrate our approach on a specific type of network robot system, called Peis-Ecology, and show experiments in which a sequence of configurations is automatically generated and executed on real robots. These experiments demonstrate that our self-configuration approach can help the system to achieve greater autonomy, flexibility and robustness.

  • 18.
    Marsland, Stephen
    et al.
    University of Manchester.
    Nehmzow, Ulrich
    The University of Essex.
    Duckett, Tom
    Örebro University, Department of Technology.
    Learning to select distinctive landmarks for mobile robot navigation (2001). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 37, no. 4, p. 241-260. Article in journal (Refereed)
    Abstract [en]

    In landmark-based navigation systems for mobile robots, sensory perceptions (e.g., laser or sonar scans) are used to identify the robot’s current location or to construct internal representations, maps, of the robot’s environment. Being based on an external frame of reference (which is not subject to incorrigible drift errors such as those occurring in odometry-based systems), landmark-based robot navigation systems are now widely used in mobile robot applications.

    The problem that has attracted most attention to date in landmark-based navigation research is the question of how to deal with perceptual aliasing, i.e., perceptual ambiguities. In contrast, what constitutes a good landmark, or how to select landmarks for mapping, is still an open research topic. The usual method of landmark selection is to map perceptions at regular intervals, which has the drawback of being inefficient and possibly missing ‘good’ landmarks that lie between sampling points.

    In this paper, we present an automatic landmark selection algorithm that allows a mobile robot to select conspicuous landmarks from a continuous stream of sensory perceptions, without any pre-installed knowledge or human intervention during the selection process. This algorithm can be used to make mapping mechanisms more efficient and reliable. Experimental results obtained with two different mobile robots in a range of environments are presented and analysed.

  • 19.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Support relation analysis and decision making for safe robotic manipulation tasks (2015). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 71, no. SI, p. 99-117. Article in journal (Refereed)
    Abstract [en]

    In this article, we describe an approach to address the issue of automatically building and using high-level symbolic representations that capture physical interactions between objects in static configurations. Our work targets robotic manipulation systems where objects need to be safely removed from piles that come in random configurations. We assume that a 3D visual perception module exists so that objects in the piles can be completely or partially detected. Depending on the outcome of the perception, we divide the issue into two sub-issues: 1) all objects in the configuration are detected; 2) only a subset of objects are correctly detected. For the first case, we use notions from geometry and static equilibrium in classical mechanics to automatically analyze and extract contact and support relations between pairs of objects. For the second case, we use machine learning techniques to estimate the probability of objects supporting each other. Having the support relations extracted, a decision making process is used to identify which object to remove from the configuration so that an expected minimum cost is optimized. The proposed methods have been extensively tested and validated on data sets generated in simulation and from real world configurations for the scenario of unloading goods from shipping containers.

  • 20.
    Palm, Rainer
    et al.
    Örebro University, School of Science and Technology.
    Iliev, Boyko
    Örebro University, School of Science and Technology.
    Kadmiry, Bourhane
    Örebro University, School of Science and Technology.
    Recognition of human grasps by time-clustering and fuzzy modeling (2009). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 57, no. 5, p. 484-495. Article in journal (Refereed)
    Abstract [en]

    In this paper we address the problem of recognition of human grasps for five-fingered robotic hands and industrial robots in the context of programming-by-demonstration. The robot is instructed by a human operator wearing a data glove capturing the hand poses. For a number of human grasps, the corresponding fingertip trajectories are modeled in time and space by fuzzy clustering and Takagi-Sugeno (TS) modeling. This so-called time-clustering leads to grasp models using time as input parameter and fingertip positions as outputs. For a sequence of grasps the control system of the robot hand identifies the grasp segments, classifies the grasps and generates the sequence of grasps shown before. For this purpose, each grasp is correlated with a training sequence. By means of a hybrid fuzzy model the demonstrated grasp sequence can be reconstructed.

  • 21.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Natural Sciences.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 6, p. 483-492. Article in journal (Refereed)
    Abstract [en]

    This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.

  • 22.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 5, p. 383-390. Article in journal (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
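
The feature combination step above names AdaBoost; a minimal sketch with one-feature threshold stumps shows how weak feature-based classifiers are weighted and combined. This is generic AdaBoost, not the authors' exact feature set or parameters.

```python
import math

def train_adaboost(X, y, rounds=10):
    """Minimal AdaBoost with one-feature threshold stumps.

    X: list of feature vectors; y: labels in {-1, +1}.
    Returns an ensemble as a list of (feature, threshold, polarity, alpha).
    """
    n = len(X)
    w = [1.0 / n] * n                        # example weights
    ensemble = []
    for _ in range(rounds):
        best = None                          # (weighted_error, f, thr, pol)
        for f in range(len(X[0])):
            for thr in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if (pol if xi[f] >= thr else -pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol)
        err, f, thr, pol = best
        err = max(err, 1e-12)                # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((f, thr, pol, alpha))
        # Re-weight: boost misclassified examples for the next round.
        w = [wi * math.exp(-alpha * yi * (pol if xi[f] >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * (pol if x[f] >= thr else -pol)
                for f, thr, pol, alpha in ensemble)
    return 1 if score >= 0 else -1
```

In the paper's setting each "feature" would be one of the edge-orientation or grey-level-clustering measurements, and the boosted vote yields the building/non-building decision per image sector.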

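The AdaBoost step described above, combining several weak feature-based classifiers into one strong one, can be sketched as follows. This is a minimal generic AdaBoost with decision stumps on scalar toy features, not the paper's edge and grey-level features; all names and data here are illustrative assumptions.

```python
import math

def stump_predict(x, thr, sign):
    # weak learner: a one-threshold decision stump
    return sign if x >= thr else -sign

def train_adaboost(xs, ys, rounds=5):
    # xs: scalar features, ys: labels in {-1, +1}
    n = len(xs)
    w = [1.0 / n] * n                 # sample weights, initially uniform
    model = []                        # list of (alpha, thr, sign)
    candidates = sorted(set(xs))
    for _ in range(rounds):
        best = None
        for thr in candidates:        # pick the stump with lowest weighted error
            for sign in (-1, 1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thr, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-12)         # avoid log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, sign))
        # re-weight: boost the misclassified samples
        w = [wi * math.exp(-alpha * y * stump_predict(x, thr, sign))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    # weighted vote of all stumps
    s = sum(alpha * stump_predict(x, thr, sign) for alpha, thr, sign in model)
    return 1 if s >= 0 else -1

# toy data: "building-like" features are large, "nature-like" are small
xs = [1, 2, 3, 7, 8, 9]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
```

Each round focuses the next weak learner on the examples the previous ones got wrong, which is the mechanism the abstract relies on to combine heterogeneous features.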
  • 23. Petrović, Ivan
    et al.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Special issue ECMR 2009, 2011In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 59, no 5, p. 263-264Article in journal (Refereed)
  • 24.
    Pettersson, Ola
    Örebro University, Department of Technology.
    Execution monitoring in robotics: a survey2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 53, no 2, p. 73-88Article in journal (Refereed)
    Abstract [en]

    Research on execution monitoring in its own right is still not very common within the field of robotics and autonomous systems. More commonly, researchers interested in control architectures or execution planning include monitoring as a small part of their work once they realize that it is needed. On the other hand, execution monitoring has been a well-studied topic within industrial control, although control theorists seldom use this term; instead, they refer to the problem of fault detection and isolation (FDI).

    This survey will use the knowledge and terminology from industrial control in order to classify different execution monitoring approaches applied to robotics. The survey is particularly focused on autonomous mobile robotics.

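The fault detection and isolation (FDI) idea the survey borrows from industrial control can be sketched in a few lines: compare model predictions with measurements, flag residuals above a threshold (detection), and report which channel produced them (isolation). This is a generic textbook illustration under assumed names and values, not a method from the survey itself.

```python
def detect_fault(predicted, measured, threshold):
    # residual = discrepancy between model prediction and sensor reading;
    # a residual above the threshold signals a fault (detection), and the
    # offending channel index isolates it
    residuals = [abs(p - m) for p, m in zip(predicted, measured)]
    faults = [i for i, r in enumerate(residuals) if r > threshold]
    return residuals, faults

# example: channel 2 (say, a wheel encoder) deviates strongly from the model
residuals, faults = detect_fault([1.0, 0.5, 2.0], [1.02, 0.48, 3.5], 0.2)
```

Real FDI systems replace the fixed threshold with statistical tests on the residuals, but the prediction-minus-measurement structure is the same.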
  • 25.
    Sanfeliu, Alberto
    et al.
    Institut de Robòtica i Informàtica Industrial (UPC-CSIC), Universitat Politècnica de Catalunya, Barcelona, Spain.
    Hagita, Norihiro
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Network Robot Systems2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 10, p. 793-797Article in journal (Refereed)
    Abstract [en]

    This article introduces the definition of Network Robot Systems (NRS) as it is understood in Europe, the USA and Japan. Moreover, it describes some of the NRS projects in Europe and Japan and presents a summary of the papers in this Special Issue.

  • 26. Sanfeliu, Alberto
    et al.
    Hagita, Norihiro
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Special issue: Network robot systems2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 10, p. 791-791Article in journal (Refereed)
  • 27.
    Skoglund, Alexander
    et al.
    AASS Learning Systems Lab, Örebro Universitet, Örebro, Sweden.
    Iliev, Boyko
    Örebro University, School of Science and Technology.
    Palm, Rainer
    AASS Learning Systems Lab, Örebro Universitet, Örebro, Sweden.
    Programming-by-demonstration of reaching motions: a next-state-planner approach2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 5, p. 607-621Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel approach to skill acquisition from human demonstration. A robot manipulator with a morphology very different from the human arm cannot simply copy a human motion, but has to execute its own version of the skill. Once a skill has been acquired, the robot must also be able to generalize to other, similar skills without a new learning process. By using a motion planner that operates in an object-related world frame called hand-state, we show that this representation simplifies skill reconstruction and preserves the essential parts of the skill. (C) 2010 Elsevier B.V. All rights reserved.

  • 28.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Mojtahedzadeh, Rasoul
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Comparative evaluation of range sensor accuracy for indoor mobile robotics and automated logistics applications2013In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 61, no 10, p. 1094-1105Article in journal (Refereed)
    Abstract [en]

    3D range sensing is an important topic in robotics, as it is a component in vital autonomous subsystems such as collision avoidance, mapping and perception. The development of affordable, high-frame-rate and precise 3D range sensors is thus of considerable interest. Recent advances in sensing technology have produced several novel sensors that attempt to meet these requirements. This work is concerned with the development of a holistic method for accuracy evaluation of the measurements produced by such devices. A method for comparison of range sensor output to a set of reference distance measurements, without using a precise ground truth environment model, is proposed. This article presents an extensive evaluation of three novel depth sensors: the Swiss Ranger SR-4000, the Fotonic B70 and the Microsoft Kinect. Tests are concentrated on the automated logistics scenario of container unloading. Six different setups of box-, cylinder-, and sack-shaped goods inside a mock-up container are used to collect range measurements. Comparisons are performed against hand-crafted ground truth data, as well as against a reference actuated Laser Range Finder (aLRF) system. Additional test cases in an uncontrolled indoor environment are performed in order to evaluate the sensors' performance in a challenging, realistic application scenario.

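The core of comparing sensor output against reference distance measurements reduces to per-point error statistics such as bias and RMSE. The sketch below illustrates that reduction under assumed toy readings; it is not the paper's full evaluation protocol, which also handles correspondence and ground-truth construction.

```python
import math

def accuracy_stats(sensor_ranges, reference_ranges):
    # compare per-point range readings against reference distances
    errors = [s - r for s, r in zip(sensor_ranges, reference_ranges)]
    n = len(errors)
    bias = sum(errors) / n                              # systematic offset
    rmse = math.sqrt(sum(e * e for e in errors) / n)    # overall accuracy
    return bias, rmse

# assumed example: a sensor that slightly over-reports range
bias, rmse = accuracy_stats([2.05, 3.10, 4.02], [2.00, 3.00, 4.00])
```

Reporting bias and RMSE separately matters: a sensor with a large bias but low spread can be calibrated away, while a large RMSE at zero bias cannot.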
  • 29.
    Sun, Da
    et al.
    Örebro University, School of Science and Technology. Department of Biomedical Engineering, National University of Singapore, Singapore.
    Liao, Qianfang
    Research & Development Center, Nidec Singapore Pte Ltd, Singapore, Singapore; Department of Biomedical Engineering, National University of Singapore, Singapore.
    Ren, Hongliang
    Department of Biomedical Engineering, National University of Singapore, Singapore.
    Type-2 Fuzzy Logic based Time-delayed Shared Control in Online-switching Tele-operated and Autonomous Systems2018In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 101, p. 138-152Article in journal (Refereed)
    Abstract [en]

    This paper develops a novel shared control scheme for an online-switching tele-operated and autonomous system with time-varying delays. A Type-2 Takagi-Sugeno (T-S) fuzzy model is used to describe the dynamics of the master and slave robots in this system. A novel non-singular fast terminal sliding mode (NFTSM)-based algorithm combined with an extended wave-based time domain passivity approach (TDPA) is presented to enhance the master-slave motion synchronization in the tele-operated mode and the reference-slave motion synchronization in the autonomous mode, while simultaneously ensuring the stability of the overall system in the presence of arbitrary time delays. In addition, based on the Type-2 fuzzy model, a new torque observer is designed to estimate the external torques; a torque tracking method is then employed in the control laws to let the slave apply the designated force and thereby further improve the operator's force perception of the environment. The stability of the closed-loop system is proven using Lyapunov-Krasovskii functions. Finally, experiments using two haptic devices demonstrate the superiority of the proposed strategy.

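The sliding mode idea underlying such controllers, driving the tracking error onto a switching surface and keeping it there, can be illustrated with a plain first-order sliding mode on a double-integrator error model. This is a deliberately simplified sketch under assumed gains, not the paper's NFTSM algorithm with TDPA and fuzzy modeling.

```python
def simulate_smc(e=1.0, de=0.0, lam=1.0, k=2.0, dt=0.01, steps=2000):
    # error dynamics e'' = u, driven toward the sliding surface s = de + lam*e;
    # on the surface the error then decays like exp(-lam * t)
    for _ in range(steps):
        s = de + lam * e
        u = -k * (1.0 if s > 0 else -1.0)  # switching (bang-bang) control
        de += u * dt                       # Euler integration of e'' = u
        e += de * dt
    return e

final_error = simulate_smc()  # small residual error after 20 s of simulated time
```

The discontinuous sign term is what gives sliding mode control its robustness to bounded disturbances; in practice it also causes chattering, which refinements such as terminal sliding mode variants aim to mitigate.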
  • 30.
    Tamimi, Hashem
    et al.
    University of Tübingen.
    Andreasson, Henrik
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    University of Lincoln.
    Zell, Andreas
    University of Tübingen.
    Localization of mobile robots with omnidirectional vision using Particle Filter and iterative SIFT2006In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 9, p. 758-765Article in journal (Refereed)
    Abstract [en]

    The Scale Invariant Feature Transform, SIFT, has been successfully applied to robot localization. Still, the number of features extracted with this approach is immense, especially when dealing with omnidirectional vision. In this work, we propose a new approach that reduces the number of features generated by SIFT as well as their extraction and matching time. With the help of a Particle Filter, we demonstrate that we can still localize the mobile robot accurately with a lower number of features.

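The particle filter machinery referenced above, maintaining a cloud of pose hypotheses, weighting them by how well they explain the current observation, and resampling, can be sketched for a 1-D toy world. The Gaussian measurement likelihood here stands in for a feature-matching score; all parameters are assumptions for illustration, not the paper's setup.

```python
import math
import random

def particle_filter(measurements, n=500, motion=1.0, noise=0.5):
    # 1-D localization: particles advance by `motion` each step and are
    # weighted by how well they explain the noisy position measurement
    random.seed(0)  # deterministic demo
    particles = [random.uniform(0, 10) for _ in range(n)]
    for z in measurements:
        # predict: apply motion model with small process noise
        particles = [p + motion + random.gauss(0, 0.1) for p in particles]
        # update: Gaussian likelihood of the measurement for each particle
        weights = [math.exp(-0.5 * ((z - p) / noise) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # resample proportionally to weight
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n  # posterior mean as the pose estimate

estimate = particle_filter([3.0, 4.0, 5.0, 6.0])
```

Fewer, more distinctive features make the weighting step cheaper, which is exactly the saving the abstract targets by reducing the SIFT feature count.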
  • 31.
    Treptow, André
    et al.
    University of Tübingen.
    Cielniak, Grzegorz
    Örebro University, Department of Technology.
    Duckett, Tom
    University of Lincoln.
    Real-time people tracking for mobile robots using thermal vision2006In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 9, p. 729-739Article in journal (Refereed)
    Abstract [en]

    This paper presents a vision-based approach for tracking people on a mobile robot using thermal images. The approach combines a particle filter with two alternative measurement models that are suitable for real-time tracking. With this approach a person can be detected independently from current light conditions and in situations where no skin colour is visible. In addition, the paper presents a comprehensive, quantitative evaluation of the different methods on a mobile robot in an office environment, for both single and multiple persons. The results show that the measurement model that was learned from local greyscale features could improve on the performance of the elliptic contour model, and that both models could be combined to further improve performance with minimal extra computational cost.

  • 32.
    Valgren, Christoffer
    et al.
    Department of Computer Science, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SIFT, SURF & seasons: Appearance-based long-term localization in outdoor environments2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 2, p. 149-156Article in journal (Refereed)
    Abstract [en]

    In this paper, we address the problem of outdoor, appearance-based topological localization, particularly over long periods of time where seasonal changes alter the appearance of the environment. We investigate a straightforward method that relies on local image features to compare single image pairs. We first look into which of the dominating image feature algorithms, SIFT or the more recent SURF, is most suitable for this task. We then fine-tune our localization algorithm in terms of accuracy, and also introduce the epipolar constraint to further improve the result. The final localization algorithm is applied on multiple data sets, each consisting of a large number of panoramic images, which have been acquired over a period of nine months with large seasonal changes. The final localization rate in the single-image matching, cross-seasonal case is between 80% and 95%.

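Comparing image pairs via local features typically means nearest-neighbour descriptor matching with Lowe's ratio test, which discards ambiguous matches. The sketch below shows that test on tiny 2-D toy "descriptors" (real SIFT/SURF descriptors are 128- or 64-dimensional); the data and the 0.8 ratio are illustrative assumptions.

```python
def match_features(desc_a, desc_b, ratio=0.8):
    # nearest-neighbour matching with Lowe's ratio test: accept a match
    # only if the best distance is clearly smaller than the second best,
    # which suppresses ambiguous correspondences
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matches = []
    for i, da in enumerate(desc_a):
        ds = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        if len(ds) >= 2 and ds[0][0] < ratio * ds[1][0]:
            matches.append((i, ds[0][1]))
    return matches

# toy descriptors: the first two have clear counterparts, the third is ambiguous
a = [(0.0, 0.0), (5.0, 5.0), (2.0, 2.0)]
b = [(0.1, 0.0), (5.0, 5.1), (2.0, 2.4), (2.0, 1.6)]
matches = match_features(a, b)
```

The number of surviving matches between two images can then serve directly as an appearance-similarity score for topological localization; a geometric check such as the epipolar constraint further prunes false matches.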
  • 33.
    Wiedemann, Thomas
    et al.
    German Aerospace Center, Oberpfaffenhofen, Germany.
    Shutin, Dmitriy
    German Aerospace Center, Oberpfaffenhofen, Germany.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Model-based gas source localization strategy for a cooperative multi-robot system: A probabilistic approach and experimental validation incorporating physical knowledge and model uncertainties2019In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 118, p. 66-79Article in journal (Refereed)
    Abstract [en]

    Sampling gas distributions by robotic platforms in order to find gas sources is an appealing approach to alleviate threats for a human operator. Different sampling strategies for robotic gas exploration exist. In this paper we investigate the benefit that can be obtained by incorporating physical knowledge about the gas dispersion when exploring a gas diffusion process with a multi-robot system. The physical behavior of the diffusion process is modeled using a Partial Differential Equation (PDE), which is integrated into the exploration strategy. It is assumed that the diffusion process is driven by only a few spatial sources at unknown locations with unknown intensities. The objective of the exploration strategy is to guide the robots to informative measurement locations and, by means of concentration measurements, estimate the source parameters, in particular their number, locations and magnitudes. To this end we propose a probabilistic approach to PDE identification under sparsity constraints using factor graphs and a message passing algorithm. Moreover, message passing schemes permit an efficient distributed implementation of the algorithm, which makes it suitable for a multi-robot system. We designed an experimental setup that allows us to evaluate the performance of the exploration strategy in hardware-in-the-loop experiments as well as in experiments with real ethanol gas under laboratory conditions. The results indicate that the proposed exploration approach accelerates the identification of the source parameters and outperforms systematic sampling. (C) 2019 Elsevier B.V. All rights reserved.

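The diffusion PDE that such a model-based strategy builds on can be simulated with a simple explicit finite-difference scheme. The sketch below is a generic 1-D forward model with an assumed constant-rate point source, not the paper's factor-graph identification method; it only illustrates the physics being exploited.

```python
def diffuse_1d(sources, n=50, d=0.1, dt=0.1, steps=200):
    # explicit finite-difference step for the 1-D diffusion equation
    # du/dt = d * d2u/dx2 + source terms (unit grid spacing, zero boundaries);
    # stable here because d * dt = 0.01 < 0.5
    u = [0.0] * n
    for _ in range(steps):
        lap = [0.0] + [u[i - 1] - 2 * u[i] + u[i + 1]
                       for i in range(1, n - 1)] + [0.0]
        u = [ui + dt * d * li for ui, li in zip(u, lap)]
        for pos, rate in sources:
            u[pos] += dt * rate  # constant-rate point source (assumed)
    return u

# a single assumed source in the middle of the domain
u = diffuse_1d([(25, 1.0)])
```

Running the forward model for candidate source parameters and comparing against concentration measurements is the basic loop that PDE-constrained source identification, sparse or otherwise, builds upon.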
  • 34.
    Yuan, Weihao
    et al.
    Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
    Hang, Kaiyu
    Mechanical Engineering and Material Science, Yale University, New Haven CT, USA.
    Kragic, Danica
    Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Wang, Michael Y.
    Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology.
    End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer2019In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, p. 119-134Article in journal (Refereed)
    Abstract [en]

    Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected using a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires in the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulation and a real setting using a Baxter robot to show that the proposed approach can effectively improve the training process in simulation, as well as efficiently adapt the learned policy to the real-world application, even when the camera pose is different from simulation. Additionally, we show that the learned system not only can provide adaptive behavior to handle unforeseen events during execution, such as distraction objects, sudden changes in positions of the objects, and obstacles, but also can deal with obstacle shapes that were not present in the training process.

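One of the training refinements mentioned above, curating a balanced set of experiences, can be sketched as a replay buffer that pools successful and failed episodes separately so sampled batches stay balanced even when one outcome dominates. This is a generic illustration under assumed class names and capacities, not the paper's actual implementation.

```python
import random

class BalancedReplayBuffer:
    # keeps successful and failed episodes in separate pools so that
    # sampled batches stay balanced even when one outcome dominates
    def __init__(self, capacity_per_class=1000):
        self.pools = {True: [], False: []}
        self.cap = capacity_per_class

    def add(self, episode, success):
        pool = self.pools[success]
        pool.append(episode)
        if len(pool) > self.cap:
            pool.pop(0)  # drop the oldest episode when full

    def sample(self, batch_size):
        # draw half the batch from each outcome pool (with replacement)
        half = batch_size // 2
        batch = []
        for success, k in ((True, half), (False, batch_size - half)):
            pool = self.pools[success]
            if pool:
                batch.extend(random.choices(pool, k=k))
        return batch

# assumed usage: many successes, a single failure - sampling stays 50/50
buf = BalancedReplayBuffer()
for i in range(10):
    buf.add(f"success_{i}", True)
buf.add("failure_0", False)
batch = buf.sample(8)
```

Without such curation, a policy that mostly fails (or mostly succeeds) early in training would see almost no examples of the rarer outcome, slowing learning.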