oru.se: Örebro University Publications
1 - 43 of 43
  • 1.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Castellano-Quero, Manuel
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl: Introspection for robust radar and lidar perception in diverse environments using differential entropy, 2022. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 155, article id 104136. Article in journal (Refereed)
    Abstract [en]

Robust perception is an essential component to enable long-term operation of mobile robots. It depends on failure resilience through reliable sensor data and pre-processing, as well as failure awareness through introspection, for example the ability to self-assess localization performance. This paper presents CorAl: a principled, intuitive, and generalizable method to measure the quality of alignment between pairs of point clouds, which learns to detect alignment errors in a self-supervised manner. CorAl compares the differential entropy in the point clouds separately with the entropy in their union to account for entropy inherent to the scene. By making use of dual entropy measurements, we obtain a quality metric that is highly sensitive to small alignment errors and still generalizes well to unseen environments. In this work, we extend our previous work on lidar-only CorAl to radar data by proposing a two-step filtering technique that produces high-quality point clouds from noisy radar scans. Thus, we target robust perception in two ways: by introducing a method that introspectively assesses alignment quality, and by applying it to an inherently robust sensor modality. We show that our filtering technique combined with CorAl can be applied to the problem of alignment classification, and that it detects small alignment errors in urban settings with up to 98% accuracy, and up to 96% when trained only in a different environment. Our lidar and radar experiments demonstrate that CorAl outperforms previous methods both on the ETH lidar benchmark, which includes several indoor and outdoor environments, and on the large-scale Oxford and MulRan radar data sets for urban traffic scenarios. The results also demonstrate that CorAl generalizes very well across substantially different environments without the need for retraining.
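
The dual-entropy idea described in this abstract can be sketched in a few lines, purely as an illustration. The neighborhood radius, covariance regularization, and per-point Gaussian fit below are simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def local_entropy(cloud, point, radius=0.5):
    # Differential entropy of a Gaussian fitted to the points within
    # `radius` of `point`: 0.5 * log((2*pi*e)^3 * det(cov)).
    nbrs = cloud[np.linalg.norm(cloud - point, axis=1) < radius]
    if len(nbrs) < 4:
        return 0.0  # too little support for a covariance estimate
    cov = np.cov(nbrs.T) + 1e-9 * np.eye(3)  # regularize degenerate fits
    return 0.5 * np.log((2.0 * np.pi * np.e) ** 3 * np.linalg.det(cov))

def coral_quality(cloud_a, cloud_b):
    # Compare the average entropy of the separate clouds with the
    # entropy of their union; misalignment inflates the joint term.
    joint = np.vstack([cloud_a, cloud_b])
    h_sep = np.mean([local_entropy(cloud_a, p) for p in cloud_a]
                    + [local_entropy(cloud_b, p) for p in cloud_b])
    h_joint = np.mean([local_entropy(joint, p) for p in joint])
    return h_joint - h_sep
```

For a well-aligned pair, the union's local covariances match those of the separate clouds and the score stays near zero; a misaligned pair thickens the joint neighborhoods and the score rises.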

  • 2.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    6D scan registration using depth-interpolated local image features, 2010. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 2, p. 157-165. Article in journal (Refereed)
    Abstract [en]

This paper describes a novel registration approach that is based on a combination of visual and 3D range information. To identify correspondences, local visual features obtained from images of a standard color camera are compared, and the depth of matching features (and their position covariance) is determined from the range measurements of a 3D laser scanner. The matched depth-interpolated image features make it possible to apply registration with known correspondences. We compare several ICP variants in this paper and suggest an extension that considers the spatial distance between matching features to eliminate false correspondences. Experimental results are presented in both outdoor and indoor environments. In addition to pair-wise registration, we also propose a global registration method that registers all scan poses simultaneously.

  • 3.
    Andreasson, Henrik
    et al.
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    Örebro University, Department of Technology.
    Self-localization in non-stationary environments using omni-directional vision, 2007. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 7, p. 541-551. Article in journal (Refereed)
    Abstract [en]

This paper presents an image-based approach for localization in non-static environments using local feature descriptors, and its experimental evaluation in a large, dynamic, populated environment where the time interval between the collected data sets is up to two months. By using local features together with panoramic images, robustness and invariance to large changes in the environment can be achieved. Results from global place recognition with no evidence accumulation and a Monte Carlo localization method are shown. To test the approach even further, experiments were conducted with up to 90% virtual occlusion in addition to the dynamic changes in the environment.

  • 4.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Time-dependent gas distribution modelling, 2017. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, p. 157-170. Article in journal (Refereed)
    Abstract [en]

Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used, e.g., for air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time-constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot model evolving gas plumes well, for example, or major changes in gas dispersion due to a sudden change of the environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time dependency and a relation to a time scale in generating the gas distribution model, either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as in several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative for deriving an estimate of the current gas distribution, as long as sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM under different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.
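
The recency weight mentioned in the abstract can be illustrated with a toy weight function. The parameter names `sigma` and `beta` and the exponential recency form below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def td_weight(sample_pos, sample_time, query_pos, target_time,
              sigma=0.3, beta=0.1):
    # Spatial Gaussian kernel, multiplied by a recency term that
    # down-weights measurements taken long before the target time.
    d2 = np.sum((np.asarray(query_pos, float)
                 - np.asarray(sample_pos, float)) ** 2)
    spatial = np.exp(-d2 / (2.0 * sigma ** 2))
    recency = np.exp(-beta * abs(target_time - sample_time))
    return spatial * recency
```

A prediction at the target time then becomes a weighted average of readings, where both distant and stale measurements contribute less.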

  • 5.
    Bouguerra, Abdelbaki
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Monitoring the execution of robot plans using semantic knowledge, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 11, p. 942-954. Article in journal (Refereed)
    Abstract [en]

Even the best-laid plans can fail, and robot plans executed in real-world domains tend to do so often. The ability of a robot to reliably monitor the execution of plans and detect failures is essential to its performance and its autonomy. In this paper, we propose a technique to increase the reliability of monitoring symbolic robot plans. We use semantic domain knowledge to derive implicit expectations of the execution of actions in the plan, and then match these expectations against observations. We present two realizations of this approach: a crisp one, which assumes deterministic actions and reliable sensing, and uses a standard knowledge representation system (LOOM); and a probabilistic one, which takes into account uncertainty in action effects, in sensing, and in world states. We perform an extensive validation of these realizations through experiments performed both in simulation and on real robots.

  • 6.
    Cielniak, Grzegorz
    et al.
    Sch Comp Sci, Lincoln Univ, Lincoln, England.
    Duckett, Tom
    Sch Comp Sci, Lincoln Univ, Lincoln, England.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Data association and occlusion handling for vision-based people tracking by mobile robots, 2010. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 5, p. 435-443. Article in journal (Refereed)
    Abstract [en]

This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components using several real world data sets.

  • 7.
    Coradeschi, Silvia
    et al.
    Örebro University, Department of Technology.
    Saffiotti, Alessandro
    Örebro University, Department of Technology.
    An introduction to the anchoring problem, 2003. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 43, no 2-3, p. 85-96. Article in journal (Refereed)
    Abstract [en]

Anchoring is the problem of connecting, inside an artificial system, symbols and sensor data that refer to the same physical objects in the external world. This problem needs to be solved in any robotic system that incorporates a symbolic component. However, it is only recently that the anchoring problem has started to be addressed as a problem per se, and a few general solutions have begun to appear in the literature. This paper introduces the special issue on perceptual anchoring of the Robotics and Autonomous Systems journal. Our goal is to provide a general overview of the anchoring problem, and to highlight some of its subtle points.

  • 8.
    Duckett, Tom
    et al.
    Lincoln School of Computer Science, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Special Issue: Selected Papers from the 5th European Conference on Mobile Robots (ECMR 2011), 2013. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 61, no 10, p. 1049-1050. Article in journal (Other academic)
  • 9.
    Duckett, Tom
    et al.
    Örebro University, Department of Technology.
    Nehmzow, Ulrich
    University of Manchester.
    Mobile robot self-localisation using occupancy histograms and a mixture of Gaussian location hypotheses, 2001. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 34, no 2-3, p. 117-129. Article in journal (Refereed)
    Abstract [en]

The topic of mobile robot self-localisation is often divided into the sub-problems of global localisation and position tracking. Both are now well understood individually, but few mobile robots can deal simultaneously with the two problems in large, complex environments. In this paper, we present a unified approach to global localisation and position tracking which is based on a topological map augmented with metric information. This method combines a new scan matching technique, using histograms extracted from local occupancy grids, with an efficient algorithm for tracking multiple location hypotheses over time. The method was validated with experiments in a series of real-world environments, including its integration into a complete navigating robot. The results show that the robot can localise itself reliably in large, indoor environments using minimal computational resources.

  • 10.
    Fabrizi, Elisabetta
    et al.
    La Sapienza - Roma.
    Saffiotti, Alessandro
    Örebro University, Department of Technology.
    Augmenting topology-based maps with geometric information, 2002. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 40, no 2, p. 91-97. Article in journal (Refereed)
    Abstract [en]

    Topology-based maps are a new representation of the workspace of a mobile robot, which capture the structure of the free space in the environment in terms of the basic topological notions of connectivity and adjacency. A topology-based map can represent the environment in terms of open spaces (rooms and corridors) connected by narrow passages (doors and junctions). In this paper, we show how to enrich a topology-based map with geometric information useful for the generation and execution of navigation plans. Both the topology-based map and its geometric information are automatically extracted from sensor data. We illustrate the use of topology-based maps for planned behavior-based navigation on a real robot.

  • 11.
    Galindo, Cipriano
    et al.
    Dept. of System Engineering and Automation, University of Malaga, Spain.
    Fernández-Madrigal, Juan-Antonio
    Dept. of System Engineering and Automation, University of Malaga, Spain.
    González, Javier
    Dept. of System Engineering and Automation, University of Malaga, Spain.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Robot Task Planning Using Semantic Maps, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 11, p. 955-966. Article in journal (Refereed)
    Abstract [en]

    Task planning for mobile robots usually relies solely on spatial information and on shallow domain knowledge, like labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation and localization), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. In this paper, we focus on semantic knowledge, and show how this type of knowledge can be profitably used for robot task planning. We start by defining a specific type of semantic maps, which integrate hierarchical spatial information and semantic knowledge. We then proceed to describe how these semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving the planning efficiency in large domains. We show several experiments that demonstrate the effectiveness of our solutions in a domain involving robot navigation in a domestic environment.

  • 12.
    Galindo, Cipriano
    et al.
    University of Malaga, Malaga, Spain.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Inferring robot goals from violations of semantic knowledge, 2013. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 61, no 10, p. 1131-1143. Article in journal (Refereed)
    Abstract [en]

A growing body of literature shows that endowing a mobile robot with semantic knowledge and with the ability to reason from this knowledge can greatly increase its capabilities. In this paper, we present a novel use of semantic knowledge, to encode information about how things should be, i.e. norms, and to enable the robot to infer deviations from these norms in order to generate goals to correct these deviations. For instance, if a robot has semantic knowledge that perishable items must be kept in a refrigerator, and it observes a bottle of milk on a table, this robot will generate the goal to bring that bottle into a refrigerator. The key move is to properly encode norms in an ontology so that each norm violation results in a detectable inconsistency. A goal is then generated to bring the world back in a consistent state, and a planner is used to transform this goal into actions. Our approach provides a mobile robot with a limited form of goal autonomy: the ability to derive its own goals to pursue generic aims. We illustrate our approach in a full mobile robot system that integrates a semantic map, a knowledge representation and reasoning system, a task planner, and standard perception and navigation routines.

  • 13.
    Gugliermo, Simona
    et al.
    Örebro University, School of Science and Technology. Autonomous Transport Systems, Scania CV AB, Södertälje, Sweden.
    Dominguez, David Caceres
    Iannotta, Marco
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Evaluating behavior trees, 2024. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 178, article id 104714. Article in journal (Refereed)
    Abstract [en]

Behavior trees (BTs) are increasingly popular in the robotics community. Yet in the growing body of published work on this topic, there is a lack of consensus on what to measure and how to quantify BTs when reporting results. This is due not only to the lack of standardized measures, but also to the sometimes ambiguous use of definitions to describe BT properties. This work provides a comprehensive overview of the BT properties the community is interested in, how they relate to each other, the metrics currently used to measure BTs, and whether the metrics appropriately quantify those properties of interest. Finally, we provide the practitioner with a set of metrics to measure, as well as insights into the properties that can be derived from those metrics. By providing this holistic view of properties and their corresponding evaluation metrics, we hope to improve clarity when using BTs in robotics. This more systematic approach will make reported results more consistent and comparable when evaluating BTs.

  • 14.
    Hernandez, Alejandra C.
    et al.
    Robotics Lab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Spain.
    Gomez, Clara
    Robotics Lab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Spain.
    Barber, Ramon
    Robotics Lab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Spain.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Exploiting the confusions of semantic places to improve service robotic tasks in indoor environments, 2023. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 159, article id 104290. Article in journal (Refereed)
    Abstract [en]

A significant challenge in service robots is the semantic understanding of their surrounding areas. Traditional approaches addressed this problem by segmenting the environment into regions corresponding to full rooms that are assigned labels consistent with human perception, e.g. office or kitchen. However, different areas inside the same room can be used in different ways: could the table and the chair in my kitchen become my office? What is the category of that area now: office or kitchen? To adapt to these circumstances we propose a new paradigm where we intentionally relax the resulting labeling of place classifiers by allowing confusions, and by avoiding further filtering leading to clean full-room classifications. Our hypothesis is that confusions can be beneficial to a service robot and, therefore, they can be kept and better exploited. Our approach creates a subdivision of the environment into different regions by maintaining the confusions which are due to the scene appearance or to the distribution of objects. In this paper, we present a proof of concept, implemented in simulated and real scenarios, that improves efficiency in the robotic task of searching for objects by exploiting the confusions in place classifications.

  • 15.
    Hertzberg, Joachim
    et al.
    University of Osnabrück, Inst. of Computer Science, Knowledge-Based Systems Research Group Osnabrück, Germany.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Using semantic knowledge in robotics, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 11, p. 875-877. Article in journal (Refereed)
    Abstract [en]

    There is a growing tendency to introduce high-level semantic knowledge into robotic systems and beyond. This tendency is visible in different forms within several areas of robotics. Recent work in mapping and localization tries to extract semantically meaningful structures from sensor data during map building, or to use semantic knowledge in the map building process, or both. A similar trend characterizes the cognitive vision approach to scene understanding. Recent efforts in human–robot interaction try to endow the robot with some understanding of the human meaning of words, gestures and expressions. Ontological knowledge is increasingly being used in distributed systems in order to allow automatic re-configuration in the areas of flexible automation and of ubiquitous robotics. Ontological knowledge was also used recently to improve the inter-operability of robotic components developed for different systems.

    While these trends have many questions and issues in common, work on each one of them is often pursued in isolation within a specific area, without being aware of the related achievements in other areas. The aim of this special issue is to collect in a single place a set of advanced, high-quality papers that tackle the problem of using semantic knowledge in robotics in many of its different forms.

    The submissions to this special issue made it clear that there are many ways in which semantic knowledge may play a role in robotics. Interestingly, they also revealed that there are many ways in which the term semantic knowledge is being interpreted. Before turning to the technical papers, then, it is worth spending a few words on this matter.

  • 16.
    Hoang, Dinh-Cuong
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks, 2020. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 133, article id 103632. Article in journal (Refereed)
    Abstract [en]

We present an approach for recognizing objects present in a scene and estimating their full pose by means of an accurate 3D instance-aware semantic reconstruction. Our framework couples convolutional neural networks (CNNs) and a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion [1], to achieve both high-quality semantic reconstruction as well as robust 6D pose estimation for relevant objects. We leverage the pipeline of ElasticFusion as a backbone and propose a joint geometric and photometric error function with per-pixel adaptive weights. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach explores performing pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system is capable of producing high-quality instance-aware semantic reconstructions of room-sized environments, as well as accurately detecting objects and their 6D poses. The developed method has been verified through extensive experiments on different datasets. Experimental results confirmed that the proposed system achieves improvements over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.

  • 17.
    Kurtser, Polina
    et al.
    Örebro University, School of Science and Technology. Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Planning the sequence of tasks for harvesting robots, 2020. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 131, article id 103591. Article in journal (Refereed)
    Abstract [en]

    A methodology for planning the sequence of tasks for a harvesting robot is presented. The fruit targets are situated at unknown locations and must be detected by the robot through a sequence of sensing tasks. Once the targets are detected, the robot must execute a harvest action at each target location. The traveling salesman paradigm (TSP) is used to plan the sequence of sensing and harvesting tasks taking into account the costs of the sensing and harvesting actions and the traveling times. Sensing is planned online. The methodology is validated and evaluated in both laboratory and greenhouse conditions for a case study of a sweet pepper harvesting robot. The results indicate that planning the sequence of tasks for a sweet pepper harvesting robot results in 12% cost reduction. Incorporating the sensing operation in the planning sequence for fruit harvesting is a new approach in fruit harvesting robots and is important for cycle time reduction. Furthermore, the sequence is re-planned as sensory information becomes available and the costs of these new sensing operations are also considered in the planning.
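
The traveling-salesman formulation in the abstract can be illustrated with a brute-force sketch. The split of the objective into travel cost plus per-task action (sensing/harvesting) cost mirrors the description, but the exhaustive search and all names below are illustrative assumptions, not the paper's planner, and are only feasible for a handful of targets:

```python
import itertools
import math

def plan_sequence(start, targets, travel_cost, action_cost):
    # Exhaustively search visiting orders, summing the travel cost
    # between consecutive task locations plus a per-task action cost.
    best_order, best_cost = None, math.inf
    for order in itertools.permutations(range(len(targets))):
        cost, pos = 0.0, start
        for i in order:
            cost += travel_cost(pos, targets[i]) + action_cost(i)
            pos = targets[i]
        if cost < best_cost:
            best_order, best_cost = list(order), cost
    return best_order, best_cost
```

Re-planning as new sensory information arrives, as the abstract describes, would amount to re-running the search over the remaining targets with updated costs.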

  • 18.
    Larsson, Sören
    et al.
    Örebro University, Department of Technology.
    Kjellander, Johan
    Örebro University, Department of Technology.
    Motion control and data capturing for laser scanning with an industrial robot, 2006. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 6, p. 453-460. Article in journal (Refereed)
    Abstract [en]

    Reverse engineering is concerned with the problem of creating computer aided design (CAD) models of real objects by interpreting point data measured from their surfaces. For complex objects, it is important that the measuring device is free to move along arbitrary paths and make its measurements from suitable directions. This paper shows how a standard industrial robot with a laser profile scanner can be used to achieve that freedom. The system is planned to be part of a future automatic system for the Reverse Engineering of unknown objects.

  • 19.
    Larsson, Sören
    et al.
    Örebro University, Department of Technology.
    Kjellander, Johan A. P.
    Örebro University, Department of Technology.
    Path planning for laser scanning with an industrial robot, 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 7, p. 615-624. Article in journal (Refereed)
  • 20.
    Lilienthal, Achim J.
    et al.
    University of Tübingen, WSI, Tübingen, Germany.
    Duckett, Tom
    Örebro University, Department of Technology.
    Building gas concentration gridmaps with a mobile robot, 2004. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 48, no 1, p. 3-16. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the problem of mapping the structure of a gas distribution by creating concentration gridmaps from the data collected by a mobile robot equipped with gas sensors. By contrast to metric gridmaps extracted from sonar or laser range scans, a single measurement from a gas sensor provides information about a comparatively small area. To overcome this problem, a mapping technique is introduced that uses a Gaussian weighting function to model the decreasing likelihood that a particular reading represents the true concentration with respect to the distance from the point of measurement. This method is evaluated in terms of its suitability regarding the slow response and recovery of the gas sensors, and experimental comparisons of different exploration strategies are presented. The stability of the mapped structures and the capability to use concentration gridmaps to locate a gas source are also discussed.
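
The Gaussian weighting scheme described above can be sketched as follows. This is an illustrative toy version of such concentration gridmapping; the grid layout, parameter names, and default values are assumptions, not the paper's implementation:

```python
import numpy as np

def gas_gridmap(readings, positions, grid_shape, cell_size, sigma=0.2):
    # Accumulate each reading into every cell with a Gaussian weight
    # that decays with distance from the measurement point, modelling
    # the decreasing likelihood that a reading represents the true
    # concentration far from where it was taken; then normalize.
    conc = np.zeros(grid_shape)
    wsum = np.zeros(grid_shape)
    rows, cols = np.indices(grid_shape)
    # cell centers in (x, y) world coordinates
    centers = np.stack([(cols + 0.5) * cell_size,
                        (rows + 0.5) * cell_size], axis=-1)
    for r, p in zip(readings, positions):
        d2 = np.sum((centers - np.asarray(p)) ** 2, axis=-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        conc += w * r
        wsum += w
    return conc / np.maximum(wsum, 1e-12)
```

Each cell's estimate is thus a distance-weighted average of all readings, so even sparse measurements yield a smooth map.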

  • 21.
    Loutfi, Amy
    et al.
    Örebro University, Department of Technology.
    Broxvall, Mathias
    Örebro University, Department of Technology.
    Coradeschi, Silvia
    Örebro University, Department of Technology.
    Karlsson, Lars
    Örebro University, Department of Technology.
    Object recognition: a new application for smelling robots, 2005. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 4, p. 272-289. Article in journal (Refereed)
    Abstract [en]

Olfaction is a challenging new sensing modality for intelligent systems. With the emergence of electronic noses, it is now possible to detect and recognize a range of different odours for a variety of applications. In this work, we introduce a new application where electronic olfaction is used in cooperation with other types of sensors on a mobile robot in order to acquire the odour property of objects. We examine the problem of deciding when, how and where the electronic nose (e-nose) should be activated by planning for active perception, and we consider the problem of integrating the information provided by the e-nose with both prior information and information from other sensors (e.g., vision). Experiments performed on a mobile robot equipped with an e-nose are presented.

  • 22.
    Lundh, Robert
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Autonomous functional configuration of a network robot system (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 10, p. 819-830. Article in journal (Refereed)
    Abstract [en]

    We consider distributed systems of networked robots in which: (1) each robot includes sensing, acting and/or processing modular functionalities; and (2) robots can help each other by offering those functionalities. A functional configuration is any way to allocate and connect functionalities among the robots. An interesting feature of a system of this type is the possibility to use different functional configurations to make the same set of robots perform different tasks, or to perform the same task under different conditions. In this paper, we propose an approach to automatically generate at run time a functional configuration of a network robot system to perform a given task in a given environment, and to dynamically change this configuration in response to failures. Our approach is based on artificial intelligence planning techniques, and it is provably sound, complete and optimal. Moreover, our configuration planner can be combined with an action planner to deal with tasks that require sequences of configurations. We illustrate our approach on a specific type of network robot system, called Peis-Ecology, and show experiments in which a sequence of configurations is automatically generated and executed on real robots. These experiments demonstrate that our self-configuration approach can help the system to achieve greater autonomy, flexibility and robustness.

  • 23.
    Luperto, Matteo
    et al.
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Italy.
    Monroy, Javier
    Machine Perception and Intelligent Robotics Group, Department of System Engineering and Automation, Biomedical Research Institute of Malaga, University of Malaga, Spain.
    Moreno, Francisco-Angel
    Machine Perception and Intelligent Robotics Group, Department of System Engineering and Automation, Biomedical Research Institute of Malaga, University of Malaga, Spain.
    Lunardini, Francesca
    NearLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy.
    Renoux, Jennifer
    Örebro University, School of Science and Technology.
    Krpic, Andrej
    Smart-Com, Ljubljana, Slovenia.
    Galindo, Cipriano
    Machine Perception and Intelligent Robotics Group, Department of System Engineering and Automation, Biomedical Research Institute of Malaga, University of Malaga, Spain.
    Ferrante, Simona
    NearLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy.
    Basilico, Nicola
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Italy.
    Gonzalez-Jimenez, Javier
    Machine Perception and Intelligent Robotics Group, Department of System Engineering and Automation, Biomedical Research Institute of Malaga, University of Malaga, Spain.
    Borghese, N. Alberto
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Italy.
    Seeking at-home long-term autonomy of assistive mobile robots through the integration with an IoT-based monitoring system (2023). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 161, article id 104346. Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a system that stems from the integration of an autonomous mobile robot with an IoT-based monitoring system to provide monitoring, assistance, and stimulation to older adults living alone in their own houses. The creation of an Internet of Robotics Things (IoRT) based on the interplay between pervasive smart objects and autonomous robotic systems is claimed to enable the creation of innovative services conceived for assisting the final user, especially in elderly care. The synergy between IoT and a Socially Assistive Robot (SAR) was conceived to offer robustness, reconfiguration, heterogeneity, and scalability, bringing strong added value to both current SAR and IoT technologies. First, we propose a method to achieve the synergy and integration between the IoT system and the robot; then, we show how our method increases the performance and effectiveness of both to provide long-term support to older adults. To do so, we present a case study focusing on the detection of signs of the frailty syndrome, a set of vulnerabilities, typically conveyed by cognitive and physical decline in older people, that together amplify the risk of major diseases and hinder the capability of independent living. Experimental evaluation is performed both in controlled settings and in a long-term real-world pilot study with 9 older adults in their own apartments, where the system was deployed autonomously for, on average, 12 weeks.

  • 24.
    Luperto, Matteo
    et al.
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Milano, Italy.
    Romeo, Marta
    University of Manchester, Manchester, UK.
    Monroy, Javier
    Machine Perception and Intelligent Robotics Group (MAPIR), Department of System Engineering and Automation, University of Malaga, Spain; Instituto de Investigación Biomédica de Málaga–IBIMA BIONAND Platform, University of Málaga, Spain.
    Renoux, Jennifer
    Örebro University, School of Science and Technology. Machine Perception and Interaction Lab.
    Vuono, Alessandro
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Milano, Italy.
    Moreno, Francisco-Angel
    Machine Perception and Intelligent Robotics Group (MAPIR), Department of System Engineering and Automation, University of Malaga, Spain; Instituto de Investigación Biomédica de Málaga–IBIMA BIONAND Platform, University of Málaga, Spain.
    Gonzalez-Jimenez, Javier
    Machine Perception and Intelligent Robotics Group (MAPIR), Department of System Engineering and Automation, University of Malaga, Spain; Instituto de Investigación Biomédica de Málaga–IBIMA BIONAND Platform, University of Málaga, Spain.
    Basilico, Nicola
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Milano, Italy.
    Borghese, N. Alberto
    Applied Intelligent System Lab, Department of Computer Science, University of Milan, Milano, Italy.
    User feedback and remote supervision for assisted living with mobile robots: A field study in long-term autonomy (2022). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 155, article id 104170. Article in journal (Refereed)
    Abstract [en]

    In an ageing society, the at-home use of Socially Assistive Robots (SARs) could provide remote monitoring of their users' well-being, together with physical and psychological support. However, private home environments are particularly challenging for SARs, due to their unstructured and dynamic nature, which often contributes to robots' failures. For this reason, even though several prototypes of SARs for elderly care have been developed, their commercialisation and widespread at-home use are yet to be effective. In this paper, we analyse how including the end users' feedback impacts the reliability and acceptance of SARs. To do so, we introduce a Monitoring and Logging System (MLS) for remote supervision, which increases the explainability of SAR-based systems deployed in older adults' apartments, while also allowing the exchange of feedback between caregivers, technicians, and older adults. We then present an extensive field study showing how long-term deployment of autonomous SARs can be accomplished by relying on such a feedback loop to address any potential issue. To this end, we provide the results obtained in a 130-week long study where autonomous SARs were deployed in the apartments of 10 older adults, with the aim of helping future practitioners, through the knowledge collected from this extensive experimental campaign, to fill the gap that currently exists for the widespread adoption of SARs.

  • 25.
    Marsland, Stephen
    et al.
    University of Manchester.
    Nehmzow, Ulrich
    The University of Essex.
    Duckett, Tom
    Örebro University, Department of Technology.
    Learning to select distinctive landmarks for mobile robot navigation (2001). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 37, no 4, p. 241-260. Article in journal (Refereed)
    Abstract [en]

    In landmark-based navigation systems for mobile robots, sensory perceptions (e.g., laser or sonar scans) are used to identify the robot’s current location or to construct internal representations, maps, of the robot’s environment. Being based on an external frame of reference (which is not subject to incorrigible drift errors such as those occurring in odometry-based systems), landmark-based robot navigation systems are now widely used in mobile robot applications.

    The problem that has attracted most attention to date in landmark-based navigation research is the question of how to deal with perceptual aliasing, i.e., perceptual ambiguities. In contrast, what constitutes a good landmark, or how to select landmarks for mapping, is still an open research topic. The usual method of landmark selection is to map perceptions at regular intervals, which has the drawback of being inefficient and possibly missing ‘good’ landmarks that lie between sampling points.

    In this paper, we present an automatic landmark selection algorithm that allows a mobile robot to select conspicuous landmarks from a continuous stream of sensory perceptions, without any pre-installed knowledge or human intervention during the selection process. This algorithm can be used to make mapping mechanisms more efficient and reliable. Experimental results obtained with two different mobile robots in a range of environments are presented and analysed.

  • 26.
    Mojtahedzadeh, Rasoul
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J
    Örebro University, School of Science and Technology.
    Support relation analysis and decision making for safe robotic manipulation tasks (2015). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 71, no SI, p. 99-117. Article in journal (Refereed)
    Abstract [en]

    In this article, we describe an approach to address the issue of automatically building and using high-level symbolic representations that capture physical interactions between objects in static configurations. Our work targets robotic manipulation systems where objects need to be safely removed from piles that come in random configurations. We assume that a 3D visual perception module exists so that objects in the piles can be completely or partially detected. Depending on the outcome of the perception, we divide the issue into two sub-issues: 1) all objects in the configuration are detected; 2) only a subset of objects are correctly detected. For the first case, we use notions from geometry and static equilibrium in classical mechanics to automatically analyze and extract contact and support relations between pairs of objects. For the second case, we use machine learning techniques to estimate the probability of objects supporting each other. Having the support relations extracted, a decision making process is used to identify which object to remove from the configuration so that an expected minimum cost is optimized. The proposed methods have been extensively tested and validated on data sets generated in simulation and from real world configurations for the scenario of unloading goods from shipping containers.

  • 27.
    Palm, Rainer
    et al.
    Örebro University, School of Science and Technology.
    Iliev, Boyko
    Örebro University, School of Science and Technology.
    Kadmiry, Bourhane
    Örebro University, School of Science and Technology.
    Recognition of human grasps by time-clustering and fuzzy modeling (2009). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 57, no 5, p. 484-495. Article in journal (Refereed)
    Abstract [en]

    In this paper we address the problem of recognition of human grasps for five-fingered robotic hands and industrial robots in the context of programming-by-demonstration. The robot is instructed by a human operator wearing a data glove capturing the hand poses. For a number of human grasps, the corresponding fingertip trajectories are modeled in time and space by fuzzy clustering and Takagi-Sugeno (TS) modeling. This so-called time-clustering leads to grasp models using time as input parameter and fingertip positions as outputs. For a sequence of grasps, the control system of the robot hand identifies the grasp segments, classifies the grasps and generates the sequence of grasps shown before. For this purpose, each grasp is correlated with a training sequence. By means of a hybrid fuzzy model the demonstrated grasp sequence can be reconstructed.

  • 28.
    Persson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Natural Sciences.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 6, p. 483-492. Article in journal (Refereed)
    Abstract [en]

    This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.

  • 29.
    Persson, Martin
    et al.
    Örebro University, Department of Technology.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, Department of Technology.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 5, p. 383-390. Article in journal (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

  • 30. Petrovitc, Ivan
    et al.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Special issue ECMR 2009 (2011). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 59, no 5, p. 263-264. Article in journal (Refereed)
  • 31.
    Pettersson, Ola
    Örebro University, Department of Technology.
    Execution monitoring in robotics: a survey (2005). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 53, no 2, p. 73-88. Article in journal (Refereed)
    Abstract [en]

    Research on execution monitoring in its own right is still not very common within the field of robotics and autonomous systems. It is more common that researchers interested in control architectures or execution planning include monitoring as a small part of their work when they realize that it is needed. On the other hand, execution monitoring has been a well-studied topic within industrial control, although control theorists seldom use this term. Instead, they refer to the problem of fault detection and isolation (FDI).

    This survey will use the knowledge and terminology from industrial control in order to classify different execution monitoring approaches applied to robotics. The survey is particularly focused on autonomous mobile robotics.

  • 32.
    Sanfeliu, Alberto
    et al.
    Institut de Robòtica I Informàtica Industrial (UPC-CSIC), Universitat Politècnica de Catalunya, Barcelona, Spain.
    Hagita, Norihiro
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Network Robot Systems (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 10, p. 793-797. Article in journal (Refereed)
    Abstract [en]

    This article introduces the definition of Network Robot Systems (NRS) as it is understood in Europe, the USA and Japan. Moreover, it describes some of the NRS projects in Europe and Japan and presents a summary of the papers of this Special Issue.

  • 33. Sanfeliu, Alberto
    et al.
    Hagita, Norihiro
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Special issue: Network robot systems (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 10, p. 791-791. Article in journal (Refereed)
  • 34.
    Skoglund, Alexander
    et al.
    AASS Learning Systems Lab, Örebro Universitet, Örebro, Sweden.
    Iliev, Boyko
    Örebro University, School of Science and Technology.
    Palm, Rainer
    AASS Learning Systems Lab, Örebro Universitet, Örebro, Sweden.
    Programming-by-demonstration of reaching motions: a next-state-planner approach (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 5, p. 607-621. Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel approach to skill acquisition from human demonstration. A robot manipulator with a morphology very different from the human arm simply cannot copy a human motion, but has to execute its own version of the skill. Once a skill has been acquired, the robot must also be able to generalize to other similar skills without a new learning process. By using a motion planner that operates in an object-related world frame called hand-state, we show that this representation simplifies skill reconstruction and preserves the essential parts of the skill.

  • 35.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Mojtahedzadeh, Rasoul
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Comparative evaluation of range sensor accuracy for indoor mobile robotics and automated logistics applications (2013). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 61, no 10, p. 1094-1105. Article in journal (Refereed)
    Abstract [en]

    3D range sensing is an important topic in robotics, as it is a component in vital autonomous subsystems such as collision avoidance, mapping and perception. The development of affordable, high frame rate and precise 3D range sensors is thus of considerable interest. Recent advances in sensing technology have produced several novel sensors that attempt to meet these requirements. This work is concerned with the development of a holistic method for accuracy evaluation of the measurements produced by such devices. A method for comparison of range sensor output to a set of reference distance measurements, without using a precise ground truth environment model, is proposed. This article presents an extensive evaluation of three novel depth sensors: the Swiss Ranger SR-4000, Fotonic B70 and Microsoft Kinect. Tests are concentrated on the automated logistics scenario of container unloading. Six different setups of box-, cylinder-, and sack-shaped goods inside a mock-up container are used to collect range measurements. Comparisons are performed against hand-crafted ground truth data, as well as against a reference actuated Laser Range Finder (aLRF) system. Additional test cases in an uncontrolled indoor environment are performed in order to evaluate the sensors' performance in a challenging, realistic application scenario.

  • 36.
    Sun, Da
    et al.
    Örebro University, School of Science and Technology. Department of Biomedical Engineering, National University of Singapore, Singapore.
    Liao, Qianfang
    Research & Development Center, Nidec Singapore Pte Ltd, Singapore, Singapore; Department of Biomedical Engineering, National University of Singapore, Singapore.
    Type-2 Fuzzy Logic based Time-delayed Shared Control in Online-switching Tele-operated and Autonomous Systems (2018). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 101, p. 138-152. Article in journal (Refereed)
    Abstract [en]

    This paper develops a novel shared control scheme for an online-switching tele-operated and autonomous system with time-varying delays. A Type-2 Takagi-Sugeno (T-S) fuzzy model is used to describe the dynamics of the master and slave robots in this system. A novel non-singular fast terminal sliding mode (NFTSM)-based algorithm combined with an extended wave-based time domain passivity approach (TDPA) is presented to enhance master-slave motion synchronization in the tele-operated mode and reference-slave motion synchronization in the autonomous mode, while simultaneously ensuring the stability of the overall system in the presence of arbitrary time delays. In addition, based on the Type-2 fuzzy model, a new torque observer is designed to estimate the external torques, and a torque tracking method is then employed in the control laws to let the slave apply the designated force, further improving the operator's force perception of the environment. The stability of the closed-loop system is proven using Lyapunov-Krasovskii functions. Finally, experiments using two haptic devices demonstrate the superiority of the proposed strategy.

  • 37.
    Sun, Da
    et al.
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Shared mixed reality-bilateral telerobotic system (2020). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 134, article id 103648. Article in journal (Refereed)
    Abstract [en]

    This study proposes a new shared mixed reality (MR)-bilateral telerobotic system. The main contribution of this study is to combine MR teleoperation and bilateral teleoperation, which takes advantage of the two types of teleoperation and compensates for each other's drawbacks. With this combination, the proposed system can address the asymmetry issues in bilateral teleoperation, such as kinematic redundancy and workspace inequality, and provide force feedback, which is lacking in MR teleoperation. In addition, this system effectively supports long-distance movements and fine movements. In this system, a new MR interface is developed to provide the operator with immersive visual feedback of the workspace, in which a useful virtual controller, known as an interaction proxy, is designed. Compared with previous virtual reality-based teleoperation systems, this interaction proxy can freely decouple the operator from the control loop, such that the operational burden can be substantially alleviated. Additionally, the force feedback provided by the bilateral teleoperation gives the operator an advanced perception of the remote workspace and can improve task performance. Experiments on multiple pick-and-place tasks are provided to demonstrate the feasibility and effectiveness of the proposed system.

  • 38.
    Tamimi, Hashem
    et al.
    University of Tübingen.
    Andreasson, Henrik
    Örebro University, Department of Technology.
    Treptow, André
    University of Tübingen.
    Duckett, Tom
    University of Lincoln.
    Zell, Andreas
    University of Tübingen.
    Localization of mobile robots with omnidirectional vision using Particle Filter and iterative SIFT (2006). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 9, p. 758-765. Article in journal (Refereed)
    Abstract [en]

    The Scale Invariant Feature Transform, SIFT, has been successfully applied to robot localization. Still, the number of features extracted with this approach is immense, especially when dealing with omnidirectional vision. In this work, we propose a new approach that reduces the number of features generated by SIFT as well as their extraction and matching time. With the help of a Particle Filter, we demonstrate that we can still localize the mobile robot accurately with a lower number of features.

  • 39.
    Treptow, André
    et al.
    University of Tübingen.
    Cielniak, Grzegorz
    Örebro University, Department of Technology.
    Duckett, Tom
    University of Lincoln.
    Real-time people tracking for mobile robots using thermal vision (2006). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 9, p. 729-739. Article in journal (Refereed)
    Abstract [en]

    This paper presents a vision-based approach for tracking people on a mobile robot using thermal images. The approach combines a particle filter with two alternative measurement models that are suitable for real-time tracking. With this approach a person can be detected independently from current light conditions and in situations where no skin colour is visible. In addition, the paper presents a comprehensive, quantitative evaluation of the different methods on a mobile robot in an office environment, for both single and multiple persons. The results show that the measurement model that was learned from local greyscale features could improve on the performance of the elliptic contour model, and that both models could be combined to further improve performance with minimal extra computational cost.

  • 40.
    Valgren, Christoffer
    et al.
    Department of Computer Science, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SIFT, SURF & seasons: Appearance-based long-term localization in outdoor environments (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 2, p. 149-156. Article in journal (Refereed)
    Abstract [en]

    In this paper, we address the problem of outdoor, appearance-based topological localization, particularly over long periods of time where seasonal changes alter the appearance of the environment. We investigate a straightforward method that relies on local image features to compare single image pairs. We first look into which of the dominating image feature algorithms, SIFT or the more recent SURF, is most suitable for this task. We then fine-tune our localization algorithm in terms of accuracy, and also introduce the epipolar constraint to further improve the result. The final localization algorithm is applied on multiple data sets, each consisting of a large number of panoramic images, which have been acquired over a period of nine months with large seasonal changes. The final localization rate in the single-image matching, cross-seasonal case is between 80% and 95%.

  • 41.
    Wiedemann, Thomas
    et al.
    German Aerospace Center, Oberpfaffenhofen, Germany.
    Shutin, Dmitriy
    German Aerospace Center, Oberpfaffenhofen, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Model-based gas source localization strategy for a cooperative multi-robot system: A probabilistic approach and experimental validation incorporating physical knowledge and model uncertainties (2019). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 118, p. 66-79. Article in journal (Refereed)
    Abstract [en]

    Sampling gas distributions by robotic platforms in order to find gas sources is an appealing approach to alleviate threats for a human operator. Different sampling strategies for robotic gas exploration exist. In this paper we investigate the benefit that can be obtained by incorporating physical knowledge about the gas dispersion, by exploring a gas diffusion process using a multi-robot system. The physical behavior of the diffusion process is modeled using a Partial Differential Equation (PDE), which is integrated into the exploration strategy. It is assumed that the diffusion process is driven by only a few spatial sources at unknown locations with unknown intensity. The objective of the exploration strategy is to guide the robots to informative measurement locations and, by means of concentration measurements, estimate the source parameters, in particular their number, locations and magnitudes. To this end we propose a probabilistic approach to PDE identification under sparsity constraints using factor graphs and a message passing algorithm. Moreover, message passing schemes permit an efficient distributed implementation of the algorithm, which makes it suitable for a multi-robot system. We designed an experimental setup that allows us to evaluate the performance of the exploration strategy in hardware-in-the-loop experiments as well as in experiments with real ethanol gas under laboratory conditions. The results indicate that the proposed exploration approach accelerates the identification of the source parameters and outperforms systematic sampling.

  • 42.
    Yang, Yuxuan
    et al.
    Örebro University, School of Science and Technology.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Learning differentiable dynamics models for shape control of deformable linear objects2022In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 158, article id 104258Article in journal (Refereed)
    Abstract [en]

    Robots manipulating deformable linear objects (DLOs), such as surgical sutures in medical robotics or cables and hoses in industrial assembly, can benefit substantially from accurate and fast differentiable predictive models. However, off-the-shelf analytic physics models are generally not differentiable. Recently, neural-network-based data-driven models have shown promising results in learning DLO dynamics. These models have an additional advantage over analytic physics models: being differentiable, they can be used in gradient-based trajectory planning. Still, data-driven approaches demand a large amount of training data, which can be challenging to obtain for real-world applications. In this paper, we propose a framework for learning a differentiable data-driven model of DLO dynamics from a minimal set of real-world data. To learn DLO twisting and bending dynamics in a 3D environment, we first introduce a new, suitable DLO representation. Next, we use a recurrent network module to propagate effects between segments along a DLO, thereby addressing a critical limitation of current state-of-the-art methods. Then, we train the data-driven model on synthetic data generated in simulation, thereby foregoing the time-consuming and laborious data collection process required for real-world applications. To achieve a good correspondence between the real and simulated models, we choose a set of simulation model parameters through parameter identification, requiring only a few trajectories of a real DLO. We evaluate several optimization methods for parameter identification and demonstrate that the differential evolution algorithm is efficient and effective for this task. In DLO shape control tasks with a model-based controller, the data-driven model trained on synthetic data generated by the identified simulation models performs on par with models trained on a comparable amount of real-world data, which would, however, be intractable to collect.
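
    The parameter-identification step in the abstract above can be sketched in miniature. The toy "simulator" below (a single damped bend angle under explicit Euler integration) is a hypothetical stand-in for a real DLO simulator, and the optimizer is a minimal hand-rolled DE/rand/1/bin variant of differential evolution; bounds, gains, and population sizes are illustrative, not the paper's.

    ```python
    import numpy as np

    def simulate(params, n_steps=100, dt=0.05):
        """Toy stand-in for a DLO simulator: one bend angle with damped dynamics."""
        stiffness, damping = params
        theta, omega = 1.0, 0.0
        traj = []
        for _ in range(n_steps):
            omega += (-stiffness * theta - damping * omega) * dt
            theta += omega * dt
            traj.append(theta)
        return np.array(traj)

    def differential_evolution(cost, bounds, pop_size=20, n_gen=100,
                               F=0.8, cr=0.9, seed=0):
        """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
        binomially cross over, keep the trial only if it improves the cost."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        costs = np.array([cost(p) for p in pop])
        for _ in range(n_gen):
            for i in range(pop_size):
                others = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                cross = rng.random(len(lo)) < cr
                cross[rng.integers(len(lo))] = True   # force at least one gene
                trial = np.where(cross, mutant, pop[i])
                c_trial = cost(trial)
                if c_trial < costs[i]:
                    pop[i], costs[i] = trial, c_trial
        best = np.argmin(costs)
        return pop[best], costs[best]

    true_params = np.array([4.0, 0.3])
    reference = simulate(true_params)   # plays the role of a few real trajectories
    cost = lambda p: np.mean((simulate(p) - reference) ** 2)
    best, best_cost = differential_evolution(cost, np.array([[0.1, 10.0],
                                                             [0.01, 2.0]]))
    ```

    The same pattern scales to the paper's setting by replacing `simulate` with the full DLO simulator and `reference` with the recorded real trajectories; DE needs only cost evaluations, which is why it suits a non-differentiable simulator.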

  • 43.
    Yuan, Weihao
    et al.
    Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
    Hang, Kaiyu
    Mechanical Engineering and Material Science, Yale University, New Haven CT, USA.
    Kragic, Danica
    Centre for Autonomous Systems, EECS, KTH Royal Institute of Technology, Stockholm, Sweden.
    Wang, Michael Y.
    Robotics Institute, ECE, Hong Kong University of Science and Technology, Hong Kong SAR, China.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology.
    End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer2019In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, p. 119-134Article in journal (Refereed)
    Abstract [en]

    Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions, without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected on a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires on the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulated and real settings using a Baxter robot, showing that the proposed approach can effectively improve the training process in simulation and efficiently adapt the learned policy to the real-world application, even when the camera pose differs from simulation. Additionally, we show that the learned system not only provides adaptive behavior to handle unforeseen events during execution, such as distractor objects, sudden changes in object positions, and obstacles, but also deals with obstacle shapes that were not present during training.
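
    The adaptation step in the abstract above, pretraining on plentiful simulated data and then fine-tuning on a handful of supervised real-world examples, can be sketched with a toy linear policy. This is not the paper's deep network or its pixel inputs: the features, the synthetic "sim-to-real gap" (a feature shift imitating a changed camera pose), and all constants below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def train(W, X, Y, lr=0.5, epochs=300):
        """Full-batch cross-entropy gradient descent for a linear policy
        (observation features -> push-action logits)."""
        for _ in range(epochs):
            W = W - lr * X.T @ (softmax(X @ W) - Y) / len(X)
        return W

    def accuracy(W, X, Y):
        return float((softmax(X @ W).argmax(1) == Y.argmax(1)).mean())

    def make_data(n, shift=0.0):
        """Hypothetical stand-in for visual observations; `shift` mimics the
        sim-to-real gap (e.g. a different camera pose than in simulation)."""
        feats = rng.normal(size=(n, 4)) + shift
        X = np.hstack([feats, np.ones((n, 1))])        # bias feature
        labels = (feats @ np.array([1.0, -1.0, 0.5, 0.0]) > shift).astype(int)
        return X, np.eye(2)[labels]

    # 1) pretrain the policy on plentiful simulated experience
    X_sim, Y_sim = make_data(5000, shift=0.0)
    W = train(np.zeros((5, 2)), X_sim, Y_sim)

    # 2) adapt it with a small supervised real-world set (70 episodes in the paper)
    X_real, Y_real = make_data(70, shift=1.5)
    acc_before = accuracy(W, X_real, Y_real)
    W = train(W, X_real, Y_real, lr=0.2, epochs=300)
    acc_after = accuracy(W, X_real, Y_real)
    ```

    The point of the sketch is the data budget: the expensive reinforcement learning happens entirely on cheap simulated samples, and only a small supervised dataset is needed to close the domain gap.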
