
Örebro University Publications (oru.se)
Results 1 - 9 of 9
  • 1.
    Arain, Muhammad Asif
    et al.
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Mobile Robotics and Olfaction (MRO) Lab, Center for Applied Autonomous Sensor Systems (AASS), School of Science and Technology, Örebro University, Örebro, Sweden.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Sniffing out fugitive methane emissions: autonomous remote gas inspection with a mobile robot. 2021. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 40, no. 4-5, p. 782-814. Article in journal (Refereed)
    Abstract [en]

    Air pollution causes millions of premature deaths every year, and fugitive emissions of, e.g., methane are major causes of global warming. Correspondingly, air pollution monitoring systems are urgently needed. Mobile, autonomous monitoring can provide adaptive and higher spatial resolution compared with traditional monitoring stations and allows fast deployment and operation in adverse environments. We present a mobile robot solution for autonomous gas detection and gas distribution mapping using remote gas sensing. Our "Autonomous Remote Methane Explorer" (ARMEx) is equipped with an actuated spectroscopy-based remote gas sensor, which collects integral gas measurements along optical beams up to 30 m long. State-of-the-art 3D mapping and robot localization allow the precise location of the optical beams to be determined, which then facilitates gas tomography (tomographic reconstruction of local gas distributions from sets of integral gas measurements). To autonomously obtain informative sampling strategies for gas tomography, we reduce the search space for gas inspection missions by defining a sweep of the remote gas sensor over a selectable field of view as a sensing configuration. We describe two different ways to find sequences of sensing configurations that optimize the criteria for gas detection and gas distribution mapping while minimizing the number of measurements and distance traveled. We evaluated an ARMEx prototype deployed in a large, challenging indoor environment with eight gas sources. In comparison with human experts teleoperating the platform from a distant building, the autonomous strategy produced better gas maps with a lower number of sensing configurations and a slightly longer route.

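    A minimal sketch of the gas-tomography step described in the abstract above, under simplifying assumptions: a 2D concentration grid, straight-line beams whose per-cell path lengths are approximated by point sampling, and a ridge-regularized least-squares reconstruction. The grid size, beam geometry, source location and regularization weight are illustrative values, not the paper's sensor model or planner.

    import numpy as np

    GRID = 10  # 10 x 10 cells, 1 m per cell (assumed)

    def beam_row(start, end, n_samples=300):
        """Approximate per-cell path lengths of one optical beam by point sampling."""
        pts = np.linspace(start, end, n_samples)
        seg = np.linalg.norm(np.subtract(end, start)) / n_samples
        row = np.zeros(GRID * GRID)
        for x, y in pts:
            i, j = int(x), int(y)
            if 0 <= i < GRID and 0 <= j < GRID:
                row[i * GRID + j] += seg
        return row

    # Hypothetical ground truth: a single gas source in cell (3, 7).
    truth = np.zeros((GRID, GRID))
    truth[3, 7] = 5.0

    # Two stand-in sensing configurations: the sensor sweeps its beam from two
    # corner poses toward points on the far walls.
    poses = [(0.5, 0.5), (9.5, 0.5)]
    targets = [(9.5, v) for v in np.linspace(0.5, 9.5, 10)] + \
              [(u, 9.5) for u in np.linspace(0.5, 9.5, 10)] + \
              [(0.5, v) for v in np.linspace(0.5, 9.5, 10)]
    beams = [(p, t) for p in poses for t in targets if p != t]

    A = np.vstack([beam_row(s, e) for s, e in beams])  # beam/cell geometry matrix
    y = A @ truth.ravel()                              # simulated integral readings

    lam = 1e-2  # ridge term keeps unobserved cells near zero
    c_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    print("estimated peak cell:", np.unravel_index(int(np.argmax(c_hat)), (GRID, GRID)))

    In the paper, the integral measurements come from autonomously selected sensing configurations; the two corner sweeps above merely stand in for such configurations, and the reconstruction's peak should fall at or near the simulated source cell.
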
  • 2.
    Bennewitz, Maren
    et al.
    University of Freiburg.
    Burgard, Wolfram
    University of Freiburg.
    Cielniak, Grzegorz
    Örebro University, Department of Technology.
    Thrun, Sebastian
    Carnegie Mellon University.
    Learning motion patterns of people for compliant robot motion. 2005. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 24, no. 1, p. 31-48. Article in journal (Refereed)
    Abstract [en]

    Whenever people move through their environments they do not move randomly. Instead, they usually follow specific trajectories or motion patterns corresponding to their intentions. Knowledge about such patterns enables a mobile robot to robustly keep track of persons in its environment and to improve its behavior. This paper proposes a technique for learning collections of trajectories that characterize typical motion patterns of persons. Data recorded with laser-range finders is clustered using the expectation maximization algorithm. Based on the result of the clustering process we derive a Hidden Markov Model (HMM) that is applied to estimate the current and future positions of persons based on sensory input. We also describe how to incorporate the probabilistic belief about the potential trajectories of persons into the path planning process. We present several experiments carried out in different environments with a mobile robot equipped with a laser range scanner and a camera system. The results demonstrate that our approach can reliably learn motion patterns of persons, can robustly estimate and predict positions of persons, and can be used to improve the navigation behavior of a mobile robot.

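    The abstract above describes clustering trajectories with EM and deriving an HMM to estimate and predict positions of persons. The sketch below illustrates only the HMM filtering/prediction part on a hand-specified toy model over a few discrete places; the place coordinates, transition probabilities and observation noise are made-up values, not the learned motion patterns from the paper.

    import numpy as np

    places = np.array([[0.0, 0.0],   # door
                       [4.0, 0.0],   # desk
                       [4.0, 3.0],   # printer
                       [0.0, 3.0]])  # couch
    T = np.array([[0.70, 0.20, 0.05, 0.05],   # transition model P(next | current)
                  [0.10, 0.60, 0.20, 0.10],
                  [0.05, 0.20, 0.60, 0.15],
                  [0.10, 0.10, 0.20, 0.60]])
    SIGMA = 0.8  # std dev [m] of the range-sensor position estimate (assumed)

    def likelihood(z):
        """Gaussian likelihood of observing position z from each place."""
        d2 = np.sum((places - z) ** 2, axis=1)
        return np.exp(-0.5 * d2 / SIGMA ** 2)

    def forward_update(belief, z):
        """One HMM filtering step: predict with T, then weight by the observation."""
        posterior = (T.T @ belief) * likelihood(z)
        return posterior / posterior.sum()

    def predict(belief, k):
        """Belief over places k steps into the future (no observations)."""
        return np.linalg.matrix_power(T.T, k) @ belief

    belief = np.full(4, 0.25)                       # uniform prior
    for z in [np.array([0.5, 0.2]), np.array([2.0, 0.3]), np.array([3.6, 0.4])]:
        belief = forward_update(belief, z)          # observations drift toward the desk
    print("current belief:", np.round(belief, 2))
    print("belief in 3 steps:", np.round(predict(belief, 3), 2))
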
  • 3.
    Kucner, Tomasz Piotr
    et al.
    Mobile Robotics Group, School of Electrical Engineering, Aalto University, Finland; Finnish Center for Artificial Intelligence, Finland.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Mghames, Sariah
    L-CAS, School of Computer Science, University of Lincoln, Lincoln, UK.
    Palmieri, Luigi
    BOSCH Corporate Research, Renningen, Germany.
    Verdoja, Francesco
    Intelligent Robotics Group, School of Electrical Engineering, Aalto University, Finland.
    Swaminathan, Chittaranjan Srinivas
    Örebro University, School of Science and Technology.
    Krajnik, Tomas
    Artificial Intelligence Center, Czech Technical University, Praha, Czechia.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Bellotto, Nicola
    L-CAS, School of Computer Science, University of Lincoln, Lincoln, UK; Department of Information Engineering, University of Padua, Padova, Italy.
    Hanheide, Marc
    L-CAS, School of Computer Science, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology. Technical University of Munich, Munich, Germany.
    Survey of maps of dynamics for mobile robots. 2023. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 42, no. 11, p. 977-1006. Article in journal (Refereed)
    Abstract [en]

    Robotic mapping provides spatial information for autonomous agents. Depending on the tasks they seek to enable, the maps created range from simple 2D representations of the environment geometry to complex, multilayered semantic maps. This survey article is about maps of dynamics (MoDs), which store semantic information about typical motion patterns in a given environment. Some MoDs use trajectories as input, and some can be built from short, disconnected observations of motion. Robots can use MoDs, for example, for global motion planning, improved localization, or human motion prediction. Accounting for the increasing importance of maps of dynamics, we present a comprehensive survey that organizes the knowledge accumulated in the field and identifies promising directions for future work. Specifically, we introduce field-specific vocabulary, summarize existing work according to a novel taxonomy, and describe possible applications and open research problems. We conclude that the field is mature enough, and we expect that maps of dynamics will be increasingly used to improve robot performance in real-world use cases. At the same time, the field is still in a phase of rapid development where novel contributions could significantly impact this research area.

  • 4.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Andres, Benjamin
    Knowledge Processing and Information Systems, University of Potsdam, Potsdam, Germany.
    Combining task and motion planning: a culprit detection problem. 2016. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 35, no. 8, p. 890-927. Article in journal (Refereed)
    Abstract [en]

    Solving problems combining task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent backtracks between symbolic and geometric levels make the search inefficient. We address this problem by guiding symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.

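    A schematic sketch of the interleaving idea described above: a symbolic planner proposes action sequences, a stand-in geometric checker evaluates them, and detected culprits (here, infeasible pick orderings) are fed back to prune subsequent symbolic search. The objects, the blocking rule and the toy planner are assumptions for illustration, not the paper's formulation.

    from itertools import permutations

    OBJECTS = ["C", "A", "B"]

    def violates(order, culprit):
        """culprit = (x, y): 'picking x before y is geometrically infeasible'."""
        x, y = culprit
        return order.index(x) < order.index(y)

    def symbolic_plans(culprits):
        """Enumerate pick orders, skipping any order that matches a known culprit."""
        for order in permutations(OBJECTS):
            if not any(violates(order, c) for c in culprits):
                yield list(order)

    def geometric_check(order):
        """Stand-in geometric reasoner: C is only reachable after B has been removed."""
        if order.index("C") < order.index("B"):
            return False, ("C", "B")   # culprit: C picked before B
        return True, None

    culprits = []
    for order in symbolic_plans(culprits):
        ok, culprit = geometric_check(order)
        if ok:
            print("geometrically feasible pick order:", order)
            break
        culprits.append(culprit)       # feed the culprit back to the symbolic level
        print("rejected:", order, "culprit:", culprit)
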
  • 5.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar
    Örebro University, School of Science and Technology.
    Bidot, Julien
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Efficiently combining task and motion planning using geometric constraints. 2014. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 33, no. 14, p. 1726-1747. Article in journal (Refereed)
    Abstract [en]

    We propose a constraint-based approach to address a class of problems encountered in combined task and motion planning (CTAMP), which we call kinematically constrained problems. CTAMP is a hybrid planning process in which task planning and geometric reasoning are interleaved. During this process, symbolic action sequences generated by a task planner are geometrically evaluated. This geometric evaluation is a search problem per se, which we refer to as geometric backtrack search. In kinematically constrained problems, a significant computational effort is spent on geometric backtrack search, which impairs search at the task level. At the basis of our approach to this problem is the introduction of an intermediate layer between task planning and geometric reasoning. A set of constraints is automatically generated from the symbolic action sequences to be evaluated and combined with a set of constraints derived from the kinematic model of the robot. The resulting constraint network is then used to prune the search space during geometric backtrack search. We present experimental evidence that our approach significantly reduces the complexity of geometric backtrack search on various types of problems.

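    A minimal sketch of the pruning idea described above: binary constraints derived from the action sequence and from (hypothetical) kinematic limits are propagated over the candidate geometric choices before a plain backtracking search runs on the reduced domains. The actions, grasp candidates and constraints are illustrative stand-ins, not the paper's constraint network.

    from itertools import product

    # Candidate grasp choices per symbolic action (hypothetical values).
    domains = {
        "pick(cup)":  ["top", "side_l", "side_r"],
        "move(cup)":  ["top", "side_l", "side_r"],
        "place(cup)": ["top", "side_l", "side_r"],
    }

    # Constraints derived from the plan and a stand-in kinematic model:
    # the grasp must stay the same from pick to move to place, and a side_r
    # grasp is (say) kinematically impossible at the place pose.
    constraints = [
        (("pick(cup)", "move(cup)"),   lambda a, b: a == b),
        (("move(cup)", "place(cup)"),  lambda a, b: a == b),
        (("place(cup)", "place(cup)"), lambda a, b: a != "side_r"),
    ]

    def prune(domains, constraints):
        """One-directional arc-consistency-style pruning: drop unsupported values."""
        changed = True
        while changed:
            changed = False
            for (u, v), ok in constraints:
                for val in list(domains[u]):
                    if not any(ok(val, w) for w in domains[v]):
                        domains[u].remove(val)
                        changed = True
        return domains

    def backtrack(domains, constraints):
        """Plain geometric backtrack search over the (pruned) domains."""
        names = list(domains)
        for combo in product(*(domains[n] for n in names)):
            assign = dict(zip(names, combo))
            if all(ok(assign[u], assign[v]) for (u, v), ok in constraints):
                return assign
        return None

    pruned = prune({k: list(v) for k, v in domains.items()}, constraints)
    print("pruned domains:", pruned)
    print("geometric instantiation:", backtrack(pruned, constraints))
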
  • 6.
    Martinez Mozos, Oscar
    et al.
    Technical University of Cartagena, Cartagena, Spain.
    Nakashima, Kazuto
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Fukuoka datasets for place categorization. 2019. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 38, no. 5, p. 507-517. Article in journal (Refereed)
    Abstract [en]

    This paper presents several multi-modal 3D datasets for the problem of categorization of places. In this problem, a robotic agent should decide on the type of place/environment where it is located (residential area, forest, etc.) using information gathered by its sensors. In addition to the 3D depth information, the datasets include additional modalities such as RGB or reflectance images. The observations were taken in different indoor and outdoor environments in Fukuoka city, Japan. Outdoor place categories include forests, urban areas, indoor parking, outdoor parking, coastal areas, and residential areas. Indoor place categories include corridors, offices, study rooms, kitchens, laboratories, and toilets. The datasets are available to download at http://robotics.ait.kyushu-u.ac.jp/kyushu_datasets.

  • 7.
    Rudenko, Andrey
    et al.
    Örebro University, School of Science and Technology. Robert Bosch GmbH, Corporate Research, Germany.
    Palmieri, Luigi
    Robert Bosch GmbH, Corporate Research, Germany.
    Herman, Michael
    Bosch Center for Artificial Intelligence, Germany.
    Kitani, Kris M.
    Carnegie Mellon University, Pittsburgh, PA, USA.
    Gavrila, Dariu M.
    Intelligent Vehicles group, TU Delft, The Netherlands.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Germany.
    Human motion trajectory prediction: a survey. 2020. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 39, no. 8, p. 895-935, article id UNSP 0278364920917446. Article in journal (Refereed)
    Abstract [en]

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents and planning considering such predictions are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This article provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.

  • 8.
    Saarinen, Jari
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    3D normal distributions transform occupancy maps: an efficient representation for mapping in dynamic environments. 2013. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 32, no. 14, p. 1627-1644. Article in journal (Refereed)
    Abstract [en]

    In order to enable long-term operation of autonomous vehicles in industrial environments, numerous challenges need to be addressed. A basic requirement for many applications is the creation and maintenance of consistent 3D world models. This article proposes a novel 3D spatial representation for online real-world mapping, building upon two known representations: normal distributions transform (NDT) maps and occupancy grid maps. The proposed normal distributions transform occupancy map (NDT-OM) combines the advantages of both representations: the compactness of NDT maps and the robustness of occupancy maps. One key contribution of this article is that we formulate exact recursive updates for NDT-OMs. We show that the recursive update equations provide natural support for multi-resolution maps. Next, we describe a modification of the recursive update equations that allows adaptation in dynamic environments. As a second key contribution, we introduce NDT-OMs and formulate the occupancy update equations that allow consistent maps to be built in dynamic environments. The update of the occupancy values is based on an efficient probabilistic sensor model that is specially formulated for NDT-OMs. In several experiments with a total of 17 hours of data from a milk factory, we demonstrate that NDT-OMs enable real-time performance in large-scale, long-term industrial setups.

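    A minimal sketch of the per-cell bookkeeping behind an NDT occupancy map as described above: each cell keeps a recursively updated Gaussian (mean and covariance) of the points that hit it, plus a log-odds occupancy value. The Welford-style running update and the hit/miss log-odds increments are common textbook choices, not the paper's exact recursive formulation or sensor model.

    import numpy as np

    L_HIT, L_MISS = 0.85, -0.4   # log-odds increments (assumed values)

    class NDTCell:
        def __init__(self):
            self.n = 0
            self.mean = np.zeros(3)
            self.s = np.zeros((3, 3))   # sum of outer products of deviations
            self.log_odds = 0.0

        def add_hit(self, p):
            """Recursive (Welford-style) update of the cell's Gaussian from point p."""
            self.n += 1
            delta = p - self.mean
            self.mean += delta / self.n
            self.s += np.outer(delta, p - self.mean)
            self.log_odds += L_HIT

        def add_miss(self):
            """A ray passed through the cell without ending in it."""
            self.log_odds += L_MISS

        @property
        def cov(self):
            return self.s / (self.n - 1) if self.n > 1 else None

        @property
        def p_occupied(self):
            return 1.0 / (1.0 + np.exp(-self.log_odds))

    cell = NDTCell()
    for p in np.random.default_rng(0).normal([2.0, 1.0, 0.5], 0.05, size=(50, 3)):
        cell.add_hit(p)
    print("mean:", np.round(cell.mean, 2))
    print("cov diagonal:", np.round(np.diag(cell.cov), 4))
    print("P(occupied):", round(cell.p_occupied, 3))

    Keeping the sum of squared deviations rather than the covariance itself lets each new point be folded in with constant work per cell, which is the property that makes per-cell recursive updates attractive for long-term mapping.
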
  • 9.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. 2012. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 31, no. 12, p. 1377-1393. Article in journal (Refereed)
    Abstract [en]

    Registration of range sensor measurements is an important task in mobile robotics and has received a lot of attention. Several iterative optimization schemes have been proposed in order to align three-dimensional (3D) point scans. With the more widespread use of high-frame-rate 3D sensors and increasingly more challenging application scenarios for mobile robots, there is a need for fast and accurate registration methods that current state-of-the-art algorithms cannot always meet. This work proposes a novel algorithm that achieves accurate point cloud registration an order of magnitude faster than the current state of the art. The speedup is achieved through the use of a compact spatial representation: the Three-Dimensional Normal Distributions Transform (3D-NDT). In addition, a fast global descriptor based on the 3D-NDT is defined and used to achieve reliable initial poses for the iterative algorithm. Finally, a closed-form expression for the covariance of the proposed method is also derived. The proposed algorithms are evaluated on two standard point cloud data sets, resulting in stable performance on a par with or better than the state of the art. The implementation is available as an open-source package for the Robot Operating System (ROS).

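    A toy 2D sketch in the spirit of the abstract above: each scan is summarized by per-cell Gaussians, and a general-purpose optimizer refines a rough initial pose by maximizing the pairwise Gaussian overlap between the two NDT representations. The cell size, covariance inflation, all-pairs cost, Powell optimizer and hand-picked initial guess (standing in for the descriptor-based initialization) are simplifying assumptions, not the paper's 3D algorithm or its covariance estimate.

    import numpy as np
    from scipy.optimize import minimize

    def make_ndt(points, cell=2.0, reg=5e-2):
        """Summarize a 2D point cloud as per-cell (mean, covariance) Gaussians."""
        cells = {}
        for p in points:
            cells.setdefault((int(p[0] // cell), int(p[1] // cell)), []).append(p)
        out = []
        for pts in cells.values():
            pts = np.asarray(pts)
            if len(pts) >= 5:
                out.append((pts.mean(axis=0), np.cov(pts.T) + reg * np.eye(2)))
        return out

    def transform(points, x, y, yaw):
        R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
        return points @ R.T + np.array([x, y])

    def d2d_cost(params, src, dst):
        """Negative sum of Gaussian overlap scores between all source/target cells."""
        x, y, yaw = params
        R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
        cost = 0.0
        for mu_s, cov_s in src:
            mu, cov = R @ mu_s + np.array([x, y]), R @ cov_s @ R.T
            for mu_d, cov_d in dst:
                d = mu_d - mu
                cost -= np.exp(-0.5 * d @ np.linalg.solve(cov + cov_d, d))
        return cost

    # Reference "scan": two perpendicular walls; moved scan: the same walls after
    # a known rigid transform that the alignment should approximately recover.
    rng = np.random.default_rng(1)
    ref = np.concatenate([np.c_[np.linspace(0, 10, 200), np.zeros(200)],
                          np.c_[np.zeros(200), np.linspace(0, 10, 200)]])
    ref += rng.normal(0, 0.05, size=ref.shape)
    mov = transform(ref, 0.4, -0.3, np.deg2rad(5.0))

    # Rough initial pose stands in for the paper's descriptor-based initialization.
    res = minimize(d2d_cost, x0=[0.2, -0.1, np.deg2rad(3.0)],
                   args=(make_ndt(ref), make_ndt(mov)), method="Powell")
    print("estimated (x, y, yaw deg):",
          np.round([res.x[0], res.x[1], np.rad2deg(res.x[2])], 2))
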