
Örebro University Publications (oru.se)
1 - 8 of 8
  • 1.
    Herdt, Andrei
    et al.
    Diedam, Holger
    Wieber, Pierre-Brice
    Dimitrov, Dimitar
    Örebro University, School of Science and Technology.
    Mombaur, Katja
    Diehl, Moritz
    Online walking motion generation with automatic footstep placement. 2010. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 24, no 5-6, p. 719-737. Article in journal (Refereed)
    Abstract [en]

    The goal of this paper is to demonstrate the capacity of model predictive control (MPC) to generate stable walking motions without the use of predefined footsteps. Building on well-known MPC schemes for walking motion generation, we show that a minimal modification of these schemes yields an online walking motion generator that can track a given reference speed of the robot and decide the footstep placement automatically. Simulation results are presented for the HRP-2 humanoid robot, showing a significant improvement over previous approaches.
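A minimal sketch of the kind of linear MPC scheme the abstract builds on: the cart-table/linear inverted pendulum model with jerk input and a ZMP tracking objective, solved in closed form over a receding horizon. Horizon length, weights, and the fixed ZMP reference are illustrative assumptions, and the paper's key contribution (footstep positions as decision variables) is omitted here.

```python
import numpy as np

T, h, g, N = 0.1, 0.8, 9.81, 20            # step, CoM height, gravity, horizon
A = np.array([[1.0, T, T * T / 2], [0.0, 1.0, T], [0.0, 0.0, 1.0]])
B = np.array([T**3 / 6, T * T / 2, T])     # triple-integrator (jerk) input
C = np.array([1.0, 0.0, -h / g])           # ZMP output: z = c - (h/g) * c''

# Prediction matrices over the horizon: z_pred = Px @ x0 + Pu @ U
Px = np.zeros((N, 3))
Pu = np.zeros((N, N))
Ak = np.eye(3)
for i in range(N):
    Ak = A @ Ak
    Px[i] = C @ Ak
    for j in range(i + 1):
        Pu[i, j] = C @ np.linalg.matrix_power(A, i - j) @ B

alpha = 1e-6                               # small jerk penalty (regularization)

def mpc_step(x0, z_ref):
    """One receding-horizon solve; apply only the first jerk command."""
    H = alpha * np.eye(N) + Pu.T @ Pu
    U = np.linalg.solve(H, Pu.T @ (z_ref - Px @ x0))
    return U[0]

x = np.zeros(3)                            # CoM position, velocity, acceleration
z_ref = 0.05 * np.ones(N)                  # constant ZMP reference (one foothold)
for _ in range(80):
    x = A @ x + B * mpc_step(x, z_ref)
```

In the paper's scheme the reference would additionally depend on footstep positions chosen by the optimizer itself, which is what makes the placement automatic.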

  • 2.
    Jung, Hojung
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Iwashita, Yumi
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Local N-ary Patterns: a local multi-modal descriptor for place categorization. 2016. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 30, no 6, p. 402-415. Article in journal (Refereed)
    Abstract [en]

    This paper presents an effective method for integrating multiple modalities such as depth, color, and reflectance for place categorization. To achieve better performance with the integrated modalities, we introduce a novel descriptor, local N-ary patterns (LNP), which enables robust discrimination between place categories. In this paper, the LNP descriptor is applied to a combination of two modalities, i.e. depth and reflectance, provided by a laser range finder; however, it can easily be extended to a larger number of modalities. The proposed LNP describes relationships between the multi-modal values of pixels and their neighboring pixels. Because we consider the multi-modal relationships, our method achieves clearly better classification results than individual modalities. We carried out experiments with the publicly available Kyushu University Indoor Semantic Place Dataset, which is composed of five indoor categories: corridors, kitchens, laboratories, study rooms, and offices. We confirmed that our proposed method outperforms previous uni-modal descriptors.
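A simplified single-modality sketch of an N-ary local pattern for N = 3, i.e. a ternary quantization of neighbour differences, followed by the histogram that serves as the place descriptor. The 4-neighbourhood and the threshold are assumed placeholders; the paper's LNP additionally spans multiple modalities.

```python
import numpy as np

def local_ternary_codes(img, t=5):
    """Quantize each pixel's 4-neighbour differences into {0, 1, 2}."""
    img = img.astype(np.int64)
    c = img[1:-1, 1:-1]                                      # interior pixels
    neighbours = [img[:-2, 1:-1], img[2:, 1:-1],             # up, down
                  img[1:-1, :-2], img[1:-1, 2:]]             # left, right
    codes = np.zeros_like(c)
    for n in neighbours:
        d = n - c
        digit = np.where(d > t, 2, np.where(d < -t, 0, 1))   # 3-ary digit
        codes = codes * 3 + digit                            # base-3 encoding
    return codes                                             # values in [0, 80]

def lnp_histogram(img, t=5):
    """Normalized histogram of ternary codes: the place descriptor."""
    h = np.bincount(local_ternary_codes(img, t).ravel(), minlength=81)
    return h / h.sum()
```

With more modalities, the digits from each modality's comparison would be interleaved into a single code, which is what captures the cross-modal relationships.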

  • 3.
    Kamarudin, Kamarulzaman
    et al.
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Shakaff, Ali Yeon Md
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Mamduh, Syed Muhammad
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Zakaria, Ammar
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia; School of Mechatronics Engineering, Universiti Malaysia Perlis (UniMAP), Arau, Malaysia.
    Visvanathan, Retnam
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Yeon, Ahmad Shakaff Ali
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Kamarudin, Latifah Munirah
    Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis, Arau, Malaysia.
    Integrating SLAM and gas distribution mapping (SLAM-GDM) for real-time gas source localization. 2018. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no 17, p. 903-917. Article in journal (Refereed)
    Abstract [en]

    Gas distribution mapping (GDM) learns models of the spatial distribution of gas concentrations across 2D/3D environments, among others for the purpose of localizing gas sources. GDM requires run-time robot positioning in order to associate measurements with locations in a global coordinate frame. Most approaches assume that the robot has perfect knowledge of its position, which does not necessarily hold in realistic scenarios. We argue that a simultaneous localization and mapping (SLAM) algorithm should be used together with GDM to allow operation in an unknown environment. This paper proposes a SLAM-GDM approach that combines Hector SLAM and Kernel DM+V through a map merging technique. Hector SLAM is well suited to SLAM-GDM because it does not perform loop closure or global corrections, which would require re-computing the gas distribution map. Real-time experiments were conducted in an environment with single and multiple gas sources. The results showed that the predicted gas source locations were in most trials correct to within around 0.5-1.5 m for the large indoor area tested. The results also verified that the proposed SLAM-GDM approach and the designed system achieve real-time operation.
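A minimal sketch of the GDM side of the pipeline: a Kernel DM-style mean map, where each concentration measurement is spread over a grid with a Gaussian kernel (the paper's Kernel DM+V also estimates a variance map). The robot positions would come from Hector SLAM; the kernel width is an illustrative choice.

```python
import numpy as np

def kernel_dm_mean(positions, concentrations, grid_x, grid_y, sigma=0.3):
    """Gaussian-kernel weighted mean of point measurements on a 2D grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    w_sum = np.zeros_like(gx, dtype=float)     # accumulated kernel weights
    cw_sum = np.zeros_like(gx, dtype=float)    # weighted concentrations
    for (px, py), c in zip(positions, concentrations):
        w = np.exp(-((gx - px)**2 + (gy - py)**2) / (2.0 * sigma**2))
        w_sum += w
        cw_sum += w * c
    return cw_sum / np.maximum(w_sum, 1e-12)   # mean concentration map
```

A global map correction (as after loop closure) would invalidate the stored `positions`, which is why the abstract argues for a SLAM method without such corrections.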

  • 4.
    Lilienthal, Achim J.
    et al.
    Wilhelm Schickard Institute for Computer Science, University of Tübingen, Tübingen, Germany.
    Duckett, Tom
    Örebro University, Department of Technology.
    Experimental analysis of gas-sensitive Braitenberg vehicles. 2004. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 18, no 8, p. 817-834. Article in journal (Refereed)
    Abstract [en]

    This article addresses the problem of localising a static gas source in an indoor environment by a mobile robot. In contrast to previous works, the environment is not artificially ventilated to produce a strong unidirectional airflow. Here, the dominant transport mechanisms of gas molecules are turbulence and convection flow rather than diffusion, which results in a patchy, chaotically fluctuating gas distribution. Two Braitenberg-type strategies (positive and negative tropotaxis) based on the instantaneously measured spatial concentration gradient were investigated. Both strategies were shown to be of potential use for gas source localisation. As a possible solution to the problem of gas source declaration (the task of determining with certainty that the gas source has been found), an indirect localisation strategy based on exploration and concentration peak avoidance is suggested. Here, a gas source is located by exploiting the fact that local concentration maxima occur more frequently near the gas source compared to distant regions.
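A toy sketch of positive tropotaxis with a two-sensor Braitenberg vehicle: the turn rate follows the instantaneous left-right concentration difference, steering the vehicle up the gradient (flipping the sign of `GAIN` gives negative tropotaxis). The smooth field below is an illustrative stand-in for the patchy, fluctuating real-world distribution the abstract describes; all gains are assumed values.

```python
import numpy as np

SOURCE = np.array([1.0, 0.0])
GAIN, SPEED, BASELINE, DT = 4.0, 0.05, 0.05, 1.0

def concentration(p):
    """Smooth toy gas field peaking at SOURCE."""
    return np.exp(-np.sum((p - SOURCE) ** 2))

pos, heading = np.array([0.0, 0.0]), 0.5
closest = np.inf
for _ in range(300):
    left_dir = np.array([-np.sin(heading), np.cos(heading)])   # left of heading
    c_left = concentration(pos + BASELINE * left_dir)
    c_right = concentration(pos - BASELINE * left_dir)
    heading += GAIN * (c_left - c_right) * DT    # turn toward higher reading
    pos = pos + SPEED * DT * np.array([np.cos(heading), np.sin(heading)])
    closest = min(closest, np.linalg.norm(pos - SOURCE))
```

With a real turbulent field the instantaneous gradient is noisy, which is why the article evaluates these strategies experimentally rather than assuming this idealized behaviour.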

    Download full text (pdf)
  • 5.
    Martinez Mozos, Oscar
    et al.
    School of Computer Science, University of Lincoln, Lincoln, England.
    Mizutani, Hitoshi
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Hasegawa, Tsutomu
    Kumamoto National College of Technology, Kumamoto, Japan.
    Categorization of Indoor Places by Combining Local Binary Pattern Histograms of Range and Reflectance Data from Laser Range Finders. 2013. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 27, no 18, p. 1455-1464. Article in journal (Refereed)
    Abstract [en]

    This paper presents an approach to categorize typical places in indoor environments using 3D scans provided by a laser range finder. Examples of such places are offices, laboratories, or kitchens. In our method, we combine the range and reflectance data from the laser scan for the final categorization of places. Range and reflectance images are transformed into histograms of local binary patterns and combined into a single feature vector. This vector is later classified using support vector machines. The results of the presented experiments demonstrate the capability of our technique to categorize indoor places with high accuracy. We also show that the combination of range and reflectance information improves the final categorization results in comparison with a single modality.
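The described feature construction, up to (but not including) the SVM, can be sketched as follows: LBP histograms computed separately on a range image and a reflectance image, then concatenated into one vector. The 3x3 LBP variant and image handling are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def lbp_codes(img):
    """Classic 8-neighbour local binary pattern codes (values 0-255)."""
    img = img.astype(np.float64)
    c = img[1:-1, 1:-1]
    H, W = img.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]  # shifted neighbour view
        codes |= (n >= c).astype(np.int64) << bit      # one bit per neighbour
    return codes

def place_feature(range_img, refl_img):
    """Concatenated, normalized LBP histograms of both modalities."""
    hists = []
    for m in (range_img, refl_img):
        h = np.bincount(lbp_codes(m).ravel(), minlength=256).astype(float)
        hists.append(h / h.sum())
    return np.concatenate(hists)       # 512-dimensional vector for the SVM
```

The concatenation is what lets the classifier exploit both modalities at once, which the experiments show beats either histogram alone.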

  • 6.
    Nakashima, Kazuto
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Jung, Hojung
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Oto, Yuki
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Technical University of Cartagena, Cartagena, Spain.
    Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization. 2018. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 32, no 14, p. 750-765. Article in journal (Refereed)
    Abstract [en]

    Semantic place categorization, one of the essential tasks for autonomous robots and vehicles, allows them to make decisions and navigate in unfamiliar environments. Outdoor places in particular are more difficult targets than indoor ones due to perceptual variations, such as illumination that changes over 24 hours and occlusions by cars and pedestrians. This paper presents a novel method for categorizing outdoor places using convolutional neural networks (CNNs), which take omnidirectional depth/reflectance images obtained by 3D LiDARs as inputs. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. The data are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results outperform traditional approaches and show the effectiveness of using both depth and reflectance modalities. To analyze our trained deep networks, we visualize the learned features.
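One concrete piece of this pipeline can be sketched: spherically projecting a LiDAR point cloud with per-point reflectance into the panoramic depth/reflectance image pair that a CNN would consume. Image size and the fixed vertical field of view are assumed values, not the paper's configuration.

```python
import numpy as np

def panoramic_images(points, reflectance, H=64, W=1024, fov=(-0.4, 0.2)):
    """Spherical projection of an (N, 3) cloud into depth/reflectance images."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                           # horizontal angle
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))     # vertical angle
    u = np.clip(((yaw + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    v = np.clip(((fov[1] - pitch) / (fov[1] - fov[0]) * H).astype(int), 0, H - 1)
    depth = np.zeros((H, W))
    refl = np.zeros((H, W))
    depth[v, u] = r
    refl[v, u] = reflectance
    return np.stack([depth, refl])                   # (2, H, W) CNN input
```

Stacking depth and reflectance as two channels of one image is what lets a single CNN learn geometric and photometric features jointly.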

  • 7.
    Neumann, Patrick
    et al.
    Federal Institute for Materials Research and Testing (BAM), Berlin, Germany.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Bartholmai, Matthias
    Federal Institute for Materials Research and Testing (BAM), Berlin, Germany.
    Schiller, Jochen H.
    Institute of Computer Science, Freie Universität, Berlin, Germany.
    Gas source localization with a micro-drone using bio-inspired and particle filter-based algorithms. 2013. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 27, no 9, p. 725-738. Article in journal (Refereed)
    Abstract [en]

    Gas source localization (GSL) with mobile robots is a challenging task due to the unpredictable nature of gas dispersion, the limitations of current sensing technologies, and the mobility constraints of ground-based robots. This work proposes an integral solution for the GSL task, including source declaration. We present a novel pseudo-gradient-based plume tracking algorithm and a particle filter-based source declaration approach, and apply them on a gas-sensitive micro-drone. We compare the performance of the proposed system, in simulations and real-world experiments, against two commonly used tracking algorithms adapted for aerial exploration missions.
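A minimal sketch of a particle filter over candidate source positions, in the spirit of the declaration approach: particles are weighted by how well a dispersion model explains each concentration measurement, then resampled with jitter. The isotropic toy plume model, noise level, and jitter are placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_conc(source, sensor):
    """Toy isotropic dispersion model (placeholder for a real plume model)."""
    return np.exp(-np.sum((source - sensor)**2, axis=-1))

def pf_update(particles, sensor_pos, measured, noise=0.15):
    """Weight by measurement likelihood, resample, and add jitter."""
    err = expected_conc(particles, sensor_pos) - measured
    w = np.exp(-0.5 * (err / noise)**2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0.0, 0.05, particles.shape)

particles = rng.uniform(0.0, 4.0, size=(500, 2))    # candidate source positions
true_source = np.array([3.0, 1.0])
for sensor in rng.uniform(0.0, 4.0, size=(30, 2)):  # measurement locations
    particles = pf_update(particles, sensor,
                          expected_conc(true_source, sensor))
# a tightly collapsed particle cloud supports declaring the source found
```

Declaration then reduces to a confidence test on the particle cloud's spread rather than a hard threshold on a single reading.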

  • 8.
    Yamada, Hiroyuki
    et al.
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan; Research & Development Group, Hitachi, Ltd., Ibaraki, Japan.
    Ahn, Jeongho
    Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Martinez Mozos, Oscar
    Iwashita, Yumi
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, USA.
    Kurazume, Ryo
    Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Gait-based person identification using 3D LiDAR and long short-term memory deep networks. 2020. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 34, no 18, p. 1201-1211. Article in journal (Refereed)
    Abstract [en]

    Gait recognition is one form of biometrics, alongside facial, fingerprint, and retina recognition. Although most biometric methods require direct contact between a device and the subject, gait recognition is unique in that it requires no interaction with the subject and can be performed from a distance. Cameras are commonly used for gait recognition, and a number of researchers have used depth information obtained with an RGB-D camera such as the Microsoft Kinect. Although depth-based gait recognition has advantages, such as robustness against lighting conditions or appearance variations, it also has limitations: the RGB-D camera cannot be used outdoors, and the measurement distance is limited to approximately 10 meters. The present paper describes a long short-term memory (LSTM)-based method for gait recognition using a real-time multi-line LiDAR. Very few studies have dealt with LiDAR-based gait recognition; the present study is the first to combine LiDAR data and LSTM for gait recognition and to focus on dealing with different appearances. We collect the first gait recognition dataset consisting of time-series range data for 30 people with clothing variations and show the effectiveness of the proposed approach.
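The LSTM recurrence at the core of such a network can be sketched in plain NumPy, applied step by step to a sequence of per-frame features extracted from the LiDAR range data. Weights here are random and untrained, and the feature/hidden sizes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; rows of W, U, b pack the i, f, o, g gates."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                  # updated cell state
    h = o * np.tanh(c)                 # updated hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid, T = 8, 16, 5              # per-frame features, hidden size, frames
W = rng.normal(0.0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0.0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(T, n_in)):   # one (toy) gait feature sequence
    h, c = lstm_step(x, h, c, W, U, b)
# h summarizes the sequence and would feed a person-identification classifier
```

The recurrence is what lets the model exploit the temporal structure of gait rather than classifying single frames.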
