Örebro University Publications (oru.se)
1 - 7 of 7
  • 1.
    Almeida, Tiago
    et al.
    Örebro University, School of Science and Technology. IEETA, DEM, University of Aveiro, Aveiro, Portugal.
    Santos, Vitor
    IEETA, DEM, University of Aveiro, Aveiro, Portugal.
    Martinez Mozos, Oscar
    Örebro University, School of Science and Technology.
    Lourenco, Bernardo
    IEETA, DEM, University of Aveiro, Aveiro, Portugal.
    Comparative Analysis of Deep Neural Networks for the Detection and Decoding of Data Matrix Landmarks in Cluttered Indoor Environments (2021). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 103, no 1, article id 13. Article in journal (Refereed)
    Abstract [en]

    Data Matrix patterns imprinted as passive visual landmarks have been shown to be a valid solution for the self-localization of Automated Guided Vehicles (AGVs) on shop floors. However, existing Data Matrix decoding applications take a long time to detect and segment the markers in the input image. Therefore, this paper proposes a pipeline where the detector is based on a real-time Deep Learning network and the decoder is a conventional method, i.e., the implementation in libdmtx. To this end, several types of Deep Neural Networks (DNNs) for object detection were studied, trained, compared, and assessed. The architectures range from region proposals (Faster R-CNN) to single-shot methods (SSD and YOLO). This study focused on performance and processing time to select the best Deep Learning (DL) model for detecting the visual markers. Additionally, a specific data set was created to evaluate those networks. This test set includes demanding situations, such as high illumination gradients in the same scene and Data Matrix markers positioned in skewed planes. The proposed approach outperformed the best-known and most used Data Matrix decoder available in libraries like libdmtx.
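The two-stage pipeline the abstract describes (a learned detector followed by a conventional decoder) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `detect_regions` and `decode_region` are hypothetical stubs standing in for a trained DNN and a decoder such as libdmtx.

```python
# Sketch of a detect-then-decode pipeline for Data Matrix landmarks.
# Both stage functions below are illustrative stubs.

def detect_regions(image):
    """Return candidate bounding boxes (x, y, w, h) for Data Matrix markers."""
    # A real system would run a single-shot detector (e.g. SSD/YOLO) here.
    return [(10, 10, 32, 32), (60, 40, 32, 32)]

def decode_region(image, box):
    """Decode the marker inside `box`; return its payload or None."""
    x, y, w, h = box
    crop = [row[x:x + w] for row in image[y:y + h]]
    # A real system would hand `crop` to a decoder such as libdmtx.
    return f"marker@{x},{y}" if crop else None

def detect_and_decode(image):
    """Run the full pipeline: detect candidate regions, then decode each."""
    results = []
    for box in detect_regions(image):
        payload = decode_region(image, box)
        if payload is not None:
            results.append((box, payload))
    return results

# Example on a dummy 100x100 "image" (list of rows of zeros).
image = [[0] * 100 for _ in range(100)]
print(detect_and_decode(image))
```

The design point the paper makes is the split itself: detection dominates the runtime, so only that stage is replaced by a fast learned model while decoding stays conventional.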

  • 2.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Improving Point Cloud Accuracy Obtained from a Moving Platform for Consistent Pile Attack Pose Estimation (2014). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 75, no 1, p. 101-128. Article in journal (Refereed)
    Abstract [en]

    We present a perception system for enabling automated loading with waist-articulated wheel loaders. To enable autonomous loading of piled materials, using either above-ground wheel loaders or underground load-haul-dump vehicles, 3D data of the pile shape is needed. However, using common 3D scanners, the scan data is distorted while the wheel loader is moving towards the pile. Existing methods that make use of 3D scan data (for autonomous loading as well as tasks such as mapping, localisation, and object detection) typically assume that each 3D scan is accurate. For autonomous robots moving over rough terrain, it is often the case that the vehicle moves a substantial amount during the acquisition of one 3D scan, in which case the scan data will be distorted. We present a study of auto-loading methods, and how to locate piles in real-world scenarios with nontrivial ground geometry. We have compared how consistently each method performs for live scans acquired in motion, and also how the methods perform with different viewpoints and scan configurations. The system described in this paper uses a novel method for improving the quality of distorted 3D scans made from a vehicle moving over uneven terrain. The proposed method for improving scan quality is capable of increasing the accuracy of point clouds without assuming any specific features of the environment (such as planar walls), without resorting to a “stop-scan-go” approach, and without relying on specialised and expensive hardware. Each new 3D scan is registered to the preceding one using the normal-distributions transform (NDT). After each registration, a mini-loop closure is performed with a local, per-scan, graph-based SLAM method. To verify the impact of the quality improvement, we present data that shows how auto-loading methods benefit from the corrected scans. The presented methods are validated on data from an autonomous wheel loader, as well as with simulated data. The proposed scan-correction method increases the accuracy of both the vehicle trajectory and the point cloud. We also show that it increases the reliability of pile-shape measures used to plan an efficient attack pose when performing autonomous loading.
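The normal-distributions transform mentioned in the abstract represents a scan as a grid of Gaussians. A minimal sketch of building such a representation from 2D points follows; the cell size, the sample data, and the use of a population covariance are illustrative simplifications, not the paper's formulation.

```python
import math
from collections import defaultdict

def build_ndt_cells(points, cell_size=1.0):
    """Group 2D points into grid cells and compute each cell's mean and
    covariance -- the per-cell Gaussians that NDT registration scores against."""
    cells = defaultdict(list)
    for x, y in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        cells[key].append((x, y))
    ndt = {}
    for key, pts in cells.items():
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        # 2x2 covariance (population form for simplicity; a near-singular
        # covariance from sparse cells would be regularized in practice).
        cxx = sum((p[0] - mx) ** 2 for p in pts) / n
        cyy = sum((p[1] - my) ** 2 for p in pts) / n
        cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        ndt[key] = ((mx, my), ((cxx, cxy), (cxy, cyy)))
    return ndt

points = [(0.1, 0.2), (0.3, 0.4), (1.5, 1.5)]
cells = build_ndt_cells(points, cell_size=1.0)
print(sorted(cells))   # grid cells occupied by the scan
```

Registration then optimizes the pose of a new scan so its points fall in high-likelihood regions of these Gaussians; the compact cell representation is what makes per-scan registration fast enough to run while the vehicle moves.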

  • 3.
    Amato, G.
    et al.
    Consiglio Nazionale delle Ricerche-Istituto di Scienza e Tecnologie dell'Informazione (CNR-ISTI), Pisa, Italy.
    Bacciu, D.
    Università di Pisa, Pisa, Italy.
    Broxvall, Mathias
    Örebro Universitet, Örebro, Sweden.
    Chessa, S.
    Università di Pisa, Pisa, Italy.
    Coleman, S.
    University of Ulster, Ulster, UK.
    Di Rocco, Maurizio
    Örebro Universitet, Örebro, Sweden.
    Dragone, M.
    Trinity College Dublin, Dublin, Ireland.
    Gallicchio, C.
    Università di Pisa, Pisa, Italy.
    Gennaro, C.
    Consiglio Nazionale delle Ricerche-Istituto di Scienza e Tecnologie dell'Informazione (CNR-ISTI), Pisa, Italy.
    Lozano, H.
    Tecnalia, Madrid, Spain.
    McGinnity, T. M.
    University of Ulster, Ulster, UK.
    Micheli, A.
    Università di Pisa, Pisa, Italy.
    Ray, A. K.
    University of Ulster, Ulster, UK.
    Renteria, A.
    Tecnalia, Madrid, Spain.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Swords, D.
    University College Dublin, Dublin, Ireland.
    Vairo, C.
    Consiglio Nazionale delle Ricerche (CNR)-Istituto di Scienza e Tecnologie dell'Informazione (ISTI), Pisa, Italy.
    Vance, P.
    University of Ulster, Ulster, UK.
    Robotic Ubiquitous Cognitive Ecology for Smart Homes (2015). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 80, p. S57-S81. Article in journal (Refereed)
    Abstract [en]

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON in each of these fronts before describing how the resulting techniques have been integrated and applied to a proof of concept smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.

  • 4.
    Crespo, Jonathan
    et al.
    Universidad Carlos III de Madrid, Madrid, Spain.
    Barber, Ramón
    Universidad Carlos III de Madrid, Madrid, Spain.
    Martinez Mozos, Oscar
    Universidad Politécnica de Cartagena, Cartagena, Spain.
    Relational Model for Robotic Semantic Navigation in Indoor Environments (2017). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 86, no 3-4, p. 617-639. Article in journal (Refereed)
    Abstract [en]

    The emergence of service robots in our environment raises the need for systems that help robots manage information from human environments. A semantic model of the environment provides the robot with a representation closer to human perception, and it improves its human-robot communication system. In addition, a semantic model improves the robot's capabilities to carry out high-level navigation tasks. This paper presents a semantic relational model that includes conceptual and physical representations of objects and places, utilities of the objects, and semantic relations among objects and places. This model allows the robot to manage the environment and to make queries about it in order to plan navigation tasks. In addition, this model has several advantages, such as conceptual simplicity and flexibility of adaptation to different environments. To test the performance of the proposed semantic model, the output of the semantic inference system is associated with the geometric and topological information of objects and places in order to perform the navigation tasks.
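The kind of relational model the abstract describes, linking objects, their utilities, and the places where they occur, can be illustrated with a toy query. All table entries and names below are invented examples, not the authors' schema.

```python
# Toy semantic relational model: objects, their utilities, and the places
# where they are typically found. All entries are illustrative.
OBJECT_UTILITY = {
    "mug": "drinking",
    "kettle": "boiling water",
    "pillow": "sleeping",
}
OBJECT_PLACE = {
    "mug": "kitchen",
    "kettle": "kitchen",
    "pillow": "bedroom",
}

def places_for_utility(utility):
    """Infer where to navigate to satisfy a utility (e.g. 'drinking'):
    find objects providing it, then look up where those objects are."""
    objs = [o for o, u in OBJECT_UTILITY.items() if u == utility]
    return sorted({OBJECT_PLACE[o] for o in objs if o in OBJECT_PLACE})

print(places_for_utility("drinking"))  # candidate navigation goals
```

The point of such a model is exactly this chain of inference: a high-level request ("I want to drink") is resolved through object and place relations into a concrete navigation goal.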

  • 5.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar
    INRIA St Ismier, Rhône-Alpes, France.
    Model predictive motion control based on generalized dynamical movement primitives (2014). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 77, no 1, p. 17-35. Article in journal (Refereed)
    Abstract [en]

    In this work, experimental data is used to estimate the free parameters of dynamical systems intended to model motion profiles for a robotic system. The corresponding regression problem is formed as a constrained non-linear least squares problem. In our method, motions are generated via embedded optimization by combining dynamical movement primitives in a locally optimal way at each time step. Based on this concept, we introduce a model predictive control scheme which allows generalization over multiple encoded behaviors depending on the current position in the state space, while leveraging the ability to explicitly account for state constraints to the fulfillment of additional tasks such as obstacle avoidance. We present a numerical evaluation of our approach and a preliminary verification by generating grasping motions for the anthropomorphic Shadow Robot hand/arm platform.
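The core idea of blending movement primitives at each time step, subject to state constraints, can be shown with a toy one-dimensional example. The gains, weights, and box constraint below are made-up values, and simple attractor dynamics with clipping stand in for the paper's embedded optimization.

```python
def primitive(goal, k=4.0, d=4.0):
    """A damped point attractor toward `goal` -- a stand-in for a
    dynamical movement primitive without its learned forcing term."""
    def accel(x, v):
        return k * (goal - x) - d * v
    return accel

def step_combined(x, v, primitives, weights, dt=0.01, x_max=1.5):
    """One time step: blend primitive accelerations with local weights,
    then enforce a simple box constraint on the state."""
    a = sum(w * p(x, v) for p, w in zip(primitives, weights))
    v = v + a * dt
    x = min(x + v * dt, x_max)   # stand-in for an explicit state constraint
    return x, v

prims = [primitive(1.0), primitive(2.0)]
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = step_combined(x, v, prims, weights=(0.5, 0.5))
print(round(x, 2))   # settles near the blended goal, capped by the constraint
```

With equal weights the system converges toward the average of the two encoded goals; in the paper the weights are instead chosen by an optimizer at each step, which is what lets the controller generalize across behaviors while respecting constraints.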

  • 6.
    Ringdahl, Ola
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Evaluation of approach strategies for harvesting robots: Case study of sweet pepper harvesting (2019). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 95, no 1, p. 149-164. Article in journal (Refereed)
    Abstract [en]

    Robotic harvesters that use visual servoing must choose the best direction from which to approach the fruit, to minimize occlusion and avoid obstacles that might interfere with detection along the approach. This work proposes different approach strategies, compares them in terms of cycle times, and presents a failure analysis methodology for the different approach strategies. The different approach strategies are: in-field assessment by human observers, evaluation based on an overview image using advanced algorithms or remote human observers, or attempting multiple approach directions until the fruit is successfully reached. In the latter approach, each attempt costs time, which is a major bottleneck in bringing harvesting robots to market. Alternatively, a single approach strategy that only attempts one direction can be applied if the best approach direction is known a priori. The different approach strategies were evaluated in a case study of sweet pepper harvesting in laboratory and greenhouse conditions. The first experiment, conducted in a commercial greenhouse, revealed that the fruit approach cycle time increased 8% and 116% for reachable and unreachable fruits respectively when the multiple approach strategy was applied, compared to the single approach strategy. The second experiment measured human observers’ ability to provide insights into approach directions based on overview images taken in both greenhouse and laboratory conditions. Results revealed that human observers are accurate in detecting unapproachable directions, while they tend to miss approachable directions. By detecting fruits that are unreachable (via automatic algorithms or human operators), harvesting cycle times can be significantly shortened, leading to improved commercial feasibility of harvesting robots.
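The trade-off between a single informed approach and blindly trying multiple directions can be framed as a simple expected-cycle-time model. The numbers below are illustrative assumptions, not the paper's measurements.

```python
def expected_multiple_attempt_time(t_attempt, p_success_per_dir, max_dirs):
    """Expected time when directions are tried in turn until one succeeds,
    assuming an independent success probability per direction."""
    expected = 0.0
    p_fail_so_far = 1.0
    for k in range(1, max_dirs + 1):
        p_here = p_fail_so_far * p_success_per_dir   # succeeds on attempt k
        expected += p_here * (k * t_attempt)
        p_fail_so_far *= (1.0 - p_success_per_dir)
    # If every direction fails, the robot still paid for all attempts.
    expected += p_fail_so_far * (max_dirs * t_attempt)
    return expected

t_single = 6.0   # one informed approach (illustrative duration in seconds)
t_multi = expected_multiple_attempt_time(6.0, 0.5, 4)
print(round(t_multi, 2), "vs", t_single)
```

Under these toy numbers the multiple-attempt strategy costs nearly twice the single informed approach, which mirrors the qualitative finding above: knowing (or predicting) the approachable direction up front shortens the cycle.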

  • 7.
    Zoltan-Csaba, Marton
    et al.
    Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Oberpfaffenhofen, Germany.
    Balint-Benczedi, Ferenc
    Institute of Artificial Intelligence, Universität Bremen, Center for Computing Technologies (TZI), Bremen, Germany.
    Martinez Mozos, Oscar
    School of Computer Science, University of Lincoln, Lincoln, England.
    Blodow, Nico
    Intelligent Autonomous Systems, Technische Universität München, München, Germany.
    Kanezaki, Asako
    Graduate School of Information Science & Technology, The University of Tokyo, Tokyo, Japan.
    Goron, Lucian C.
    Intelligent Autonomous Systems, Technische Universität München, München, Germany.
    Pangercic, Dejan
    Autonomous Technologies Group Robert Bosch LLC, Palo Alto, USA.
    Beetz, Michael
    Institute of Artificial Intelligence, Universität Bremen, Center for Computing Technologies (TZI), Bremen, Germany.
    Part-Based Geometric Categorization and Object Reconstruction in Cluttered Table-Top Scenes (2014). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 76, no 1, p. 35-56. Article in journal (Refereed)
    Abstract [en]

    This paper presents an approach for 3D geometry-based object categorization in cluttered table-top scenes. In our method, objects are decomposed into different geometric parts whose spatial arrangement is represented by a graph. The matching and searching of graphs representing the objects is sped up by using a hash table which contains possible spatial configurations of the different parts that constitute the objects. Additive feature descriptors are used to label partially or completely visible object parts. In this work we categorize objects into five geometric shapes: sphere, box, flat, cylindrical, and disk/plate, as these shapes represent the majority of objects found on tables in typical households. Moreover, we reconstruct complete 3D models that include the invisible back-sides of objects as well, in order to facilitate manipulation by domestic service robots. Finally, we present an extensive set of experiments on point clouds of objects using an RGBD camera, and our results highlight the improvements over previous methods.
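The hash-table lookup of part arrangements described above can be sketched as a dictionary keyed by a canonical part-type configuration. The part labels follow the five shape categories named in the abstract, but the configurations and the candidate categories they map to are invented examples.

```python
from collections import defaultdict

# Map canonical part-type configurations to candidate object categories.
# A configuration is the sorted tuple of geometric part labels observed in
# a cluster; the entries below are illustrative.
CONFIG_TABLE = defaultdict(list)
CONFIG_TABLE[("cylindrical",)] += ["cylindrical"]
CONFIG_TABLE[("cylindrical", "disk")] += ["cylindrical", "disk/plate"]
CONFIG_TABLE[("box", "flat")] += ["box"]

def categorize(part_labels):
    """Look up candidate categories for the observed (possibly partial)
    set of geometric parts; sorting makes the lookup order-insensitive."""
    key = tuple(sorted(part_labels))
    return CONFIG_TABLE.get(key, ["unknown"])

print(categorize(["disk", "cylindrical"]))
```

Hashing the configuration turns graph matching into a constant-time table probe, which is the speed-up the paper attributes to its hash table of part arrangements.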
