oru.se Publications
1 - 42 of 42
  • 1.
    Agrawal, Vikas
    et al.
    IBM Research, India.
    Archibald, Christopher
    Mississippi State University, Starkville, United States.
    Bhatt, Mehul
    University of Bremen, Bremen, Germany.
    Bui, Hung Hai
    Laboratory for Natural Language Understanding, Sunnyvale CA, United States.
    Cook, Diane J.
    Washington State University, Pullman WA, United States.
    Cortés, Juan
    University of Toulouse, Toulouse, France.
    Geib, Christopher W.
    Drexel University, Philadelphia PA, United States.
    Gogate, Vibhav
    Department of Computer Science, University of Texas, Dallas, United States.
    Guesgen, Hans W.
    Massey University, Palmerston North, New Zealand.
    Jannach, Dietmar
    Technical university Dortmund, Dortmund, Germany.
    Johanson, Michael
    University of Alberta, Edmonton, Canada.
    Kersting, Kristian
    Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS), Sankt Augustin, Germany; The University of Bonn, Bonn, Germany.
    Konidaris, George
    Massachusetts Institute of Technology (MIT), Cambridge MA, United States.
    Kotthoff, Lars
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Michalowski, Martin
    Adventium Labs, Minneapolis MN, United States.
    Natarajan, Sriraam
    Indiana University, Bloomington IN, United States.
    O’Sullivan, Barry
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Pickett, Marc
    Naval Research Laboratory, Washington DC, United States.
    Podobnik, Vedran
    Telecommunication Department of the Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
    Poole, David
    Department of Computer Science, University of British Columbia, Vancouver, Canada.
    Shastri, Lokendra
    Infosys, India.
    Shehu, Amarda
    George Mason University, Washington, United States.
    Sukthankar, Gita
    University of Central Florida, Orlando FL, United States.
    The AAAI-13 Conference Workshops (2013). In: The AI Magazine, ISSN 0738-4602, Vol. 34, no 4, p. 108-115. Article in journal (Refereed)
    Abstract [en]

    The AAAI-13 Workshop Program, a part of the 27th AAAI Conference on Artificial Intelligence, was held Sunday and Monday, July 14-15, 2013, at the Hyatt Regency Bellevue Hotel in Bellevue, Washington, USA. The program included 12 workshops covering a wide range of topics in artificial intelligence, including Activity Context-Aware System Architectures (WS-13-05); Artificial Intelligence and Robotics Methods in Computational Biology (WS-13-06); Combining Constraint Solving with Mining and Learning (WS-13-07); Computer Poker and Imperfect Information (WS-13-08); Expanding the Boundaries of Health Informatics Using Artificial Intelligence (WS-13-09); Intelligent Robotic Systems (WS-13-10); Intelligent Techniques for Web Personalization and Recommendation (WS-13-11); Learning Rich Representations from Low-Level Sensors (WS-13-12); Plan, Activity, and Intent Recognition (WS-13-13); Space, Time, and Ambient Intelligence (WS-13-14); Trading Agent Design and Analysis (WS-13-15); and Statistical Relational Artificial Intelligence (WS-13-16).

  • 2.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
    Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments (2017). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no 3, p. 600-621. Article in journal (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.

  • 3.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Learning to detect misaligned point clouds (2018). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed)
    Abstract [en]

    Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible, because no matching algorithm available today provides a reliable means of detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.

  • 4.
    Amigoni, Francesco
    et al.
    Politecnico di Milano, Milan, Italy.
    Yu, Wonpil
    Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea.
    Andre, Torsten
    University of Klagenfurt, Klagenfurt, Austria.
    Holz, Dirk
    University of Bonn, Bonn, Germany.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Matteucci, Matteo
    Politecnico di Milano, Milan, Italy.
    Moon, Hyungpil
    Sungkyunkwan University, Suwon, South Korea.
    Yokozuka, Masashi
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Biggs, Geoffrey
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Madhavan, Raj
    Amrita University, Clarksburg MD, United States of America.
    A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots (2018). In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 25, no 1, p. 65-76. Article in journal (Refereed)
    Abstract [en]

    The availability of environment maps for autonomous robots enables them to complete several tasks. A new IEEE standard, IEEE 1873-2015, Robot Map Data Representation for Navigation (MDR) [15], sponsored by the IEEE Robotics and Automation Society (RAS) and approved by the IEEE Standards Association Standards Board in September 2015, defines a common representation for two-dimensional (2-D) robot maps and is intended to facilitate interoperability among navigating robots. The standard defines an extensible markup language (XML) data format for exchanging maps between different systems. This article illustrates how metric maps, topological maps, and their combinations can be represented according to the standard.

  • 5.
    Asadi, Sahar
    et al.
    Örebro University, School of Science and Technology.
    Fan, Han
    Örebro University, School of Science and Technology.
    Hernandez Bennetts, Victor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Time-dependent gas distribution modelling (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 96, p. 157-170. Article in journal (Refereed)
    Abstract [en]

    Artificial olfaction can help to address pressing environmental problems due to unwanted gas emissions. Sensor networks and mobile robots equipped with gas sensors can be used for e.g. air pollution monitoring. Key in this context is the ability to derive truthful models of gas distribution from a set of sparse measurements. Most statistical gas distribution modelling methods assume that gas dispersion is a time constant random process. While this assumption approximately holds in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling in a wider range of realistic scenarios. Time-invariant approaches cannot model well evolving gas plumes, for example, or major changes in gas dispersion due to a sudden change of the environmental conditions. This paper presents two approaches to gas distribution modelling, which introduce a time-dependency and a relation to a time-scale in generating the gas distribution model either by sub-sampling or by introducing a recency weight that relates measurement and prediction time. We evaluated these approaches in experiments performed in two real environments as well as on several simulated experiments. As expected, the comparison of different sub-sampling strategies revealed that more recent measurements are more informative to derive an estimate of the current gas distribution as long as a sufficient spatial coverage is given. Next, we compared a time-dependent gas distribution modelling approach (TD Kernel DM+V), which includes a recency weight, to the state-of-the-art gas distribution modelling approach (Kernel DM+V), which does not consider sampling times. The results indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios. Furthermore, this paper discusses the impact of meta-parameters in model selection and compares the performance of time-dependent GDM in different plume conditions. Finally, we investigated how to set the target time for which the model is created. The results indicate that TD Kernel DM+V performs best when the target time is set to the maximum sampling time in the test set.

  • 6.
    Bhatt, Mehul
    et al.
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems (2009). In: International Journal of Robotics and Automation, ISSN 0826-8185, Vol. 24, no 3, p. 235-244. Article in journal (Refereed)
    Abstract [en]

    Ambient intelligence environments necessitate representing and reasoning about dynamic spatial scenes and configurations. The ability to perform predictive and explanatory analyses of spatial scenes is crucial towards serving a useful intelligent function within such environments. We present a formal qualitative model that combines existing qualitative theories about space with a formal logic-based calculus suited to modelling dynamic environments, or reasoning about action and change in general. With this approach, it is possible to represent and reason about arbitrary dynamic spatial environments within a unified framework. We clarify and elaborate on our ideas with examples grounded in a smart environment.

  • 7.
    Bhatt, Mehul
    et al.
    Department of Computer Science, La Trobe University, Australia.
    Loke, Seng
    Department of Computer Science, La Trobe University, Australia.
    Modelling Dynamic Spatial Systems in the Situation Calculus (2008). In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252, Vol. 8, no 1-2, p. 86-130. Article in journal (Refereed)
    Abstract [en]

    We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of "qualitative spatial calculi" that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of the key fundamental epistemological issues concerning the frame and the ramification problems that arise whilst modelling change within such domains. The main advantage of the proposed approach is that based on the structure and semantics of the proposed framework, fundamental reasoning tasks such as projection and explanation directly follow. Within the specialised spatial reasoning domain, these translate to spatial planning/re-configuration, causal explanation and spatial simulation. Our approach is based on the hypothesis that alternate formalisations of existing qualitative spatial calculi using high-level tools such as the situation calculus are essential for their utilisation in diverse application domains such as intelligent systems, cognitive robotics and event-based GIS.

  • 8.
    Bruno, Barbara
    et al.
    University of Genova, Genova, Italy.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kamide, Hiroko
    Nagoya University, Nagoya, Japan.
    Kanoria, Sanjeev
    Advinia Health Care Limited LTD, London, UK.
    Lee, Jaeryoung
    Chubu University, Kasugai, Japan.
    Lim, Yuto
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kumar Pandey, Amit
    SoftBank Robotics.
    Papadopoulos, Chris
    University of Bedfordshire, Luton, UK.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, London, UK.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
    Paving the Way for Culturally Competent Robots: A Position Paper (2017). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A; Suzuki, K; Zollo, L, New York: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 553-560. Conference paper (Refereed)
    Abstract [en]

    Cultural competence is a well-known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state-of-the-art in culturally competent robotics, we identify the key robot capabilities which enable culturally competent behaviours and discuss methodologies for their development and evaluation.

  • 9.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Schaffernicht, Erik
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
    Compressed Voxel-Based Mapping Using Unsupervised Learning (2017). In: Robotics, E-ISSN 2218-6581, Vol. 6, no 3, article id 15. Article in journal (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as to the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets due to the rejection of high-frequency noise content.

  • 10.
    Canelhas, Daniel R.
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no 2, p. 1148-1155. Article in journal (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.

  • 11.
    Daoutis, Marios
    Örebro University, School of Science and Technology.
    Knowledge based perceptual anchoring: grounding percepts to concepts in cognitive robots (2013). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, p. 1-4. Article in journal (Refereed)
    Abstract [en]

    Perceptual anchoring is the process of creating and maintaining a connection between the sensor data corresponding to a physical object and its symbolic description. It is a subset of the symbol grounding problem, introduced by Harnad (Phys. D, Nonlinear Phenom. 42(1–3):335–346, 1990) and investigated over the past years in several disciplines including robotics. This PhD dissertation focuses on a method for grounding sensor data of physical objects to the corresponding semantic descriptions, in the context of cognitive robots where the challenge is to establish the connection between percepts and concepts referring to objects, their relations and properties. We examine how knowledge representation can be used together with an anchoring framework, so as to complement the meaning of percepts while supporting better linguistic interaction with the use of the corresponding concepts. The proposed method addresses the need to represent and process both perceptual and semantic knowledge, often expressed in different abstraction levels, while originating from different modalities. We then focus on the integration of anchoring with a large scale knowledge base system and with perceptual routines. This integration is applied in a number of studies where, in the context of a smart home, several evaluations were carried out, spanning from spatial and commonsense reasoning to linguistic interaction and concept acquisition.

  • 12.
    Daoutis, Marios
    et al.
    Örebro University, School of Science and Technology.
    Coradeschi, Silvia
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Towards concept anchoring for cognitive robots (2012). In: Intelligent Service Robotics, ISSN 1861-2784, Vol. 5, no 4, p. 213-228. Article in journal (Refereed)
    Abstract [en]

    We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.

  • 13.
    Dubba, Krishna Sandeep Reddy
    et al.
    School of Computing, University of Leeds, Leeds, UK.
    Cohn, Anthony G.
    School of Computing, University of Leeds, Leeds, UK.
    Hogg, David C.
    School of Computing, University of Leeds, Leeds, UK.
    Bhatt, Mehul
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    Cognitive Systems, SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Learning Relational Event Models from Video (2015). In: The journal of artificial intelligence research, ISSN 1076-9757, E-ISSN 1943-5037, Vol. 53, p. 41-90. Article in journal (Refereed)
    Abstract [en]

    Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but has not been successfully applied to the very large datasets which result from video data. In this paper, we present a novel framework REMIND (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning from interpretations setting and using a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent overgeneralization. Furthermore, we also present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework by integrating an abduction step that improves the learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable for real-world scenarios.

  • 14.
    Echelmeyer, Wolfgang
    et al.
    University of Reutlingen, Reutlingen, Germany.
    Kirchheim, Alice
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Akbiyik, Hülya
    University of Reutlingen, Reutlingen, Germany.
    Bonini, Marco
    University of Reutlingen, Reutlingen, Germany.
    Performance Indicators for Robotics Systems in Logistics Applications (2011). Conference paper (Refereed)
    Abstract [en]

    The transfer of research results to market-ready products is often a costly and time-consuming process. In order to generate successful products, researchers must cooperate with industrial companies; both the industrial and academic partners need to have a detailed understanding of the requirements of all parties concerned. Academic researchers need to identify the performance indicators for technical systems within a business environment and be able to apply them.

    In service logistics today, nearly all standardized mass goods are unloaded manually, with one reason for this being the undefined position and orientation of the goods in the carrier. A study regarding the qualitative and quantitative properties of goods that are transported in containers shows that there is a huge economic relevance for autonomous systems. In 2008, more than 8.4 billion twenty-foot equivalent units (TEU) were imported and unloaded manually at European ports, corresponding to more than 331,000 billion single goods items.

    Besides the economic relevance, the opinion of market participants is an important factor for the success of new systems on the market. The main outcomes of a study regarding the challenges, opportunities and barriers in robotic logistics allow for the estimation of the economic efficiency of performance indicators, performance flexibility and soft factors. The economic efficiency of the performance parameters is applied to the parcel robot – a cognitive system to unload parcels autonomously from containers. In the following article, the results of the study are presented and the resultant conclusions discussed.

  • 15.
    Efremova, Natalia
    et al.
    Plekhanov Russian University, Moskow, Russia.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Cognitive Architectures for Optimal Remote Image Representation for Driving a Telepresence Robot (2014). Conference paper (Refereed)
  • 16.
    Fan, Hongqi
    et al.
    Örebro University, School of Science and Technology. National Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha, China.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Li, Tiancheng
    School of Sciences, University of Salamanca, Salamanca, Spain.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    A Dual PHD Filter for Effective Occupancy Filtering in a Highly Dynamic Environment (2018). In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 9, p. 2977-2993. Article in journal (Refereed)
    Abstract [en]

    Environment monitoring remains a major challenge for mobile robots, especially in densely cluttered or highly populated dynamic environments, where uncertainties originating from the environment and the sensors significantly challenge the robot's perception. This paper proposes an effective occupancy filtering method called the dual probability hypothesis density (DPHD) filter, which models uncertain phenomena, such as births, deaths, occlusions, false alarms, and missed detections, by using random finite sets. The key insight of our method lies in the connection of the idea of dynamic occupancy with the concepts of the phase space density in gas kinetics and the PHD in multiple target tracking. By modeling the environment as a mixture of static and dynamic parts, the DPHD filter separates the dynamic part from the static one with a unified filtering process, but has a higher computational efficiency than existing Bayesian Occupancy Filters (BOFs). Moreover, an adaptive newborn function and a detection model considering occlusions are proposed to improve the filtering efficiency further. Finally, a hybrid particle implementation of the DPHD filter is proposed, which uses a box particle filter with constant discrete states and an ordinary particle filter with a time-varying number of particles in a continuous state space to process the static part and the dynamic part, respectively. This filter has a linear complexity with respect to the number of grid cells occupied by dynamic obstacles. Real-world experiments on data collected by a lidar at a busy roundabout demonstrate that our approach can handle monitoring of a highly dynamic environment in real time.

  • 17.
    Ferri, Gabriele
    et al.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mondini, Alessio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Manzi, Alessandro
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mazzolai, Barbara
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Laschi, Cecilia
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Mattoli, Virgilio
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Reggente, Matteo
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Lettere, Marco
    Scuola Superiore Sant'Anna, Pisa, Italy.
    Dario, Paolo.
    Scuola Superiore Sant'Anna, Pisa, Italy.
    DustCart, a Mobile Robot for Urban Environments: Experiments of Pollution Monitoring and Mapping during Autonomous Navigation in Urban Scenarios (2010). In: Proceedings of ICRA Workshop on Networked and Mobile Robot Olfaction in Natural, Dynamic Environments, 2010. Conference paper (Refereed)
    Abstract [en]

    In the framework of the DustBot European project, aimed at developing a new multi-robot system for urban hygiene management, we have developed a two-wheeled robot: DustCart. DustCart aims at providing a solution to door-to-door garbage collection: the robot, called by a user, navigates autonomously to his/her house, collects the garbage from the user, and discharges it in a dedicated area. An additional feature of DustCart is the capability to monitor air pollution by means of an on-board Air Monitoring Module (AMM). The AMM integrates sensors to monitor several atmospheric pollutants, such as carbon monoxide (CO), particulate matter (PM10), nitrogen dioxide (NO2), and ozone (O3), plus temperature (T) and relative humidity (rHu). An Ambient Intelligence platform (AmI) manages the robots' operations through a wireless connection. The AmI is able to collect measurements taken by different robots and to process them to create a pollution distribution map. In this paper we describe the DustCart robot system, focusing on the AMM and on the process of creating the pollutant distribution maps. We report results of experiments with one DustCart robot moving in urban scenarios and producing gas distribution maps using the Kernel DM+V algorithm. These experiments can be considered one of the first attempts to use robots as mobile monitoring devices that complement traditional fixed stations.

  • 18.
    Grosinger, Jasmin
    et al.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Making Robots Proactive through Equilibrium Maintenance2016In: 25th International Joint Conference on Artificial Intelligence, 2016Conference paper (Refereed)
  • 19.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems2014In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 214-215Conference paper (Refereed)
    Abstract [en]

    One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.

  • 20.
    Kiselev, Andrey
    et al.
    Örebro University, School of Science and Technology.
    Mosiello, Giovanni
    Örebro University, School of Science and Technology. Roma Tre University, Rome, Italy.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems2014In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), IEEE conference proceedings, 2014, p. 104-104Conference paper (Refereed)
    Abstract [en]

    Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, to allow human-robot cooperation for safe driving. The video also shows an initial implementation of cooperative driving based on extracting a safe drivable area in real time from the image stream received from the robot.

  • 21.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Kragic, Danica
    Centre for Autonomous Systems, Computer Vision and Active Perception Lab, CSC, KTH Stockholm, Stockholm, Sweden.
    Bekiroglu, Yasemin
    School of Mechanical Engineering, University of Birmingham, Birmingham, United Kingdom.
    Analytic Grasp Success Prediction with Tactile Feedback2016In: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, New York, USA: IEEE , 2016, p. 165-171Conference paper (Refereed)
    Abstract [en]

    Predicting grasp success is useful for avoiding failures in many robotic applications. Based on reasoning in wrench space, we address the question of how well analytic grasp success prediction works if tactile feedback is incorporated. Tactile information can alleviate contact placement uncertainties and facilitates contact modeling. We introduce a wrench-based classifier and evaluate it on a large set of real grasps. The key finding of this work is that exploiting tactile information allows wrench-based reasoning to perform on a level with existing methods based on learning or simulation. Different from these methods, the suggested approach has no need for training data, requires little modeling effort and is computationally efficient. Furthermore, our method affords task generalization by considering the capabilities of the grasping device and expected disturbance forces/moments in a physically meaningful way.

  • 22.
    Krug, Robert
    et al.
    Örebro University, School of Science and Technology.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Mosberger, Rafael
    Örebro University, School of Science and Technology.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    The Next Step in Robot Commissioning: Autonomous Picking and Palletizing2016In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 1, no 1, p. 546-553Article in journal (Refereed)
    Abstract [en]

    So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety, and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this letter, we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step toward future commercial-scale in-house logistics automation solutions.

  • 23.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Dantam, Neil T.
    Colorado School of Mines, Golden CO, USA.
    Garrett, Caelan
    Massachusetts Institute of Technology, Cambridge MA, USA.
    Akbari, Aliakbar
    Universidad Politécnica de Catalunya, Barcelona, Spain.
    Srivastava, Siddharth
    Arizona State University, Tempe AZ, USA.
    Kavraki, Lydia E.
    Rice University, Houston TX, USA.
    Platform-Independent Benchmarks for Task and Motion Planning2018In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 3, no 4, p. 3765-3772Article in journal (Refereed)
    Abstract [en]

    We present the first platform-independent evaluation method for task and motion planning (TAMP). Previously, various problems have been used to test individual planners for specific aspects of TAMP; however, no common set of metrics, formats, and problems has been accepted by the community. We propose a set of benchmark problems covering the challenging aspects of TAMP and a planner-independent specification format for these problems. Our objective is to better evaluate and compare TAMP planners, to foster communication and progress within the field, and to lay a foundation for better understanding this class of planning problems.

  • 24.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Constraint propagation on interval bounds for dealing with geometric backtracking2012In: Proceedings of  the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), Institute of Electrical and Electronics Engineers (IEEE), 2012, p. 957-964Conference paper (Refereed)
    Abstract [en]

    The combination of task and motion planning presents us with a new problem that we call geometric backtracking. This problem arises from the fact that a single symbolic state or action can be geometrically instantiated in infinitely many ways. When a symbolic action cannot be geometrically validated, we may need to backtrack in the space of geometric configurations, which greatly increases the complexity of the whole planning process. In this paper, we address this problem using intervals to represent geometric configurations, and constraint propagation techniques to shrink these intervals according to the geometric constraints of the problem. After propagation, either (i) the intervals are shrunk, thus reducing the search space in which geometric backtracking may occur, or (ii) the constraints are inconsistent, indicating the infeasibility of the sequence of actions without further effort. We illustrate our approach on scenarios in which a two-arm robot manipulates a set of objects, and report experiments that show how the search space is reduced.
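    The interval-shrinking idea in the abstract above can be illustrated with a minimal hull-consistency propagator for a single constraint x + y = s over interval bounds. This is a simplified, illustrative sketch; the function name and the restriction to one sum constraint are assumptions for the example, not the paper's implementation:

    ```python
    def propagate_sum(x, y, s):
        """Shrink the interval bounds (lo, hi) of x, y, s under x + y = s.

        Returns the narrowed intervals, or None if they become inconsistent
        (empty), which signals infeasibility without further search.
        """
        lx, ux = x
        ly, uy = y
        ls, us = s
        # Forward propagation: s must lie inside x + y.
        ls, us = max(ls, lx + ly), min(us, ux + uy)
        # Backward propagation: x must lie inside s - y, and y inside s - x.
        lx, ux = max(lx, ls - uy), min(ux, us - ly)
        ly, uy = max(ly, ls - ux), min(uy, us - lx)
        if lx > ux or ly > uy or ls > us:
            return None  # inconsistent: this configuration is infeasible
        return (lx, ux), (ly, uy), (ls, us)
    ```

    For example, with x and y in [0, 10] and s fixed to 12, one propagation pass shrinks both x and y to [2, 10] without any backtracking, while s fixed to 5 with x, y in [0, 1] is detected as infeasible outright.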

  • 25.
    Lagriffoul, Fabien
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Constraints on intervals for reducing the search space of geometric configurations2012In: Combining Task and Motion Planning for Real-World Applications (ICAPS workshop) / [ed] Marcello Cirillo, Brian Gerkey, Federico Pecora, Mike Stilman, 2012, p. 5-12Conference paper (Refereed)
  • 26.
    Lowry, Stephanie
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments2018In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 3, no 2, p. 957-964Article in journal (Refereed)
    Abstract [en]

    This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
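    The pipeline sketched in the abstract (local features aggregated with VLAD, then reduced to a binary signature compared by Hamming distance) can be outlined as follows. This is a generic sketch under assumed array shapes, not the authors' code; in practice the centroids would come from k-means over training descriptors:

    ```python
    import numpy as np

    def vlad(descriptors, centroids):
        """Aggregate local descriptors (n, d) against a codebook (k, d):
        assign each descriptor to its nearest centroid and sum the residuals."""
        k, d = centroids.shape
        dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assignment = dists.argmin(axis=1)
        v = np.zeros((k, d))
        for desc, c in zip(descriptors, assignment):
            v[c] += desc - centroids[c]
        return v.ravel()  # (k * d,) image-level descriptor

    def binary_signature(v):
        """Keep only the sign of each VLAD component: one bit per dimension."""
        return (v > 0).astype(np.uint8)

    def hamming(a, b):
        """Place-matching score: number of differing bits (lower is better)."""
        return int(np.count_nonzero(a != b))
    ```

    With, say, 4 centroids over 64-dimensional features, each image reduces to a 4 × 64 = 256-bit signature (32 bytes), the footprint scale reported in the abstract.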

  • 27.
    Magnusson, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kucner, Tomasz
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Quantitative Evaluation of Coarse-To-Fine Loading Strategies for Material Rehandling2015In: Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), New York: IEEE conference proceedings , 2015, p. 450-455Conference paper (Refereed)
    Abstract [en]

    Autonomous handling of piled materials is an emerging topic in automation science and engineering. A central question for material rehandling tasks (transporting materials that have been assembled in piles) is: where to dig in order to optimise performance? In particular, we are interested in the application of autonomous wheel loaders to handle piles of gravel, although the methodology proposed in this paper also relates to granular materials in other applications. Although initial work on suggesting strategies for where to dig has been done by a few other groups, there has been a lack of structured evaluation of the usefulness of the proposed strategies. In an attempt to further the field, we present a quantitative evaluation of loading strategies: both coarse ones, aiming to maintain a good pile shape over long-term operation, and refined ones, aiming to detect the locally best attack pose for acquiring a good fill grade in the bucket. Using real-world data from a semi-automated test platform, we present an assessment of how previously proposed pile shape measures can be mapped to the amount of material in the bucket after loading. We also present experimental data for long-term strategies, using simulations based on real-world 3D scan data from a production site.

  • 28.
    Mielle, Malcolm
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    SLAM auto-complete: completing a robot map using an emergency map2017In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, p. 35-40, article id 8088137Conference paper (Refereed)
    Abstract [en]

    In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness might be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping has mainly focused on accurate maps or aerial images, such data are not always available, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to obtain and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

    We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The graph-SLAM is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

    We typically have more than 50% wrong correspondences in the settings studied in this paper, and the method we propose handles them correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still obtain the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information that was not represented, such as closed doors or new walls.

  • 29.
    Mosberger, Rafael
    et al.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery2014In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 14, no 10, p. 17952-17980Article in journal (Refereed)
    Abstract [en]

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.

  • 30.
    Mosiello, Giovanni
    et al.
    Örebro University, School of Science and Technology. Universitá degli Studi Roma Tre, Rome, Italy.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Using augmented reality to improve usability of the user interface for driving a telepresence robot2013In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 4, no 3, p. 174-181Article in journal (Refereed)
    Abstract [en]

    Mobile Robotic Telepresence (MRP) helps people to communicate in natural ways despite being physically located in different parts of the world. User interfaces of such systems are as critical as the design and functionality of the robot itself for creating conditions for natural interaction. This article presents an exploratory study analysing different robot teleoperation interfaces. The goals of this paper are to investigate the possible effect of using augmented reality as the means to drive a robot, to identify key factors of the user interface in order to improve the user experience through a driving interface, and to minimize interface familiarization time for non-experienced users. The study involved 23 participants whose robot driving attempts via different user interfaces were analysed. The results show that a user interface with an augmented reality interface resulted in better driving experience.

  • 31.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Learning Actions to Improve the Perceptual Anchoring of Objects2017In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no 76Article in journal (Refereed)
    Abstract [en]

    In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time; and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and movement patterns of tracked object entities. For this purpose, we propose a framework which relies on the notation presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used to train different sequential learning algorithms to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process itself.

  • 32.
    Simoens, Pieter
    et al.
    IDLab, Ghent University – imec, Ghent, Belgium.
    Dragone, Mauro
    Research Institute of Signals, Sensors and Systems (ISSS), Heriot-Watt University, Edinburgh, United Kingdom.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    The Internet of Robotic Things: A review of concept, added value and applications2018In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 15, no 1, article id 1729881418759424Article, review/survey (Refereed)
    Abstract [en]

    The Internet of Robotic Things is an emerging vision that brings together pervasive sensors and objects with robotic and autonomous systems. This survey examines how the merger of robotic and Internet of Things technologies will advance the abilities of both the current Internet of Things and the current robotic systems, thus enabling the creation of new, potentially disruptive services. We discuss some of the new technological challenges created by this merger and conclude that a truly holistic view is needed but currently lacking.

  • 33.
    Stenbäcker, Anna-Karin
    et al.
    Örebro University, School of Humanities, Education and Social Sciences.
    Wester, Maria
    Örebro University, School of Humanities, Education and Social Sciences.
    "En bild är mer än tusen ord": En undersökning av barns möten med bilder i sin skolmiljö2008Independent thesis Basic level (professional degree), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    Summary

    The aim of our thesis is to examine children's encounters with picture and character symbols in their school environment. The thesis is based on three research questions:

    Which picture and character symbols do children encounter in their school environment?

    What understanding do children have of the picture and character symbols they encounter at school?

    How does the school work to further develop children's written-language awareness and reading and writing development through work with picture and character symbols?

    In our work we have used a qualitative method with an ethnographic approach, which means that we have observed a school environment and interviewed twelve children and one teacher about pictures in their school environment. From this empirical material we have derived six themes, which we analyse on the basis of our theoretical research background and our aim. The themes are: Democracy through symbols, Environment through symbols, Symbols for gaining an overview, Word images, The picture and character symbols of the alphabet, and The computer as a source of images.

    In our theoretical research background we start from a holistic-analytical perspective, meaning that pictures symbolising the surrounding world are overall impressions that can subsequently be analysed part by part. In the results we discuss the pictures gathered during the observation and conclude that there were plenty of pictures in the school environment, but that these pictures are not used to any great extent for the further development of children's written-language awareness and reading and writing development. From the interviews with the children, we interpret that many of them have a good understanding of the picture and character symbols they encounter.

  • 34.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Muthusamy, Rajkumar
    Aalto University, Esbo, Finland.
    Kyrki, Ville
    Aalto University, Esbo, Finland.
    Grasp Envelopes: Extracting Constraints on Gripper Postures from Online Reconstructed 3D Models2016In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), New York: Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 885-892Conference paper (Refereed)
    Abstract [en]

    Grasping systems that build upon meticulously planned hand postures rely on precise knowledge of object geometry, mass and frictional properties - assumptions which are often violated in practice. In this work, we propose an alternative solution to the problem of grasp acquisition in simple autonomous pick and place scenarios, by utilizing the concept of grasp envelopes: sets of constraints on gripper postures. We propose a fast method for extracting grasp envelopes for objects that fit within a known shape category, placed in an unknown environment. Our approach is based on grasp envelope primitives, which encode knowledge of human grasping strategies. We use environment models, reconstructed from noisy sensor observations, to refine the grasp envelope primitives and extract bounded envelopes of collision-free gripper postures. Finally, we evaluate the envelope extraction procedure both in a stand-alone fashion and as an integrated component of an autonomous picking system.

  • 35.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Maximum Likelihood Point Cloud Acquisition from a Rotating Laser Scanner on a Moving Platform2009In: Proceedings of the IEEE International Conference on Advanced Robotics (ICAR), IEEE conference proceedings, 2009Conference paper (Refereed)
    Abstract [en]

    This paper describes an approach to acquire locally consistent range data scans from a moving sensor platform. Data from a vertically mounted rotating laser scanner and odometry position estimates are fused and used to estimate maximum likelihood point clouds. An estimation algorithm is applied to reduce the accumulated error after a full rotation of the range finder. A configuration consisting of a SICK laser scanner mounted on a rotational actuator is described and used to evaluate the proposed approach. The data sets analyzed suggest a significant improvement in point cloud consistency, even over a short travel distance.

  • 36.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Louloudi, Athanasia
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Comparative evaluation of range sensor accuracy in indoor environments2011In: Proceedings of the 5th European Conference on Mobile Robots, ECMR 2011 / [ed] Achim J. Lilienthal, Tom Duckett, 2011, p. 19-24Conference paper (Refereed)
    Abstract [en]

    3D range sensing is one of the important topics in robotics, as it is often a component in vital autonomous subsystems like collision avoidance, mapping and semantic perception. The development of affordable, high frame rate and precise 3D range sensors is thus of considerable interest. Recent advances in sensing technology have produced several novel sensors that attempt to meet these requirements. This work is concerned with the development of a holistic method for accuracy evaluation of the measurements produced by such devices. A method for comparison of range sensor output to a set of reference distance measurements is proposed. The approach is then used to compare the behavior of three integrated range sensing devices, to that of a standard actuated laser range sensor. Test cases in an uncontrolled indoor environment are performed in order to evaluate the sensors’ performance in a challenging, realistic application scenario.

  • 37.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Almqvist, Håkan
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    On the Accuracy of the 3D Normal Distributions Transform as a Tool for Spatial Representation2011In: 2011 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2011Conference paper (Refereed)
    Abstract [en]

    The Three-Dimensional Normal Distributions Transform (3D-NDT) is a spatial modeling technique with applications in point set registration, scan similarity comparison, change detection and path planning. This work concentrates on evaluating three common variations of the 3D-NDT in terms of accuracy of representing sampled semi-structured environments. In a novel approach to spatial representation quality measurement, the 3D geometrical modeling task is formulated as a classification problem and its accuracy is evaluated with standard machine learning performance metrics. In this manner the accuracy of the 3D-NDT variations is shown to be comparable to, and in some cases better than, that of the standard occupancy grid mapping model.
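    For context, the core of a 3D-NDT model is a per-voxel Gaussian fitted to the points that fall in that voxel. A minimal sketch follows; the cell size and minimum-points threshold are arbitrary example choices, not values from the paper:

    ```python
    import numpy as np
    from collections import defaultdict

    def ndt_cells(points, cell_size=1.0, min_points=5):
        """Build a 3D-NDT model: one (mean, covariance) Gaussian per voxel.

        points: (n, 3) array of range measurements.
        Returns {voxel_index: (mean, covariance)} for sufficiently populated cells.
        """
        buckets = defaultdict(list)
        for p in points:
            buckets[tuple(np.floor(p / cell_size).astype(int))].append(p)
        cells = {}
        for idx, pts in buckets.items():
            pts = np.asarray(pts)
            if len(pts) >= min_points:  # too few points give a degenerate covariance
                cells[idx] = (pts.mean(axis=0), np.cov(pts.T))
        return cells
    ```

    Evaluating such a model as a classifier, in the spirit of the abstract, amounts to thresholding the per-cell Gaussian density at query points and scoring the resulting occupied/free predictions against reference measurements.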

  • 38.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Point Set Registration through Minimization of the L-2 Distance between 3D-NDT Models2012In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2012, p. 5196-5201Conference paper (Refereed)
    Abstract [en]

    Point set registration, the task of finding the best-fitting alignment between two sets of point samples, is an important problem in mobile robotics. This article proposes a novel registration algorithm based on the distance between Three-Dimensional Normal Distributions Transforms. 3D-NDT models, a sub-class of Gaussian Mixture Models with uniformly weighted, largely disjoint components, can be quickly computed from range point data. The proposed algorithm constructs 3D-NDT representations of the input point sets and then formulates an objective function based on the L2 distance between the considered models. Analytic first- and second-order derivatives of the objective function are computed and used in a standard Newton method optimization scheme to obtain the best-fitting transformation. The proposed algorithm is evaluated and shown to be more accurate and faster compared to a state-of-the-art implementation of the Iterative Closest Point and 3D-NDT Point-to-Distribution algorithms.
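    The closed-form L2 objective between Gaussian mixtures rests on the identity that the integral of a product of two Gaussians is itself a Gaussian evaluated at the difference of the means. A sketch of the distance computation (illustrative only, with uniform component weights as in 3D-NDT models; the optimization over rigid transforms described in the abstract is omitted):

    ```python
    import numpy as np

    def gauss_inner(m1, S1, m2, S2):
        # Identity: ∫ N(x; m1, S1) N(x; m2, S2) dx = N(m1 - m2; 0, S1 + S2)
        S = S1 + S2
        diff = np.asarray(m1) - np.asarray(m2)
        d = diff.size
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(S))
        return norm * np.exp(-0.5 * diff @ np.linalg.solve(S, diff))

    def gmm_l2_sq(means_a, covs_a, means_b, covs_b):
        """Squared L2 distance between two uniformly weighted Gaussian mixtures:
        ||p - q||^2 = <p, p> - 2 <p, q> + <q, q>."""
        def inner(ma, ca, mb, cb):
            w = 1.0 / (len(ma) * len(mb))
            return w * sum(gauss_inner(m1, S1, m2, S2)
                           for m1, S1 in zip(ma, ca) for m2, S2 in zip(mb, cb))
        return (inner(means_a, covs_a, means_a, covs_a)
                - 2.0 * inner(means_a, covs_a, means_b, covs_b)
                + inner(means_b, covs_b, means_b, covs_b))
    ```

    The distance is zero for identical mixtures and grows as components move apart, which is what makes it usable as a registration objective with analytic derivatives.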

  • 39.
    Stoyanov, Todor
    et al.
    Örebro University, School of Science and Technology.
    Vaskevicius, Narunas
    Jacobs University Bremen, Bremen, Germany.
    Mueller, Christian Atanas
    Jacobs University Bremen, Bremen, Germany.
    Fromm, Tobias
    Jacobs University Bremen, Bremen, Germany.
    Krug, Robert
    Örebro University, School of Science and Technology.
    Tincani, Vinicio
    University of Pisa, Pisa, Italy.
    Mojtahedzadeh, Rasoul
    Örebro University, School of Science and Technology.
    Kunaschk, Stefan
    Bremer Institut für Produktion und Logistik (BIBA), Bremen, Germany.
    Ernits, R. Mortensen
    Bremer Institut für Produktion und Logistik (BIBA), Bremen, Germany.
    Canelhas, Daniel R.
    Örebro University, School of Science and Technology.
    Bonilla, Manuell
    University of Pisa, Pisa, Italy.
    Schwertfeger, Soeren
    ShanghaiTech University, Shanghai, China.
    Bonini, Marco
    Reutlingen University, Reutlingen, Germany.
    Halfar, Harry
    Reutlingen University, Reutlingen, Germany.
    Pathak, Kaustubh
    Jacobs University Bremen, Bremen, Germany.
    Rohde, Moritz
    Bremer Institut für Produktion und Logistik (BIBA), Bremen, Germany.
    Fantoni, Gualtiero
    University of Pisa, Pisa, Italy.
    Bicchi, Antonio
    Università di Pisa & Istituto Italiano di Tecnologia, Genova, Italy.
    Birk, Andreas
    Jacobs University, Bremen, Germany.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Echelmeyer, Wolfgang
    Reutlingen University, Reutlingen, Germany.
    No More Heavy Lifting: Robotic Solutions to the Container-Unloading Problem (2016). In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 23, no 4, p. 94-106. Article in journal (Refereed)
  • 40.
    Suchan, Jakob
    et al.
    Spatial Reasoning, EASE CRC: Everyday Activity Science and Engineering, University of Bremen, Bremen, Germany.
    Bhatt, Mehul
    Örebro University, School of Science and Technology. Spatial Reasoning, EASE CRC: Everyday Activity Science and Engineering, University of Bremen, Bremen, Germany.
    Commonsense Scene Semantics for Cognitive Robotics: Towards Grounding Embodied Visuo-Locomotive Interactions (2017). In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 742-750. Conference paper (Refereed)
    Abstract [en]

    We present a commonsense, qualitative model for the semantic grounding of embodied visuo-spatial and locomotive interactions. The key contribution is an integrative methodology combining low-level visual processing with high-level, human-centred representations of space and motion rooted in artificial intelligence. We demonstrate practical applicability with examples involving object interactions and indoor movement.

  • 41.
    Suchan, Jakob
    et al.
    Cognitive Systems, University of Bremen, Bremen, Germany.
    Bhatt, Mehul
    Cognitive Systems, University of Bremen, Bremen, Germany.
    Santos, Paulo E.
    Centro Universitario da FEI, Sâo Paulo, Brazil.
    Perceptual Narratives of Space and Motion for Semantic Interpretation of Visual Data (2014). In: Computer Vision - ECCV 2014 Workshops: Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part II / [ed] Lourdes Agapito, Michael M. Bronstein, Carsten Rother, Springer, 2014, Vol. 8926, p. 339-354. Conference paper (Refereed)
    Abstract [en]

    We propose a commonsense theory of space and motion for the high-level semantic interpretation of dynamic scenes. The theory provides primitives for commonsense representation and reasoning with qualitative spatial relations, depth profiles, and spatio-temporal change; these may be combined with probabilistic methods for modelling and hypothesising event and object relations. The proposed framework has been implemented as a general activity abstraction and reasoning engine, which we demonstrate by generating declaratively grounded visuo-spatial narratives of perceptual input from vision and depth sensors for a benchmark scenario. Our long-term goal is to provide general tools (integrating different aspects of space, action, and change) necessary for tasks such as real-time human activity interpretation and dynamic sensor control within the purview of cognitive vision, interaction, and control.
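    A hypothetical sketch of the kind of qualitative spatial primitive such a framework builds on (the function and labels are illustrative, not from the paper): mapping metric bounding boxes from a vision pipeline to symbolic, RCC-style topological relations that a reasoner can work with.

    ```python
    # Map metric bounding boxes to qualitative topological labels.

    def topological_relation(box_a, box_b):
        """box = (xmin, ymin, xmax, ymax); returns a qualitative label.
        Boundary contact is counted as containment in this simple sketch."""
        ax0, ay0, ax1, ay1 = box_a
        bx0, by0, bx1, by1 = box_b
        if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
            return "disconnected"
        if ax0 >= bx0 and ay0 >= by0 and ax1 <= bx1 and ay1 <= by1:
            return "inside"
        if bx0 >= ax0 and by0 >= ay0 and bx1 <= ax1 and by1 <= ay1:
            return "contains"
        return "overlapping"

    # Hypothetical detections from a depth/vision sensor:
    person = (2, 0, 4, 6)
    room = (0, 0, 10, 8)
    ```

    Sequences of such symbolic relations over time are what make declarative narratives of the perceptual input possible.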

  • 42.
    Tolt, Gustav
    Örebro University, Department of Technology.
    Fuzzy similarity-based image processing (2005). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Computer vision problems require low-level operations, e.g. noise reduction and edge detection, as well as high-level operations, e.g. object recognition and image understanding. Letting a PC carry out all computations is convenient but quite inefficient. One approach for improving the performance of the vision system is to bring as much as possible of the computationally intensive low-level operations closer to the camera using dedicated hardware devices, thus letting the PC focus on high-level tasks. In this thesis we present novel fuzzy techniques for reducing noise, determining edgeness and detecting junctions as well as stereo matching measures for color images, as building blocks of complex vision systems, e.g. for robot motion control or other industrial applications.

    The noise reduction is achieved by evaluating a number of fuzzy rules, each suggesting a particular filtering output. The firing strengths of the rules correspond to the degrees of similarity found among the pixels in the local processing window. The approach for determining edgeness is based on fuzzy rules that combine the estimated gradient magnitude with information about the homogeneity in different parts of the processing window. In this way the response from false edges is suppressed. In the junction detection approach we let the intersection between fuzzy sets represent the similarity between information obtained with different window sizes. The fuzzy sets represent the possible orientations of line segments in the window and non-zero intersections of the fuzzy sets indicate the presence of line segments in the window. The number of line segments characterizes the nature of the junction. For the stereo matching measures, the global similarity between two pixels is defined in terms of fuzzy conjunctions of local similarities (color and edgeness). The proposed techniques have been designed for hardware implementation, making use of extensive parallelism and primarily simple numerical operations. The performance is shown in a number of experiments, and the strengths and limitations of the techniques are discussed.
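    An illustrative sketch of the similarity-weighted filtering principle the abstract describes, not the thesis implementation: each neighbour in the processing window contributes to the output in proportion to its membership in a "similar to the centre pixel" fuzzy set, so dissimilar outliers (noise) are largely ignored. The triangular membership function and its spread are assumptions chosen for the example.

    ```python
    # Fuzzy similarity-weighted filter on a 1D pixel window.

    def similarity(a, b, spread=30.0):
        """Triangular membership: 1 for identical values, 0 beyond `spread`."""
        return max(0.0, 1.0 - abs(a - b) / spread)

    def fuzzy_filter(window):
        """Weighted average; each pixel's weight is its similarity
        to the centre pixel of the window."""
        centre = window[len(window) // 2]
        weights = [similarity(v, centre) for v in window]
        total = sum(weights)
        return sum(w * v for w, v in zip(weights, window)) / total

    window = [100, 102, 101, 103, 250]  # 250 is an impulse-noise outlier
    out = fuzzy_filter(window)
    ```

    Because the per-pixel weights are independent and use only simple arithmetic, this style of filter maps naturally onto the parallel hardware implementations the thesis targets.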
