Publications (10 of 62)
Pecora, F., Andreasson, H., Mansouri, M. & Petkov, V. (2018). A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control. Paper presented at the International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, June 24-29, 2018.
A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control
2018 (English) Conference paper, Published paper (Refereed)
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64721 (URN)
Conference
International Conference on Automated Planning and Scheduling (ICAPS 2018), Delft, The Netherlands, June 24-29, 2018
Projects
Semantic Robots; ILIAD
Funder
Knowledge Foundation, 20140033; EU, Horizon 2020, 732737
Available from: 2018-01-31 Created: 2018-01-31 Last updated: 2018-06-11. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments. IEEE Robotics and Automation Letters, 3(2), 957-964
Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 957-964. Article in journal (Refereed), Published
Abstract [en]

This paper presents a viewpoint-invariant place recognition algorithm which is robust to changing environments while requiring only a small memory footprint. It demonstrates that condition-invariant local features can be combined with Vectors of Locally Aggregated Descriptors (VLAD) to reduce high-dimensional representations of images to compact binary signatures while retaining place matching capability across visually dissimilar conditions. This system provides a speed-up of two orders of magnitude over direct feature matching, and outperforms a bag-of-visual-words approach with near-identical computation speed and memory footprint. The experimental results show that single-image place matching from non-aligned images can be achieved in visually changing environments with as few as 256 bits (32 bytes) per image.
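
As a rough illustration of the pipeline the abstract describes, the sketch below aggregates local descriptors into a VLAD vector and reduces it to a compact binary signature compared with Hamming distance. The cluster count, the random projection, and the sign-based binarisation are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def vlad(descriptors, centroids):
    """Aggregate local descriptors into a VLAD vector: the sum of
    residuals to the nearest cluster centre, stacked per cluster."""
    k, d = centroids.shape
    v = np.zeros((k, d))
    # assign each descriptor to its nearest centroid
    idx = np.argmin(
        ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
    for i, desc in zip(idx, descriptors):
        v[i] += desc - centroids[i]
    v = v.ravel()
    v /= np.linalg.norm(v) + 1e-12          # global L2 normalisation
    return v

def binary_signature(v, proj):
    """Project the VLAD vector down and binarise by sign, giving a
    compact bit-string signature (e.g. 256 bits = 32 bytes per image)."""
    return (proj @ v) > 0

# illustrative usage: 256-bit signatures, matched by Hamming distance
rng = np.random.default_rng(0)
centroids = rng.normal(size=(16, 32))       # k=16 visual words, 32-D features
proj = rng.normal(size=(256, 16 * 32))      # random projection (assumption)
sig_a = binary_signature(vlad(rng.normal(size=(50, 32)), centroids), proj)
sig_b = binary_signature(vlad(rng.normal(size=(60, 32)), centroids), proj)
hamming = np.count_nonzero(sig_a != sig_b)  # low distance = likely same place
```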

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visual-based navigation, recognition, localization
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-64652 (URN), 10.1109/LRA.2018.2793308 (DOI), 000424646100015 ()
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation

Available from: 2018-01-30 Created: 2018-01-30 Last updated: 2018-02-28. Bibliographically approved
Lowry, S. & Andreasson, H. (2018). LOGOS: Local geometric support for high-outlier spatial verification. Paper presented at the IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018 (pp. 7262-7269). IEEE Computer Society
LOGOS: Local geometric support for high-outlier spatial verification
2018 (English) Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents LOGOS, a method of spatial verification for visual localization that is robust in the presence of a high proportion of outliers. LOGOS uses scale and orientation information from local neighbourhoods of features to determine which points are likely to be inliers. The inlier points can be used for secondary localization verification and pose estimation. LOGOS is demonstrated on a number of benchmark localization datasets and outperforms RANSAC as a method of outlier removal and localization verification in scenarios that require robustness to many outliers.
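
A minimal sketch of the general idea named in the abstract: a putative match is kept when its implied scale and orientation change agrees with that of its local neighbourhood, rather than with a single global RANSAC model. The neighbourhood size and tolerances below are assumptions; this is not the paper's actual LOGOS algorithm.

```python
import numpy as np

def local_support_inliers(kp1, kp2, k=10, rot_tol=0.3, scale_tol=0.4,
                          min_support=0.5):
    """kp1, kp2: (n, 4) arrays of matched keypoints, columns (x, y, scale,
    angle). A match is kept if most of its k spatial neighbours imply a
    similar rotation and log-scale change: local geometric support
    instead of a global model. Thresholds are illustrative assumptions."""
    d_rot = (kp2[:, 3] - kp1[:, 3] + np.pi) % (2 * np.pi) - np.pi  # wrapped
    d_scale = np.log(kp2[:, 2] / kp1[:, 2])                        # log ratio
    xy = kp1[:, :2]
    dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # a match is not its own neighbour
    nbrs = np.argsort(dists, axis=1)[:, :k]  # k nearest matches in image 1
    rot_ok = np.abs(d_rot[nbrs] - d_rot[:, None]) < rot_tol
    scale_ok = np.abs(d_scale[nbrs] - d_scale[:, None]) < scale_tol
    support = (rot_ok & scale_ok).mean(axis=1)
    return support >= min_support            # boolean inlier mask
```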

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68446 (URN), 000446394505077 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA 2018), Brisbane, Australia, May 21-25, 2018
Note

Funding Agency:

Semantic Robots Research Profile - Swedish Knowledge Foundation (KKS)

Available from: 2018-08-13 Created: 2018-08-13 Last updated: 2018-10-22. Bibliographically approved
Andreasson, H., Adolfsson, D., Stoyanov, T., Magnusson, M. & Lilienthal, A. (2017). Incorporating Ego-motion Uncertainty Estimates in Range Data Registration. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 1389-1395). Institute of Electrical and Electronics Engineers (IEEE)
Incorporating Ego-motion Uncertainty Estimates in Range Data Registration
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1389-1395. Conference paper, Published paper (Refereed)
Abstract [en]

Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
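
A hedged sketch of the general technique: a registration objective extended with a Mahalanobis penalty toward the ego-motion (odometry) prior, so a confident prior constrains the solution strongly while an uncertain one barely does. The 2D pose parametrisation and the point-to-point residual are simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def registration_cost(pose, src, tgt, odom, odom_cov):
    """pose, odom: (x, y, theta). src, tgt: (n, 2) pre-associated point
    pairs. Total cost = scan alignment error + Mahalanobis distance to
    the ego-motion prior, weighted by the prior's covariance."""
    pose = np.asarray(pose, dtype=float)
    odom = np.asarray(odom, dtype=float)
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    residuals = src @ R.T + np.array([x, y]) - tgt
    align = (residuals ** 2).sum()
    diff = pose - odom
    diff[2] = (diff[2] + np.pi) % (2 * np.pi) - np.pi   # wrap angle diff
    prior = diff @ np.linalg.inv(odom_cov) @ diff
    return align + prior
```

Such a cost can be handed to a generic optimiser, e.g. scipy.optimize.minimize, to find the pose that balances scan evidence against the odometry prior.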

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62803 (URN), 10.1109/IROS.2017.8202318 (DOI), 000426978201108 (), 2-s2.0-85041958720 (Scopus ID), 978-1-5386-2682-5 (ISBN), 978-1-5386-2683-2 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
Semantic Robots; ILIAD
Funder
Knowledge Foundation; EU, Horizon 2020, 732737
Available from: 2017-11-24 Created: 2017-11-24 Last updated: 2018-04-09. Bibliographically approved
Magnusson, M., Kucner, T. P., Gholami Shahbandi, S., Andreasson, H. & Lilienthal, A. (2017). Semi-Supervised 3D Place Categorisation by Descriptor Clustering. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017 (pp. 620-625). Institute of Electrical and Electronics Engineers (IEEE)
Semi-Supervised 3D Place Categorisation by Descriptor Clustering
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 620-625. Conference paper, Published paper (Refereed)
Abstract [en]

Place categorisation, i.e., learning to group perception data into categories based on appearance, typically uses supervised learning and either visual or 2D range data.

This paper shows place categorisation from 3D data without any training phase. We show that, by leveraging the NDT histogram descriptor to compactly encode 3D point cloud appearance, in combination with standard clustering techniques, it is possible to classify public indoor data sets with accuracy comparable to, and sometimes better than, previous supervised training methods. We also demonstrate the effectiveness of this approach on outdoor data, with the added benefit of being able to hierarchically categorise places into sub-categories based on a user-selected threshold.

This technique relieves users from providing relevant training data; they only need to adjust the sensitivity to the number of place categories and provide a semantic label for each category after the process is completed.
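
A minimal sketch of the clustering side of this approach, with random stand-ins for the NDT histogram descriptors and scikit-learn's agglomerative clustering in place of the paper's specific descriptor and distance measure:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# descriptors: one appearance vector per 3D scan (random stand-ins here
# for the NDT histogram descriptors the paper uses)
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(200, 64))

# No training phase: places are grouped purely by descriptor similarity.
# Lowering the threshold splits categories into finer sub-categories,
# mirroring the user-selected sensitivity described in the abstract.
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=12.0)
labels = clustering.fit_predict(descriptors)

# The user then attaches a semantic label ("office", "corridor", ...)
# to each discovered cluster id after the fact.
print(f"{labels.max() + 1} place categories found")
```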

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, ISSN 2153-0858, E-ISSN 2153-0866
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-61903 (URN), 10.1109/IROS.2017.8202216 (DOI), 000426978201006 (), 2-s2.0-85041949592 (Scopus ID), 978-1-5386-2682-5 (ISBN), 978-1-5386-2683-2 (ISBN)
Conference
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017
Projects
ILIAD
Funder
EU, Horizon 2020, 732737
Note

ILIAD project: http://iliad-project.eu

Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2018-04-09. Bibliographically approved
Mielle, M., Magnusson, M., Andreasson, H. & Lilienthal, A. J. (2017). SLAM auto-complete: completing a robot map using an emergency map. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). Paper presented at the 15th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2017), ShanghaiTech University, China, October 11-13, 2017 (pp. 35-40). IEEE conference proceedings, Article ID 8088137.
SLAM auto-complete: completing a robot map using an emergency map
2017 (English) In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, p. 35-40, article id 8088137. Conference paper, Published paper (Refereed)
Abstract [en]

In search and rescue missions, time is an important factor; fast navigation and quickly acquiring situation awareness can be matters of life and death. Hence, the use of robots in such scenarios has been restricted by the time needed to explore and build a map. One way to speed up exploration and mapping is to reason about unknown parts of the environment using prior information. While previous research on using external priors for robot mapping has mainly focused on accurate maps or aerial images, such data are not always available, especially indoors. We focus on emergency maps as priors for robot mapping since they are easy to obtain and already extensively used by firemen in rescue missions. However, those maps can be outdated, information might be missing, and the scales of rooms are typically not consistent.

We have developed a formulation of graph-based SLAM that incorporates information from an emergency map. The graph SLAM is optimized using a combination of robust kernels, fusing the emergency map and the robot map into one map, even when faced with scale inaccuracies and inexact start poses.

The settings studied in this paper typically contain more than 50% wrong correspondences, which the proposed method handles correctly. Experiments in an office environment show that we can handle up to 70% wrong correspondences and still get the expected result. The robot can navigate and explore while taking into account places it has not yet seen. We demonstrate this in a test scenario and also show that the emergency map is enhanced by adding information not previously represented, such as closed doors or new walls.
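
The robustness to wrong correspondences rests on robust kernels in the graph optimisation. As a generic illustration (not the paper's solver), the sketch below uses a Huber kernel inside an iteratively reweighted least-squares loop, so gross outliers are down-weighted instead of dominating the fit:

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """IRLS weight for the Huber kernel: quadratic for small residuals,
    linear (down-weighted) for large ones, so outlier correspondences
    cannot dominate the optimisation."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)

def irls_line_fit(x, y, iters=20, delta=1.0):
    """Toy example: robustly fit y = m*x + c despite gross outliers."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    theta = np.linalg.lstsq(A, y, rcond=None)[0]      # unweighted start
    for _ in range(iters):
        w = huber_weight(y - A @ theta, delta)
        Aw = A * w[:, None]
        # weighted normal equations: (A^T W A) theta = A^T W y
        theta = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
    return theta

x = np.linspace(0, 10, 100)
y = 2 * x + 1
y[::10] += 50                       # gross outliers on 10% of the points
print(irls_line_fit(x, y))          # close to (2, 1) despite the outliers
```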

Place, publisher, year, edition, pages
IEEE conference proceedings, 2017
Keywords
SLAM, robotics, graph, graph SLAM, emergency map, rescue, exploration, auto-complete
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-62057 (URN), 10.1109/SSRR.2017.8088137 (DOI), 000426991900007 (), 2-s2.0-85040221684 (Scopus ID), 978-1-5386-3923-8 (ISBN), 978-1-5386-3924-5 (ISBN)
Conference
15th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2017), ShanghaiTech University, China, October 11-13, 2017
Projects
EU H2020 project SmokeBot (ICT-23-2014 645101)
Funder
Knowledge Foundation, 20140220
Note

Funding Agency:

EU ICT-23-2014 645101

Available from: 2017-11-08 Created: 2017-11-08 Last updated: 2018-03-27. Bibliographically approved
Mielle, M., Magnusson, M., Andreasson, H. & Lilienthal, A. (2017). Using emergency maps to add not yet explored places into SLAM. Paper presented at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 24-28, 2017.
Using emergency maps to add not yet explored places into SLAM
2017 (English) Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

While using robots in search and rescue missions would help ensure the safety of first responders, a key issue is the time the robot needs to operate. Even though SLAM is getting faster, it might still be too slow to enable the use of robots in critical situations. One way to speed up operation time is to use prior information.

We aim at integrating emergency maps into SLAM to complete the SLAM map with information about not yet explored parts of the environment. By integrating prior information, we can speed up exploration or provide valuable prior information for navigation, for example in case of sensor blackout or failure. However, while extensively used by firemen in their operations, emergency maps are not easy to integrate into SLAM since they are often not up to date or have inconsistent scales.

The main challenge we tackle is dealing with the imperfect scale of rough emergency maps and integrating them with the online SLAM map, in addition to challenges due to incorrect matches between these two types of map. We developed a formulation of graph-based SLAM that incorporates information from an emergency map, and propose a novel optimization process adapted to this formulation.

We extract corners from the emergency map and the SLAM map, between which we find correspondences using a distance measure. We then build a graph representation associating information from the emergency map and the SLAM map. Corners in the emergency map, corners in the robot map, and robot poses are added as nodes in the graph, while odometry, corner observations, walls in the emergency map, and corner associations are added as edges. To conserve the topology of the emergency map, but correct its possible errors in scale, edges representing the emergency map's walls are given a covariance such that they are easy to extend or shrink but hard to rotate. Correspondences between corners represent a zero transformation, so that the optimization matches them as closely as possible. The graph optimization uses a combination of robust kernels: we first use the Huber kernel, to converge toward a good solution, followed by Dynamic Covariance Scaling, to handle the remaining errors.

We demonstrate our system in an office environment, running SLAM online during the exploration. Using the map enhanced by information from the emergency map, the robot was able to plan the shortest path toward a place it had not yet explored. This capability can be a real asset in complex buildings where exploration can take a long time. It can also reduce exploration time by avoiding the exploration of dead-ends, or the search for specific places, since the robot knows where it is in the emergency map.
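
The "easy to extend or shrink but hard to rotate" wall edges can be pictured as information matrices that are weak along the wall direction and stiff across it. The sketch below is one way to construct such a matrix; the stiffness values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def wall_edge_information(p, q, stiff_across=100.0, soft_along=0.01):
    """Information matrix for a graph edge between wall endpoints p and q
    (2D points). Low information along the wall lets the optimiser
    stretch or shrink it to fix the emergency map's scale; high
    information across the wall (and on relative orientation) preserves
    the map's topology and angles."""
    d = (q - p) / np.linalg.norm(q - p)         # unit vector along the wall
    n = np.array([-d[1], d[0]])                 # perpendicular direction
    # information = sum of rank-1 terms: soft along, stiff across
    info_xy = soft_along * np.outer(d, d) + stiff_across * np.outer(n, n)
    info = np.zeros((3, 3))
    info[:2, :2] = info_xy
    info[2, 2] = stiff_across                   # rotation kept stiff too
    return info

# example: a horizontal wall is cheap to lengthen, expensive to rotate
print(wall_edge_information(np.array([0.0, 0.0]), np.array([5.0, 0.0])))
```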

Keywords
Search and Rescue Robots, SLAM, Mapping
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-61905 (URN)
Conference
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 24-28, 2017
Projects
SmokeBot
Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2018-02-01. Bibliographically approved
Rituerto, A., Andreasson, H., Murillo, A. C., Lilienthal, A. & Jesus Guerrero, J. (2016). Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera. Sensors, 16(4), Article ID 493.
Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera
2016 (English) In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 16, no 4, article id 493. Article in journal (Refereed), Published
Abstract [en]

Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of an indoor robot environment. Vision-based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words. To address this, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the process of vocabulary construction and (2) geometric cues into the appearance descriptors. Motivated by long-term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points at the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit from the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available dataset.
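
One way to read the pipeline: descriptors observed along the same feature track are merged into a single sample, extended with a geometric cue, and only then clustered into visual words. The averaging, the appended cue, and the plain k-means below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from collections import defaultdict

def build_vocabulary(descriptors, track_ids, geom_cues, k=50, iters=20):
    """descriptors: (n, d) local features; track_ids: (n,) feature-track
    id per detection; geom_cues: (n, g) geometric measurements (e.g.
    position relative to a ceiling-pointing camera). Tracked detections
    are merged and the cue appended before standard k-means clustering."""
    groups = defaultdict(list)
    for desc, tid, cue in zip(descriptors, track_ids, geom_cues):
        groups[tid].append(np.concatenate([desc, cue]))
    samples = np.array([np.mean(g, axis=0) for g in groups.values()])

    # plain k-means on the track-level samples (Lloyd's algorithm)
    rng = np.random.default_rng(0)
    words = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(
            ((samples[:, None] - words[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):           # keep old word if cluster empties
                words[j] = samples[assign == j].mean(axis=0)
    return words
```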

Place, publisher, year, edition, pages
Basel: MDPI AG, 2016
Keywords
visual vocabulary, computer vision, bag of words, robotics, place recognition, environment description
National Category
Chemical Sciences; Computer Sciences
Research subject
Chemistry; Computer Science
Identifiers
urn:nbn:se:oru:diva-50502 (URN), 10.3390/s16040493 (DOI), 000375153700073 (), 2-s2.0-84962921139 (Scopus ID)
Note

Funding Agencies:

Spanish Government

European Union, DPI2015-65962-R

Available from: 2016-05-31 Created: 2016-05-31 Last updated: 2018-01-10. Bibliographically approved
Chadalavada, R. T., Andreasson, H., Krug, R. & Lilienthal, A. (2016). Empirical evaluation of human trust in an expressive mobile robot. In: Proceedings of RSS Workshop "Social Trust in Autonomous Robots 2016". Paper presented at the RSS Workshop "Social Trust in Autonomous Robots 2016", June 19, 2016.
Empirical evaluation of human trust in an expressive mobile robot
2016 (English) In: Proceedings of RSS Workshop "Social Trust in Autonomous Robots 2016", 2016. Conference paper, Published paper (Refereed)
Abstract [en]

A mobile robot communicating its intentions using Spatial Augmented Reality (SAR) on the shared floor space makes humans feel safer and more comfortable around the robot. Our previous work [1] and several other works established this fact. We build upon that work by adding adaptable information and control to the SAR module. We conducted an empirical study of how a mobile robot can build human trust by communicating its intentions. A novel way of evaluating that trust is presented, and we show experimentally that adaptation in the SAR module leads to more natural interaction; the new evaluation system helped us discover that comfort levels in human-robot interaction approached those of human-human interaction.

Keywords
Human robot interaction, hri, mobile robot, trust, evaluation
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-55259 (URN)
Conference
RSS Workshop "Social Trust in Autonomous Robots 2016", June 19, 2016
Available from: 2017-02-02 Created: 2017-02-02 Last updated: 2018-03-14. Bibliographically approved
Mansouri, M., Andreasson, H. & Pecora, F. (2016). Hybrid Reasoning for Multi-robot Drill Planning in Open-pit Mines. Acta Polytechnica, 56(1), 47-56
Hybrid Reasoning for Multi-robot Drill Planning in Open-pit Mines
2016 (English) In: Acta Polytechnica, ISSN 1210-2709, E-ISSN 1805-2363, Vol. 56, no 1, p. 47-56. Article in journal (Refereed), Published
Abstract [en]

Fleet automation often involves solving several strongly correlated sub-problems, including task allocation, motion planning, and coordination. Solutions need to account for very specific, domain-dependent constraints. In addition, several aspects of the overall fleet management problem become known only online. We propose a method for solving the fleet-management problem grounded in a heuristically-guided search in the space of mutually feasible solutions to sub-problems. We focus on a mining application which requires online contingency handling and accommodates many domain-specific constraints. As contingencies occur, efficient reasoning is performed to adjust the plan online for the entire fleet.
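
A heavily hedged sketch of what a search over mutually feasible sub-problem solutions can look like: extend a partial solution one sub-problem at a time, ordering candidates by a heuristic and backtracking when joint feasibility fails. All interfaces here (candidates, feasible, heuristic) are hypothetical placeholders, not the paper's actual framework:

```python
def solve(partial, subproblems, feasible, heuristic):
    """Backtracking search over sub-problem solutions (e.g. task
    allocation, motion planning, coordination). Only combinations that
    pass the joint feasibility check are explored further; the heuristic
    orders candidate choices within each sub-problem."""
    if not subproblems:
        return partial                        # every sub-problem resolved
    sub, rest = subproblems[0], subproblems[1:]
    for choice in sorted(sub.candidates(partial), key=heuristic):
        if feasible(partial, choice):         # mutual feasibility check
            result = solve(partial + [choice], rest, feasible, heuristic)
            if result is not None:
                return result
    return None                               # dead end: backtrack
```

Online contingencies would then be handled by invalidating the affected choices and re-running the search from the surviving partial solution.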

Place, publisher, year, edition, pages
Prague, Czech Republic: Czech Technical University in Prague, 2016
Keywords
robot planning, multi-robot coordination, on-line reasoning
National Category
Computer Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-51018 (URN), 10.14311/APP.2016.56.0047 (DOI), 000411572200007 (), 2-s2.0-84959316752 (Scopus ID)
Available from: 2016-06-22 Created: 2016-06-22 Last updated: 2018-06-11. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2953-1564