Publications (10 of 18)
Alirezaie, M., Längkvist, M., Sioutis, M. & Loutfi, A. (2018). A Symbolic Approach for Explaining Errors in Image Classification Tasks. Paper presented at the 27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 13-19, 2018.
A Symbolic Approach for Explaining Errors in Image Classification Tasks
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning algorithms, despite their increasing success in handling object recognition tasks, still seldom perform without error. Often the process of understanding why the algorithm has failed is the task of the human who, using domain knowledge and contextual information, can discover systematic shortcomings in either the data or the algorithm. This paper presents an approach in which the process of reasoning about errors emerging from a machine learning framework is automated using symbolic techniques. By utilizing spatial and geometrical reasoning between objects in a scene, the system is able to describe misclassified regions in relation to their context. The system is demonstrated in the remote sensing domain, where objects and entities are detected in satellite images.

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-68000 (URN)
Conference
27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, July 13-19, 2018
Note

IJCAI Workshop on Learning and Reasoning: Principles & Applications to Everyday Spatial and Temporal Knowledge

Available from: 2018-07-18 Created: 2018-07-18 Last updated: 2018-07-26. Bibliographically approved
Längkvist, M., Jendeberg, J., Thunberg, P., Loutfi, A. & Lidén, M. (2018). Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks. Computers in Biology and Medicine, 97, 153-160
Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks
2018 (English). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 97, p. 153-160. Article in journal (Refereed), Published
Abstract [en]

Computed tomography (CT) is the method of choice for diagnosing ureteral stones, kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity of stones to non-stone structures, and in how to efficiently deal with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans.
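The "2.5D input data" mentioned in the abstract refers to feeding the network perpendicular 2D planes through each candidate location rather than a full 3D sub-volume. A minimal sketch of how such input could be constructed; the volume shape, patch size, and function name are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def extract_2_5d_patches(volume, center, half=16):
    """Extract three perpendicular 2D patches (axial, coronal, sagittal)
    centered on a candidate voxel, stacked as channels. The patch size
    (2*half x 2*half) is an illustrative assumption."""
    z, y, x = center
    axial    = volume[z, y - half:y + half, x - half:x + half]
    coronal  = volume[z - half:z + half, y, x - half:x + half]
    sagittal = volume[z - half:z + half, y - half:y + half, x]
    return np.stack([axial, coronal, sagittal], axis=-1)

# Example: a synthetic CT-like volume and one candidate voxel location.
vol = np.random.randn(128, 256, 256).astype(np.float32)
patches = extract_2_5d_patches(vol, center=(64, 128, 128))
print(patches.shape)  # (32, 32, 3)
```

The three orthogonal views give the classifier 3D context at roughly the memory cost of a single 2D input, which is one way to keep large high-resolution volumes tractable.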

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Computer aided detection, Ureteral stone, Convolutional neural networks, Computed tomography, Training set selection, False positive reduction
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:oru:diva-67139 (URN)10.1016/j.compbiomed.2018.04.021 (DOI)000435623700015 ()29730498 (PubMedID)2-s2.0-85046800526 (Scopus ID)
Note

Funding Agencies:

Nyckelfonden  OLL-597511 

Vinnova under the project "Interactive Deep Learning for 3D image analysis"  

Available from: 2018-06-04 Created: 2018-06-04 Last updated: 2018-08-30. Bibliographically approved
Lidén, M., Jendeberg, J., Längkvist, M., Loutfi, A. & Thunberg, P. (2018). Discrimination between distal ureteral stones and pelvic phleboliths in CT using a deep neural network: more than local features needed. Paper presented at the European Congress of Radiology (ECR) 2018, Vienna, Austria, 28 Feb.-4 Mar., 2018.
Discrimination between distal ureteral stones and pelvic phleboliths in CT using a deep neural network: more than local features needed
2018 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Purpose: To develop a deep learning method for assisting radiologists in the discrimination between distal ureteral stones and pelvic phleboliths in thin slice CT images, and to evaluate whether this differentiation is possible using only local features.

Methods and materials: A limited field-of-view image data bank was retrospectively created, consisting of 5x5x5 cm selections from 1 mm thick unenhanced CT images centered around 218 pelvic phleboliths and 267 distal ureteral stones in 336 patients. 50 stones and 50 phleboliths formed a validation cohort and the remainder a training cohort. Ground truth was established by a radiologist using the complete CT examination during inclusion. The limited field-of-view CT stacks were independently reviewed and classified as containing a distal ureteral stone or a phlebolith by seven radiologists. Each cropped stack consisted of 50 slices (5x5 cm field-of-view) and was displayed in a standard PACS reading environment. A convolutional neural network using three perpendicular images (2.5D-CNN) from the limited field-of-view CT stacks was trained for classification.

Results: The 2.5D-CNN obtained 89% accuracy (95% confidence interval 81%-94%) for the classification in the unseen validation cohort while the accuracy of radiologists reviewing the same cohort was 86% (range 76%-91%). There was no statistically significant difference between 2.5D-CNN and radiologists.

Conclusion: The 2.5D-CNN achieved radiologist-level accuracy in classifying distal ureteral stones versus pelvic phleboliths using only local features. The mean accuracy of 86% for radiologists using the limited field-of-view indicates that distant anatomical information, which helps identify the course of the ureter, is needed.

National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:oru:diva-67372 (URN)
Conference
European Congress of Radiology (ECR) 2018, Vienna, Austria, 28 Feb.-4 Mar., 2018
Available from: 2018-06-20 Created: 2018-06-20 Last updated: 2018-06-20. Bibliographically approved
Alirezaie, M., Kiselev, A., Längkvist, M., Klügl, F. & Loutfi, A. (2017). An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring. Sensors, 17(11), Article ID 2545.
An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring
2017 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 11, article id 2545. Article in journal, Editorial material (Refereed), Published
Abstract [en]

This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, but also about paths that can be taken. This is achieved by a reasoning framework based on qualitative spatial reasoning that is able to answer high-level queries that may depend on the current situation. This framework, called SemCityMap, provides the full pipeline: from enriching the raw image data with rudimentary labels, to the integration of knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment (central Stockholm) in combination with a flood simulation. We show that the system provides useful answers to high-level queries, also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions, such as "find all regions close to schools and far from the flooded area". The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints.

Place, publisher, year, edition, pages
MDPI AG, 2017
Keywords
satellite imagery data; natural hazards; ontology; reasoning; path finding
National Category
Computer Systems
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-62134 (URN)10.3390/s17112545 (DOI)000416790500107 ()29113073 (PubMedID)2-s2.0-85033372857 (Scopus ID)
Projects
Semantic Robot
Available from: 2017-11-05 Created: 2017-11-05 Last updated: 2018-01-03. Bibliographically approved
Alirezaie, M., Kiselev, A., Klügl, F., Längkvist, M. & Loutfi, A. (2017). Exploiting Context and Semantics for UAV Path-finding in an Urban Setting. In: Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi (Eds.), Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017. Paper presented at the International Workshop on Application of Semantic Web technologies in Robotics, co-located with the 14th Extended Semantic Web Conference (ESWC), Portoroz, Slovenia, 28th May-1st June, 2017 (pp. 11-20). Technical University Aachen.
Exploiting Context and Semantics for UAV Path-finding in an Urban Setting
2017 (English). In: Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017 / [ed] Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi, Technical University Aachen, 2017, p. 11-20. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we propose an ontology pattern that represents paths in a geo-representation model, to be used in aerial path planning processes. This pattern provides semantics related to constraints (i.e., flight-forbidden zones) in a path planning problem in order to generate collision-free paths. Our proposed approach has been applied to an ontology containing geo-regions extracted from satellite imagery of a large urban city as an illustrative example.

Place, publisher, year, edition, pages
Technical University Aachen, 2017
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 1935
Keywords
Semantic Web for Robotics, Representation and reasoning for Robotics, Ontology Design Pattern, Path Planning
National Category
Engineering and Technology Computer Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-64603 (URN)2-s2.0-85030752502 (Scopus ID)
Conference
International Workshop on Application of Semantic Web technologies in Robotics co-located with 14th Extended Semantic Web Conference (ESWC), Portoroz, Slovenia, 28th May-1st June, 2017
Projects
Semantic Robot
Available from: 2018-01-29 Created: 2018-01-29 Last updated: 2018-09-10. Bibliographically approved
Persson, A., Längkvist, M. & Loutfi, A. (2017). Learning Actions to Improve the Perceptual Anchoring of Objects. Frontiers in Robotics and AI, 3(76)
Learning Actions to Improve the Perceptual Anchoring of Objects
2017 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no 76. Article in journal (Refereed), Published
Abstract [en]

In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by (1) tracking and maintaining object entities over time; and (2) utilizing an artificial neural network to learn the coupling between words referring to actions and movement patterns of tracked object entities. For this purpose, we propose a framework which relies on the notation presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used to train different sequential learning algorithms in order to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process itself.

Place, publisher, year, edition, pages
Lausanne: Frontiers Media, 2017
Keywords
Perceptual anchoring, symbol grounding, action learning, sequential learning algorithms, common-sense knowledge, object classification, object tracking
National Category
Computer Sciences Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-54025 (URN)10.3389/frobt.2016.00076 (DOI)000392981800001 ()
Projects
Chist-Era ReGround project
Funder
Swedish Research Council, 2016-05321
Note

Funding Agency:

Chist-Era ReGround project

Available from: 2016-12-18 Created: 2016-12-18 Last updated: 2018-11-29. Bibliographically approved
Längkvist, M., Kiselev, A., Alirezaie, M. & Loutfi, A. (2016). Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks. Remote Sensing, 8(4), Article ID 329.
Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks
2016 (English). In: Remote Sensing, ISSN 2072-4292, E-ISSN 2072-4292, Vol. 8, no 4, article id 329. Article in journal (Refereed), Published
Abstract [en]

The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.
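Per-pixel classification of orthoimagery, as described above, is commonly implemented by classifying a small patch centered on each pixel. A minimal sketch of the patch-extraction step; the patch size, band count, and function name are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def per_pixel_patches(image, patch=9):
    """Yield (row, col, patch) for every pixel of a multi-band image,
    using reflect padding so border pixels also get full-size patches.
    The patch size is an illustrative assumption."""
    h = patch // 2
    padded = np.pad(image, ((h, h), (h, h), (0, 0)), mode="reflect")
    rows, cols = image.shape[:2]
    for r in range(rows):
        for c in range(cols):
            yield r, c, padded[r:r + patch, c:c + patch, :]

# Example: a small 4-band tile (e.g. multispectral bands plus a DSM band).
tile = np.random.rand(16, 16, 4).astype(np.float32)
n = sum(1 for _ in per_pixel_patches(tile))
print(n)  # 256, one patch per pixel
```

Each patch would then be fed to the CNN to predict the class of its center pixel, and the resulting per-pixel class map can feed the segmentation stage.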

Place, publisher, year, edition, pages
Basel: MDPI AG, 2016
Keywords
remote sensing, orthoimagery, convolutional neural network, per-pixel classification, segmentation, region merging
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-50501 (URN)10.3390/rs8040329 (DOI)000375156500062 ()
Funder
Knowledge Foundation, 20140033
Available from: 2016-05-31 Created: 2016-05-31 Last updated: 2018-07-13. Bibliographically approved
Längkvist, M., Alirezaie, M., Kiselev, A. & Loutfi, A. (2016). Interactive Learning with Convolutional Neural Networks for Image Labeling. In: International Joint Conference on Artificial Intelligence (IJCAI). Paper presented at the International Joint Conference on Artificial Intelligence (IJCAI), New York, USA, 9-15th July, 2016.
Interactive Learning with Convolutional Neural Networks for Image Labeling
2016 (English). In: International Joint Conference on Artificial Intelligence (IJCAI), 2016. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Recently, deep learning models, such as Convolutional Neural Networks, have been shown to give good performance for various computer vision tasks. A prerequisite for such models is access to large amounts of labeled data, since the most successful ones are trained with supervised learning. The process of labeling data is expensive, time-consuming, tedious, and sometimes subjective, which can result in falsely labeled data, which in turn has a negative effect on both training and validation. In this work, we propose a human-in-the-loop intelligent system that allows the agent and the human to collaborate to simultaneously solve the problem of labeling data and at the same time perform scene labeling of an unlabeled image data set with minimal guidance from a human teacher. We evaluate the proposed interactive learning system by comparing the labeled data set from the system to the human-provided labels. The results show that the learning system is capable of almost completely labeling an entire image data set starting from a few labeled examples provided by the human teacher.
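Human-in-the-loop labeling systems of this kind typically ask the teacher about the samples the model is least sure of. A minimal uncertainty-sampling sketch; the selection rule, function name, and data are illustrative assumptions, not the paper's actual strategy:

```python
import numpy as np

def most_uncertain(probs, k=5):
    """Return indices of the k samples whose top-class probability is
    lowest: a simple rule for choosing which unlabeled images to show
    the human teacher next."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Example: softmax outputs of a model over 8 unlabeled samples, 3 classes.
probs = np.array([
    [0.90, 0.05, 0.05], [0.40, 0.35, 0.25], [0.80, 0.10, 0.10],
    [0.34, 0.33, 0.33], [0.70, 0.20, 0.10], [0.50, 0.30, 0.20],
    [0.95, 0.03, 0.02], [0.60, 0.25, 0.15],
])
print(most_uncertain(probs, k=3))  # [3 1 5]
```

Labels the teacher provides for these samples are added to the training set, the model is retrained, and the loop repeats until the remaining predictions are confident enough to accept.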

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-52116 (URN)
Conference
International Joint Conference on Artificial Intelligence (IJCAI), New York, USA, 9-15th July, 2016
Available from: 2016-09-12 Created: 2016-09-12 Last updated: 2018-01-10. Bibliographically approved
Alirezaie, M., Längkvist, M., Kiselev, A. & Loutfi, A. (2016). Open GeoSpatial Data as a Source of Ground Truth for Automated Labelling of Satellite Images. In: Krzysztof Janowicz et al. (Ed.), SDW 2016: Spatial Data on the Web, Proceedings. Paper presented at the 9th International Conference on Geographic Information Science (GIScience 2016), Montreal, Canada, September 27-30, 2016 (pp. 5-8). CEUR Workshop Proceedings.
Open GeoSpatial Data as a Source of Ground Truth for Automated Labelling of Satellite Images
2016 (English). In: SDW 2016: Spatial Data on the Web, Proceedings / [ed] Krzysztof Janowicz et al., CEUR Workshop Proceedings, 2016, p. 5-8. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
CEUR Workshop Proceedings, 2016
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 1777
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-54223 (URN)
Conference
The 9th International Conference on Geographic Information Science (GIScience 2016), Montreal, Canada, September 27-30, 2016
Available from: 2017-01-02 Created: 2017-01-02 Last updated: 2018-01-13. Bibliographically approved
Längkvist, M. & Loutfi, A. (2015). Learning feature representations with a cost-relevant sparse autoencoder. International Journal of Neural Systems, 25(1), 1450034
Learning feature representations with a cost-relevant sparse autoencoder
2015 (English). In: International Journal of Neural Systems, ISSN 0129-0657, E-ISSN 1793-6462, Vol. 25, no 1, p. 1450034. Article in journal (Refereed), Published
Abstract [en]

There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.
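The weighted reconstruction error described above scales each input dimension's contribution to the cost so that noisy inputs influence learning less. A minimal sketch of such a loss; the specific weights, values, and function name are illustrative assumptions, not the paper's exact cost function:

```python
import numpy as np

def weighted_reconstruction_loss(x, x_hat, w):
    """Squared reconstruction error in which each input dimension is
    scaled by a weight in [0, 1], so noisy dimensions contribute less
    to the gradient during feature learning."""
    return 0.5 * np.sum(w * (x - x_hat) ** 2)

x     = np.array([1.0, 2.0, 5.0])  # input; third dimension assumed noisy
x_hat = np.array([1.0, 2.0, 0.0])  # reconstruction
w_std = np.ones(3)                 # uniform weights: standard autoencoder
w_sel = np.array([1.0, 1.0, 0.0])  # zero weight on the noisy dimension
print(weighted_reconstruction_loss(x, x_hat, w_std))  # 12.5
print(weighted_reconstruction_loss(x, x_hat, w_sel))  # 0.0
```

With uniform weights this reduces to the standard autoencoder cost; with selective weights the model is not penalized for failing to reconstruct the noisy dimension, freeing representational capacity for task-relevant structure.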

Keywords
Sparse autoencoder; unsupervised feature learning; weighted cost function
National Category
Other Engineering and Technologies Computer Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-40063 (URN)10.1142/S0129065714500348 (DOI)000347965500005 ()25515941 (PubMedID)
Available from: 2014-12-29 Created: 2014-12-29 Last updated: 2018-06-26. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-0579-7181