oru.se Publications
1 - 18 of 18
  • 1.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology. Örebro University, School of Law, Psychology and Social Work.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Exploiting Context and Semantics for UAV Path-finding in an Urban Setting (2017). In: Proceedings of the 1st International Workshop on Application of Semantic Web technologies in Robotics (AnSWeR 2017), Portoroz, Slovenia, May 29th, 2017 / [ed] Emanuele Bastianelli, Mathieu d'Aquin, Daniele Nardi, Technical University Aachen, 2017, p. 11-20. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an ontology pattern that represents paths in a geo-representation model, to be used in aerial path planning processes. This pattern provides semantics related to constraints (i.e., flight-forbidden zones) in a path planning problem in order to generate collision-free paths. As an illustrative example, our proposed approach has been applied to an ontology containing geo-regions extracted from satellite imagery of a large urban city.
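
    As an illustrative sketch only (not the authors' ontology-based implementation), the snippet below treats flight-forbidden zones as blocked cells in a grid and searches for a collision-free path; the grid size, forbidden cells and start/goal coordinates are invented for the example.

        # Hypothetical illustration: grid-based search for a collision-free path
        # that avoids "flight forbidden zones". Not the paper's ontology-based method.
        from collections import deque

        def plan_path(rows, cols, forbidden, start, goal):
            """Breadth-first search on a grid; cells in `forbidden` are no-fly zones."""
            frontier = deque([start])
            came_from = {start: None}
            while frontier:
                cell = frontier.popleft()
                if cell == goal:
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = came_from[cell]
                    return path[::-1]
                r, c = cell
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                            and nxt not in forbidden and nxt not in came_from):
                        came_from[nxt] = cell
                        frontier.append(nxt)
            return None  # no collision-free path exists

        # Example: a 5x5 grid with a small forbidden block in the middle.
        no_fly = {(2, 1), (2, 2), (2, 3)}
        print(plan_path(5, 5, no_fly, start=(0, 0), goal=(4, 4)))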

  • 2.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring (2017). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 11, article id 2545. Article in journal (Refereed)
    Abstract [en]

    This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, but also about paths that can be taken. This is achieved by a reasoning framework based on qualitative spatial reasoning that is able to find answers to high-level queries that may vary depending on the current situation. The framework, called SemCityMap, provides the full pipeline: from enriching the raw image data with rudimentary labels, to the integration of knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment (central Stockholm) in combination with a flood simulation. We show that the system provides useful answers to high-level queries, also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions, such as “find all regions close to schools and far from the flooded area”. The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints.
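
    The kind of high-level query quoted above could, in principle, be expressed as SPARQL over the enriched map. The sketch below uses rdflib; the class and property names (ex:Region, ex:closeTo, ex:farFrom, ex:School, ex:FloodedArea) and the file name are placeholders, not SemCityMap's actual vocabulary.

        # Hypothetical sketch: querying an enriched map with rdflib. All ontology
        # terms and the input file are placeholders, not the paper's vocabulary.
        from rdflib import Graph

        g = Graph()
        g.parse("city_map.ttl", format="turtle")  # assumed export of the enriched map

        query = """
        PREFIX ex: <http://example.org/semcitymap#>
        SELECT ?region WHERE {
            ?region a ex:Region ;
                    ex:closeTo ?school ;
                    ex:farFrom ?flood .
            ?school a ex:School .
            ?flood  a ex:FloodedArea .
        }
        """
        for row in g.query(query):
            print(row.region)  # regions close to a school and far from the flooded area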

  • 3.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Open GeoSpatial Data as a Source of Ground Truth for Automated Labelling of Satellite Images (2016). In: SDW 2016: Spatial Data on the Web, Proceedings / [ed] Krzysztof Janowicz et al., CEUR Workshop Proceedings, 2016, p. 5-8. Conference paper (Refereed)
  • 4.
    Alirezaie, Marjan
    et al.
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Sioutis, Michael
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Symbolic Approach for Explaining Errors in Image Classification Tasks (2018). Conference paper (Refereed)
    Abstract [en]

    Machine learning algorithms, despite their increasing success in handling object recognition tasks, still seldom perform without error. Often the process of understanding why an algorithm has failed falls to a human who, using domain knowledge and contextual information, can discover systematic shortcomings in either the data or the algorithm. This paper presents an approach in which the process of reasoning about errors emerging from a machine learning framework is automated using symbolic techniques. By utilizing spatial and geometrical reasoning between objects in a scene, the system is able to describe misclassified regions in relation to their context. The system is demonstrated in the remote sensing domain, where objects and entities are detected in satellite images.
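
    A toy version of the idea is sketched below: a misclassified region is described by coarse qualitative relations to its neighbours, computed from bounding boxes. The relations, region names and coordinates are invented and far simpler than the spatial and geometrical reasoning used in the paper.

        # Toy sketch: describe a misclassified region via coarse qualitative spatial
        # relations to surrounding regions. Boxes are (xmin, ymin, xmax, ymax);
        # the data and relation names are made up for illustration.
        def relation(a, b):
            ax0, ay0, ax1, ay1 = a
            bx0, by0, bx1, by1 = b
            if ax0 >= bx0 and ay0 >= by0 and ax1 <= bx1 and ay1 <= by1:
                return "inside"
            if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
                return "disjoint from"
            return "overlapping"

        context = {"water": (0, 0, 50, 100), "road": (50, 0, 60, 100)}
        misclassified = {"label": "building", "box": (10, 20, 30, 40)}

        for name, box in context.items():
            rel = relation(misclassified["box"], box)
            print(f"Region predicted as '{misclassified['label']}' is {rel} '{name}'")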

  • 5.
    Lidén, Mats
    et al.
    Örebro University, School of Medical Sciences.
    Jendeberg, Johan
    Örebro University, School of Medical Sciences.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Thunberg, Per
    Örebro University, School of Medical Sciences.
    Discrimination between distal ureteral stones and pelvic phleboliths in CT using a deep neural network: more than local features needed (2018). Conference paper (Refereed)
    Abstract [en]

    Purpose: To develop a deep learning method for assisting radiologists in the discrimination between distal ureteral stones and pelvic phleboliths in thin slice CT images, and to evaluate whether this differentiation is possible using only local features.

    Methods and materials: A limited field-of-view image data bank was retrospectively created, consisting of 5x5x5 cm selections from 1 mm thick unenhanced CT images centered around 218 pelvic phleboliths and 267 distal ureteral stones in 336 patients. 50 stones and 50 phleboliths formed a validation cohort and the remainder a training cohort. Ground truth was established by a radiologist using the complete CT examination during inclusion. The limited field-of-view CT stacks were independently reviewed and classified as containing a distal ureteral stone or a phlebolith by seven radiologists. Each cropped stack consisted of 50 slices (5x5 cm field-of-view) and was displayed in a standard PACS reading environment. A convolutional neural network using three perpendicular images (2.5D-CNN) from the limited field-of-view CT stacks was trained for classification.

    Results: The 2.5D-CNN obtained 89% accuracy (95% confidence interval 81%-94%) for the classification in the unseen validation cohort while the accuracy of radiologists reviewing the same cohort was 86% (range 76%-91%). There was no statistically significant difference between 2.5D-CNN and radiologists.

    Conclusion: The 2.5D-CNN achieved radiologist-level accuracy in classifying distal ureteral stones and pelvic phleboliths when using only local features. The mean accuracy of 86% for radiologists using the limited field of view indicates that distant anatomical information, which helps in identifying the ureter’s course, is needed.

  • 6.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Modeling time-series with deep networks (2014). Doctoral thesis, comprehensive summary (Other academic)
    List of papers
    1. A review of unsupervised feature learning and deep learning for time-series modeling
    2014 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 42, no 1, p. 11-24. Article, review/survey (Refereed). Published
    Abstract [en]

    This paper reviews recent developments in deep learning and unsupervised feature learning for time-series problems. While these techniques have shown promise for modeling static data, such as images in computer vision, applying them to time-series data is gaining increasing attention. The paper outlines the particular challenges present in time-series data and reviews works that have either applied unsupervised feature learning algorithms to time-series data or modified such algorithms to account for these challenges.

    Place, publisher, year, edition, pages
    Elsevier, 2014
    Keywords
    Time-series, Unsupervised feature learning, Deep learning
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-34597 (URN), 10.1016/j.patrec.2014.01.008 (DOI), 000333451300002 (), 2-s2.0-84894359867 (Scopus ID)
    Available from: 2014-04-07 Created: 2014-04-07 Last updated: 2018-01-11. Bibliographically approved
    2. Sleep stage classification using unsupervised feature learning
    2012 (English). In: Advances in Artificial Neural Systems, ISSN 1687-7594, E-ISSN 1687-7608, p. 107046. Article in journal (Refereed). Published
    Abstract [en]

    Most attempts at training computers for the difficult and time-consuming task of sleep stage classification involve a feature extraction step. Due to the complexity of multimodal sleep data, the size of the feature space can grow to the extent that a feature selection step also becomes necessary. In this paper, we propose the use of an unsupervised feature learning architecture called deep belief nets (DBNs) and show how to apply it to sleep data in order to eliminate the use of handmade features. Using a hidden Markov model (HMM) postprocessing step to accurately capture sleep stage switching, we compare our results to a feature-based approach. A study of anomaly detection with application to home environment data collection is also presented. The results using raw data with a deep architecture, such as the DBN, were comparable to a feature-based approach when validated on clinical datasets.

    Place, publisher, year, edition, pages
    Hindawi Publishing Corporation, 2012
    National Category
    Engineering and Technology, Computer Sciences
    Research subject
    Computer and Systems Science
    Identifiers
    urn:nbn:se:oru:diva-24199 (URN), 10.1155/2012/107046 (DOI)
    Available from: 2012-08-02 Created: 2012-08-02 Last updated: 2018-01-12. Bibliographically approved
    3. Fast Classification of Meat Spoilage Markers Using Nanostructured ZnO Thin Films and Unsupervised Feature Learning
    2013 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 13, no 2, p. 1578-1592. Article in journal (Refereed). Published
    Abstract [en]

    This paper investigates a rapid and accurate detection system for spoilage in meat. We use unsupervised feature learning techniques (stacked restricted Boltzmann machines and auto-encoders) that consider only the transient response from undoped zinc oxide, manganese-doped zinc oxide, and fluorine-doped zinc oxide in order to classify three categories: the type of thin film that is used, the type of gas, and the approximate ppm-level of the gas. These models mainly offer the advantage that features are learned from data instead of being hand-designed. We compare our results to a feature-based approach using samples with various ppm levels of ethanol and trimethylamine (TMA), which are good markers for meat spoilage. The result is that deep networks give better and faster classification than the feature-based approach, and we thus conclude that the fine-tuning of our deep models is more efficient for this kind of multi-label classification task.

    Keywords
    electronic nose, sensor material, representational learning, fast multi-label classification
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-34598 (URN), 10.3390/s130201578 (DOI), 000315403300012 (), 2-s2.0-84873853951 (Scopus ID)
    Funder
    VINNOVA, INT/SWD/VINN/P-04/2011
    Note

    Funding agency: Department of Science & Technology, India

    Available from: 2014-04-07 Created: 2014-04-07 Last updated: 2018-01-11. Bibliographically approved
    4. Learning feature representations with a cost-relevant sparse autoencoder
    2015 (English). In: International Journal of Neural Systems, ISSN 0129-0657, E-ISSN 1793-6462, Vol. 25, no 1, p. 1450034. Article in journal (Refereed). Published
    Abstract [en]

    There is increasing interest in the machine learning community in automatically learning feature representations directly from (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets, and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.

    Keywords
    Sparse autoencoder; unsupervised feature learning; weighted cost function
    National Category
    Other Engineering and Technologies, Computer Engineering
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-40063 (URN), 10.1142/S0129065714500348 (DOI), 000347965500005 (), 25515941 (PubMedID)
    Available from: 2014-12-29 Created: 2014-12-29 Last updated: 2018-06-26. Bibliographically approved
    5. Selective attention auto-encoder for automatic sleep staging
    2014 (English). Manuscript (preprint) (Other academic)
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-42935 (URN)
    Available from: 2015-02-25 Created: 2015-02-25 Last updated: 2018-04-05. Bibliographically approved
  • 7.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Alirezaie, Marjan
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Interactive Learning with Convolutional Neural Networks for Image Labeling (2016). In: International Joint Conference on Artificial Intelligence (IJCAI), 2016. Conference paper (Refereed)
    Abstract [en]

    Recently, deep learning models, such as Convolutional Neural Networks, have been shown to give good performance on various computer vision tasks. A prerequisite for such models is access to large amounts of labeled data, since the most successful ones are trained with supervised learning. The process of labeling data is expensive, time-consuming, tedious, and sometimes subjective, which can result in falsely labeled data and in turn has a negative effect on both training and validation. In this work, we propose a human-in-the-loop intelligent system that allows the agent and the human to collaborate to simultaneously solve the problem of labeling data and perform scene labeling of an unlabeled image data set with minimal guidance from a human teacher. We evaluate the proposed interactive learning system by comparing the labeled data set produced by the system to the human-provided labels. The results show that the learning system is capable of almost completely labeling an entire image data set starting from a few labeled examples provided by the human teacher.
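
    A minimal loop in the same spirit is sketched below, with a stand-in scikit-learn classifier in place of the paper's CNN and simulated teacher answers in place of the actual interface; the data is synthetic and the selection criterion (least confident prediction) is an assumption.

        # Minimal human-in-the-loop labeling loop. LogisticRegression stands in for
        # the paper's CNN, and the teacher's answers are simulated from a hidden
        # ground truth. All data is synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))                 # unlabeled "image patches"
        y_true = (X[:, 0] > 0).astype(int)             # hidden ground truth

        # A couple of seed labels from the teacher (one per class).
        labeled = {int(np.argmax(y_true)): 1, int(np.argmin(y_true)): 0}
        for _ in range(5):                             # a handful of interaction rounds
            idx = np.array(sorted(labeled))
            clf = LogisticRegression().fit(X[idx], [labeled[i] for i in idx])
            uncertainty = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
            uncertainty[idx] = np.inf                  # skip already-labeled samples
            query = int(np.argmin(uncertainty))        # most uncertain sample
            labeled[query] = int(y_true[query])        # the human would answer here

        pseudo_labels = clf.predict(X)                 # labels for the remaining data
        print("agreement with ground truth:", (pseudo_labels == y_true).mean())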

  • 8.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Coradeschi, Silvia
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Rayappan, John Bosco Balaguru
    SASTRA University, Thanjavur, India.
    Fast Classification of Meat Spoilage Markers Using Nanostructured ZnO Thin Films and Unsupervised Feature Learning (2013). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 13, no 2, p. 1578-1592. Article in journal (Refereed)
    Abstract [en]

    This paper investigates a rapid and accurate detection system for spoilage in meat. We use unsupervised feature learning techniques (stacked restricted Boltzmann machines and auto-encoders) that consider only the transient response from undoped zinc oxide, manganese-doped zinc oxide, and fluorine-doped zinc oxide in order to classify three categories: the type of thin film that is used, the type of gas, and the approximate ppm-level of the gas. These models mainly offer the advantage that features are learned from data instead of being hand-designed. We compare our results to a feature-based approach using samples with various ppm levels of ethanol and trimethylamine (TMA), which are good markers for meat spoilage. The result is that deep networks give better and faster classification than the feature-based approach, and we thus conclude that the fine-tuning of our deep models is more efficient for this kind of multi-label classification task.
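
    As a simplified stand-in for the three-way labeling task (film type, gas type and approximate ppm level predicted jointly from the transient response), the sketch below uses scikit-learn's MultiOutputClassifier on synthetic transients instead of the stacked RBMs and auto-encoders used in the paper.

        # Simplified stand-in for the multi-label e-nose task: predict film type,
        # gas type and a coarse ppm level from a transient response. Synthetic data
        # and a random forest instead of the paper's deep models.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.multioutput import MultiOutputClassifier

        rng = np.random.default_rng(1)
        n, t = 300, 60
        transients = rng.normal(size=(n, t))           # one transient response per sample
        film = rng.integers(0, 3, size=n)              # undoped / Mn-doped / F-doped ZnO
        gas = rng.integers(0, 2, size=n)               # ethanol vs. TMA
        ppm = rng.integers(0, 4, size=n)               # coarse ppm bucket
        Y = np.stack([film, gas, ppm], axis=1)

        model = MultiOutputClassifier(RandomForestClassifier(n_estimators=50, random_state=0))
        model.fit(transients[:200], Y[:200])
        print(model.predict(transients[200:205]))      # one (film, gas, ppm) triple per row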

  • 9.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Jendeberg, Johan
    Örebro University, School of Medical Sciences. Department of Radiology, Faculty of Health and Medical Sciences, Örebro University, Örebro, Sweden.
    Thunberg, Per
    Örebro University, School of Medical Sciences. Department of Medical Physics, Faculty of Health and Medical Sciences, Örebro University, Örebro, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Lidén, Mats
    Örebro University, School of Medical Sciences. Department of Radiology, Faculty of Health and Medical Sciences, Örebro University, Örebro, Sweden.
    Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks (2018). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 97, p. 153-160. Article in journal (Refereed)
    Abstract [en]

    Computed tomography (CT) is the method of choice for diagnosing ureteral stones - kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity of stones to non-stone structures, and in how to efficiently deal with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans.
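
    A small sketch of how a 2.5D input can be formed is given below: three perpendicular planes through a candidate voxel are cropped from the CT volume and stacked as channels. The volume, patch size and candidate coordinates are invented, and the paper's exact preprocessing may differ.

        # Sketch of building a 2.5D input: three orthogonal planes through a
        # candidate position in a CT volume, stacked as channels. Sizes are made up.
        import numpy as np

        def patch_2_5d(volume, center, half=16):
            """Return a (2*half, 2*half, 3) array: axial, coronal and sagittal crops."""
            z, y, x = center
            axial    = volume[z, y - half:y + half, x - half:x + half]
            coronal  = volume[z - half:z + half, y, x - half:x + half]
            sagittal = volume[z - half:z + half, y - half:y + half, x]
            return np.stack([axial, coronal, sagittal], axis=-1)

        ct = np.random.default_rng(2).normal(size=(120, 512, 512))  # placeholder CT volume
        candidate = (60, 256, 300)                                  # e.g. a stone candidate
        print(patch_2_5d(ct, candidate).shape)                      # (32, 32, 3)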

  • 10.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A review of unsupervised feature learning and deep learning for time-series modeling (2014). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 42, no 1, p. 11-24. Article, review/survey (Refereed)
    Abstract [en]

    This paper reviews recent developments in deep learning and unsupervised feature learning for time-series problems. While these techniques have shown promise for modeling static data, such as images in computer vision, applying them to time-series data is gaining increasing attention. The paper outlines the particular challenges present in time-series data and reviews works that have either applied unsupervised feature learning algorithms to time-series data or modified such algorithms to account for these challenges.

  • 11.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Sleep stage classification using unsupervised feature learning (2012). In: Advances in Artificial Neural Systems, ISSN 1687-7594, E-ISSN 1687-7608, p. 107046. Article in journal (Refereed)
    Abstract [en]

    Most attempts at training computers for the difficult and time-consuming task of sleep stage classification involve a feature extraction step. Due to the complexity of multimodal sleep data, the size of the feature space can grow to the extent that a feature selection step also becomes necessary. In this paper, we propose the use of an unsupervised feature learning architecture called deep belief nets (DBNs) and show how to apply it to sleep data in order to eliminate the use of handmade features. Using a hidden Markov model (HMM) postprocessing step to accurately capture sleep stage switching, we compare our results to a feature-based approach. A study of anomaly detection with application to home environment data collection is also presented. The results using raw data with a deep architecture, such as the DBN, were comparable to a feature-based approach when validated on clinical datasets.
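
    The HMM postprocessing idea can be made concrete with a small Viterbi smoothing example: per-epoch class probabilities from any classifier are combined with a "sticky" transition matrix so that implausible stage switches are suppressed. The probabilities and transition matrix below are invented, and the DBN front end is omitted.

        # Toy Viterbi smoothing of per-epoch sleep-stage probabilities with an HMM
        # transition matrix. Emission probabilities and transitions are invented.
        import numpy as np

        def viterbi(emissions, transitions, prior):
            """emissions: (T, K) per-epoch class probabilities; returns smoothed path."""
            T, K = emissions.shape
            logp, logA = np.log(emissions + 1e-12), np.log(transitions + 1e-12)
            delta = np.log(prior + 1e-12) + logp[0]
            back = np.zeros((T, K), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + logA      # (previous state, next state)
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + logp[t]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        K = 5                                        # e.g. W, N1, N2, N3, REM
        A = np.full((K, K), 0.05)
        np.fill_diagonal(A, 0.80)                    # rows sum to 1: stages tend to persist
        emis = np.random.default_rng(3).dirichlet(np.ones(K), size=30)  # fake classifier output
        print(viterbi(emis, A, prior=np.full(K, 1.0 / K)))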

  • 12.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Alirezaie, Marjan
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks (2016). In: Remote Sensing, ISSN 2072-4292, E-ISSN 2072-4292, Vol. 8, no 4, article id 329. Article in journal (Refereed)
    Abstract [en]

    The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.
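
    A small sketch of the per-pixel setup is given below: the multispectral bands and the DSM are stacked into one raster, and a fixed-size patch around each pixel becomes the input for that pixel's classification. Band counts, sizes and coordinates are placeholders, and the CNN itself is omitted.

        # Sketch of preparing per-pixel inputs from multispectral orthoimagery plus a
        # digital surface model (DSM): stack them as channels and crop a patch
        # centred on each pixel. Sizes and band counts are placeholders.
        import numpy as np

        rng = np.random.default_rng(4)
        bands = rng.normal(size=(4, 256, 256))       # e.g. R, G, B, NIR orthoimagery
        dsm = rng.normal(size=(1, 256, 256))         # elevation channel
        raster = np.concatenate([bands, dsm], axis=0)

        def pixel_patch(raster, row, col, half=16):
            """(channels, 2*half, 2*half) patch centred on a pixel, edge-padded."""
            padded = np.pad(raster, ((0, 0), (half, half), (half, half)), mode="edge")
            return padded[:, row:row + 2 * half, col:col + 2 * half]

        patch = pixel_patch(raster, row=10, col=200)  # input for classifying pixel (10, 200)
        print(patch.shape)                            # (5, 32, 32)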

  • 13.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Learning feature representations with a cost-relevant sparse autoencoder (2015). In: International Journal of Neural Systems, ISSN 0129-0657, E-ISSN 1793-6462, Vol. 25, no 1, p. 1450034. Article in journal (Refereed)
    Abstract [en]

    There is increasing interest in the machine learning community in automatically learning feature representations directly from (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets, and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.
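
    A bare-bones version of the weighting idea is sketched below: a per-input weight vector scales the squared reconstruction error of a small autoencoder so that noisy inputs contribute less to the objective. The data, weights and network sizes are synthetic; only the shape of the loss is meant to be illustrative.

        # Bare-bones weighted reconstruction loss for an autoencoder, illustrating
        # how noisy inputs can be down-weighted. Data, weights and sizes are toy.
        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.normal(size=(100, 20))
        X[:, 15:] += rng.normal(scale=5.0, size=(100, 5))   # last 5 inputs are very noisy

        n_in, n_hid = 20, 8
        W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
        W2 = rng.normal(scale=0.1, size=(n_hid, n_in))
        weights = np.ones(n_in)
        weights[15:] = 0.1                                   # reduce influence of noisy inputs

        hidden = 1.0 / (1.0 + np.exp(-(X @ W1)))             # sigmoid encoder
        recon = hidden @ W2                                   # linear decoder
        weighted_error = weights * (recon - X) ** 2           # per-input weighted squared error
        print("weighted reconstruction loss:", weighted_error.mean())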

  • 14.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Learning Representations with a Dynamic Objective Sparse Autoencoder (2012). Conference paper (Refereed)
    Abstract [en]

    The main objective of an auto-encoder is to reconstruct the input signals via a feature representation of latent variables. The number of latent variables defines the representational capacity limit of the model. For data sets where some or all signals contain noise, an unnecessary amount of capacity is spent on trying to reconstruct these signals. One solution is to increase the number of hidden units so that there is enough capacity to capture the valuable information. Another is to pre-process the signals or perform a manual signal selection. In this paper, we propose a method that dynamically changes the objective function depending on the current performance of the model. This is done by weighting the objective function individually for each input unit in order to guide the feature learning and decrease the influence that problematic signals have on the learning of features. We evaluate our method on various multidimensional time-series data sets and handwritten digit recognition data sets and compare our results with a standard sparse auto-encoder.
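
    Complementing the weighted loss sketched under the previous entry, one plausible way such per-signal weights could be updated during training is shown below: signals that remain hard to reconstruct get their influence reduced. The specific update rule is an illustrative assumption, not the paper's exact formulation.

        # Illustrative (assumed) dynamic-objective weight update: signals with
        # persistently high reconstruction error are down-weighted.
        import numpy as np

        def update_weights(per_signal_error, eps=1e-8):
            """Per-signal weights inversely proportional to the current error."""
            w = 1.0 / (per_signal_error + eps)
            return w / w.sum() * w.size              # keep the mean weight at 1

        errors = np.array([0.02, 0.05, 0.03, 0.90, 1.20])   # two signals reconstruct poorly
        print(update_weights(errors))                        # problematic signals get small weights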

  • 15.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Not all signals are created equal: Dynamic objective auto-encoder for multivariate data (2012). Conference paper (Other academic)
  • 16.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Unsupervised feature learning for electronic nose data applied to Bacteria Identification in Blood (2011). Conference paper (Refereed)
    Abstract [en]

    Electronic nose (e-nose) data consist of multivariate time-series from an array of chemical gas sensors exposed to a gas. This is a new data set for use with deep learning methods and is highly suitable, since e-nose data are complex and difficult for human experts to interpret. Furthermore, this data set presents a number of interesting challenges for deep learning architectures per se. In this work we present a first study of e-nose data classification using deep learning when testing for the presence of bacteria in blood and agar solutions. We show in this study that deep learning outperforms the hand-selected, strategy-based methods that have previously been tried on the same data set.

  • 17.
    Längkvist, Martin
    et al.
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Selective attention auto-encoder for automatic sleep staging (2014). Manuscript (preprint) (Other academic)
  • 18.
    Persson, Andreas
    et al.
    Örebro University, School of Science and Technology.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Learning Actions to Improve the Perceptual Anchoring of Objects (2017). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no 76. Article in journal (Refereed)
    Abstract [en]

    In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by (1) tracking and maintaining object entities over time, and (2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework that relies on the notation presented in perceptual anchoring. We further present a practical extension of the notation so that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module is able to detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used to train different sequential learning algorithms to learn movement actions such as 'pour' and 'stir'. Finally, we exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process per se.
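
    As a rough stand-in for the sequential-learning step, the sketch below turns tracked object trajectories into simple motion features and classifies the action with a scikit-learn model. Trajectories, features and the two action labels are synthetic, and the paper's sequential neural models are not reproduced.

        # Rough stand-in for learning action words ("pour", "stir") from tracked
        # object trajectories: hand-made motion features plus a simple classifier.
        # Trajectories and labels are synthetic.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(6)

        def make_trajectory(action, n=50):
            t = np.linspace(0, 2 * np.pi, n)
            if action == "stir":                      # roughly circular motion
                xy = np.stack([np.cos(3 * t), np.sin(3 * t)], axis=1)
            else:                                     # "pour": slow tilt-like motion
                xy = np.stack([t / 10.0, np.sin(t / 2.0)], axis=1)
            return xy + rng.normal(scale=0.05, size=(n, 2))

        def features(traj):
            v = np.diff(traj, axis=0)
            speed = np.linalg.norm(v, axis=1).mean()
            turning = np.abs(np.diff(np.arctan2(v[:, 1], v[:, 0]))).mean()
            return np.array([speed, turning, np.ptp(traj[:, 1])])

        actions = ["stir", "pour"] * 20
        X = np.array([features(make_trajectory(a)) for a in actions])
        y = np.array([0 if a == "stir" else 1 for a in actions])
        clf = RandomForestClassifier(random_state=0).fit(X, y)
        print(clf.predict([features(make_trajectory("pour"))]))  # expected: [1]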
