Örebro University Publications (oru.se)
Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases
Örebro University, School of Science and Technology; Product Development Unit Radio, Production Test Development, Ericsson AB, Kumla, Sweden (AASS Machine Perception and Interaction group). ORCID iD: 0000-0003-3054-0051
School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden (Division of Networked and Embedded Systems at Mälardalen University). ORCID iD: 0000-0003-0073-1674
Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden; School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden (Software testing laboratory at Mälardalen University). ORCID iD: 0000-0002-8724-9049
Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden.
2020 (English). In: The Fifteenth International Conference on Software Engineering Advances, International Academy, Research and Industry Association (IARIA), 2020, p. 90-97. Conference paper, published paper (refereed).
Abstract [en]

Software testing still depends heavily on human judgment, since a large portion of testing artifacts, such as requirements and test cases, are written in natural text by experts. Identifying and classifying relevant test cases in large test suites is a challenging and time-consuming task. Moreover, to optimize the testing process, test cases should be distinguished based on their properties, such as their dependencies and similarities. Knowing these properties at an early stage of the testing process can be utilized for several test optimization purposes, such as test case selection, prioritization, scheduling, and parallel test execution. In this paper, we apply, evaluate, and compare the performance of two deep learning algorithms to detect the similarities between manual integration test cases. The feasibility of these algorithms is then examined in the telecom domain by analyzing the test specifications of five different products in the product development unit at Ericsson AB in Sweden. The empirical evaluation indicates that utilizing deep learning algorithms for finding the similarities between manual integration test cases can lead to outstanding results.
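This record does not name the two deep learning algorithms the paper compares. As an illustration only of the underlying task, scoring how similar two natural-language test cases are, here is a minimal, model-free sketch using cosine similarity over bag-of-words vectors (the example test steps are hypothetical):

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts (0..1)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical manual test steps:
t1 = "Verify that the radio unit powers up within 30 seconds"
t2 = "Verify that the radio unit starts up within 30 seconds"
print(cosine_similarity(t1, t2))  # high score: only one word differs
```

Deep learning approaches replace the bag-of-words vectors with learned text embeddings, which also capture synonyms and word order; the comparison step (cosine similarity) typically stays the same.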

Place, publisher, year, edition, pages
International Academy, Research and Industry Association (IARIA), 2020, p. 90-97.
Series
International Conference on Software Engineering Advances, E-ISSN 2308-4235
Keywords [en]
Natural Language Processing, Deep Learning, Software Testing, Semantic Analysis, Test Optimization
National Category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:oru:diva-88921
ISBN: 978-1-61208-827-3 (electronic)
OAI: oai:DiVA.org:oru-88921
DiVA, id: diva2:1522081
Conference
The Fifteenth International Conference on Software Engineering Advances (ICSEA 2020), Porto, Portugal, October 18-22, 2020
Projects
TESTOMAT Project - The Next Level of Test Automation
Available from: 2021-01-25. Created: 2021-01-25. Last updated: 2023-10-05. Bibliographically approved.
In thesis
1. AI-Based Methods For Improved Testing of Radio Base Stations: A Case Study Towards Intelligent Manufacturing
2023 (English). Licentiate thesis, comprehensive summary (other academic).
Abstract [en]

Testing of complex systems may often require tailor-made solutions, expensive testing equipment, large computing capacity, and manual implementation work due to domain uniqueness. These test resources are expensive and time-consuming, which makes them good candidates for optimization. A radio base station (RBS) is a complex system. With the arrival of new RBS generations, new testing challenges have been introduced that traditional methods cannot cope with. To optimize the test process of RBSs, both product quality and production efficiency can be studied.

Although AI techniques are valuable tools for monitoring behavioral changes in various applications, insufficient research effort has been spent on the use of intelligent manufacturing in existing factories and production lines. The concept of intelligent manufacturing involves the whole system development life-cycle, including design, production, and maintenance. The available literature on optimization and integration of industrial applications using AI techniques has not resulted in common solutions, owing to the complexity of real-world applications, which have their own unique characteristics, e.g., multivariate, non-linear, non-stationary, multi-modal, and class-imbalanced data, making it challenging to find generalizable solutions. This licentiate thesis aims to bridge the gap between theoretical approaches and the implementation of real industrial applications.

In this licentiate thesis, two questions are explored: how well AI techniques can perform and optimize fault detection and fault prediction in the production of RBSs, and how to modify learning algorithms in order to perform transfer learning between different products. These questions are addressed by using different AI techniques for test optimization purposes and are examined in three empirical studies focused on parallel test execution, fault detection and prediction, and automated fault localization. For the parallel test execution study, two different approaches were used to find and cluster semantically similar test cases and propose their execution in parallel. For this purpose, Levenshtein distance and two NLP techniques are compared. The results show that cluster-based test scenarios can be automatically generated from requirement specifications, and that executing semantically similar tests in parallel can reduce the number of tests by 95% in the studied case.
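Levenshtein distance, one of the baselines compared above, counts the minimum number of single-character edits (insertions, deletions, substitutions) needed to turn one string into another. A minimal sketch of the standard dynamic-programming formulation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Test cases whose pairwise distances fall below a threshold can then be grouped into one cluster. Note that Levenshtein operates on surface form only, which is why the study compares it against NLP techniques that capture meaning.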

Study number two investigates the possibility of predicting testing performance outcomes by analyzing anomalies in the test process and classifying them by their compliance with dynamic test limits instead of fixed limits. The performance measures can be modeled from historical data through regression techniques, and the classification of the anomalies is learned using support vector machines and convolutional neural networks. The results show good agreement between the actual values and the learned model's predictions, with a root-mean-square error of 0.00073. Furthermore, this approach can automatically label incoming tests according to the dynamic limits, making it possible to predict errors at an early stage of the process. This study contributes to product quality by monitoring the test measurements beyond fixed limits, and to a more efficient testing process by detecting faults before they are measured. Moreover, study two considers the possibility of using transfer learning, since a single product yields an insufficient number of anomalies.
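The record does not detail how the dynamic limits are computed. As a hypothetical sketch of the general idea only, labeling each incoming measurement against limits derived from recent history rather than fixed thresholds, one could use a trailing-window statistic (the data, window size, and factor k are all illustrative):

```python
import statistics

def label_against_dynamic_limits(values, window=5, k=3.0):
    """Label each measurement pass/fail against mean +/- k*std of a trailing window."""
    labels = []
    for i, v in enumerate(values):
        hist = values[max(0, i - window):i]
        if len(hist) < 2:
            labels.append("pass")  # not enough history yet: accept
            continue
        mu, sd = statistics.mean(hist), statistics.stdev(hist)
        labels.append("fail" if abs(v - mu) > k * sd else "pass")
    return labels

# Hypothetical measurement series with one spike:
print(label_against_dynamic_limits([1.0, 1.1, 0.9, 1.0, 5.0, 1.0]))
```

In the study itself the classification is learned with SVMs and CNNs; this rule-based sketch only illustrates the contrast between dynamic and fixed limits.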

The last study focuses on root cause analysis by analyzing dependencies between test measurements, using two well-known correlation-based methods and mutual information to find the strength of associations between measurements. The contributions of this study are twofold. First, dependencies between measurements can be found using Pearson and Spearman correlation and mutual information (MI), and these dependencies can be linear or of higher order. Second, by clustering the associated tests, redundant tests are found, which could be used to update the test execution sequence and execute only the relevant tests, hence making the production process more efficient by saving test time.
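Of the association measures named above, Pearson correlation captures linear dependence, while Spearman correlation (Pearson applied to the ranks of the data) also captures monotone dependence of higher order. A minimal sketch on hypothetical measurement vectors, assuming no tied values:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson on the ranks of the data (no ties assumed)."""
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 8.0, 16.0]  # monotone but non-linear relationship
print(round(pearson(x, y), 3), round(spearman(x, y), 3))
```

Mutual information, the third measure, additionally detects non-monotone dependence; measurement pairs strongly associated under any of the three measures can then be clustered to expose redundant tests.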

Place, publisher, year, edition, pages
Örebro: Örebro University, 2023. p. 34
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 102
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-108714 (URN)
Presentation
2023-10-02, Örebro universitet, Prismahuset, Hörsal P1, Fakultetsgatan 1, Örebro, 13:15 (English)
Opponent
Supervisors
Funder
Knowledge Foundation, 20190128; Vinnova, D-RODS (2023-00244)
Available from: 2023-10-05. Created: 2023-10-03. Last updated: 2023-10-05. Bibliographically approved.

Open Access in DiVA

Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases (465 kB), 242 downloads
File information
File name: FULLTEXT01.pdf. File size: 465 kB.
Checksum (SHA-512): d3e0a12746c6b788df365b0fb194929052a0a5a895ec4e237e4b656f5bb5f93fe52b1b7e08d34ae7040642e1ee0c5ec1e39ffcbe3d3d3fac8f94cc0049d8193b
Type: fulltext. Mimetype: application/pdf.

Other links

Think Mind: Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases

Authority records

Landin, Cristina; Längkvist, Martin; Loutfi, Amy

Search in DiVA

By author/editor
Landin, Cristina; Hatvani, Leo; Tahvili, Sahar; Längkvist, Martin; Loutfi, Amy
By organisation
School of Science and Technology
Computer Systems

Total: 242 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 337 hits