Örebro University Publications

1 - 6 of 6
  • 1.
    Landin, Cristina
    Örebro University, School of Science and Technology. Ericsson AB.
    AI-Based Methods For Improved Testing of Radio Base Stations: A Case Study Towards Intelligent Manufacturing (2023). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Testing complex systems often requires tailor-made solutions, expensive test equipment, large computing capacity, and manual implementation work due to domain uniqueness. These test resources are costly and time-consuming, which makes them good candidates for optimization. A radio base station (RBS) is such a complex system. The arrival of new RBS generations has introduced testing challenges that traditional methods cannot cope with. To optimize the test process of RBSs, both product quality and production efficiency can be studied.

    Although AI techniques are valuable tools for monitoring behavioral changes in various applications, insufficient research effort has been spent on applying intelligent manufacturing in existing factories and production lines. The concept of intelligent manufacturing spans the whole system development life-cycle: design, production, and maintenance. The available literature on optimizing and integrating industrial applications with AI techniques has not produced common solutions, because real-world applications have their own unique characteristics (e.g., multivariate, non-linear, non-stationary, multi-modal, class-imbalanced data), making generalizable solutions hard to find. This licentiate thesis aims to bridge the gap between theoretical approaches and the implementation of real industrial applications.

    In this licentiate thesis, two questions are explored: how well AI techniques can perform and optimize fault detection and fault prediction in the production of RBSs, and how learning algorithms can be modified to perform transfer learning between different products. These questions are addressed by using different AI techniques for test optimization and are examined in three empirical studies focused on parallel test execution, fault detection and prediction, and automated fault localization. In the parallel test execution study, two approaches were used to find and cluster semantically similar test cases and propose their execution in parallel; for this purpose, the Levenshtein distance and two NLP techniques are compared. The results show that cluster-based test scenarios can be automatically generated from requirement specifications, and that executing semantically similar tests in parallel can reduce the number of tests by 95% in the studied case.

    The second study investigates the possibility of predicting testing performance outcomes by analyzing anomalies in the test process and classifying them by their compliance with dynamic test limits rather than fixed ones. The performance measures are modeled from historical data using regression techniques, and the classification of the anomalies is learned using support vector machines and convolutional neural networks. The results show good agreement between actual and predicted values, with a root-mean-square error as low as 0.00073. Furthermore, the approach can automatically label incoming tests against the dynamic limits, making it possible to predict errors at an early stage of the process. This study contributes to product quality by monitoring test measurements beyond fixed limits, and to a more efficient testing process by detecting faults before they are measured. The study also considers transfer learning, motivated by the insufficient number of anomalies in any single product.

    The last study focuses on root cause analysis, examining dependencies between test measurements using two well-known correlation-based methods and mutual information (MI) to find strength associations between measurements. The contributions of this study are twofold. First, dependencies between measurements can be found using Pearson and Spearman correlation and MI, and these dependencies can be linear or higher order. Second, clustering the associated tests reveals redundant tests, which can be used to update the test execution sequence and execute only the relevant tests, making the production process more efficient by saving test time.

    List of papers
    1. Cluster-Based Parallel Testing Using Semantic Analysis
    2020 (English). In: 2020 IEEE International Conference On Artificial Intelligence Testing (AITest), IEEE, 2020, p. 99-106. Conference paper, Published paper (Refereed)
    Abstract [en]

    Finding a balance between testing goals and testing resources is one of the most challenging issues in software testing; therefore, test optimization plays a vital role in the area. Several parameters, such as the objectives of the tests, test case similarities, and dependencies between test cases, need to be considered before attempting any optimization approach. However, analyzing the corresponding testing artifacts (e.g., requirement specifications, test cases) to capture these parameters is a complicated task, especially in a manual testing procedure where the test cases are documented as natural text written by a human. Thus, artificial intelligence techniques are being applied to the analysis of complex and sometimes ambiguous test data in different industries. Test scheduling is one of the most popular and practical ways to optimize the testing process: grouping test cases that require the same system setup or installation, or that test the same functionality, can lead to a more efficient testing process. In this paper, we propose, apply, and evaluate a natural language processing-based approach that derives test case similarities directly from their test specifications. The proposed approach converts each test case into a string and utilizes the Levenshtein distance. Test cases are then grouped into several clusters based on their similarities. Finally, a set of cluster-based parallel test scheduling strategies is proposed for execution. The feasibility of the proposed approach is studied in an empirical evaluation performed on a Telecom use case at Ericsson in Sweden, with promising results.

    Place, publisher, year, edition, pages
    IEEE, 2020
    Series
    IEEE International Conference on Artificial Intelligence Testing (AITest)
    Keywords
    Software Testing, Natural Language Processing, Test Optimization, Semantic Similarity, Clustering
    National Category
    Computer Sciences
    Identifiers
    URN: urn:nbn:se:oru:diva-88654. DOI: 10.1109/AITEST49225.2020.00022. ISI: 000583824000015. Scopus ID: 2-s2.0-85092313008. ISBN: 978-1-7281-6984-2.
    Conference
    2nd IEEE International Conference on Artificial Intelligence Testing (AITest 2020), Oxford, United Kingdom, August 3-6, 2020
    Funder
    Knowledge Foundation; Vinnova
    Available from: 2021-01-19. Created: 2021-01-19. Last updated: 2023-10-05. Bibliographically approved.
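    The clustering pipeline in this paper rests on string similarity: each test specification is flattened to a string, pairwise Levenshtein distances are computed, and similar cases are grouped so they can be scheduled in parallel. Below is a minimal Python sketch of that idea; the length normalization, the 0.5 clustering cut-off, and the sample test texts are illustrative assumptions, not the paper's actual implementation.

    ```python
    # Illustrative sketch (not the paper's code): cluster test-case texts by
    # normalized Levenshtein distance, then treat each cluster as a candidate
    # group for parallel execution.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    # Hypothetical test-case texts; in the paper these come from test specifications.
    test_cases = [
        "Configure carrier and verify output power on branch A",
        "Configure carrier and verify output power on branch B",
        "Reboot the unit and check the alarm list",
    ]

    n = len(test_cases)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = levenshtein(test_cases[i], test_cases[j])
            # Normalize by length so long specifications do not dominate.
            dist[i, j] = dist[j, i] = d / max(len(test_cases[i]), len(test_cases[j]))

    # Average-linkage hierarchical clustering; 0.5 is an assumed cut-off.
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=0.5, criterion="distance")
    print(labels)  # test cases sharing a label are candidates for parallel runs
    ```

    With the assumed cut-off, the two near-identical carrier tests should land in one cluster while the reboot test stays separate, mirroring how cluster-based scheduling would group executions.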
    2. Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases
    2020 (English). In: The Fifteenth International Conference on Software Engineering Advances, International Academy, Research and Industry Association (IARIA), 2020, p. 90-97. Conference paper, Published paper (Refereed)
    Abstract [en]

    Software testing is still heavily dependent on human judgment, since a large portion of testing artifacts, such as requirements and test cases, is written in natural text by experts. Identifying and classifying relevant test cases in large test suites is a challenging and time-consuming task. Moreover, to optimize the testing process, test cases should be distinguished based on their properties, such as their dependencies and similarities. Knowing these properties at an early stage of the testing process can be exploited for several test optimization purposes, such as test case selection, prioritization, scheduling, and parallel test execution. In this paper, we apply, evaluate, and compare the performance of two deep learning algorithms in detecting the similarities between manual integration test cases. The feasibility of these algorithms is then examined in the Telecom domain by analyzing the test specifications of five different products in the product development unit at Ericsson AB in Sweden. The empirical evaluation indicates that utilizing deep learning algorithms for finding similarities between manual integration test cases can lead to outstanding results.

    Place, publisher, year, edition, pages
    International Academy, Research and Industry Association (IARIA), 2020
    Series
    International Conference on Software Engineering Advances, E-ISSN 2308-4235
    Keywords
    Natural Language Processing, Deep Learning, Software Testing, Semantic Analysis, Test Optimization
    National Category
    Computer Systems
    Research subject
    Computer Science
    Identifiers
    URN: urn:nbn:se:oru:diva-88921. ISBN: 978-1-61208-827-3.
    Conference
    The Fifteenth International Conference on Software Engineering Advances (ICSEA 2020), Porto, Portugal, October 18-22, 2020
    Projects
    TESTOMAT Project - The Next Level of Test Automation
    Available from: 2021-01-25. Created: 2021-01-25. Last updated: 2023-10-05. Bibliographically approved.
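    The two deep learning algorithms compared in this paper are not named in this listing, so the sketch below only illustrates the general embed-and-compare pattern for text similarity; the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint are stand-ins chosen for illustration, not the paper's models.

    ```python
    # Generic embed-and-compare sketch: encode test-case texts into vectors,
    # then read pairwise cosine similarities off a single matrix.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in model

    # Hypothetical manual integration test cases.
    tests = [
        "Verify that the unit boots with the default software image.",
        "Check that the board starts correctly using the default SW image.",
        "Measure the output power at maximum configured bandwidth.",
    ]

    embeddings = model.encode(tests, convert_to_tensor=True)
    similarity = util.cos_sim(embeddings, embeddings)  # pairwise cosine matrix
    print(similarity)  # high off-diagonal entries flag semantically similar tests
    ```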
    3. A Dynamic Threshold Based Approach for Detecting the Test Limits
    2021 (English). In: Sixteenth International Conference on Software Engineering Advances (ICSEA 2021) / [ed] Luigi Lavazza, Hironori Washizaki, Herwig Mannaert, International Academy, Research, and Industry Association (IARIA), 2021, p. 71-80. Conference paper, Published paper (Refereed)
    Abstract [en]

    Finding a balance between meeting the testing goals and the available testing resources is always a challenging task; therefore, employing Machine Learning (ML) techniques for test optimization has received a great deal of attention. However, ML techniques frequently require large volumes of data to obtain reliable results, and data gathering is hard and expensive, so reducing unnecessary failures or retests in a testing process can minimize the testing resources needed. Final test yield is a suitable performance metric for measuring the potential risks influencing certain failure rates. Typically, production determines the yield's minimum threshold based on an empirical value given by subject matter experts. However, those thresholds cannot monitor the yield's fluctuations beyond the acceptable limits, which might cause potential failures in consecutive tests. Furthermore, defining empirical thresholds that are either too tight or too loose is one of the main causes of yield drops in the testing process. In this paper, we propose an ML-based solution that detects divergent yield points based on prediction and raises a flag to the testers, depending on the yield class, when a divergent point is above a data-driven threshold. This flexibility gives engineers a quantifiable tool to measure to what extent different changes in the production process are affecting product performance, and to act before failures occur. The feasibility of the proposed solution is studied in an empirical evaluation performed on a Telecom use case at Ericsson in Sweden and tested on two of the latest radio technologies, 4G and 5G.

    Place, publisher, year, edition, pages
    International Academy, Research, and Industry Association (IARIA), 2021
    Keywords
    Software Testing, Test Optimization, Machine Learning, Regression Analysis, Imbalanced Learning
    National Category
    Computer Sciences
    Identifiers
    URN: urn:nbn:se:oru:diva-108707. ISBN: 978-1-61208-894-5.
    Conference
    The Sixteenth International Conference on Software Engineering Advances (ICSEA 2021), Barcelona, Spain, October 3-7, 2021
    Funder
    Vinnova, D_RODS (2023-00244); Knowledge Foundation, 20190128
    Available from: 2023-10-03. Created: 2023-10-03. Last updated: 2023-10-05. Bibliographically approved.
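    The core idea in this paper is to replace fixed, expert-set yield limits with a threshold derived from the data itself. A minimal sketch of one way to realize that, assuming a simple linear yield model and a 3-sigma residual rule (both assumptions; the paper's actual model and threshold rule are not given in this listing):

    ```python
    # Sketch of a data-driven test limit: model expected yield from history,
    # then flag points whose residual exceeds a threshold learned from the data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    t = np.arange(200).reshape(-1, 1)                # production sequence index
    yield_pct = 98.0 - 0.005 * t.ravel() + rng.normal(0, 0.3, 200)  # synthetic
    yield_pct[150] -= 3.0                            # one injected divergent point

    model = LinearRegression().fit(t, yield_pct)
    residuals = yield_pct - model.predict(t)
    threshold = 3.0 * residuals.std()                # data-driven, not fixed

    divergent = np.where(np.abs(residuals) > threshold)[0]
    print(divergent)  # indices a tester would be alerted about
    ```

    Unlike a fixed minimum threshold, this flags points that diverge from the predicted trend even while the absolute yield is still within the "acceptable" range.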
    4. An Intelligent Monitoring Algorithm to Detect Dependencies between Test Cases in the Manual Integration Process
    2023 (English). In: 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), IEEE, 2023, p. 353-360. Conference paper, Published paper (Refereed)
    Abstract [en]

    Finding a balance between meeting test coverage and minimizing the testing resources is always a challenging task, both in software (SW) and hardware (HW) testing. Therefore, employing machine learning (ML) techniques for test optimization has received a great deal of attention. However, ML techniques frequently require large volumes of valuable data for training. Data gathering is hard and expensive, and manual data analysis takes most of the time when locating the source of failures once they have occurred, in so-called fault localization. By applying ML techniques to historical production test data, relevant and irrelevant features can be identified using strength-association methods, such as correlation- and mutual-information-based methods. In this paper, we use production data records of 100 units of a 5G radio product, containing more than 7000 test results. The obtained results show that insightful information, mostly linear and monotonic dependencies, can be found after clustering the test results by their strength of association; such dependencies would otherwise be challenging to identify with traditional manual data analysis methods.

    Place, publisher, year, edition, pages
    IEEE, 2023
    Series
    IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW, ISSN 2159-4848
    Keywords
    Test Optimization, Machine Learning, Fault Localization, Dependence Analysis, Mutual Information
    National Category
    Computer Sciences
    Identifiers
    URN: urn:nbn:se:oru:diva-107727. DOI: 10.1109/ICSTW58534.2023.00066. ISI: 001009223100052. Scopus ID: 2-s2.0-85163076493. ISBNs: 9798350333350, 9798350333367.
    Conference
    16th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2023), Dublin, Ireland, April 16-20, 2023
    Funder
    Knowledge Foundation; Vinnova
    Available from: 2023-08-28. Created: 2023-08-28. Last updated: 2023-10-05. Bibliographically approved.
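    This paper's dependency analysis combines Pearson and Spearman correlation with mutual information to rank the strength of association between test measurements. A short sketch of that analysis on synthetic data (the measurement names and values are made up for illustration):

    ```python
    # Strength-association sketch: linear (Pearson), monotonic (Spearman), and
    # information-theoretic (mutual information) dependencies between measurements.
    import numpy as np
    import pandas as pd
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(1)
    df = pd.DataFrame({"tx_power": rng.normal(43.0, 0.5, 100),
                       "temperature": rng.normal(55.0, 3.0, 100)})
    # One measurement that actually depends on another (linear dependency).
    df["evm"] = 2.0 + 0.3 * df["tx_power"] + rng.normal(0, 0.05, 100)

    print(df.corr(method="pearson"))    # linear associations
    print(df.corr(method="spearman"))   # monotonic associations

    # Mutual information of the other measurements against one target column.
    mi = mutual_info_regression(df.drop(columns="evm"), df["evm"], random_state=0)
    print(dict(zip(df.drop(columns="evm").columns, mi)))
    ```

    Measurements that score highly against each other across these measures would be clustered together; within such a cluster, some tests become candidates for removal from the execution sequence.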
    5. Time Series Anomaly Detection using Convolutional Neural Networks in the Manufacturing Process of RAN
    2023 (English). In: 2023 IEEE International Conference On Artificial Intelligence Testing (AITest), IEEE, 2023, p. 90-98. Conference paper, Published paper (Refereed)
    Abstract [en]

    The traditional approach of categorizing test results as “Pass” or “Fail” based on fixed thresholds can be labor-intensive and lead to test data being discarded. This paper presents a framework that enhances the semi-automated software testing process by detecting deviations in executed data and alerting when anomalous inputs fall outside data-driven thresholds. In detail, the proposed solution uses classification with convolutional neural networks and prediction modeling with linear regression, Ridge regression, Lasso regression, and XGBoost. The study also explores transfer learning in a highly correlated use case. An empirical evaluation at a leading Telecom company validates the effectiveness of the approach, showcasing its potential to improve testing efficiency and accuracy. Limitations include the need for further research in other domains and industries to generalize the findings, as well as potential biases introduced by the selected machine learning models. Overall, this study contributes to the field of semi-automated software testing and highlights the benefits of leveraging data-driven thresholds and machine learning techniques for enhanced software quality assurance.

    Place, publisher, year, edition, pages
    IEEE, 2023
    Series
    IEEE International Conference on Artificial Intelligence Testing, ISSN 2835-3552, E-ISSN 2835-3560
    Keywords
    Software Testing, Test Optimization, Machine Learning, Imbalanced Learning, Moving Block Bootstrap
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    URN: urn:nbn:se:oru:diva-108703. DOI: 10.1109/AITest58265.2023.00023. ISI: 001062490100014. Scopus ID: 2-s2.0-85172254244. ISBNs: 9798350336306, 9798350336290.
    Conference
    5th IEEE International Conference on Artificial Intelligence Testing (AITest 2023), Athens, Greece, July 17-20, 2023
    Funder
    Knowledge Foundation, 20190128; Vinnova, D-RODS (2023-00244)
    Available from: 2023-10-03. Created: 2023-10-03. Last updated: 2023-10-10. Bibliographically approved.
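    The classifier in this paper is described only as a convolutional neural network; its actual architecture is not given in this listing. As a generic illustration of a 1-D CNN over test measurement series, a PyTorch sketch:

    ```python
    # Minimal 1-D CNN for classifying measurement series as normal vs. anomalous.
    # Generic illustration only; not the architecture from the paper.
    import torch
    import torch.nn as nn

    class TestSeriesCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, n_classes),
            )

        def forward(self, x):  # x: (batch, 1, sequence_length)
            return self.net(x)

    model = TestSeriesCNN()
    batch = torch.randn(8, 1, 128)  # eight synthetic measurement series
    print(model(batch).shape)       # torch.Size([8, 2]) -> normal vs. anomalous
    ```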
  • 2.
    Landin, Cristina
    et al.
    Örebro University, School of Science and Technology. Product Development Unit Radio, Production Test Development, Ericsson AB, Kumla, Sweden.
    Hatvani, Leo
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Tahvili, Sahar
    Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden; School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Haggren, Hugo
    Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Håkansson, Anne
    School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
    Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases (2020). In: The Fifteenth International Conference on Software Engineering Advances, International Academy, Research and Industry Association (IARIA), 2020, p. 90-97. Conference paper (Refereed)
  • 3.
    Landin, Cristina
    et al.
    Örebro University, School of Science and Technology.
    Liu, Jie
    Product Development Unit, Cloud RAN Development Support, Ericsson AB, Stockholm, Sweden; Technical University of Berlin, Berlin, Germany.
    Katsarou, Katerina
    Technical University of Berlin, Berlin, Germany.
    Tahvili, Sahar
    Product Development Unit, Cloud RAN Development Support, Ericsson AB, Stockholm, Sweden; Innovation and Product Realisation, Mälardalens University, Eskilstuna, Sweden.
    Time Series Anomaly Detection using Convolutional Neural Networks in the Manufacturing Process of RAN (2023). In: 2023 IEEE International Conference On Artificial Intelligence Testing (AITest), IEEE, 2023, p. 90-98. Conference paper (Refereed)
  • 4.
    Landin, Cristina
    et al.
    Örebro University, School of Science and Technology.
    Liu, Jie
    Product Development Unit, Cloud RAN, Integration and Test, Ericsson AB, Stockholm, Sweden; Technical University of Berlin, Germany.
    Tahvili, Sahar
    Product Development Unit, Cloud RAN, Integration and Test, Ericsson AB, Stockholm, Sweden; Mälardalen University, Product Realization, School of Innovation, Design and Engineering, Eskilstuna, Sweden.
    A Dynamic Threshold Based Approach for Detecting the Test Limits (2021). In: Sixteenth International Conference on Software Engineering Advances (ICSEA 2021) / [ed] Luigi Lavazza, Hironori Washizaki, Herwig Mannaert, International Academy, Research, and Industry Association (IARIA), 2021, p. 71-80. Conference paper (Refereed)
  • 5.
    Landin, Cristina
    et al.
    Örebro University, School of Science and Technology.
    Tahvili, Sahar
    Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden; School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Haggren, Hugo
    Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Muhammad, Auwn
    Global Artificial Intelligence Accelerator (GAIA), Ericsson AB, Stockholm, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Cluster-Based Parallel Testing Using Semantic Analysis (2020). In: 2020 IEEE International Conference On Artificial Intelligence Testing (AITest), IEEE, 2020, p. 99-106. Conference paper (Refereed)
  • 6.
    Landin, Cristina
    et al.
    Örebro University, School of Science and Technology. Product Development Unit Radio, Production Test Development, Ericsson AB, Kumla, Sweden.
    Zhao, Xinrong
    Department of Mathematical Sciences, Chalmers University of Technology, Gothenburg, Sweden.
    Längkvist, Martin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Intelligent Monitoring Algorithm to Detect Dependencies between Test Cases in the Manual Integration Process (2023). In: 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), IEEE, 2023, p. 353-360. Conference paper (Refereed)