Towards optimal learning: Investigating the impact of different model updating strategies in federated learning
2024 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 249, no. Part A, article id 123553. Article in journal (Refereed). Published
Abstract [en]
With rising data security concerns, privacy-preserving machine learning (ML) methods have become a key research topic. Federated learning (FL) is one such approach that has gained considerable attention recently, as it offers greater data security in ML tasks. Substantial research has already been done on different aggregation methods, personalized FL algorithms, and related techniques. However, insufficient work has been done to identify the effects that different model update strategies (concurrent FL, incremental FL, etc.) have on federated model performance. This paper presents the results of extensive FL simulations run on multiple datasets under different conditions in order to determine the efficiency of four FL model update strategies: concurrent, semi-concurrent, incremental, and cyclic-incremental. We have found that incremental updating methods offer more reliable FL models in cases where data is distributed both evenly and unevenly between edge nodes, especially when the number of data samples across all edge nodes is small.
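To make the contrast between the update strategies named in the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's code) on a toy one-dimensional task: concurrent updating has every client train from the same global model and the server average the results, while incremental updating passes the model from client to client sequentially. Function names, the learning rate, and the toy data are all illustrative assumptions.

```python
# Toy illustration of two FL model update strategies (not the paper's code).
# The "model" is a single float; each client nudges it toward its local data mean.

def local_update(model, data, lr=0.5, steps=5):
    """One client's local training: gradient steps on squared error to its data."""
    for _ in range(steps):
        grad = sum(model - x for x in data) / len(data)
        model -= lr * grad
    return model

def concurrent_round(model, client_datasets):
    """Concurrent FL: all clients start from the same global model; server averages."""
    updates = [local_update(model, d) for d in client_datasets]
    return sum(updates) / len(updates)

def incremental_round(model, client_datasets):
    """Incremental FL: the model visits clients one after another."""
    for d in client_datasets:
        model = local_update(model, d)
    return model

# Uneven data distribution across three edge nodes (illustrative values).
clients = [[1.0, 2.0], [3.0], [10.0, 11.0, 12.0]]
m_conc = concurrent_round(0.0, clients)   # averaged result of parallel training
m_incr = incremental_round(0.0, clients)  # result biased toward later clients
```

Even on this toy example the two strategies yield different models from identical data, which is the kind of effect the paper's simulations quantify at scale.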
Place, publisher, year, edition, pages
Elsevier, 2024. Vol. 249, no. Part A, article id 123553
Keywords [en]
Federated learning, Deep learning, Edge computing, FL model update strategies, Data distribution between edge nodes
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-113134
DOI: 10.1016/j.eswa.2024.123553
ISI: 001195520000001
Scopus ID: 2-s2.0-85186267554
OAI: oai:DiVA.org:oru-113134
DiVA, id: diva2:1851512
Funder
EU, Horizon 2020, 875351
Note
Funding agency: Ministry of Science, Technological Development and Innovation of the Republic of Serbia (grants 451-03-66/2024-03/200125 and 451-03-65/2024-03/200125)
2024-04-15. Bibliographically approved