
Open Access

Analysis of deep learning under adversarial attacks in Hierarchical Federated Learning

Lookup NU author(s): Dr Duaa Alqattan, Professor Raj Ranjan, Dr Varun Ojha (ORCiD)


Licence

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND).


Abstract

Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks—such as data poisoning and model poisoning—that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches—such as cosine similarity or Euclidean distance—to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. 
Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
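To make the four metric families named in the abstract concrete, the sketch below computes one standard instance of each between two flattened model-update vectors: cosine dissimilarity (Dissimilarity), Euclidean distance (Distance), one minus the Pearson correlation (Uncorrelation), and a KL divergence over softmax-normalised updates (Divergence). This is an illustrative assumption, not the paper's actual MDS definition: the function name, the choice of each concrete metric, and the softmax normalisation are all hypothetical.

```python
import numpy as np

def model_discrepancy_metrics(u, v, eps=1e-12):
    """Illustrative (not the paper's) instances of the four MDS metric
    families, computed between two flattened model updates u and v."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)

    # Dissimilarity: 1 - cosine similarity
    cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
    dissimilarity = 1.0 - cos_sim

    # Distance: Euclidean (L2) distance
    distance = float(np.linalg.norm(u - v))

    # Uncorrelation: 1 - Pearson correlation coefficient
    uncorrelation = 1.0 - float(np.corrcoef(u, v)[0, 1])

    # Divergence: KL divergence between softmax-normalised updates
    # (softmax is an assumed normalisation to obtain valid distributions)
    p = np.exp(u - u.max()); p /= p.sum()
    q = np.exp(v - v.max()); q /= q.sum()
    divergence = float(np.sum(p * np.log((p + eps) / (q + eps))))

    return {"dissimilarity": dissimilarity, "distance": distance,
            "uncorrelation": uncorrelation, "divergence": divergence}
```

A crafted malicious update can keep one of these metrics near its benign value (e.g. preserve direction, so cosine dissimilarity stays small) while another (e.g. Euclidean distance) remains anomalous, which is the motivation the abstract gives for combining them rather than thresholding any single one.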


Publication metadata

Author(s): Alqattan DS, Snasel V, Ranjan R, Ojha V

Publication type: Article

Publication status: Published

Journal: High-Confidence Computing

Year: 2025

Volume: 5

Issue: 4

Print publication date: 01/12/2025

Online publication date: 08/04/2025

Acceptance date: 24/03/2025

Date deposited: 09/10/2025

ISSN (electronic): 2667-2952

Publisher: Elsevier BV

URL: https://doi.org/10.1016/j.hcc.2025.100321

DOI: 10.1016/j.hcc.2025.100321




Funding

Funder name (funder reference):
EPSRC-funded project: National Edge AI Hub for Real Data: Edge Intelligence for Cyber-disturbances and Data Quality (EP/Y028813/1)
Technical and Vocational Training Corporation (TVTC) through the Saudi Arabian Culture Bureau (SACB) in the United Kingdom
