ARGO: Overcoming hardware dependence in distributed learning
Mobile devices offer a valuable resource for distributed learning alongside traditional computers, promoting energy efficiency and privacy through local computation. However, the hardware limitations of these devices make it impossible to run classical SGD on industry-grade machine learning models (those with a very large number of parameters). Moreover, such devices are only intermittently available and are susceptible to failures. To address these challenges, we introduce ARGO, an algorithm that combines adaptive workload schemes with Byzantine resilience mechanisms and dynamic device participation. Our theoretical analysis establishes linear convergence for strongly convex losses and sub-linear convergence for non-convex losses, without assuming any specific dataset partitioning, thereby allowing for data heterogeneity. The analysis also highlights the interplay between convergence properties, hardware capabilities, Byzantine impact, and standard factors such as mini-batch size and learning rate. Through extensive evaluations, we show that ARGO outperforms standard SGD in convergence speed and accuracy and, most importantly, thrives where classical SGD is infeasible due to hardware limitations.
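For intuition, the following minimal Python sketch mimics the combination described above: per-device mini-batch sizes scaled to hardware capability (adaptive workload), a coordinate-wise median aggregator (one common Byzantine-resilience mechanism), and probabilistic device availability (dynamic participation). The toy least-squares loss, the device attributes, and names such as `argo_style_round` are illustrative assumptions, not ARGO's actual specification.

```python
# Illustrative sketch only: one round of a distributed-SGD loop combining the three
# ingredients named in the abstract (adaptive workload, Byzantine-robust aggregation,
# dynamic participation). All names and the toy loss are assumptions, not ARGO itself.
import numpy as np

def local_gradient(theta, data, batch_size, rng):
    """Mini-batch gradient of a toy least-squares loss ||x @ theta - y||^2 / n."""
    x, y = data
    idx = rng.choice(len(y), size=min(batch_size, len(y)), replace=False)
    xb, yb = x[idx], y[idx]
    return 2.0 * xb.T @ (xb @ theta - yb) / len(yb)

def argo_style_round(theta, devices, lr, rng):
    """One hypothetical round: sample available devices, scale each device's
    mini-batch to its hardware capability, and aggregate with a coordinate-wise
    median so a minority of Byzantine gradients cannot derail the update."""
    grads = []
    for dev in devices:
        if rng.random() > dev["availability"]:       # dynamic participation: device may be offline
            continue
        batch = max(1, int(dev["capability"] * 64))   # adaptive workload: weaker device, smaller batch
        g = local_gradient(theta, dev["data"], batch, rng)
        if dev["byzantine"]:                          # a faulty/adversarial device sends garbage
            g = rng.normal(scale=10.0, size=g.shape)
        grads.append(g)
    if not grads:
        return theta                                  # no device reported this round
    robust_grad = np.median(np.stack(grads), axis=0)  # Byzantine-resilient aggregation
    return theta - lr * robust_grad

# Tiny synthetic setup: 8 devices jointly fitting theta_true.
rng = np.random.default_rng(0)
d, theta_true = 5, np.arange(5, dtype=float)
devices = []
for i in range(8):
    x = rng.normal(size=(200, d))
    y = x @ theta_true + 0.1 * rng.normal(size=200)
    devices.append({"data": (x, y),
                    "capability": rng.uniform(0.1, 1.0),    # proxy for hardware strength
                    "availability": rng.uniform(0.5, 1.0),  # probability of joining a round
                    "byzantine": i == 0})                   # one adversarial device

theta = np.zeros(d)
for _ in range(300):
    theta = argo_style_round(theta, devices, lr=0.05, rng=rng)
print("estimation error:", np.linalg.norm(theta - theta_true))
```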