Title: On the advantages of P2P ML on mobile devices
Authors: Basmadjian, Robert; Boubouh, Karim; Boussetta, Amine; Guerraoui, Rachid; Maurer, Alexandre
Dates: 2022-09-14; 2022-06-28
DOI: 10.1145/3538637.3538863
Handle: https://infoscience.epfl.ch/handle/20.500.14299/190802
Type: Conference paper (conference proceedings)

Abstract: Many fields, including the energy sector, nowadays make use of machine learning (ML)-enhanced applications for cost optimization, scheduling, or forecasting. However, these very ML algorithms consume a significant amount of energy, sometimes defeating the purpose of their deployment. To this day, the energy-efficient execution of these algorithms has not been addressed adequately. In this paper, we demonstrate the energy advantage of executing ML algorithms on mobile devices (ARM) over a standard server machine (RISC). To do so, we first propose a novel methodology to quantify the amount of energy consumed by an ML algorithm. Then, we compare the energy consumption of existing algorithms running on mobile devices and server machines. To motivate running ML algorithms on mobile devices, we also propose a new peer-to-peer personalized ML algorithm (P3) that exhibits better convergence properties than related work and, under mild assumptions, provably converges to a ball centered at a critical point of a non-convex cost function. Most importantly, we show that running the P3 algorithm on mobile devices is extremely energy-efficient, consuming 2700x, 200x, and 20x less energy than centralized learning algorithms for 10, 100, and 300 peers, respectively. Finally, unlike centralized learning algorithms, the proposed P2P algorithm generates personalized models and does not suffer from single-point-of-failure or data-privacy issues. Thus, we provide evidence of the advantages of our proposed P3 algorithm over state-of-the-art centralized ML algorithms.
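
To make the peer-to-peer setting described in the abstract concrete, the following is a minimal illustrative sketch of a generic decentralized scheme in which each peer takes a local gradient step on its own data and then averages its model with its neighbors over a ring topology. This is not the paper's P3 algorithm; the topology, the synthetic linear-regression task, and all parameter choices are assumptions made purely for illustration.

# Illustrative sketch only: generic local-SGD-plus-gossip peer-to-peer learning.
# NOT the P3 algorithm from the paper; topology, data, and hyperparameters are
# assumptions chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_peers, dim, rounds, lr = 10, 5, 200, 0.05

# Each peer holds its own synthetic linear-regression data (personalized setting).
X = [rng.normal(size=(20, dim)) for _ in range(n_peers)]
y = [x @ rng.normal(size=dim) + 0.1 * rng.normal(size=20) for x in X]

# Ring topology: each peer communicates with its two neighbors only.
neighbors = [[(i - 1) % n_peers, (i + 1) % n_peers] for i in range(n_peers)]

# One local model per peer.
models = [rng.normal(size=dim) for _ in range(n_peers)]

def local_gradient(w, x, t):
    # Gradient of the mean-squared error on this peer's local data.
    return 2.0 * x.T @ (x @ w - t) / len(t)

for _ in range(rounds):
    # 1) Local step: every peer descends its own loss.
    models = [w - lr * local_gradient(w, X[i], y[i]) for i, w in enumerate(models)]
    # 2) Gossip step: every peer averages its model with its neighbors' models.
    models = [np.mean([models[i]] + [models[j] for j in neighbors[i]], axis=0)
              for i in range(n_peers)]

losses = [np.mean((X[i] @ models[i] - y[i]) ** 2) for i in range(n_peers)]
print("mean local loss:", np.mean(losses))

Because only small model vectors are exchanged between neighboring devices, no raw data leaves a peer and no central server is required, which reflects the privacy and single-point-of-failure arguments made in the abstract.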