Adaptive Participant Selection in Heterogeneous Federated Learning
Federated learning (FL) is a distributed machine learning technique developed and pursued in part to address data privacy and security concerns. Participant selection is critical in determining the latency of the training process in a heterogeneous FL architecture, where users with different hardware setups and wireless channel conditions communicate with their base station (BS) to participate in FL training. Many solutions have been designed to jointly consider the computational and uploading latency of different users and select the most suitable participants, thereby avoiding the straggler problem. However, none of these solutions consider the waiting time of a participant, i.e., the latency a participant incurs while waiting for the wireless channel to become available. The waiting time can significantly affect the latency of the training process, especially when a large number of participants share the wireless channel in a time-division duplexing (TDD) manner to upload their local FL models. In this paper, we select suitable participants by considering not only their computational and uploading latency but also their waiting time, which is estimated based on an M/G/1 queueing model. We formulate an optimization problem to maximize the number of selected participants, while ensuring that the selected participants can upload their local models before the deadline of a global iteration. The Latency awarE pARticipant selectioN (LEARN) algorithm is proposed to efficiently solve this problem, and the performance of LEARN is validated via extensive simulations.
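The abstract estimates waiting time with an M/G/1 queueing model. The mean waiting time of such a queue is given by the Pollaczek-Khinchine formula, W = λ·E[S²] / (2(1 − ρ)) with utilization ρ = λ·E[S]. The sketch below computes this quantity; the specific arrival rate and upload-time moments in the example are illustrative assumptions, not values from the paper.

```python
def mg1_mean_wait(arrival_rate: float, mean_service: float,
                  second_moment_service: float) -> float:
    """Mean time a job waits in queue before service in an M/G/1 queue,
    via the Pollaczek-Khinchine formula:
        W = lambda * E[S^2] / (2 * (1 - rho)),  rho = lambda * E[S].
    """
    rho = arrival_rate * mean_service  # server (channel) utilization
    if rho >= 1.0:
        raise ValueError("Queue is unstable: utilization rho >= 1")
    return arrival_rate * second_moment_service / (2.0 * (1.0 - rho))


# Illustrative example (assumed numbers): model uploads arrive at
# 0.5 per second, and an upload takes 1 s on average with E[S^2] = 2 s^2
# (e.g. exponentially distributed upload times, reducing to M/M/1).
wait = mg1_mean_wait(arrival_rate=0.5, mean_service=1.0,
                     second_moment_service=2.0)
# For exponential service this matches the M/M/1 queueing delay
# W_q = rho / (mu - lambda) = 0.5 / (1 - 0.5) = 1.0 s.
```

In a participant-selection setting, such an estimate would be added to a user's local computation and transmission latency to check whether its total latency fits within the global-iteration deadline.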