Adaptive Federated Learning on Non-IID Data with Resource Constraint

July 2021
IEEE Transactions on Computers

Federated learning (FL) has been widely recognized as a promising approach that enables individual end-devices to cooperatively train a global model without exposing their own data. One of the key challenges in FL is non-independent and identically distributed (Non-IID) data across clients, which reduces the efficiency of the stochastic gradient descent (SGD)-based training process. Moreover, clients with different data distributions may bias the global model update, degrading model accuracy. To tackle the Non-IID problem in FL, we optimize the local training process and the global aggregation simultaneously. For local training, we analyze the effect of hyperparameters (e.g., the batch size and the number of local updates) on the training performance of FL. Guided by a toy example and theoretical analysis, we mitigate the negative impact of Non-IID data by selecting a subset of participants and adaptively adjusting their batch sizes. We propose a deep reinforcement learning based approach to adaptively control the training of local models and the phase of global aggregation. Extensive experiments on different datasets show that our method improves model accuracy by up to 30% compared with state-of-the-art approaches.
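To make the round structure described above concrete, the following is a minimal sketch of one adaptive FL round: a subset of participants is selected, each is assigned an adapted batch size, local SGD runs on each client, and the results are aggregated FedAvg-style. All names (`select_clients`, `adapt_batch_size`, the data-volume heuristic) and the scalar "model" are illustrative assumptions, not the paper's actual algorithm; in particular, the paper's selection and batch-size control is learned by a DRL agent rather than the fixed heuristic shown here.

```python
import random

def select_clients(clients, k, rng):
    """Pick a subset of k participants for this round (hypothetical policy)."""
    return rng.sample(clients, k)

def adapt_batch_size(base_bs, data_size, max_data):
    """One simple stand-in heuristic: scale batch size with local data volume."""
    return max(1, round(base_bs * data_size / max_data))

def local_update(weight, grad_fn, batch_size, steps, lr=0.1):
    """Run `steps` local SGD steps on a scalar model (stand-in for a real model).
    The batch size is passed through to the gradient oracle; the scalar toy
    below ignores it, but a real client would use it to draw minibatches."""
    w = weight
    for _ in range(steps):
        w -= lr * grad_fn(w, batch_size)
    return w

def federated_round(global_w, clients, k, base_bs, rng):
    chosen = select_clients(clients, k, rng)
    max_data = max(c["n"] for c in chosen)
    updates, weights = [], []
    for c in chosen:
        bs = adapt_batch_size(base_bs, c["n"], max_data)
        updates.append(local_update(global_w, c["grad"], bs, steps=5))
        weights.append(c["n"])
    # FedAvg-style aggregation weighted by local data size.
    total = sum(weights)
    return sum(n * u for n, u in zip(weights, updates)) / total

# Toy Non-IID setup: each client's loss minimum sits at a different point,
# mimicking heterogeneous local data distributions.
clients = [
    {"n": 100, "grad": lambda w, bs: (w - 1.0)},
    {"n": 300, "grad": lambda w, bs: (w - 2.0)},
    {"n": 600, "grad": lambda w, bs: (w - 3.0)},
]
rng = random.Random(0)
w = 0.0
for _ in range(50):
    w = federated_round(w, clients, k=3, base_bs=32, rng=rng)
print(round(w, 2))  # prints 2.5, the data-size-weighted average of the optima
```

With all three clients selected, the loop converges to the weighted average of the local optima (2.5 here), illustrating how aggregation weights and per-client batch sizes shape the global update under Non-IID data.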