Andrew Geng, Pin-Yu Chen
IEEE SaTML 2024
Ensuring group fairness in federated learning (FL) presents unique challenges due to data heterogeneity and communication constraints. We propose Kernel Fair Federated Learning (KFFL), a novel algorithmic framework that incorporates group fairness into FL models using the Kernel Hilbert-Schmidt Independence Criterion (KHSIC) as a fairness regularizer. To address scalability, KFFL approximates the KHSIC with random feature maps, significantly reducing computational and communication overhead while achieving group fairness. To address the resulting non-convex composite optimization problem, we propose FedProxGrad, a federated proximal gradient algorithm that guarantees convergence. Through experiments on standard benchmark datasets across both IID and Non-IID settings for regression and classification tasks, KFFL demonstrates its ability to balance accuracy and fairness effectively, outperforming existing methods by comprehensively exploring the accuracy–fairness trade-offs. Furthermore, we introduce KFFL-TD, a time-delayed variant that further reduces communication rounds, enhancing efficiency in decentralized environments. Code is available at github.com/Huzaifa-Arif/KFFL.
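The abstract's core mechanism, penalizing statistical dependence between model outputs and a sensitive attribute via an HSIC-style criterion approximated with random feature maps, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`random_fourier_features`, `hsic_rff`), the choice of RBF-kernel random Fourier features, and all hyperparameters are assumptions for exposition.

```python
import numpy as np

def random_fourier_features(x, n_features=100, gamma=1.0, seed=0):
    """Random Fourier feature map z(.) approximating an RBF kernel,
    so that k(a, b) ~ z(a) @ z(b) (Rahimi-Recht construction)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    w = rng.normal(scale=np.sqrt(2 * gamma), size=(x.shape[1], n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

def hsic_rff(preds, sensitive, n_features=100):
    """Approximate HSIC between predictions and a sensitive attribute
    using random features: ||Zx^T Zy||_F^2 / (n-1)^2 after centering.
    Centering the feature matrices column-wise replaces the usual
    centering matrix H, avoiding the O(n^2) kernel matrices."""
    n = len(preds)
    zx = random_fourier_features(preds, n_features, seed=0)
    zy = random_fourier_features(sensitive, n_features, seed=1)
    zx = zx - zx.mean(axis=0)
    zy = zy - zy.mean(axis=0)
    return np.linalg.norm(zx.T @ zy, ord="fro") ** 2 / (n - 1) ** 2
```

In a federated setting, a penalty of this form would be added to each client's local loss, and because the statistic is built from low-dimensional feature maps rather than full kernel matrices, only small summaries need to travel between clients and server, which is the scalability point the abstract makes.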