When Federated Learning Meets Other Learning Algorithms: From Model Fusion to Federated X Learning
Federated learning is a new learning paradigm that decouples data collection from model training via multi-party computation and model aggregation. As a flexible learning setting, federated learning has the potential to integrate with other learning frameworks. We conduct a focused survey of federated learning in conjunction with other learning algorithms. Specifically, we explore various learning algorithms that improve the vanilla federated averaging algorithm and review model fusion methods such as adaptive aggregation, regularization, clustered methods, and Bayesian methods. Following the emerging trends, we also discuss federated learning at the intersection with other learning paradigms, referred to as federated x learning, where x includes multitask learning, meta-learning, transfer learning, unsupervised learning, and reinforcement learning. This survey reviews the related state of the art, challenges, and future directions.
Federated learning with other learning algorithms: categorization, conjunctions, and representative methods.
The taxonomy is organized as follows.
- Federated Model Fusion. We categorize the major improvements to the pioneering FedAvg model aggregation algorithm into four subclasses (i.e., adaptive/attentive methods, regularization methods, clustered methods, and Bayesian methods), together with a special focus on fairness.
- Federated Learning Paradigms. We investigate how various learning paradigms fit into the federated learning setting. These include key supervised scenarios such as transfer learning, multitask learning, and meta-learning, as well as paradigms beyond supervised learning such as semi-supervised learning, unsupervised learning, and reinforcement learning.
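The FedAvg aggregation that the model fusion methods above build on is, at its core, a data-size-weighted average of client model parameters. The sketch below is a minimal, hypothetical illustration (the function name `fed_avg` and the list-of-layer-tensors representation are our assumptions, not the survey's or the original algorithm's reference code):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client models by a data-size-weighted average (FedAvg).

    client_weights: one list of per-layer np.ndarray tensors per client.
    client_sizes: number of local training samples on each client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Weighted sum of each client's layer tensor, with weight n_k / n,
        # so clients holding more data contribute more to the global model.
        avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(avg)
    return aggregated

# Example: two clients, one layer each; the second client has 3x the data.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
global_model = fed_avg(clients, sizes)  # [array([2.5, 3.5])]
```

The adaptive, regularized, clustered, and Bayesian fusion methods surveyed here can all be read as refinements of this weighted-averaging step, e.g., replacing the fixed weights `n / total` with learned or attention-based coefficients.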
Please check out the paper for details.
Emerging trends in federated learning: From model fusion to federated x learning.
S. Ji, T. Saravirta, S. Pan, G. Long, and A. Walid.
arXiv preprint arXiv:2102.12920, 2021.