A Multi-Head Federated Continual Learning Approach for Improved Flexibility and Robustness in Edge Environments

Chunlu Chen, Kevin I-Kai Wang, Peng Li, Kouichi Sakurai


In the rapidly evolving field of machine learning, traditional training approaches face limitations such as high computational cost and catastrophic forgetting, particularly when models are retrained on new datasets. These issues are especially pronounced in environments that must adapt swiftly to changing data landscapes. Continual learning emerges as a pivotal solution to these challenges, enabling models to assimilate new information while preserving the knowledge acquired in previous learning phases. Despite its benefits, the inherent need of continual learning to retain prior knowledge introduces a potential risk of information leakage.

To address these challenges, we propose a Federated Continual Learning (FCL) framework built on a multi-head neural network model. This approach combines the privacy-preserving capabilities of Federated Learning (FL) with the adaptability of continual learning, ensuring both data privacy and continuous learning in edge computing environments. Moreover, the framework complements adversarial training: the constant influx of diverse and complex training data allows the model to improve its understanding and adaptability, thereby strengthening its defenses against adversarial threats.
Our system features an architecture with a dedicated fully-connected layer for each task, ensuring that the unique features pertinent to each task are accurately captured and preserved over the model's lifetime. Input data is processed through these task-specific layers, and the final label is determined by the highest prediction value across all heads. This method exploits the model's full range of knowledge, significantly boosting prediction accuracy. We conducted thorough evaluations of our FCL framework on two benchmark datasets, MNIST and CIFAR-10, and the results clearly validate the effectiveness of our approach.
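The multi-head prediction scheme described above can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the class and method names, the shared feature extractor, and the random initialization are all assumptions made for the sketch; the key idea shown is that each task owns a dedicated fully-connected head, and inference takes the highest score across every head.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiHeadModel:
    """Illustrative multi-head network: a shared feature extractor plus one
    dedicated fully-connected head per task (names are hypothetical)."""

    def __init__(self, in_dim, feat_dim):
        # Shared feature layer, reused by every task head.
        self.W_shared = rng.standard_normal((in_dim, feat_dim)) * 0.1
        self.heads = []  # one (W, b) pair per task, appended as tasks arrive

    def add_task_head(self, n_classes):
        """Attach a new fully-connected head when a new task begins."""
        feat_dim = self.W_shared.shape[1]
        W = rng.standard_normal((feat_dim, n_classes)) * 0.1
        b = np.zeros(n_classes)
        self.heads.append((W, b))

    def predict(self, x):
        """Run all task-specific heads and return the label with the
        highest prediction value across every head."""
        feats = np.maximum(x @ self.W_shared, 0.0)  # shared ReLU features
        scores = np.concatenate([feats @ W + b for W, b in self.heads])
        return int(np.argmax(scores))

# Example: two sequential tasks, two classes each (labels 0-1 and 2-3,
# assuming each head covers a disjoint class range).
model = MultiHeadModel(in_dim=4, feat_dim=3)
model.add_task_head(n_classes=2)  # head for task 1
model.add_task_head(n_classes=2)  # head for task 2
label = model.predict(rng.standard_normal(4))
```

A design point worth noting: because each head's weights are frozen in concept once its task ends, adding a head for a new task leaves earlier heads untouched, which is what limits catastrophic forgetting.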


Federated Learning; Continual Learning; Adversarial Learning; Catastrophic Forgetting; Security Systems



