Do Pre-trained Models Benefit Equally in Continual Learning?

Paper Project GitHub

Teaser

Given the performance of two CL algorithms trained from scratch:
If we initialize them from an ImageNet pre-trained RN18, is it (1) or (2)?

Most people would probably go for (2)...
It’s the other way around!

Abstract

Existing work on continual learning (CL) is primarily devoted to developing algorithms for models trained from scratch. Despite their encouraging performance on contrived benchmarks, these algorithms show dramatic performance drops in real-world scenarios. Therefore, this paper advocates the systematic introduction of pre-training to CL, which is a general recipe for transferring knowledge to downstream tasks but has been largely missing from the CL community. Our investigation reveals the multifaceted complexity of exploiting pre-trained models for CL, along three different axes: pre-trained models, CL algorithms, and CL scenarios. Perhaps most intriguingly, improvements in CL algorithms from pre-training are very inconsistent: an underperforming algorithm could become competitive and even state-of-the-art when all algorithms start from a pre-trained model. This indicates that the current paradigm, where all CL methods are compared in from-scratch training, is not well reflective of the true CL objective and desired progress. In addition, we make several other important observations, including that CL algorithms that exert less regularization benefit more from a pre-trained model; and that a stronger pre-trained model such as CLIP does not guarantee a better improvement. Based on these findings, we introduce a simple yet effective baseline that employs minimum regularization and leverages the more beneficial pre-trained model, coupled with a two-stage training pipeline. We recommend including this strong baseline in the future development of CL algorithms, due to its demonstrated state-of-the-art performance.


Benefits from pre-trained models vary dramatically depending on the CL method.
(a) CL algorithms trained from scratch fail on Split CUB200, a more complex dataset than Split CIFAR100, which necessitates the use of pre-trained models (denoted as ‘+ RN18’) that dramatically increase the accuracy of a wide spectrum of algorithms. (b) Different CL algorithms receive vastly different benefits from pre-trained models, and the relative ranking of algorithms changes. These findings suggest that it is critical for the community to develop CL algorithms with a pre-trained model and to understand their behaviors.
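For concreteness, here is a minimal sketch of what the ‘+ RN18’ setting changes, assuming a standard torchvision setup rather than the paper's exact code: the same ResNet-18 architecture is either randomly initialized or loaded with ImageNet weights before a CL algorithm trains it, and `num_classes` is whatever the benchmark requires (e.g., 200 for Split CUB200).

```python
import torch.nn as nn
import torchvision.models as models

def build_backbone(pretrained: bool, num_classes: int) -> nn.Module:
    """ResNet-18 backbone; pretrained=True corresponds to the '+ RN18' setting."""
    if pretrained:
        # ImageNet pre-trained weights (torchvision >= 0.13 API).
        net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    else:
        net = models.resnet18(weights=None)  # from-scratch initialization
    # Replace the 1000-way ImageNet head with a classifier for the CL benchmark,
    # e.g. 200 classes for Split CUB200 or 100 for Split CIFAR100.
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

scratch_model = build_backbone(pretrained=False, num_classes=200)
pretrained_model = build_backbone(pretrained=True, num_classes=200)
```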

Investigation of Pre-trained Models from Three Axes

Different models, different CL methods, and different CL scenarios for investigation



CL algorithms

  1. Inconsistency of method superiority between from-scratch training & pre-training

    Rankings between CL methods change when a pre-trained model is deployed
  2. Replay-based methods benefit more from pre-trained models

    Replay-based methods, which exert less regularization, receive larger gains from a pre-trained model (a minimal replay buffer is sketched after this list)
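To make the "replay-based" family concrete, below is a minimal, generic experience-replay buffer with reservoir sampling. This is a common textbook implementation and an assumption on our part, not the exact memory used in the paper.

```python
import random

class ReplayBuffer:
    """Fixed-size episodic memory updated with reservoir sampling, as in typical ER."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []       # stored (x, y) pairs
        self.num_seen = 0    # total number of stream samples observed so far

    def add(self, x, y):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Each incoming sample replaces a stored one with probability capacity / num_seen.
            idx = random.randint(0, self.num_seen - 1)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size: int):
        """Return up to batch_size stored pairs, drawn uniformly at random."""
        return random.sample(self.data, min(batch_size, len(self.data)))
```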



Pre-trained models

  1. Self-supervised fine-tuning decreases forgetting
  2. CLIP suffers less forgetting than the ImageNet pre-trained ResNet


  3. Nevertheless, ImageNet pre-trained ResNet50 outperforms its CLIP counterpart (see the backbone-loading sketch below)
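The sketch below shows one plausible way to plug in the two backbones compared on this axis: an ImageNet pre-trained ResNet50 from torchvision and the visual encoder of CLIP RN50 loaded with OpenAI's `clip` package. It is an illustrative setup under those assumptions, not the paper's exact feature-extraction pipeline.

```python
import torch
import torchvision.models as models
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet pre-trained ResNet50, with the 1000-way head removed to expose features.
imagenet_rn50 = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
imagenet_rn50.fc = torch.nn.Identity()
imagenet_rn50 = imagenet_rn50.to(device).eval()

# CLIP RN50: only the image encoder is used; clip_preprocess holds its input transforms.
clip_model, clip_preprocess = clip.load("RN50", device=device)

@torch.no_grad()
def extract_features(images: torch.Tensor, backbone: str) -> torch.Tensor:
    """Image features from the chosen pre-trained backbone (inputs already preprocessed)."""
    if backbone == "imagenet":
        return imagenet_rn50(images)        # 2048-d ResNet50 features
    return clip_model.encode_image(images)  # CLIP image embeddings
```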




CL scenarios

  1. Performance in online CIL is better than in offline CIL (the protocol difference is sketched below)

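To clarify the scenario distinction referenced above, the sketch below contrasts the two protocols under generic assumptions (`update_fn`, `task_loaders`, and `stream` are hypothetical placeholders): offline CIL may revisit each task's data for multiple epochs, while online CIL sees every sample exactly once.

```python
def train_offline_cil(model, task_loaders, epochs_per_task, update_fn):
    """Offline CIL: each task's data is stored and revisited for several epochs."""
    for loader in task_loaders:
        for _ in range(epochs_per_task):
            for x, y in loader:
                update_fn(model, x, y)

def train_online_cil(model, stream, update_fn):
    """Online CIL: a single pass over the stream; every sample is seen only once."""
    for x, y in stream:
        update_fn(model, x, y)
```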

Strong ER Baseline with Two-stage Training



  1. As a second stage after the stream ends, fine-tune the model offline on the samples stored in the ER buffer (see the sketch below).

  2. This simple yet strong baseline outperforms the best-performing existing methods.
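A minimal sketch of the two-stage pipeline under generic assumptions: stage one is standard ER on the data stream starting from a pre-trained backbone, and stage two fine-tunes the model offline on the buffer contents. It reuses the `ReplayBuffer` sketched earlier; `stream_loader`, the optimizer settings, and the batch sizes are hypothetical choices, not the paper's exact configuration.

```python
import random
import torch
import torch.nn.functional as F

def train_two_stage_er(model, stream_loader, buffer, device="cuda",
                       lr=0.01, stage2_epochs=10, stage2_batch_size=32):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.to(device).train()

    # Stage 1: single pass over the stream with experience replay.
    for x, y in stream_loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x), y)
        replay = buffer.sample(batch_size=x.size(0))
        if replay:
            rx = torch.stack([b[0] for b in replay]).to(device)
            ry = torch.tensor([b[1] for b in replay]).to(device)
            loss = loss + F.cross_entropy(model(rx), ry)
        opt.zero_grad()
        loss.backward()
        opt.step()
        for xi, yi in zip(x.cpu(), y.cpu()):
            buffer.add(xi, int(yi))

    # Stage 2: offline fine-tuning on the samples retained in the buffer.
    for _ in range(stage2_epochs):
        data = list(buffer.data)
        random.shuffle(data)
        for i in range(0, len(data), stage2_batch_size):
            chunk = data[i:i + stage2_batch_size]
            bx = torch.stack([c[0] for c in chunk]).to(device)
            by = torch.tensor([c[1] for c in chunk]).to(device)
            loss = F.cross_entropy(model(bx), by)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```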


Citation

@InProceedings{Lee_2023_WACV,
   author = {Lee, Kuan-Ying and Zhong, Yuanyi and Wang, Yu-Xiong},
   title = {Do Pre-Trained Models Benefit Equally in Continual Learning?},
   booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
   month = {January},
   year = {2023},
   pages = {6485-6493}
}

Acknowledgement -- website template adapted from Jon Barron