A. Boiarov, K. Khabarlak, and I. Yastrebov, International Conference on Neural Information Processing, 2023
Meta-learning methods aim to build learning algorithms capable of quickly adapting to new tasks in a low-data regime. One of the most difficult benchmarks for such algorithms is the one-shot learning problem. In this setting, many algorithms face uncertainties associated with the limited number of training samples, which may result in overfitting. This problem can be mitigated by providing additional information to the model, and one of the most efficient ways to do so is multi-task learning.
In this paper we investigate a modification of the standard meta-learning pipeline. The proposed method simultaneously utilizes information from several meta-training tasks in a common loss function, where the contribution of each task is controlled by a per-task weight. Proper optimization of these weights can strongly influence training and the final quality of the model. We propose and investigate methods from the family of Simultaneous Perturbation Stochastic Approximation (SPSA) for optimizing the meta-training task weights, and we demonstrate the superiority of stochastic approximation over a gradient-based method.
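To make the weight-update step concrete, below is a minimal sketch of one SPSA iteration on the per-task weights of a weighted multi-task loss. The function and parameter names (`spsa_update_weights`, `episode_losses`, the gains `a` and `c`) and the final normalization onto the simplex are illustrative assumptions rather than the paper's implementation; only the two-sided simultaneous perturbation and the resulting gradient estimate follow the standard SPSA scheme.

```python
import numpy as np

def spsa_update_weights(weights, episode_losses, a=0.1, c=0.1, rng=None):
    """One SPSA step on per-task weights of a multi-task meta-loss.

    `episode_losses(w)` is assumed to return the combined meta-training
    loss L(w) = sum_k w_k * l_k for weight vector w, where the task
    losses l_k come from the underlying meta-learner (e.g., MAML or
    Prototypical Networks). Gain values `a`, `c` are illustrative.
    """
    rng = rng or np.random.default_rng()
    # Rademacher (+/-1) simultaneous perturbation of all coordinates at once.
    delta = rng.choice([-1.0, 1.0], size=weights.shape)
    # Only two loss evaluations, regardless of the number of tasks.
    loss_plus = episode_losses(weights + c * delta)
    loss_minus = episode_losses(weights - c * delta)
    # SPSA gradient estimate; 1/delta_i == delta_i for +/-1 perturbations.
    grad_est = (loss_plus - loss_minus) / (2.0 * c) * delta
    new_weights = weights - a * grad_est
    # Keep weights positive and normalized (an illustrative choice,
    # not necessarily the paper's).
    new_weights = np.clip(new_weights, 1e-6, None)
    return new_weights / new_weights.sum()
```

A key property of SPSA, visible above, is that the gradient estimate costs two loss evaluations in total rather than two per task, which is what makes it attractive for weighting many meta-training tasks at once.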
The proposed Multi-Task Modification can be applied to almost any meta-learning method. We study its application to the Model-Agnostic Meta-Learning (MAML) and Prototypical Networks algorithms on the CIFAR-FS, FC100, miniImageNet, and tieredImageNet one-shot learning benchmarks, where the Multi-Task Modification consistently improves over the original methods. The SPSA-Tracking algorithm, adapted here for multi-task weight optimization for the first time, shows the largest accuracy boost and is competitive with state-of-the-art meta-learning methods.
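The tracking variant mentioned above differs from basic SPSA chiefly in its gain schedule: classical SPSA decays the step sizes over iterations to converge to a fixed optimum, while a tracking variant keeps them constant so the weight estimate can follow the task losses as they drift during meta-training. A minimal sketch under that assumption, reusing the update from the previous block (the constant gain values and loop structure are illustrative, not the paper's implementation):

```python
def spsa_track_weights(weights, episode_losses, steps, a=0.05, c=0.05, rng=None):
    """Tracking variant: constant gains instead of decaying schedules.

    Constant `a` and `c` let the weight estimate follow a drifting
    optimum, which suits meta-training, where task losses change as the
    model's parameters are updated. Gain values here are assumptions.
    """
    rng = rng or np.random.default_rng()
    for _ in range(steps):
        weights = spsa_update_weights(weights, episode_losses, a=a, c=c, rng=rng)
    return weights
```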