Meta-learning optimizes the hyperparameters of a training procedure, such as its initialization, kernel, or learning rate, based on data sampled from a number of auxiliary tasks. A key underlying assumption is that the auxiliary tasks, known as meta-training tasks, share the same generating distribution as the tasks to be encountered at deployment time, known as meta-test tasks. This assumption may not hold when the test environment differs from the meta-training conditions. To address shifts in the task-generating distribution between the meta-training and meta-testing phases, this paper introduces weighted free energy minimization (WFEM) for transfer meta-learning. We instantiate the proposed approach for non-parametric Bayesian regression and classification via Gaussian Processes (GPs). The method is validated on a toy sinusoidal regression problem, as well as on classification using the miniImagenet and CUB data sets, through comparison with standard meta-learning of GP priors as implemented by PACOH.