Feature Distribution Matching for Federated Domain Generalization
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:942-957, 2023.
Abstract
Multi-source domain adaptation has been intensively studied. The distribution shift in domain-specific features causes the negative transfer problem, degrading a model's generalizability to unseen tasks. In Federated Learning (FL), learned model parameters are shared to train a global model that leverages the underlying knowledge across client models trained on separate data domains.
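As context for this aggregation step, the sketch below shows the standard FedAvg-style parameter averaging commonly used to build a global model from client models; the abstract does not specify FedKA's exact aggregation rule, so the function and parameter names here are illustrative assumptions.

```python
from collections import OrderedDict
import torch

def fedavg(client_states, weights=None):
    """Average client model state_dicts into a global state_dict.

    client_states: list of state_dicts with identical keys and shapes.
    weights: optional per-client importance, e.g., local dataset sizes.
    """
    if weights is None:
        weights = [1.0] * len(client_states)
    total = float(sum(weights))
    global_state = OrderedDict()
    for key in client_states[0]:
        # Weighted average of each parameter tensor across clients.
        global_state[key] = sum(
            w * state[key].float() for w, state in zip(weights, client_states)
        ) / total
    return global_state
```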
Nonetheless, the data confidentiality of FL hinders the effectiveness of traditional domain adaptation methods, which require prior knowledge of the data in each domain. We propose a new federated domain generalization method called Federated Knowledge Alignment (FedKA). FedKA leverages feature distribution matching in a global workspace so that the global model can learn domain-invariant client features under the constraint that client data remain unknown.
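The abstract does not spell out the matching loss; a common choice for feature distribution matching is the kernel Maximum Mean Discrepancy (MMD), sketched below as a minimal, biased estimator. The function name, bandwidth, and usage line are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between feature batches
    x: (n, d) and y: (m, d), using an RBF kernel of bandwidth sigma."""
    def kernel(a, b):
        sq_dists = torch.cdist(a, b) ** 2   # pairwise squared distances
        return torch.exp(-sq_dists / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical use: penalize the mismatch between a client's features and
# target-domain features embedded in the shared global workspace.
# loss = task_loss + lam * rbf_mmd2(client_feats, target_feats)
```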
FedKA further employs a federated voting mechanism that generates target-domain pseudo-labels based on the consensus of client models to facilitate global model fine-tuning.
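A minimal reading of this consensus step is a majority vote over client predictions, keeping only samples on which enough clients agree. The sketch below follows that reading; the agreement threshold and all names are assumptions, not the paper's specification.

```python
import torch

def federated_vote(client_logits, min_agree=2):
    """Derive target-domain pseudo-labels by majority vote across clients.

    client_logits: list of (n, num_classes) prediction tensors, one per client.
    Returns (pseudo_labels, mask); mask keeps samples on which at least
    `min_agree` clients agree, for use in global model fine-tuning.
    """
    preds = torch.stack([l.argmax(dim=1) for l in client_logits])   # (c, n)
    pseudo_labels, _ = preds.mode(dim=0)    # per-sample most frequent label
    agreement = (preds == pseudo_labels.unsqueeze(0)).sum(dim=0)
    mask = agreement >= min_agree
    return pseudo_labels, mask
```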
We performed extensive experiments, including an ablation study, to evaluate the effectiveness of the proposed method in both image and text classification tasks using different model architectures. The empirical results show that FedKA achieves performance gains of 8.8% and 3.5% on Digit-Five and Office-Caltech10, respectively, and a gain of 0.7% on Amazon Review with extremely limited training data. Moreover, we studied the effectiveness of FedKA in alleviating the negative transfer of FL based on a new criterion called Group Effect. The results show that FedKA can reduce negative transfer, improving the performance gain from model aggregation by a factor of four.
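The abstract introduces Group Effect without defining it. Purely as an illustration of how negative transfer in aggregation could be quantified, one might compare the global model's accuracy against the clients' mean accuracy, as below; this proxy is an assumption for exposition, not the paper's Group Effect definition.

```python
def aggregation_gain(global_acc, client_accs):
    """Illustrative proxy only (not the paper's Group Effect criterion):
    the accuracy gained by the aggregated global model over the mean
    client accuracy; negative values suggest negative transfer."""
    return global_acc - sum(client_accs) / len(client_accs)

# Example: aggregation_gain(0.82, [0.78, 0.75, 0.80]) -> ~0.043
```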