Access & Terms of Use
Embargoed access
Embargoed until 2025-02-23
Copyright: Cao, Yuanjiang
Abstract
Deep neural models have achieved impressive progress in the last decade.
However, high-quality models require large amounts of data, parameters, and
computational power. This requirement stems from the curse of dimensionality
and the poor out-of-distribution generalization of current probabilistic
models. Current machine learning models require data points to be
independently and identically distributed (i.i.d.), an assumption that is
often violated in real-world applications. This mismatch undermines the direct
application of classic learning models to out-of-distribution data. In this
dissertation, we explore this issue from three perspectives. First, we examine
the impact of distribution perturbations under adversarial attacks, which
demonstrates the sensitivity of deep learning models to even small
distribution shifts. To increase robustness, we propose a detection model for
the recommendation system scenario.
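For intuition, here is a minimal sketch of an FGSM-style adversarial perturbation, one standard way to produce the small distribution shifts the abstract refers to. It is a generic illustration, not the attack or detection model studied in the dissertation; the toy classifier, input shapes, and `eps` value are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.05):
    """One-step FGSM: move the input along the sign of the loss gradient,
    i.e. the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Hypothetical toy setup: an untrained linear classifier on 28x28 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.randn(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```

Even with a perturbation this small, a trained model's prediction can flip, which is the sensitivity the first part of the dissertation probes.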
The second problem we investigate is domain adaptation. Specifically, we study
how to learn representations that map samples from one domain to another in
the image transfer setting.
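As a generic illustration of learning representations that bridge two domains (not the dissertation's image transfer model), the sketch below shows domain-adversarial feature alignment via a gradient reversal layer, a common technique in this area; the encoder architecture and shapes are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Hypothetical encoder and domain classifier.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
domain_head = nn.Linear(64, 2)  # predicts source vs. target domain

x = torch.randn(8, 1, 28, 28)
z = encoder(x)
# The domain head learns to tell the domains apart; the reversed gradient
# pushes the encoder toward features the head cannot separate, i.e.
# domain-invariant representations.
domain_logits = domain_head(GradReverse.apply(z, 1.0))
```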
Finally, we probe the domain generalization setting, in which a model aims to
perform well across multiple domains. We study meta-learning models that learn
directly from multi-task settings, exploring how to learn representations
under large distribution shifts.
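To make the meta-learning direction concrete, here is a minimal MAML-style sketch: adapt to each task on its support set with one gradient step, then evaluate the adapted parameters on its query set. The toy linear regressor, synthetic tasks, and learning rates are assumptions; this is not the dissertation's specific model.

```python
import torch

def model(params, x):
    """A tiny linear regressor; params = (weight, bias)."""
    w, b = params
    return x @ w + b

def maml_meta_loss(params, tasks, inner_lr=0.01):
    """MAML-style meta-objective: one inner adaptation step per task,
    evaluated on that task's held-out query set."""
    meta_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop: adapt to the task's support set.
        loss_s = ((model(params, x_s) - y_s) ** 2).mean()
        grads = torch.autograd.grad(loss_s, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: score the adapted parameters on the query set.
        meta_loss = meta_loss + ((model(adapted, x_q) - y_q) ** 2).mean()
    return meta_loss / len(tasks)

# Hypothetical usage on synthetic regression tasks.
params = [torch.randn(5, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
tasks = [((torch.randn(10, 5), torch.randn(10, 1)),
          (torch.randn(10, 5), torch.randn(10, 1))) for _ in range(4)]
loss = maml_meta_loss(params, tasks)
loss.backward()  # gradients flow through the inner adaptation step
```

The outer gradient flows through the inner update (`create_graph=True`), which is what lets the meta-learner find an initialization that adapts well across tasks drawn from shifted distributions.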