Object detection plays a critical role in various computer vision applications, encompassing
domains like autonomous vehicles, object tracking, and scene understanding. These applica-
tions rely on detectors that generate bounding boxes around known object categories, and
the outputs of these detectors are subsequently utilized by downstream systems. In practice,
supervised training is the predominant approach for training object detectors, wherein labeled
data is used to train the models.
However, the effectiveness of these detectors in real-world scenarios hinges on the extent
to which the training data distribution can adequately represent all potential test scenarios.
In many cases, this assumption does not hold. For instance, a model is typically
trained under a single environmental condition but, at test time, may encounter much
more diverse conditions. Such discrepancies often arise because acquiring training data that covers
diverse environmental conditions can be challenging. This disparity between the training
and test distributions, commonly referred to as domain shift, degrades the detector's
performance.
In the literature, various methods have been employed to mitigate the domain shift issue.
One approach involves unsupervised domain adaptation techniques, where the model is
adapted to perform well on the target domain by leveraging unlabeled images from that
domain. Another line of research is domain generalization, which aims to train models that
generalize effectively across multiple target domains without direct access to data from
those domains.
In this thesis, we propose unsupervised domain adaptation and domain generalization meth-
ods to alleviate domain shift. First, we introduce an attention-based module to obtain local
object regions in single-stage detectors. Here, we show the efficacy of a gradual transition
from global image features adaptation to local region adaptation. While this work mainly
focuses on appearance shifts due to illumination or weather change, in our second work,
we show that the gap introduced due to differences in the camera setup and parameters is
non-negligible as well. Hence, we propose a method that learns a set of homographies,
allowing us to extract robust features that bring the two domains closer under such shifts. Both of these
works assume access to unlabeled data in the target domain, but sometimes even unlabeled
data is scarce. To tackle this, in our third work, we propose a domain generalization method
by leveraging image- and text-aligned feature embeddings. We estimate the visual features of the
target domain based on the textual prompt describing the domain.