DG
Paper Name
Categories
Type
Recent issues
Motivations
Contributions
Target
Evaluation List
Conf./Jour.
Year
Link
1
Exploiting Domain-specific Features to Enhance DG
Domain-invariant features
Theoretical
DG
Only considers domain-invariant features.
Ignores domain-specific features.
Extend beyond the invariance-only view.
Disentangle and jointly learn both feature types.
Confirms that domain-specific information is essential.

Preserve domain-invariant info.
t-SNE to visualize the features → domain-invariant features still make mistakes (visualization recipe sketched below).

NIPS
2021
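Several entries below also list t-SNE feature visualization as an evaluation step. A minimal sketch of that recipe, assuming features and domain labels are already extracted as arrays (the function name and defaults are mine, not from any of the papers):

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features, domain_ids):
    # Project penultimate-layer features to 2-D; perplexity is a free
    # knob worth sweeping (30 is only a common default).
    emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=domain_ids, s=5, cmap="tab10")
    plt.title("t-SNE of learned features, colored by domain")
    plt.show()
```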
2
A simple feature augmentation for DG
Data Augmentation
Architecture Design
Engineering
DG
Relies on image-space data augmentation.
Limited data diversity.
Requires carefully designed augmentations (a feature-space sketch follows this entry).
Where to add SFA?
Noise type for SFA.
Hyper-parameters.
Visualize using t-SNE.
Incorporating methods.
CVPR
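A minimal sketch of the feature-level augmentation idea this entry describes: perturb intermediate features with random noise during training. The noise form, scale, and class name here are illustrative assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn as nn

class FeatureAugmentation(nn.Module):
    """Perturb intermediate features with random noise (training only)."""

    def __init__(self, std=0.1):
        super().__init__()
        self.std = std  # illustrative scale; treat as a hyper-parameter

    def forward(self, x):
        if not self.training:
            return x
        # Multiplicative and additive Gaussian noise on the features.
        alpha = 1.0 + self.std * torch.randn_like(x)
        beta = self.std * torch.randn_like(x)
        return alpha * x + beta
```

Where to insert such a layer and which noise type to use are exactly the SFA ablations listed above.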
3
Domain-invariant Disentangled Network for Generalizable Object Detection
Domain-invariant features
Architecture Design
Object Detection
Engineering
DG
Object detection has seldom been explored in DG.
Effectiveness of each component.
Hyper-parameters.
Visualization (illustrative only; no analytical takeaway).
4
Cross-domain Semantic Segmentation via Domain-invariant Iterative
Architecture Design
Domain-invariant features
Engineering
DG
5
Domain Generalization via Entropy Regularization
Domain-invariant features
Loss Function Design
DG
Theoretical
General task
Can only guarantee that features have invariant marginal distributions.
Invariance of conditional distributions is more important.
Ensure conditional invariance → entropy regularization (sketched below).

Different weighting factors.
Deeper Network.
Class imbalance.
Feature visualization (t-SNE).
NIPS
2020
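A sketch of the entropy-regularization idea, assuming a domain discriminator head on top of the features; maximizing this entropy (adding its negative to the training loss) pushes features toward domain invariance. The head and weighting are assumptions, not the paper's exact objective:

```python
import torch.nn.functional as F

def domain_prediction_entropy(domain_logits):
    # Mean entropy of the domain discriminator's predictions; higher
    # entropy means the features reveal less about the source domain.
    p = F.softmax(domain_logits, dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
```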
6
Model-based Domain Generalization
Domain-invariant features
Loss Function Design
Theoretical
General task
DG
Capture inter-domain variation.

First learn transformations that map data between domains → then enforce invariance under them.
Re-formulate domain generalization as a semi-infinite constrained optimization problem.
NIPS
2021
7
Learning to Learn Single Domain Generalization
Data Augmentation
Loss Function Design
Engineering
General task
DG
Only 1 source domain, many unseen domains.
Leverage adversarial training.
Create fictitious, challenging data.
Use meta-learning scheme.
Wasserstein Autoencoder (WAE).

Features (t-SNE) visualization.
Hyper-parameter tuning.
Loss function validation.
Meta vs. Without Meta.
CVPR
2020
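A sketch of the adversarial data-generation step: ascend the task loss in input space to create fictitious, challenging examples. The paper additionally constrains the perturbation semantically via a WAE, which is omitted here; the step size and iteration count are assumptions:

```python
import torch

def adversarial_augment(model, loss_fn, x, y, step=0.05, n_steps=5):
    # Gradient ascent on the task loss to synthesize hard inputs.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```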
8
Domain Generalization with MixStyle
Architecture Design
Mixing
Theoretical
General task
DG
Where to apply?
Mixing vs. Replacing
Random vs. fixed shuffle at multiple layers.
Hyper-parameters.
ICLR
2021
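A compact PyTorch rendering of the MixStyle operation: mix channel-wise feature statistics (mean/std) across shuffled instances in the batch, training only, typically after early conv blocks:

```python
import torch
import torch.nn as nn

class MixStyle(nn.Module):
    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p, self.eps = p, eps
        self.beta = torch.distributions.Beta(alpha, alpha)

    def forward(self, x):  # x: (B, C, H, W)
        if not self.training or torch.rand(1).item() > self.p:
            return x
        mu = x.mean(dim=[2, 3], keepdim=True)
        sig = (x.var(dim=[2, 3], keepdim=True) + self.eps).sqrt()
        x_norm = (x - mu) / sig
        perm = torch.randperm(x.size(0))  # random shuffle across the batch
        lam = self.beta.sample((x.size(0), 1, 1, 1)).to(x.device)
        # Interpolate styles (mean/std) between each instance and a partner.
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return x_norm * sig_mix + mu_mix
```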
9
Learning to Diversify for Single Domain Generalization
Data Augmentation
Engineering
General task
DG
Visualize (t-SNE) target features.
Hyper-parameters.
ICCV
2021
10
Gradient Matching for Domain Generalization
Loss Function Design
Domain-invariant features
Theoretical
General task
DG
Tracking GIP (gradient inner product; sketched below).
Random grouping → groups exhibit no domain shift → no pressure to learn matching? → the bigger the domain shift, the better Fish performs.
Hyper-parameters.
Ablation on pretrained models.
ICLR
2022
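A direct rendering of the gradient inner product (GIP) between two domains' losses. The paper optimizes it via a cheaper first-order scheme, so this explicit version is for illustration only:

```python
import torch

def gradient_inner_product(loss_a, loss_b, params):
    # GIP between two domains' loss gradients w.r.t. shared parameters.
    g_a = torch.autograd.grad(loss_a, params, create_graph=True)
    g_b = torch.autograd.grad(loss_b, params, create_graph=True)
    return sum((ga * gb).sum() for ga, gb in zip(g_a, g_b))
```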
11
A Fourier-based Framework for Domain Generalization
Data Augmentation
Architecture Design
Engineering
General task
DG
Phase component → high-level semantics.
Amplitude component → low-level statistics.
Fourier-based data augmentation.
Co-teacher regularization.

Impact of each component: AM, a2o_co-teacher, o2a_co-teacher, Teacher (turned on/off).
Other choices of Fourier-based data augmentation (AM vs. AS).

CVPR
2021
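A sketch of the amplitude-mix (AM) augmentation: interpolate the Fourier amplitude spectra of two images while keeping the original phase, so low-level statistics change and semantics stay put. Sampling lam per example is assumed here, not the paper's exact schedule:

```python
import torch

def amplitude_mix(x1, x2, lam=0.5):
    # Mix amplitude spectra, keep x1's phase.
    f1, f2 = torch.fft.fft2(x1), torch.fft.fft2(x2)
    amp = lam * f1.abs() + (1 - lam) * f2.abs()
    return torch.fft.ifft2(torch.polar(amp, f1.angle())).real
```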
12
Progressive Domain Expansion Network for Single Domain Generalization
Object Detection
DG
Engineering
Limited generalization performance gains
Lack appropriate safety and effectiveness constraints.

Domain expansion network.
Generated domain → progressively expanded.
Contrastive learning → learn cross-domain invariant representation.
Visualize (t-SNE) feature space.
Tuning hyper-parameters.

CVPR
2021
13
SWAD: Domain Generalization by seeking flat minima
Architecture Design
Theoretical
DG
General task
Simply minimizing ERM on a complex, non-convex loss landscape → not sufficient.
Flat minima lead to robustness against loss-landscape shift.
Use stochastic weight averaging densely (SWAD) → find flatter minima (averaging step sketched below).

Local flatness analysis
Loss surface visualization
Validation accuracy/rounds
Different components.
NIPS
2021
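The building block behind SWAD is plain weight averaging; a minimal sketch of one running-average update. SWAD's contribution, the dense per-iteration sampling inside a window chosen by validation loss, is omitted here:

```python
import copy
import torch

@torch.no_grad()
def update_average(avg_model, model, n_averaged):
    # One running-average step: avg <- (n * avg + new) / (n + 1).
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        p_avg.mul_(n_averaged / (n_averaged + 1.0)).add_(p / (n_averaged + 1.0))
    return n_averaged + 1

# Usage sketch: avg_model = copy.deepcopy(model), then call
# update_average(avg_model, model, n) every iteration inside the window.
```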
14
Causality Inspired Representation Learning for Domain Generalization
Domain-invariant features
Data Augmentation
Architecture Design
Engineering
General task
DG
Remove components.
Visualize attention maps.
Independence of causal representation.
Representation importance.
Hyper-parameter sensitivity.
CVPR
2022
15
SelfReg: Self-supervised Contrastive Regularization for Domain Generalization
Self-supervised Learning
Loss Function Design
Engineering
DG
General task
Requires sampling of negative data pairs.
CL performance depends on the quality/quantity of negative data pairs.
Only use positive data pairs → avoids the problems caused by negative-pair sampling (sketched below).
Self-supervised Contrastive Learning.
Class-specific domain perturbation layer → apply mixup augmentation (only positive pairs are used).
Visualize (t-SNE) the latent spaces
Different dissimilarity losses (logit only / feature only).
Visualize where the network focuses.
Removing each component (losses, Mixup, CDPL, SWA, IDCL).
ICCV
2021
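A sketch of the positive-pair-only idea at the feature level: pull together representations of same-class samples via an in-batch shuffle, with no negatives. SelfReg's full method adds logit-level, CDPL, and IDCL terms on top; the pairing rule here is an assumption:

```python
import torch

def positive_pair_feature_loss(feats, labels):
    # For each class, pair every sample with a random same-class sample
    # and penalize their feature distance (no negative pairs involved).
    loss, groups = feats.new_zeros(()), 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue
        partner = idx[torch.randperm(idx.numel())]
        loss = loss + (feats[idx] - feats[partner]).pow(2).mean()
        groups += 1
    return loss / max(groups, 1)
```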
16
C-Mixup: Improving Generalization in Regression
Mixing
Data Augmentation
Regression
DG
Theoretical
Systematic analysis of mixup in regression remains unexplored.
Vanilla mixup can result in incorrect labels in regression.
Adjust the sampling probability based on the similarity of the labels (sketched below).
Mix both input data and labels.
Generalization gap / Epochs.
Pair-wise divergence (averaging over class / domains).
Compatibility of C-Mixup (Integrate with other algorithms).
C-Mixup vs. other distance metrics.
Different hyper-parameters (e.g., kernel bandwidth).
NIPS
2022
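A sketch of C-Mixup's sampling rule: choose mixing partners with probability given by a Gaussian kernel on label distance, so examples with close labels are mixed more often (bandwidth is the hyper-parameter mentioned above):

```python
import numpy as np

def cmixup_partner_indices(labels, bandwidth=1.0):
    # Pairwise Gaussian kernel on label distance -> sampling probabilities.
    y = np.asarray(labels, dtype=np.float64).reshape(len(labels), -1)
    d2 = ((y[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    np.fill_diagonal(w, 0.0)          # never pair an example with itself
    w /= w.sum(axis=1, keepdims=True)
    return np.array([np.random.choice(len(y), p=row) for row in w])
```

The chosen pairs are then combined with ordinary mixup on both inputs and labels.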
17
Instance-Aware Domain Generalization for Face Anti-Spoofing
Data Augmentation
Architecture Design
DG
Engineering
Artificial domain labels are coarse-grained and subjective, so they cannot accurately reflect real domains.
Prior work focuses on domain-level alignment, which is not fine-grained enough to ensure that learned representations are insensitive to domain styles.
Align features on instance-level.

Dynamic Kernel Generator
Categorical Style Assembly.
Asymmetric Instance Adaptive Whitening.
Remove components.
Different losses (replace).
Different style augmentation.
Different kernel designs.
CVPR
2023
18
RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening.
Architecture Design
DG
Segmentation
Collecting multi-domain dataset is costly and labor-intensive.
Performance highly depends on the number of source datasets.
Exploit instance normalization layers → feature covariance contains domain-specific styles such as texture and color.
Whitening transformation removes feature correlation and gives each feature unit variance → eliminates domain-specific style information → may improve DG, but this had not been fully explored.

Instance selective whitening.
Whitening loss (a plain version is sketched below).
CVPR
2021
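A plain instance-whitening penalty that suppresses off-diagonal feature covariance; RobustNet's selective variant restricts the penalty to covariance entries sensitive to photometric transforms, which is omitted here:

```python
import torch

def instance_whitening_loss(feat):
    # feat: (B, C, H, W). Penalize off-diagonal entries of the per-
    # instance channel covariance, pushing features toward whiteness.
    b, c, h, w = feat.shape
    f = feat.flatten(2)                        # (B, C, HW)
    f = f - f.mean(dim=2, keepdim=True)
    cov = f @ f.transpose(1, 2) / (h * w - 1)  # (B, C, C)
    off = cov - torch.diag_embed(torch.diagonal(cov, dim1=1, dim2=2))
    return off.abs().mean()
```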
19
Disentangled Prompt Representation for Domain Generalization
Data Augmentation
Data Generation
DG
Engineering
General task
20
PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization
Data Augmentation
Data Generation
DG
Engineering
General task
Data from source and target domains are not accessible.
Only the target task definition is given.

Large-scale vision-language models could shed light on this challenging source-free domain generalization setting.

21
Prompt-Driven Dynamic Object-Centric Learning for Single Domain Generalization
Data Augmentation
Data Generation
DG
Engineering
General task
Two existing approaches for single domain generalization: data augmentation + feature disentanglement. Those methods mainly rely on static networks.
Static networks lack the capability to dynamically adapt to the diverse variations in different visual scenes, which limits the representation power of the models.
Each image may have its unique characteristics (e.g., variations in lighting conditions, object appearances, scene structures).
Object-centric representations are robust to variations in appearance, context, and scene complexity.

Dynamic Learning approach for Single Domain Generalization.
A prompt-based object-centric gating module is designed to perceive object-centric features.
Leverage multi-modal features of CLIP (prompts describe different domain scenes).
Slot-Attention multi-modal fusion module → fuse the linguistic/visual features → extract effective object-centric representations.
→ Generate the gating masks → dynamically select relevant object-centric features to improve generalization ability.
22
Disentangled Prompt Representation for Domain Generalization
Data Augmentation
Data Generation
DG
Engineering
General task
Large-scale pre-trained models greatly enhance domain generalization.
Pre-trained Visual Foundation Models (VFMs): trained on large-scale (image, text) pairs → rich in semantic prior knowledge.
VFMs are able to encode semantic meanings of visual descriptions (regardless of styles).
Fine-tuning pre-trained foundation models with new datasets → achieve better results on downstream tasks with few training samples.

ISSUES:
Existing prompt tuning methods tune the foundation model to generate domain- and task-specific features, whereas domain generalization requires domain-invariant features that work well across different unseen domains → crucial to develop prompts that guide the foundation model in disentangling invariant features across all domains.
Fully leverage a distinctive aspect of VFMs (controllable, flexible language prompts).
Text prompts play a vital role → guide the disentanglement of image features.
The text modality in VFMs is more easily disentangled (rich in semantic information and interpretable).

Prompt tuning framework for DG: LLM-assisted text prompt disentanglement + a text-guided visual representation disentanglement model.
Domain-invariant and domain-specific descriptions are first generated with an LLM (for prompt tuning to learn disentangled textual features).
Learned disentangled textual features → guide the learning of domain-invariant and domain-specific visual features.
To classify images from unseen domains → leveraging domain-specific knowledge from similar seen domains is essential → domain-specific prototypes are selected for images from different unseen domains.
23
STYLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based Domain Generalization
Data Augmentation
Data Generation
DG
Engineering
General task
24
Unknown Prompt, the only Lacuna: Unveiling CLIP’s Potential for Open Domain Generalization
Data Augmentation
Data Generation
DG
Engineering
General task
Key research gaps in using CLIP for open DG (unseen domains may contain new labels/categories):
Prompt design:
Multi-class classification over one-against-all recourse for ODG.
Domain-agnostic visual embeddings.
Unify the classification of known classes and outliers using CLIP → unknown-class prompt.
Gather training data → generate pseudo-open images that are semantically distinct from existing categories → opt for a pre-trained conditional diffusion model.

25
Learning Domain Invariant Prompt for Vision-Language Models
Data Augmentation
Data Generation
DG
Engineering
General task
26
Towards Principled Disentanglement for Domain Generalization
Self-supervised Learning
Data Generation
DG
Engineering
General task
Spurious correlation.
First, diversify inter-class variation by modeling potential seen/unseen variations.
Then, solve DG as a disentanglement-constrained optimization.
Principled constrained learning formulation based on disentanglement → theoretical guarantees on empirical duality gap.
Promotes semantic invariance via constrained optimization setup.
Controllable/interpretable data generation.
CVPR
2022
27
Towards Unsupervised Domain Generalization
Manually labeled data can be costly or unavailable.
Unlabeled data can be more accessible.
Contrastive learning only learns representations robust to pre-defined perturbations (under the IID assumption).
Learn discriminative representations without supervision.
Select valid sources of negative samples according to the similarity among domains.
Key differences: 1) domain-related features must stay discriminative enough, 2) boost variance across domains.
How unsupervised learning enhances the generalization ability of models:

CVPR
2022
28
PCL: Proxy-based Contrastive Learning for Domain Generalization
CVPR
2022
29
Style Neophile: Constantly Seeking Novel Styles for Domain Generalization
CVPR
2022
30
Compound Domain Generalization via Meta-Knowledge Encoding
CVPR
2022
31
Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization
NIPS
2022
32
Unsupervised Domain Generalization by Learning a Bridge Across Domains
CVPR
2022
33
CLIP the Gap: A Single Domain Generalization Approach for Object Detection
CVPR
2023
34
Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization
CVPR
2022
35
Domain Generalization via Shuffled Style Assembly for Face Anti-Spoofing
CVPR
2022
36
DNA: Domain Generalization with Diversified Neural Averaging
ICML
2022